
re: People talking about AI “taking over” always make me laugh.

Posted by GetMeOutOfHere
Member since Aug 2018
697 posts
Posted on 3/23/24 at 5:42 pm to
quote:

Again, that’s not how they work. The training data is not stored in the model.


Ok, fair point. It builds a model from the data during training and then predicts the most likely next word as it generates text.
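The generation loop is basically this (a toy sketch with a hand-written probability table; a real LLM computes these probabilities from billions of learned weights over subword tokens, not from a lookup like this):

```python
import numpy as np

# Toy "predict the next word" loop. The probability table is hand-written
# here purely for illustration; in a real LLM these numbers come out of
# the trained weights, not out of stored training text.
vocab = ["a", "cat", "sat", "on", "the", "mat", "."]

TABLE = {
    "a":   [0.0, 0.7, 0.0, 0.0, 0.0, 0.3, 0.0],
    "cat": [0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.2],
    "sat": [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    "on":  [0.3, 0.0, 0.0, 0.0, 0.7, 0.0, 0.0],
    "the": [0.0, 0.4, 0.0, 0.0, 0.0, 0.6, 0.0],
    "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    ".":   [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
}

def next_word_probs(context):
    # A real model conditions on the whole context; this toy only
    # looks at the last word.
    return np.array(TABLE[context[-1]])

text = ["a"]
while text[-1] != "." and len(text) < 20:
    probs = next_word_probs(text)
    text.append(vocab[int(np.argmax(probs))])  # greedy: take the most likely word

print(" ".join(text))  # -> "a cat sat on the mat ."
```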

It's not aware of what the words mean.
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
9574 posts
Posted on 3/23/24 at 7:02 pm to
quote:

Ok, fair point. It builds a model from the data during training and then predicts the most likely next word as it generates text.

I think it’s a pretty important distinction, though. A neural network capable of “remembering” information from its training without actually storing the training data is considerably different from a search engine or database that’s just looking the information up, in much the same way that a person with mastery of a subject is considerably different from someone who merely has access to a library or Wikipedia.
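To make the distinction concrete, here’s a deliberately tiny sketch (purely illustrative; a real LLM is obviously not a least-squares fit over one-hot vectors). The “database” version stores the facts and looks them up; the “model” version fits a weight matrix and can still answer after the training data is thrown away:

```python
import numpy as np

# "Search engine / database" version: the answer text is literally stored.
facts = {"capital of france": "paris", "capital of spain": "madrid"}
print(facts["capital of france"])          # retrieval of stored text

# "Model" version (toy): two one-hot "questions" mapped to three answer words.
answers = ["paris", "madrid", "rome"]
X = np.eye(2)                              # question encodings
Y = np.array([[1.0, 0.0, 0.0],             # question 0 -> "paris"
              [0.0, 1.0, 0.0]])            # question 1 -> "madrid"

W = np.linalg.lstsq(X, Y, rcond=None)[0]   # "training": fit the weights to the data

del X, Y, facts                            # throw the training data away...
q = np.array([1.0, 0.0])                   # ...and ask question 0 again
print(answers[int(np.argmax(q @ W))])      # the weights alone produce "paris"
```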
quote:

It's not aware of what the words mean.

As I’ve said before, actual “awareness” of anything is nearly impossible to measure. We’re seeing LLMs show greater understanding of context and connotation. There’s also the fact that the LLM is responding to a prompt - it’s not just stacking words together in an order that makes sense. It’s stacking words together in an order that makes sense given the prompt entered by the user.
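You can see that conditioning directly by poking at one of the small public models. This sketch assumes the Hugging Face transformers package and the little gpt2 checkpoint, purely as a stand-in for whatever chatbot you like:

```python
# Same model, different prompts -> different continuations, because the
# next-word probabilities are computed conditioned on the prompt.
# Assumes the Hugging Face `transformers` package and the public gpt2
# checkpoint; any small causal language model would show the same thing.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

for prompt in ["The capital of France is", "My favorite breakfast food is"]:
    out = generate(prompt, max_new_tokens=8, do_sample=False)
    print(out[0]["generated_text"])
```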

That being said, I would not expect an LLM to have true awareness of the meaning of words even if we could measure awareness. LLMs’ entire existence is text-based. They might be able to tell you that green and red sit on opposite sides of the color wheel and that red corresponds to light wavelengths in the ~700 nm range, but they don’t know what green or red look like. So how could they possibly understand color?

In a similar vein, diffusion models tend to have difficulty with concepts that require an understanding of 3-dimensional space (although they’re getting better). This is not terribly surprising as all of their training data and outputs are 2-dimensional.

I loosely relate LLMs to a person who is blind, can’t smell, can’t taste, can’t feel anything, and has no motor functions, but who can (somehow) hear and speak. Would that person ever truly understand words beyond “pattern matching”? That doesn’t make language processing any less important once you put the rest of the pieces back together.

At some point there will be attempts to unify the AI puzzle pieces and, eventually, connect them to the outside world.