re: People talking about AI “taking over” always make me laugh.
Posted on 3/23/24 at 5:42 pm to lostinbr
quote:
Again, that’s not how they work. The training data is not stored in the model.
Ok, fair point, it generates a model off the data and predicts what is most likely the next word as it generates text.
It's not aware of what the words mean.
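The "predicts what is most likely the next word" idea can be sketched with a toy word-count model. This is only an illustration with a made-up mini corpus; real LLMs use neural networks over tokens, not frequency tables.

```python
from collections import Counter, defaultdict

# Hypothetical mini corpus, just for illustration.
corpus = "the cat sat on the mat and the cat ran".split()

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start, n=4):
    """Greedily extend `start` by repeatedly picking the likeliest next word."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)
```

The model "knows" that "cat" tends to follow "the" purely from counts, with no notion of what a cat is, which is the gist of the objection above.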
Posted on 3/23/24 at 7:02 pm to GetMeOutOfHere
quote:
Ok, fair point, it generates a model off the data and predicts what is most likely the next word as it generates text.
I think it’s a pretty important distinction though - a neural network capable of “remembering” information from its training without actually storing the training data is considerably different from a search engine or database that’s just looking the information up, in much the same way that a person with mastery of a subject matter is considerably different from someone with access to a library or Wikipedia.
quote:
It's not aware of what the words mean.
As I’ve said before, actual “awareness” of anything is nearly impossible to measure. We are seeing developments where LLMs show greater understanding of context and connotation. There’s also the fact that the LLM is responding to a prompt - it’s not just stacking words together in an order that makes sense. It’s stacking words together in an order that makes sense given the prompt entered by the user.
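The "given the prompt" part can be illustrated by conditioning on more than one preceding word: the same word continues differently depending on its context. Again a toy sketch with an invented two-sentence corpus, not how an actual LLM conditions on a prompt.

```python
from collections import Counter, defaultdict

# Hypothetical mini corpus: the word "bank" continues differently
# depending on the words that come before it.
sentences = [
    "he sat on the river bank fishing",
    "he robbed the savings bank downtown",
]

# Count continuations keyed by the two preceding words (a trigram model).
following = defaultdict(Counter)
for s in sentences:
    w = s.split()
    for a, b, c in zip(w, w[1:], w[2:]):
        following[(a, b)][c] += 1

def continue_from(context):
    """Pick the likeliest next word given the last two words of `context`."""
    a, b = context.split()[-2:]
    return following[(a, b)].most_common(1)[0][0]
```

With even two words of context, "river bank" and "savings bank" get different continuations; a prompt plays the same role at a much larger scale.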
That being said, I would not expect an LLM to have true awareness of the meaning of words even if we could measure awareness. LLMs’ entire existence is text-based. They might be able to tell you that green and red sit on opposite sides of the color wheel and that red corresponds to light wavelengths in the ~700 nm range, but they don’t know what green or red look like. So how could they possibly understand?
In a similar vein, diffusion models tend to have difficulty with concepts that require an understanding of 3-dimensional space (although they’re getting better). This is not terribly surprising as all of their training data and outputs are 2-dimensional.
I loosely relate LLMs to a person who is blind, can’t smell, can’t taste, can’t feel anything, and has no motor functions. But they can hear and speak (somehow). Would that person ever truly have any understanding of words beyond “pattern matching?” It doesn’t make language processing any less important when you put the rest of the pieces back together.
At some point there will be attempts to unify the AI puzzle pieces and, eventually, connect them to the outside world.