re: People talking about AI “taking over” always make me laugh.

Posted by GRTiger
On a roof eating alligator pie
Member since Dec 2008
63066 posts
Posted on 3/23/24 at 4:25 pm to
If it changes its action based on prior results, it's AI. If it can provide information it doesn't already have programmed, it's AI. Foundationally, AI is anything that can be taught to do what humans typically do.
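To make that first criterion concrete, here is a minimal sketch of a program that changes its action based on prior results - a toy epsilon-greedy bandit. This is purely illustrative; the actions and payoffs are made up.

import random

rewards = {"A": [], "B": []}  # observed payoffs per action

def pick_action(eps=0.1):
    # Explore at random sometimes (or while an action is untried);
    # otherwise exploit whichever action has the best average so far.
    if random.random() < eps or not all(rewards.values()):
        return random.choice(list(rewards))
    return max(rewards, key=lambda a: sum(rewards[a]) / len(rewards[a]))

for _ in range(200):
    a = pick_action()
    # Toy environment: action "B" secretly pays better on average.
    payoff = random.gauss(1.0 if a == "B" else 0.5, 0.1)
    rewards[a].append(payoff)  # prior results steer future choices

print(pick_action(eps=0.0))  # almost surely "B" - behavior learned, not programmed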
Posted by theunknownknight
Baton Rouge
Member since Sep 2005
57368 posts
Posted on 3/23/24 at 4:28 pm to
quote:

If it changes its action based on prior results, it's AI. If it can provide information it doesn't already have programmed, it's AI. Foundationally, AI is anything that can be taught to do what humans typically do.


So we’ve had AI for 23 years then.

With that in mind, this version of AI taking everything over does make me laugh. This version of AI is more akin to a faster mechanism for data exchange.

If AI became an autonomous self-serving mechanism that could not be controlled (which may happen), then, yeah, this could take a lot over quickly.
Posted by Robin Masters
Birmingham
Member since Jul 2010
29824 posts
Posted on 3/23/24 at 4:29 pm to
quote:

I work in sales. 90% of the entry-level sales work is mass emailing potential customers. We are less than five years from that being done entirely by AI.


Maybe some inside sales jobs, but not many. I’ve been in sales 25 years, and they’ve been trying to automate sales functions for years because salespeople are very well paid and management would love nothing more than to recoup those 5%-80% commissions. It’s simply not possible, though. Salespeople are the most important gear in the commerce machine. This country would come to a grinding halt without salespeople lubricating the mechanism.
Posted by GRTiger
On a roof eating alligator pie
Member since Dec 2008
63066 posts
Posted on 3/23/24 at 4:30 pm to
quote:

So we’ve had AI for 23 years then.



Definitely. Probably longer, just maybe not commercially.

Unless you think it's the first tech that will ever regress, 2 more decades with a tech that progresses exponentially should at least raise an eyebrow.
Posted by theunknownknight
Baton Rouge
Member since Sep 2005
57368 posts
Posted on 3/23/24 at 4:32 pm to
quote:

Unless you think it's the first tech that will ever regress


Well if it’s like an LLM that learns off of human intelligence and input, it will definitely regress in the next 20 years.
Posted by GRTiger
On a roof eating alligator pie
Member since Dec 2008
63066 posts
Posted on 3/23/24 at 4:36 pm to
Yeah, the mental instability of modern humans will slow it a touch, but not forever.

The models in use outside of publicly accessible LLMs are much more rigorous in their development.
Posted by GetMeOutOfHere
Member since Aug 2018
692 posts
Posted on 3/23/24 at 4:46 pm to
quote:

Once AI gets a little more conversational, again maybe five years away,


Just like self driving cars.

Been 5 years away for some time now.

That last 10% in software is a bitch.
Posted by Ric Flair
Charlotte
Member since Oct 2005
13664 posts
Posted on 3/23/24 at 4:51 pm to
quote:

I work in sales. 90% of the entry-level sales work is mass emailing potential customers


Does this actually get any customers? Unless the email is an invitation to a steak dinner you are hosting (where you sell/demonstrate product in person), I don’t see how a random email blast would generate sales.
Posted by GRTiger
On a roof eating alligator pie
Member since Dec 2008
63066 posts
Posted on 3/23/24 at 4:56 pm to
quote:

Does this actually get any customers?


Of course it does. Success rate varies, but brand awareness is generally positive. An entire industry exists because of its effectiveness.
Posted by fallguy_1978
Best States #50
Member since Feb 2018
48606 posts
Posted on 3/23/24 at 4:58 pm to
Technological shifts usually take way longer than people estimate. If they say 10 years it will be 30.
Posted by GetMeOutOfHere
Member since Aug 2018
692 posts
Posted on 3/23/24 at 5:17 pm to
quote:


It reminds me of the time that someone around here said that GPT is just a search engine. Sounds plausible, I guess... if you ignore the fact that it’s actually incapable of searching the internet for information when you enter a query.


That was probably me, and I'm right.

Is it searching Google or Bing? No. It is looking at its training data to come up with something that is the most likely correct value, and in the case of ChatGPT, that set includes a whole lot of the internet.
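A toy sketch of that "most likely value" idea - a bigram counter that always emits the statistically likeliest next word. Illustrative only; a real LLM learns neural-network weights rather than a count table.

from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for cur, nxt in zip(training_text, training_text[1:]):
    follows[cur][nxt] += 1  # count how often nxt appears after cur

def next_word(word):
    # Pick the likeliest continuation seen in training.
    return follows[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # "the cat sat on the cat"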

Is it a useful tool? Of course - I use it almost every day. It's not going to take over the world, though, and the hype is getting into crypto territory.

This post was edited on 3/23/24 at 5:18 pm
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
9412 posts
Posted on 3/23/24 at 5:32 pm to
quote:

According to the standard definition for the last 40 years.

I don’t think there’s anything standard about a definition of artificial intelligence that requires self-awareness. Self-awareness isn’t even something you can really evaluate, which is kind of the entire point of the Turing Test.
quote:

What else would it be? If it isn’t that then ANY form of querying mechanism is considered AI. Where’s the line?

I think the line is neural networks/machine learning. There’s a reason terms like “AGI” and “ANI” are fairly recent - because it’s only recently that the distinction matters.

Arguably the biggest hurdle facing AI development over the past 50 years has been natural language processing. Transformers and LLMs have shattered natural language barriers to the point where the next question is now how to integrate these LLMs with computer vision, speech recognition, etc. and those steps are happening now.
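For the curious, the core operation behind those transformers is scaled dot-product attention, per the original "Attention Is All You Need" formulation. A minimal numpy sketch with toy sizes, purely illustrative:

import numpy as np

def attention(Q, K, V):
    # Each query scores every key; softmax turns the scores into
    # weights; the output is a weighted mix of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 token positions, 8-dim embeddings (toy sizes)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): every position attends to all others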
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
9412 posts
Posted on 3/23/24 at 5:35 pm to
quote:

Is it searching Google or Bing? No. It is looking at its training data to come up with something that is the most likely correct value, and in the case of ChatGPT, that set includes a whole lot of the internet.

Again, that’s not how they work. The training data is not stored in the model.
Posted by Keltic Tiger
Baton Rouge
Member since Dec 2006
19305 posts
Posted on 3/23/24 at 5:40 pm to
"Just a stats program"? This goes beyond living under a rock. This is pure stupidity. Musk's neurochips just a stats program? There are already stories about AI's going off on their own, beyond what they were programmed to do. Political bots already everywhere on the internet. etc etc etc
Posted by GetMeOutOfHere
Member since Aug 2018
692 posts
Posted on 3/23/24 at 5:42 pm to
quote:

Again, that’s not how they work. The training data is not stored in the model.


Ok, fair point - it builds a model from the data and predicts the most likely next word as it generates text.

It's not aware of what the words mean.
Posted by CocomoLSU
Inside your dome.
Member since Feb 2004
150771 posts
Posted on 3/23/24 at 5:58 pm to
quote:

People talking about AI “taking over” always make me laugh.

You realize most people saying that aren’t really talking about the current iteration of AI. Right now it’s still more of an information organizer than anything else (although that’s not all it can do). I think those types of comments are more about AGI or ASI.
Posted by F1y0n7h3W4LL
Below I-10
Member since Jul 2019
1510 posts
Posted on 3/23/24 at 6:04 pm to
Copilot rewrote this:

quote:

The notion that AI will replace all jobs is a misunderstanding of its capabilities. Certainly, AI has its applications, but it’s not a catch-all solution. Often, jobs susceptible to AI replacement are those with repetitive tasks. AI operates on statistical models and requires human input. As AI becomes more prevalent, we’ll see a rise in jobs focused on developing and refining AI technologies, which will likely supplant the roles AI is automating.
Posted by GRTiger
On a roof eating alligator pie
Member since Dec 2008
63066 posts
Posted on 3/23/24 at 6:35 pm to
AI governance would be a good path to take. Code review, auditing, something of that nature. We will certainly create roles to manage things.
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
9412 posts
Posted on 3/23/24 at 7:02 pm to
quote:

Ok, fair point, it generates a model off the data and predicts what is most likely the next word as it generates text.

I think it’s a pretty important distinction though - a neural network capable of “remembering” information from its training without actually storing the training data is considerably different from a search engine or database that’s just looking the information up, in much the same way that a person with mastery of a subject is considerably different from someone with access to a library or Wikipedia.
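One toy way to see that distinction (my own illustration, with made-up numbers): a lookup table can only return what was stored, while a fitted model compresses the data into a few parameters and still answers unseen queries.

import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0           # "training data": y = 2x + 1

table = dict(zip(xs, ys))     # database style: stores the data itself
w, b = np.polyfit(xs, ys, 1)  # model style: two parameters, data discarded

print(table.get(2.5))         # None - 2.5 was never stored
print(w * 2.5 + b)            # ~6.0 - the model generalizes anyway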
quote:

It's not aware of what the words mean.

As I’ve said before... actual “awareness” of anything is nearly impossible to measure. We are seeing developments where LLMs are showing greater understanding of context and connotation. There’s also the fact that the LLM is responding to a prompt - it’s not just stacking words together in an order that makes sense. It’s stacking words together in an order that makes sense given the prompt entered by the user.

That being said, I would not expect an LLM to have true awareness of the meaning of words even if we could measure awareness. LLMs’ entire existence is text-based. They might be able to tell you that green and red are on opposite sides of the color wheel and that red corresponds to light wavelengths in the ~700 nm range, but they don’t know what green or red look like. So how could they possibly understand?

In a similar vein, diffusion models tend to have difficulty with concepts that require an understanding of 3-dimensional space (although they’re getting better). This is not terribly surprising as all of their training data and outputs are 2-dimensional.

I loosely relate LLMs to a person who is blind, can’t smell, can’t taste, can’t feel anything, and has no motor functions. But they can hear and speak (somehow). Would that person ever truly have any understanding of words beyond “pattern matching?” It doesn’t make language processing any less important when you put the rest of the pieces back together.

At some point there will be attempts to unify the AI puzzle pieces and, eventually, connect them to the outside world.
Posted by Proximo
Member since Aug 2011
15557 posts
Posted on 3/23/24 at 7:14 pm to
On August 29th, 1997 it’s gonna feel pretty frickin’ real to you too, get it?