
General Artificial Intelligence

Posted by Marquesa
Atlanta
Member since Nov 2020
1542 posts
Posted on 6/3/22 at 10:08 am
Looks like it will be here sooner rather than later. The hard part for people to understand is that AGI will be able to teach itself, at computer speeds, which will lead to exponential growth. It won't be long before AGI is smarter than any human who ever lived.

My concern is this: should we build in some sort of safeguard to allow us to keep control of the beast we've built? Should you be able to communicate a keyword or phrase to an AGI to shut it down? Something like "We eat umbrellas and yellow", a nonsense phrase that would never be uttered otherwise.

Do you think AGI will be an issue in the next 20 years, and if so, what should we do to protect ourselves?
This post was edited on 6/3/22 at 10:09 am
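Taken literally, the kill-phrase safeguard described above is just a trigger check on whatever input channel operators use. A minimal Python sketch of that idea, where the hashing choice, function names, and the input loop are all assumptions for illustration rather than anything the post specifies:

import hashlib

# Store only a hash of the shutdown phrase so the plaintext trigger
# never sits in the running system's memory or logs.
SHUTDOWN_PHRASE_HASH = hashlib.sha256(
    "We eat umbrellas and yellow".encode("utf-8")
).hexdigest()

def is_shutdown_phrase(utterance: str) -> bool:
    """Return True if the utterance matches the stored kill phrase."""
    digest = hashlib.sha256(utterance.strip().encode("utf-8")).hexdigest()
    return digest == SHUTDOWN_PHRASE_HASH

def main_loop():
    while True:
        utterance = input("> ")  # stand-in for the operator's command channel
        if is_shutdown_phrase(utterance):
            print("Kill phrase received; halting.")
            break
        # ... normal agent work would go here ...

if __name__ == "__main__":
    main_loop()

Whether a system smart enough to matter would leave such a switch intact is exactly the control problem the post is asking about.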
Posted by j1897
Member since Nov 2011
3581 posts
Posted on 6/3/22 at 10:10 am to
It's just regressions bro.
Posted by AmishSamurai
Member since Feb 2020
2671 posts
Posted on 6/3/22 at 10:13 am to
quote:

Do you think AGI will be an issue in the next 20 years


Is this before or after we get flying cars?

No, we won't be battling terminators in our (or our kids' or grandkids') lifetimes.
Posted by beerJeep
Louisiana
Member since Nov 2016
35165 posts
Posted on 6/3/22 at 10:14 am to
It’s going to tell us that we are a simulation
Posted by Bearcat90
The Land
Member since Nov 2021
2955 posts
Posted on 6/3/22 at 10:16 am to
AI will never reach the level of human thinking.

AI is simply a matrix of decision making.

Human brains can think in terms of divergent ideas that AI will never comprehend.

The scary part is if or when AI merges with humanity.
Posted by Landmass
Member since Jun 2013
18196 posts
Posted on 6/3/22 at 10:19 am to
I am increasingly convinced that the Amish may have been right.
Posted by LookSquirrel
Member since Oct 2019
6023 posts
Posted on 6/3/22 at 10:35 am to
Elon Musk says:

quote:

“robots will do everything better than us.” “I have exposure to the most cutting-edge A.I.,” Musk said, “and I think people should be really concerned by it.”
Posted by LSUbest
Coastal Plain
Member since Aug 2007
11357 posts
Posted on 6/3/22 at 10:40 am to
True AI like you are referring to is much more difficult to achieve than you imagine.

We can do it now but it's much too big to be a portable thing.
Posted by Marquesa
Atlanta
Member since Nov 2020
1542 posts
Posted on 6/3/22 at 12:58 pm to
AGI is already doing these things.

quote:

AI will never have the level of human thinking.

AI is simply a matrix of decision making.

Human brains can think in terms of divergent ideas that AI will never comprehend.

The scary part is if or when AI merges with humanity.
Posted by GumboPot
Member since Mar 2009
119074 posts
Posted on 6/3/22 at 1:13 pm to
quote:

The hard part for people to understand is that AGI will be able to teach itself, at computer speeds


I'll believe it when I see it. There are physical limits to how small a transistor can get, and electrons currently carry the information. I have not checked, but we may already be at the limit with commercial transistor sizes.

We need new technology to make the next leap. Quantum computing may make that leap but you are not carrying that around in your pocket. The hardware needs to be kept near absolute zero.

A photonic transistor is another option, but I don't know where that technology stands.
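A rough back-of-envelope on the transistor-size point above, using assumed numbers (a Si-Si bond length of about 0.235 nm and a physical gate length on the order of 14 nm; marketing node names like "3 nm" do not measure gate length):

# Back-of-envelope check of the "physical limits" point above.
# Both numbers are rough assumptions for illustration, not authoritative specs.
SI_BOND_LENGTH_NM = 0.235   # approximate Si-Si bond length in crystalline silicon
GATE_LENGTH_NM = 14.0       # assumed physical gate length at a leading-edge node

atoms_across = GATE_LENGTH_NM / SI_BOND_LENGTH_NM
print(f"A {GATE_LENGTH_NM} nm gate is only about {atoms_across:.0f} atoms across.")

A few dozen atoms across the channel is the sense in which the shrinking headroom is nearly gone.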
Posted by Big Scrub TX
Member since Dec 2013
33644 posts
Posted on 6/3/22 at 1:45 pm to
quote:

Do you think AGI will be an issue in the next 20 years, and if so, what should we do to protect ourselves?

I'm a long-term AGI maximalist but medium-term "meh". I think it might take 500 years to get there from here.
Posted by JAGuyHeh
Member since Aug 2021
179 posts
Posted on 6/3/22 at 2:46 pm to
Philosophically, AI is impossible.
Posted by Auburn1968
NYC
Member since Mar 2019
19802 posts
Posted on 6/3/22 at 2:55 pm to
quote:

AGI - will be able to teach itself - at computer speeds - which will lead to exponential growth.


Karl Sims did an "evolutionary software" experiment 30 years ago on a Thinking Machines supercomputer of the day. He created a basic graphics kernel and then applied small random changes to produce a set of 300 variations. The one that showed a touch of movement toward a goal was selected as the seed for the next 300 tiny variations. Even then, a vast number of iterations were executed. It was basically a goal-oriented self-programming exercise.

https://www.youtube.com/watch?v=JBgG_VSP7f8
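The loop described above is essentially a (1, λ) evolution strategy: generate a few hundred small random variations of a seed, keep whichever one moves closest to the goal, and repeat. A toy Python sketch under that reading, with the numeric target and mutation scale made up purely for illustration (Sims evolved graphics kernels, not numbers):

import random

POPULATION = 300      # variations per generation, as in the description above
GENERATIONS = 200
TARGET = 42.0         # hypothetical goal the seed is evolving toward

def fitness(x: float) -> float:
    """Higher is better: negative distance from the target."""
    return -abs(x - TARGET)

def mutate(parent: float) -> float:
    """Apply a small random change to the seed."""
    return parent + random.gauss(0.0, 0.5)

seed = 0.0
for _ in range(GENERATIONS):
    variants = [mutate(seed) for _ in range(POPULATION)]
    seed = max(variants, key=fitness)   # best variant seeds the next generation
print(f"after {GENERATIONS} generations the seed has drifted to {seed:.3f}")

Nothing in the loop knows how to reach the target; selection pressure alone pushes the seed toward it, which is the "goal-oriented self-programming" point above.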
Posted by Auburn1968
NYC
Member since Mar 2019
19802 posts
Posted on 6/3/22 at 2:58 pm to
quote:

I am increasingly convinced that the Amish may have been right.


Should we have a massive solar flare like the Carrington Event of the mid-1800s, they will be unaffected until the hungry hordes come out of Philly like armed locusts.
Posted by goatmilker
Castle Anthrax
Member since Feb 2009
64539 posts
Posted on 6/3/22 at 3:10 pm to
Serious question. Ever read any science fiction?