re: Let's talk singularity.....
Posted on 5/29/14 at 12:36 pm to Korkstand
quote:
That said, I'm not sure it's possible to create a machine that is conscious or capable of actual thought.
That depends on your philosophical views. Is consciousness emergent? Or does it require something else?
If it's emergent, then it's merely the result of a powerful processor (the brain). If not, then we will do our best to simulate intelligence, which could be even worse, as it would lack anything that would qualify as a conscience.
This post was edited on 5/29/14 at 12:37 pm
Posted on 5/29/14 at 12:48 pm to SidewalkDawg
quote:
That depends on your philosophical views. Is consciousness emergent? Or does it require something else?
I don't think it depends on my views so much as it depends on the true nature of consciousness, which we don't yet understand and may never understand.
quote:
Is consciousness emergent? Or does it require something else?
Getting religious here: does it require a "soul"?
quote:
If it's emergent, then it's merely the result of a powerful processor (the brain).
We have clusters that we believe are more powerful than a human brain, but nothing is going to emerge from them that hasn't been programmed.
quote:
If not, then we will do our best to simulate intelligence, which could be even worse, as it would lack anything that would qualify as a conscience.
This raises new questions:
Can a non-conscious intelligence be taught the difference between good and evil, right and wrong?
Could it produce original thoughts? Would it ever even think about improving itself?
Posted on 5/29/14 at 12:53 pm to SidewalkDawg
Assuming the AI does develop, and it's super-intelligent, exponentially beyond human capacity, what does that mean? It would still be a program running on an HDD, right? Let's say it determines that humans are a threat. How exactly does it stop that threat? Is it going to turn light switches into trick electrocution switches so that every human who comes into its server room dies? (Okay, I know that's ridiculous, but seriously, what?) Is it going to build a gun? Set off nuclear weapons?
It would seem likely that its main capacity for disrupting human behavior would be through a network which could affect economies. Is it going to electronically forge documents and call in the police to have us arrested? It can run DDoS attacks and create super awesome computer viruses all day long, but that's not going to wipe out humans.
Self-replicating nano-technology. If all humans had nanobots in them AND the nanobots were networked, then yeah, I guess the AI could tell the nanobots to kill us. But again, until the AI has the capacity to actually make physical objects, it's going to need humans to support its infrastructure.
Posted on 5/29/14 at 1:03 pm to Korkstand
quote:
Getting religious here: does it require a "soul"?
Yes, the "something else" I was referring to would be a soul. Traditionally, the argument over the nature of consciousness has been between emergent and embodied consciousness.
quote:
We have clusters that we believe are more powerful than a human brain, but nothing is going to emerge from them that hasn't been programmed.
Yes, more powerful at their designed task: data processing. Humans are particularly poor at data processing; a simple calculator is superior to us. This again goes back to our poor understanding of the brain and consciousness; it obviously involves more than just processing power.
quote:
This raises new questions:
Can a non-conscious intelligence be taught the difference between good and evil, right and wrong?
Could it produce original thoughts? Would it ever even think about improving itself?
These I cannot answer definitively. We just don't know.
Posted on 5/29/14 at 1:24 pm to SidewalkDawg
quote:
This again goes back to our poor understanding of the brain and consciousness; it obviously involves more than just processing power.
This is why I think that producing a true AI might be impossible. It seems very possible, to me, that the brain might not be capable of fully understanding itself.
Posted on 5/29/14 at 3:14 pm to Ye_Olde_Tiger
quote:
Let's say it determines that humans are a threat. How exactly does it stop that threat? Is it going to turn light switches into trick electrocution switches so that every human who comes into its server room dies? (Okay, I know that's ridiculous, but seriously, what?) Is it going to build a gun? Set off nuclear weapons?
Well, clearly you haven't seen the potential for mayhem that an AI gone amok could create in a computer-controlled "Smart Home". An AI can easily figure out how to do more than turn lights on and off, set thermostats, start ovens, turn on music or TV, and unlock doors. For instance, it could overload circuits and start wiring fires in the middle of the night across whole plugged-in neighborhoods (once we humans come to think ourselves dependent on it), and it could disable smoke/fire detectors so no help would come. We're suckers for convenience. How about scalding-hot water pouring on you for your morning shower (it might even start normally so you'd step in, then...)?
Those are just a couple of possibilities; think of the chaos and destruction just those two things could create in whole communities and cities. Go larger scale: office towers, hi-rise apartments, government buildings, airports, submarines. And that's without firing off a single nuke. If an AI wanted to, for its own inscrutable reasons, it could plunge us all into a barbaric new stone age, and then let nature take its course.
ETA: I think you're right that AI could disrupt whole economies as we depend more and more on computerized banking, stock trading, purchasing, delivery of goods, etc. Imagine all of that happening simultaneously.
This post was edited on 5/29/14 at 3:18 pm