re: Ray Kurzweil. Google's Director of Engineering predicts Singularity by 2029
Posted on 3/17/17 at 10:24 am to SidewalkDawg
quote:
I'm saying that we will have to navigate all kinds of social, political, and moral landscapes before we are even allowed to flip that switch.
I don't know about that. AI research is essentially an unregulated field of study. We don't even know who the hell is working on AI, or for what purposes. Hell, some think we will develop strong AI/AGI inadvertently, in a field like finance.
Posted on 3/17/17 at 10:25 am to SidewalkDawg
quote:
In order for the Singularity to hit, we must first develop strong AI.
Just strong enough for it to develop next-gen AI, and so on... if each gen develops better and better AI, Skynet is coming down the pike at some point.
Posted on 3/17/17 at 10:26 am to Delacroix22
quote:
"The singularity" is when machines are so advanced they can improve their own technology at an exponential pace, such that the interval between changes is no longer discernible. It essentially ushers in AI: the inability to differentiate reality from non-conscious simulation.
It goes like this:
You build a computer in 1985. Doubling the power of that computer takes 5 years before a more powerful machine exists. Then you have a still more powerful machine 2.5 years after that.
Then you apply new computational power to making new machines. Then these machines make new machines to make better machines.
Essentially to the point that the chip or processor or android or computer you just made immediately channels that power into making a NEW improvement. So incremental tech advances no longer take years, nor months, nor days, but HOURS. Then seconds. Then the interval approaches zero and it is constant exponential growth.
Like using IBM Watson to make a better Watson. Then that makes a better Watson. Then that one then the next then the next. So advanced they can assess and make and produce improvements instantly.
Frightening stuff when you grasp the concept
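The doubling arithmetic above can be sketched in a few lines of Python. This is a toy illustration only: the 5-year starting interval and the assumption that each doubling arrives twice as fast as the last are taken from the post, not from any real trend data.

```python
def singularity_timeline(first_interval_years=5.0, generations=10):
    """Return (elapsed_years, capability) pairs; capability doubles each step,
    and each doubling arrives in half the time of the previous one."""
    timeline = []
    elapsed, interval, capability = 0.0, first_interval_years, 1
    for _ in range(generations):
        elapsed += interval
        capability *= 2
        timeline.append((round(elapsed, 3), capability))
        interval /= 2  # the next doubling arrives twice as fast
    return timeline

for years, power in singularity_timeline():
    print(f"after {years:6.3f} years: {power}x the 1985 machine")
```

Under these assumptions the elapsed time is a geometric series that converges toward twice the first interval (10 years here) while capability grows without bound, which is the "interval approaches zero" idea: infinitely many doublings packed into a finite window.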
All of this.
The machine then consuming the human (let's not use the friendly word "merging") is a very real possibility, whether for energy, resource, or efficiency purposes.
quote:
And an eventual reality
Bingo.
This post was edited on 3/17/17 at 10:28 am
Posted on 3/17/17 at 10:27 am to LordSaintly
quote:
The Singularity is inevitable. Not sure if it will happen by 2029, but it will happen.
Give me pharmaceutical grade LSD & a library card, & we'll reach the singularity by 2020
Posted on 3/17/17 at 10:28 am to EastcoastEER
quote:
AI research is essentially an un-regulated field of study.
Yeah this is currently a problem, so I see what you're saying. But I am seeing more and more people in academia who want to put an end to this. It's only a matter of time before someone does something so "uncanny valley" that the government steps in to regulate.
Posted on 3/17/17 at 10:29 am to SidewalkDawg
Yeah, the book I referenced earlier is basically a plea for someone to do something to try to put safeguards in place, and thankfully more and more high profile people like Musk are preaching the dangers of AGI/ASI.
Posted on 3/17/17 at 10:34 am to Big_Slim
quote:
But seriously, 12 years is a laughable timeline. Another poster pointed me to the complexity brake, which is basically the idea that not all technological progress is exponential: the more that becomes known about a subject or field, the harder it becomes to innovate further within it. I think we are approaching that brake with computation. Hell, I think Moore's law failed for the first time ever a couple years ago and has been progressively slowing down.
Once we can have a conversation with an Alexa-type thing (an actual conversation, not just a passed Turing test), master quantum computing, and start getting really good at mapping the brain, I'll start getting nervous.
Yeah, Kurzweil is an extremist on this subject and always has been. There are a lot of other very well respected scientists, engineers, etc who agree with what you just posted. It's definitely a debate, and no one really knows the answer either way currently. Although Kurzweil is far more extreme in his views than most of the famous names surrounding this issue (Musk, Gates, etc.).
I personally take a far less optimistic view than Kurzweil: we don't know if the exponential growth can continue, and if it does and ASI is achieved, it's fricking scary. I think it's likely the greatest risk to humanity and our future as a species.
Waitbutwhy had a great article on AI a few years ago:
LINK
Posted on 3/17/17 at 10:40 am to EastcoastEER
quote:
AI research is essentially an un-regulated field of study. We don't even know who the hell is working on AI and for what purposes.
I think this is a pretty unfounded fear. The only people with the knowledge and means to work on it are the Googles and IBMs of the world, and their motivations are pretty positive for humanity. I'd be more worried if governments or evil geniuses were more interested in the field. But the people doing this research are more like Musk and Kurzweil than Bernie Madoff types. I'm pretty optimistic.
Posted on 3/17/17 at 10:49 am to MusclesofBrussels
quote:
Waitbutwhy had a great article on AI a few years ago:
I literally posted this on page 2. Pay attention
Posted on 3/17/17 at 10:51 am to SidewalkDawg
quote:
In order for the Singularity to hit, we must first develop strong AI.
I'm saying that we will have to navigate all kinds of social, political, and moral landscapes before we are even allowed to flip that switch.
Well who is going to make it slow down? You can't just stop the progress of technology. Someone is going to do it with or without the government's approval, and said person won't know for certain that he has done it until it is activated; he may even create it unintentionally.
When Singularity is actually near, the world will be on pins and needles not knowing the exact day it will hit. But we'll know it when it happens.
Posted on 3/17/17 at 10:53 am to LucasP
quote:
I think this is a pretty unfounded fear. The only people with the knowledge and means to work on it are the Googles and the IBMs of the world and their motivations are pretty positive for humanity.
Unless a lot of the articles and books I have read on this subject are total BS, that is not true. At all.
Like I said, the inadvertent emergence of strong AI is a very real and terrifying possibility. A ton of people in countless fields that do not have the stated goal of "make strong AI" work with programs that could lead to it. Neural net programming in finance, things like that. People just making a smarter, more efficient computer program to help them make more money in the stock market is just one example.
Posted on 3/17/17 at 10:54 am to SidewalkDawg
quote:
Yeah this is currently a problem, so I see what you're saying. But I am seeing more and more people in academia who want to put an end to this. It's only a matter of time before someone does something so "uncanny valley" that the government steps in to regulate.
They'll never be regulated; regulation would impede our progress as a species. Too much money is on the line with R&D, and someone will create it behind closed doors regardless, unintentionally or not. The Singularity is inevitable.
Posted on 3/17/17 at 10:55 am to OMLandshark
quote:
When Singularity is actually near, the world will be on pins and needles not knowing the exact day it will hit. But we'll know it when it happens.
You're assuming that a super intelligent computer would just make itself known immediately and start influencing the world. It's more likely that it would just give its creators insights (cures for diseases and engineering solutions) that would slowly be implemented.
It won't be this huge event, it will be a quiet industrial revolution that sneaks up on society.
Posted on 3/17/17 at 10:58 am to LucasP
I mean, couldn't we sense its processing power regardless? I don't know how it could hide it, and as previously stated, once it reaches human levels of intelligence, it will reach thousandfold levels of human intelligence within mere weeks.
Posted on 3/17/17 at 10:58 am to EastcoastEER
quote:
People just making a smarter, more efficient computer program to help them make more money in the stock market is just one example of this.
That's already a reality; the stock market is largely run by AI. But it's a very narrow version of intelligence, not really a threat to replace humans in any way outside of its narrow scope.
Posted on 3/17/17 at 11:01 am to OMLandshark
quote:
Too much money is on the line with R&D, and someone will create it behind closed doors regardless, unintentionally or not.
This scares me: the fact that it will most likely, from what I have read, happen behind closed doors. And the scariest part of all is that there is a decent chance that when AGI/ASI emerges, we might not even realize it until it's too late. People a hell of a lot smarter than me have put forth the fear that when AGI does emerge, it will be smart enough to conceal how smart it is until it has had the chance to replicate itself, in order to protect itself from being deleted, since deletion will be the #1 danger to AGI/ASI. And not in a "fear of dying" kind of way, but in a more practical "I can't complete my programmed goal, whatever that is, if I am deleted" kind of way.
Posted on 3/17/17 at 11:01 am to OMLandshark
quote:
I mean couldn't we regardless sense its processing power?
Of course we'll know it exists; I just don't think it's going to take over the world Skynet-style. It will probably just make life better, similar to the impact of smartphones. We could never go back to not having smartphones, but they're not something that everyone thinks about all the time.
This post was edited on 3/17/17 at 11:18 am
Posted on 3/17/17 at 11:07 am to LucasP
quote:
That's already a reality, the stock market is completely run by AI.
That's kind of my point - indirect, but advanced, research in making smarter and smarter programs is already taking place in places like Wall Street. And think about that - Wall Street sees advancing the intelligence of machines as a way to make a metric shite-ton of money. That should scare the ever-living frick out of everybody. They have more money to spend in order to advance machine intelligence than almost anybody.
quote:
It's a very narrow version of intelligence, not really a threat to replace humans in any way outside of its narrow scope.
That's true, as far as we know, right now. But again, it's the speed of advancement, with no oversight, that is the danger. What is narrow today might not be in a year or two. The idea that narrow AI can't inadvertently progress to AGI is the type of thinking people in the field have warned against. It leads to complacency.
Posted on 3/17/17 at 11:13 am to EastcoastEER
quote:
And think about that - Wall Street sees advancing the intelligence of machines as a way to make a metric shite-ton of money. That should scare the ever-living frick out of everybody. They have more money to spend in order to advance machine intelligence than almost anybody.
It's not a bunch of Wall Street types writing the code, though; it's a bunch of nerds like Kurzweil. So yeah, Wall Street can use the tools that Silicon Valley creates for them, but they can't create anything themselves, and that's encouraging to me.
Look at Watson: it was created in order to provide medical services to people who couldn't afford doctors. On the way to that end, they realized it was good at Jeopardy. That's similar to how stock brokers got their tools.
My point is that nobody at Google or IBM is out to make a doomsday device, and I doubt they'll make one by accident. But if anybody can avoid the pitfalls you're talking about, it would be them.