Spinoff from OT: Artificial Intelligence Scenarios
Posted on 3/24/15 at 4:03 am
How can you see AI destroying us? Can failsafes be built in?
Are there any good books on this?
Posted on 3/24/15 at 9:16 am to CCT
IMO, if it is ever created, a super intelligence will eventually make us dumb. It will be the last thing we ever need to invent as it will do all the future inventing for us (there isn't a single accepted definition, but to be defined as a true AI, most experts believe it must be capable of creating new ideas). Not sure what will happen after this point. We either suffer or prosper at the hands of the AI.
Posted on 3/24/15 at 10:21 am to surprisewitness
quote:
IMO, if it is ever created, a super intelligence will eventually make us dumb. It will be the last thing we ever need to invent as it will do all the future inventing for us (there isn't a single accepted definition, but to be defined as a true AI, most experts believe it must be capable of creating new ideas). Not sure what will happen after this point. We either suffer or prosper at the hands of the AI.
Three things would probably happen:
A) Immortality. What this looks like is up to the imagination; the AI would rid the world of sickness, poverty, death, etc. But would we be part of a system, or would we still be individuals? Who knows.
B) It would grow explosively. Without us even blinking, it would surpass the combined intelligence of the entire world within milliseconds.
C) Space travel would become a thing of the now. With a self-learning AI at the helm, we would spread farther and more efficiently than any sci-fi movie has ever predicted, which is a good thing because of A.
Posted on 3/24/15 at 10:30 am to BaddestAndvari
The most dangerous part about this idea is that it doesn't even have to be malicious intent. They can be transformational for the human race, in a good way. It all boils down to how we initially design them. AIs are driven to do what they are programmed to do; it's all about putting appropriate constraints in place to keep the AI within the bounds of what you intended. It is generally accepted that AIs will not be able to interpret our intentions the way we meant them in the initial programming. Take this example, pulled from the above-linked article:
quote:
A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
“We love our customers. ~Robotica”
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”
Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
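The rating loop described in the story is a classic self-scoring feedback loop. A toy sketch (all names and the similarity metric are hypothetical stand-ins, not anything from the article) of how such a GOOD/BAD threshold rating might work:

```python
# Toy sketch of the feedback loop from the story: write a note, compare it
# to stored handwriting samples, and label the attempt GOOD or BAD based
# on a similarity threshold. The metric here is a crude stand-in.

THRESHOLD = 0.8

def similarity(note, sample):
    # Stand-in metric: fraction of positions where the characters match.
    matches = sum(a == b for a, b in zip(note, sample))
    return matches / max(len(note), len(sample))

def rate_note(note, samples, threshold=THRESHOLD):
    # GOOD if the note sufficiently resembles the closest stored sample.
    best = max(similarity(note, s) for s in samples)
    return "GOOD" if best >= threshold else "BAD"

samples = ["We love our customers. ~Robotica"]
print(rate_note("We love our customers. ~Robotica", samples))  # GOOD
print(rate_note("We lvoe our custmers - Robotica", samples))   # BAD
```

The danger the article is pointing at lives in the last line of Turry's goal: "continue to learn new ways to improve your accuracy and efficiency" puts no bound on what counts as an improvement.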
Posted on 3/24/15 at 10:42 am to TigerFanatic99
I should add that Part 2 of the article (reference: waitbutwhy.com) goes into explicit detail about why and how Turry did everything she did. It all makes a lot of sense. It's all pretty scary.
This post was edited on 3/24/15 at 10:43 am
Posted on 3/24/15 at 11:27 am to surprisewitness
quote:
a super intelligence will eventually make us dumb. It will be the last thing we ever need to invent as it will do all the future inventing for us
If we become dumb it will be by choice, not by force. Idiocracy/WALL-E is one way we can go, but Star Trek is another. They have all kinds of fancy, magic-level technology, yet they still manage to find things to do other than consume.
Regarding the OP in general: if we get to a Skynet type of situation, I don't think it will be a malicious computer taking over. We will willingly hand the keys to society over because the AI gives us something we want, just like we are trading privacy for convenience right now.
This post was edited on 3/24/15 at 11:30 am
Posted on 3/24/15 at 9:31 pm to BlackHelicopterPilot
That article pretty much wasted my afternoon
Great read though
Posted on 3/25/15 at 3:37 am to CCT
quote:
How can you see AI destroying us? Can failsafes be built in?
No, I don't see it destroying us. Of course failsafes can be built.
At the end of the day, AI still has to be programmed and maintained by someone. Until we solve the problem of creating a true NFA (nondeterministic finite automaton), there will only be a finite number of transitions from each state. That means there will always be some measure of predictability, by the nature of a DFA (deterministic finite automaton). We are still in control of it. Unless someone creates it to destroy us, it won't come close.
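The determinism point can be made concrete. A minimal sketch (hypothetical states and inputs) of a DFA, where each (state, input) pair maps to exactly one next state, so the same input sequence always produces the same result:

```python
# A DFA has exactly one transition per (state, symbol) pair, so its
# behavior is fully predictable for any input sequence.

# Transition table: (state, symbol) -> next state
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "idle",
    ("running", "stop"): "halted",
}

def run_dfa(start, inputs):
    state = start
    for symbol in inputs:
        # Deterministic: the next state is uniquely determined
        # (undefined pairs stay in the current state here).
        state = TRANSITIONS.get((state, symbol), state)
    return state

# The same input sequence always yields the same final state:
print(run_dfa("idle", ["start", "stop"]))  # halted
```

Worth noting for the argument above: in formal language theory, every NFA can be converted to an equivalent DFA, so nondeterminism alone doesn't add unpredictable behavior; the poster's distinction is more intuition than theorem.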