Bill Gates, Elon Musk, Stephen Hawking and others have all voiced their concern about the distressing potential dangers of AI more than once, yet on we go pell-mell towards self-aware, self-governing machines.
We can’t even get security updates right without causing severe problems, but somehow think “We can do this. We can win!”.
Just a minor thought… a program, any program (including heuristic ones), is limited by its coding and by how, via this coding, it ‘learns’. The same is true of biological systems. Their form, their being carbon-based, their being subject to the laws of thermodynamics, and their sensitivities to the environment and to other biological entities all determine and limit how they learn.
Another minor thought, “If something can go wrong, it will.” Just ask God.
Now comes this report by Selmer Bringsjord (RPI, New York) (in New Scientist) regarding a test he ran using the classic “Wise-Men Puzzle” on three robots, two of which he silenced and one he didn’t. All three had auditory sensors.
“In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve. They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out. Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says. It then writes a formal mathematical proof and saves it to its memory to prove it has understood.” – New Scientist
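The robot's deduction in the quoted experiment can be sketched as a simple knowledge update. This is a toy illustration only, not Bringsjord's actual system (which used a formal theorem prover); the function names and the string-based "knowledge base" are invented for the sketch. The key step is that hearing its own voice lets the robot add a fact it could not deduce beforehand:

```python
# Toy sketch of the "dumbing pill" deduction. Not the real implementation;
# names and the set-of-strings knowledge base are invented for illustration.

def attempt_answer(knowledge):
    """Return what the robot says, given what it currently knows."""
    if "I was not silenced" in knowledge:
        return "I know now! I was not given a dumbing pill."
    return "I don't know"

def hear_own_voice(made_sound, knowledge):
    """Update knowledge from auditory feedback: only an unsilenced
    robot can produce sound, so hearing its own voice proves the
    robot was not given the pill."""
    if made_sound:
        knowledge.add("I was not silenced")
    return knowledge

knowledge = set()
first_try = attempt_answer(knowledge)   # before hearing anything: "I don't know"
made_sound = True                       # this is the robot that was not silenced
knowledge = hear_own_voice(made_sound, knowledge)
second_try = attempt_answer(knowledge)  # after hearing itself, it can answer
```

The interesting part of the experiment is not the logic itself, which is trivial, but that the robot treated its own utterance as new evidence about itself.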
Granted, this isn’t “full consciousness”, but it is conscious thought and shows a conception of ‘self’, or “the first-hand experience of conscious thought”.
There are those who are correct in saying that there’s a big difference between saying, “It’s sunrise,” and being able to enjoy the aesthetic experience of knowing who you are and being part of that sunrise. Perhaps central to the experience is knowing one is mortal, and what that sunrise signifies in terms of mortality and the passage of time. That knowledge generates compassion for others subject to the same passage, along with the recognition that each of us is at a different point in it.
Perhaps what I fear most, therefore, is a machine that has no compassion, one that acts for self-preservation without that essential quality, even if only through inaction, because it simply has no perception that it is doing wrong, since ‘right’ and ‘wrong’ are alien to it.
After all, even though very imperfect, we do have a system of checks and balances, ideas of morality, etc. which function (to some degree) to limit us.
If you don’t believe the craziness of all this, if you don’t believe this is real, read about how ‘killer robots’ were to be discussed at the U.N. Convention on Certain Conventional Weapons. You can look up the November 2014 meeting, and read more in the sources below.
Source:
https://www.newscientist.com/article/mg22730302-700-robot-homes-in-on-consciousness-by-passing-self-awareness-test/
http://www.computerworld.com/article/2970737/emerging-technology/are-we-safe-from-self-aware-robots.html
http://www.stopkillerrobots.org/2015/03/ccwexperts2015/
http://www.computerworld.com/article/2489408/computer-hardware/evan-schuman--killer-robots--what-could-go-wrong--oh--yeah----.html
Well, fellow meat bag, no machine has voting rights, or will. At least not until humans invest in upgrading themselves biologically and artificially. Also, what's stopping AI from being a capable intelligence is the HDD/SSD, imho. It's 2D like a book and works in straight lines of code; our brains are 3D, with many parts disagreeing, agreeing, staying silent, or making stuff up faster than any drive. That's basically the gist of it for me: if there are no DNA drives like you were showing a few days ago, then AI will continue to be just that. And I would imagine there would have to be a new language of P=NP syntax in 3D to make one that could form human-like connections.
So it's like global warming, a long way away and something that only might happen by 2050. Lol
If it feels the other two robots have been wronged in being silenced, then I'll start worrying. A simple logic puzzle is about as indicative of self-awareness as software checking for system requirements on install.
I read an article that deals with your Windows concerns (Ed Bott), and his suggestion is, if it really gets too bad on Windows or Apple, to go to Linux Mint or Ubuntu. So far I am okay with Windows 10, although I mind having to turn off a ton of bloat and crap. I couldn't stand Cortana, but Ed Bott likes it, so we definitely disagree there. The problem is that some of those items we disabled are still running in the background. It is exactly the same issue I have with my Android phone, which is very insecure at this point, with two major exploits out there that Google seems to be ignoring. And there's all that sharing… At some point, I may actually make the move to Linux, which still respects privacy, at least for now.
"Sharing" is really a euphemism for "invasion of privacy." I am on FB but hate it as everything there is tracked as well. Now add some very smart robots to all of this and there will be no need to worry about your phone or computer. Your faithful Robot will report everything for you or a cute little drone will scan your house and the whole neighborhood including any and all electronic devices. Google will send a robotic taxi to pick you up while smart drones deliver your groceries. Needless to say those wonderful "conveniences" can be used for other purposes as well. The next arms race should be a doozy!
Human beings will no longer be used for combat (or have a job). It will be left to smart robots instead. Let's hope they don't get confused about who their friends are and who they work for. Various countries will be kept busy trying to exploit each other's robots. The sheriff's department here has helicopters that fly at night to "keep us safe." Supposedly, they are equipped with the latest technology allowed by the feds. As for me, I am going to keep on watching the sunrise and sunset and enjoy the sheer beauty of the moon and those glorious stars at night - and the space station is visible as well. At least for now....
Ah, but what about the voting machines? The politicians will declare war on the voting machines that ruin their fixed elections.
Right...what could go wrong?
Artificial intelligence? Like, why would mankind attempt to create AI when mankind lacks the intelligence to use its collective brain intelligently?
True intelligence should tell us it is wrong to kill, rape and go to war, yet there are millions of effwits going around committing atrocities to fellow human beings.
No, mankind's attempt to create AI will not end well. What's that about creating something in one's own image? Building a robot to look human is one thing, but to create robots/machines that have human traits and 'think' like humans, what with our own flawed 'intelligence' as building blocks, is fraught with danger.
Yes, technology can be grand, and it can enhance our lives to no end, but it can also be abused and used for no good - just look at all the scammers and proliferators of malware on the internet - so just because we can create it doesn't mean we should. Sadly though, there will be those in positions of power and influence who would seek to exploit such technology, so it will be developed whether it's smart to do so or not.
Indeed, Mark. Just imagine the AI standing before the judge claiming innocence because it was hacked.
As of now, the whole "internet of things" is open to hacking (every home device imaginable) because "brilliant" humans dash off some insecure code and then it's off to the "bigger, better, shinier and newer" model.
Righto!
Like I said, if we can't use our intelligence intelligently, how could we expect a creation of human intelligence to function any better?
I mean, if it is human to err, then is it safe to assume that artificial intelligence would produce human errors?
Sadly, intelligence will not prevail.
Wonder at what point AI will get basic rights, and turning off your computer will be considered murder.
I hope somebody invents a kill switch to go along with it.
That has exactly nothing to do with intelligence [or lack of].
Morality, maybe, but definitely not intelligence....
Yeah, intelligence! Our intelligence tells us it is wrong to kill/rape, and that there can be/will be consequences/repercussions, yet there are those who still proceed with the crime. For me, that's unintelligence. Morality, on the other hand, is the definition of good [socially acceptable standards] and shows us the difference between right and wrong.
You're both right. Morality and intelligence are not mutually exclusive; an act might well be moral without being based solely on religious values.
That's exactly right! Morals are what society deems as being socially acceptable... and intelligent people abide by those standards/expectations.
At the end of the day, intelligence isn't what one knows or how much, rather it is how well one uses what they know.
I always imagined it as a race: either biomechanics/biomedicine beats out AI development, or we all die and the world ends up like it did in WALL-E or Terminator. Because upgrading our collective genes and capabilities is better than making an AI to do the same task.
DARPA
Simply make it a requirement that all sentient machines be programmed with the 3 laws of robotics hard coded in their processor.
Problem solved.
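For what it's worth, "hard coding" the laws amounts to wrapping every action in a veto filter, something like the toy sketch below. The rule text and the `would_harm_human` predicate are invented for illustration; in practice, reliably deciding that predicate is exactly the unsolved problem the laws quietly assume away:

```python
# Hypothetical sketch of "hard-coded" safety rules as an action filter.
# All names here are invented for illustration, not from any real system.

LAWS = [
    "a robot may not injure a human being",
    "a robot must obey human orders unless that conflicts with the first law",
    "a robot must protect itself unless that conflicts with the first two",
]

def would_harm_human(action):
    # Placeholder predicate: real-world harm prediction is the hard part,
    # and everything below is only as safe as this one function.
    return action.get("harms_human", False)

def permitted(action):
    """Veto any action the first law forbids; allow everything else."""
    return not would_harm_human(action)

safe = permitted({"name": "fetch coffee", "harms_human": False})
unsafe = permitted({"name": "push human", "harms_human": True})
```

The filter itself is trivial; the safety of the whole scheme rests entirely on the harm predicate, which is where any such hard coding would actually succeed or fail.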
Three laws, à la Isaac Asimov? In the movie adaptation "I, Robot," the three laws were interpreted by the uber-AI in a manner most of us humans would disagree with. I'm not sure hard-coding the 'Three Laws' would protect us. ...
True. And the laws will never be written to begin with. They don't make economic sense in our corporate world. There are too many production types pushing for the code to go out as soon as possible.
Money will always win over morality. We won't even think about putting the brakes on AI code until it's too late. That's our nature.
But rest assured, those who survive will point their collective fingers at those considered to be at fault. That's also our nature.
Sleep tight.
Truly sentient AI is a long way off. I don't think we'll have anything to worry about for quite a while.
You are right about money (mammon) having a higher loyalty than 'the milk of human kindness.' I was reminded of this quite strongly when watching reruns of Lost Girl, in anticipation of finally getting access to the new season. In the episode where Kenzie goes to call on the Norn (an uber hybrid of the trickster god and Earth Mother Gaia's number two) wielding a chainsaw, the Norn is in total disbelief that Kenzie, a mere human, not a Fae, would even consider hurting the big Tree. Kenzie's reply goes something like this: "Humans? We pollute every lake we find ... We'll happily burn the planet to a crisp if it means just one more cheeseburger." Yes, sadly, you are right. Kenzie told me so...
I love that series... one of Canada's best. I won't tell you how the final [and last] season goes, but suffice it to say it is a doozy and right up there with plot twists and excitement.
Yes, Lost Girl show really sucks....
Now, apologies for attempted thread hijacking, back to AI concerns....
Yep! Sucks viewers in for the long haul... cos there's no escaping once you're in.
Yeah, one of the concerns I have is when AI is as perfected as it could be... and politicians get AI transplants so they finally have intelligence.
I mean, they do enough damage when unintelligent.
Eventually, AIs will become self-actualizing. Human engineers are already developing ways for the proto-AIs we now have to redesign themselves 'better.' Since they iterate through many generations within each of our human generations, they will evolve way faster than we do. Indeed, based on what I have seen and learned about human history, we haven't really evolved in the past 20,000 years - and our most decisive 'advantage' (intelligence) is also our Achilles' heel, because we subvert our intelligence, via 'rationalizations,' to do really stupid, evil, and self-centered things. Planet earth is just a bigger Easter Island, and destroying the web of life that supports us will take longer than it did on Easter Island - but we will. And the 'artificial' AIs we leave behind will long outlast us. Who knows, maybe in the far future, when the universe winds down, they will proclaim "42" and then say "let there be light?"
I'm not just concerned about AIs evolving and replicating much faster than we humans do, I'm concerned that AIs will see humans as irrelevant and superfluous to their needs, therefore deciding to kill us all off to enable wiser use of the world's resources. I mean, AIs that have been programmed with all human intelligence and more will take one look at us and think most of us are too thick to be spared... and with 'human' intelligence they'll sure know how to kill, won't they.
Frankly, many inventions and leaps in technology have been made purely so mankind can be lazy... and AI is just another step along that path of mankind becoming more slovenly and sloth-like, only this time it will come to bite mankind in its rear... considerably harder than it's ever been bitten before.
You need to read my old fave comic [was a Gold Key one] - Magnus the Robot Fighter ...