Thursday, December 10, 2015

Is Artificial Intelligence Dangerous?

I recently listened to a Science Friday podcast discussing concerns about the dangers of artificial intelligence.  It has been a hot topic, with people like Bill Gates and Stephen Hawking widely quoted in the media warning about the risks of AI.  The guests were Stuart Russell, Eric Horvitz, and Max Tegmark.

The topic has been talked about for decades, but it is now being discussed seriously because it is no longer just in the realm of science fiction.

Isaac Asimov’s Three Laws of Robotics are often referred to (a rough sketch of their priority ordering follows the list):

1.   A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.   A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.   A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
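What makes the Laws interesting is that they form a strict priority ordering: each law only applies when it does not conflict with the laws above it.  As a toy illustration only (the Action fields and function name below are my own simplified stand-ins, not anything from the podcast), that ordering might look something like this:

```python
from dataclasses import dataclass

# A toy illustration only: the boolean fields are hypothetical stand-ins,
# and real AI safety cannot be reduced to a few flags like this.
@dataclass
class Action:
    injures_human: bool = False         # harm through action or inaction (First Law)
    disobeys_human_order: bool = False  # Second Law
    endangers_robot: bool = False       # Third Law

def allowed_by_three_laws(action: Action) -> bool:
    """Check an action against the Laws in strict priority order."""
    if action.injures_human:           # First Law outranks everything
        return False
    if action.disobeys_human_order:    # Second Law applies only if the First is satisfied
        return False
    if action.endangers_robot:         # Third Law has the lowest priority
        return False
    return True

# An order to harm someone fails the First Law even though refusing it
# would violate the Second: the higher law wins.
print(allowed_by_three_laws(Action(injures_human=True)))  # False
```

The hard part, of course, is that a real system cannot reduce "harm" or "obedience" to simple flags, which is exactly why the questions below matter.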

This raises two questions: Can human beings build a superintelligence that is unable to deviate from these three rules?  And is it worth the risk?

I think we can and it is, but we need to proceed with caution.  We need to take our time and examine things from every angle.  Rushing to create something just because we can is where the danger lies.  With many previous technologies, mistakes were made and damage was done, but it was not irreparable.  The guest speakers made a good point: we may only have one chance to get artificial intelligence right.

Artificial intelligence is going to happen regardless of who is against it.  It is too big a discovery for humanity to push aside; it is simply not in our nature.  The discussion now needs to be: How do we do this responsibly?

Check out the podcast for yourself:

