Quote:
Originally Posted by aussiblue
Another thought provoking article.
Even the Sci-Fi writers of the 30s, 40s and 50s had these concerns. Isaac Asimov was probably the most notable in this area, especially with his early story collection I, Robot. The "positronic brain" that all his robots had was instilled with several fundamental laws that overrode any other commands they could be given. Chief among what those laws mandated: it was impossible for a robot to cause any harm to a human.
I wonder if these AI experts have this concept in mind. Even if they did, could it ever be implemented correctly, without flaws and paradoxes? Think about it: even the simplest software "upgrade" will introduce all kinds of "gotchas".