Google updates Asimov's Three Laws of Robotics for AI developers
Google has come up with five rules for creating human-friendly AI, intended to supersede Isaac Asimov's Three Laws of Robotics.
The tech giant, whose DeepMind division recently devised an AI capable of beating the world's best Go player, believes AI creators should ask themselves these five fundamental questions to avoid the risk of a singularity in which robots rule over humankind.
Google Research's Chris Olah outlined the questions in a research paper titled Concrete Problems in AI Safety, saying: "While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative.
"We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."
Published in collaboration with OpenAI, Stanford and Berkeley, the paper uses a cleaning robot as an example to outline the following five rules.
Avoiding negative side effects: Ensuring that an AI system will not disturb its environment in negative ways while completing its tasks.
Avoiding reward hacking: An effective AI needs to complete its task properly without gaming its reward function or cutting corners (see the sketch after this list).
Scalable oversight: AI needs to learn from human feedback without requiring constant supervision from a human programmer.
Safe exploration: AI needs to be able to try new strategies without damaging objects in its environment as it performs its task.
Robustness to distributional shift: AI should be able to adapt to an environment it was not trained for, and still perform its task.
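The paper frames these as design problems for machine learning systems. The toy Python sketch below is a hypothetical illustration, not code from the paper: it shows the first two rules using the cleaning-robot example, where a naive reward that only counts dust collected can be gamed, while a simple penalty for disturbing the environment removes the incentive.

```python
# Toy sketch (hypothetical, not from the paper): a naive "proxy" reward for a
# cleaning robot can be gamed, while a side-effect penalty removes the incentive.

def proxy_reward(dust_collected: int) -> float:
    # Naive objective: reward only the amount of dust collected. An agent can
    # game this by knocking things over to create more mess to clean up.
    return float(dust_collected)

def shaped_reward(dust_collected: int, objects_disturbed: int,
                  impact_weight: float = 5.0) -> float:
    # Same objective, plus a penalty for disturbing the environment
    # (avoiding negative side effects).
    return float(dust_collected) - impact_weight * objects_disturbed

# A careful clean versus a destructive "shortcut":
print(proxy_reward(dust_collected=10))                        # 10.0
print(proxy_reward(dust_collected=30))                        # 30.0 - hacking pays off
print(shaped_reward(dust_collected=30, objects_disturbed=4))  # 10.0 - penalty cancels the gain
```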
Google has poured resources into developing deep learning and AI, against a backdrop of fears about the technology voiced by luminaries including SpaceX founder Elon Musk and physicist Stephen Hawking.
DeepMind is also working on a failsafe that would effectively shut off an AI in the event it attempted to disobey its users.
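The details of that failsafe have not been spelled out here, but the basic idea can be sketched as an agent loop that honours a human interrupt signal. The Python below is a hypothetical illustration of that idea, not DeepMind's mechanism, and it does not model the further requirement that the agent have no incentive to resist being interrupted.

```python
# Hypothetical sketch of an "off switch": the agent's control loop
# honours an interrupt signal set by a human operator.

class InterruptibleAgent:
    def __init__(self):
        self.interrupted = False

    def interrupt(self):
        # Set by a human operator; the agent itself never clears this flag.
        self.interrupted = True

    def step(self, observation):
        if self.interrupted:
            return "halt"              # stop acting immediately
        return self.act(observation)

    def act(self, observation):
        # Placeholder policy for the sketch.
        return "clean"

agent = InterruptibleAgent()
print(agent.step("dusty floor"))  # clean
agent.interrupt()
print(agent.step("dusty floor"))  # halt
```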
Other firms are exploring AI too: Microsoft, for example, has used AI to tell stories about holiday photos and debuted its teen chatbot, Tay, which spouted rude replies on Twitter.