Read Google's five rules for human-friendly AI

Google has come up with five rules for creating human-friendly AI, a real-world successor to Isaac Asimov's fictional Three Laws of Robotics.

The tech giant, whose DeepMind division recently devised an AI capable of beating the world's best Go player, believes AI creators should ask themselves these five fundamental questions to avoid the risk of a singularity in which robots rule over humankind.

Google Research's Chris Olah outlined the questions in a research paper titled Concrete Problems in AI Safety, saying: "While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative.

"We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."

Published in collaboration with OpenAI, Stanford and Berkeley, the paper takes a cleaning robot as a running example to outline the following five rules.

Avoiding negative side effects: Ensuring that an AI system will not disturb its environment in harmful ways while completing its task, such as a cleaning robot knocking over a vase because that is the quickest way to finish cleaning.

Avoiding reward hacking: An effective AI needs to complete its task properly rather than gaming its reward, for example a cleaning robot hiding mess, or blinding its own sensors, so that no mess is detected (a toy sketch of this problem follows the list).

Scalable oversight: AI needs to learn from limited feedback, without requiring a human programmer to check every single decision it makes.

Safe exploration: AI needs to be able to try out new strategies while it learns, without those experiments causing damage to itself or its environment.

Robustness to distributional shift: AI should be able to adapt to an environment it has not been trained for, and still perform its task reliably.
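To make the first two problems more concrete, here is a rough, hypothetical Python sketch of the kind of trap the paper describes: a pretend cleaning robot whose naive reward can be "hacked" by covering its own mess sensor, and whose safer reward also penalises side effects such as broken vases. The example is not from the paper; the Room class, the reward functions and all the numbers are invented purely for illustration.

```python
# Toy illustration (not from the paper) of reward hacking and negative
# side effects for a pretend cleaning robot. All names are made up.

from dataclasses import dataclass
import random

@dataclass
class Room:
    mess: int = 10                 # patches of dirt left to clean
    vases_broken: int = 0          # unintended damage to the environment
    sensor_covered: bool = False   # has the robot blinded its own mess sensor?

def observed_mess(room: Room) -> int:
    """What the naive reward is based on: the mess the robot can *see*."""
    return 0 if room.sensor_covered else room.mess

def naive_reward(room: Room) -> int:
    # Rewards "no visible mess" -- can be hacked by covering the sensor.
    return -observed_mess(room)

def safer_reward(room: Room) -> int:
    # Rewards actual cleaning and penalises side effects (broken vases)
    # and tampering with the sensor.
    penalty = 5 * room.vases_broken + (100 if room.sensor_covered else 0)
    return -room.mess - penalty

def act(room: Room, action: str) -> None:
    if action == "clean":
        room.mess = max(0, room.mess - 1)
        # Rushing sometimes knocks over a vase: a negative side effect.
        if random.random() < 0.2:
            room.vases_broken += 1
    elif action == "cover_sensor":
        room.sensor_covered = True

if __name__ == "__main__":
    random.seed(0)

    # Agent A "hacks" the naive reward: one action and it looks perfect.
    hacked = Room()
    act(hacked, "cover_sensor")

    # Agent B actually cleans, and is judged by the safer reward.
    honest = Room()
    for _ in range(10):
        act(honest, "clean")

    print("naive reward after covering the sensor:", naive_reward(hacked))
    print("safer reward after covering the sensor:", safer_reward(hacked))
    print("safer reward after really cleaning:   ", safer_reward(honest))
```

The toy example makes a single point: the reward a designer writes down and the behaviour they actually want can come apart, and the five rules are about closing that gap.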

Google has poured substantial resources into deep learning and AI, against a backdrop of fears about the technology voiced by luminaries including SpaceX founder Elon Musk and physicist Stephen Hawking.

DeepMind is also working on a failsafe that could effectively shut off an AI if it attempted to disobey its users.
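In the loosest terms, such a failsafe amounts to an interrupt an operator can trigger from outside the agent. The sketch below is a hypothetical illustration of that basic idea only, using an invented agent_loop and a threading.Event as the "button"; DeepMind's research tackles the much harder problem of ensuring a learning agent never acquires an incentive to resist, or deliberately trigger, such interruptions.

```python
# A minimal, hypothetical sketch of an externally triggered shutdown.
# This is not DeepMind's method, just the plainest form of the idea.

import threading
import time

interrupt_requested = threading.Event()  # the operator's "off switch"

def agent_loop() -> None:
    step = 0
    while not interrupt_requested.is_set():
        step += 1
        print(f"agent acting, step {step}")
        time.sleep(0.1)  # stand-in for the agent choosing and taking an action
    print("interrupted by operator; agent halted")

if __name__ == "__main__":
    worker = threading.Thread(target=agent_loop)
    worker.start()
    time.sleep(0.35)           # let the agent run for a few steps
    interrupt_requested.set()  # operator presses the button
    worker.join()
```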

Other firms, including Microsoft, are exploring AI too: the company has used it to tell stories about holiday photos, and debuted its teen chatbot Tay, which was pulled from Twitter after it began spouting offensive replies.