How businesses can avoid dangerous AI

Artificial intelligence (AI) is one of the technologies that will dominate the business, consumer and public sector landscape over the next few years. Technologists predict that, in the not-too-distant future, we will be surrounded by internet-connected objects capable of tending to our every need.

While AI development is still in its early stages, the technology has already shown it's capable of competing with human intelligence. From beating humans at chess to writing computer code, it can already outperform people in many areas. Newer AI systems can even learn on the fly to solve complex problems more quickly and intuitively.

But while AI presents many exciting opportunities, there are also plenty of challenges. Doomsday scenarios predicting that smart machines will one day replace humans are scattered across the internet. Speaking to CNBC, respected Chinese venture capitalist Kai-Fu Lee said AI machines will take over 50% of jobs in the coming decade.

Although businesses are ploughing billions of dollars into this lucrative market, many of the world's most prominent figures in innovation and science have called for regulation. Tesla CEO Elon Musk and renowned physicist Stephen Hawking are among those who have voiced concerns over the rise of artificial intelligence.

How real these concerns turn out to be remains to be seen, but even now AI systems used in business can pose risks not just to the companies that deploy them, but to the public at large.

While organisations at the cutting edge of AI development should spend at least some of their time preventing the rise of the machines, everyday organisations also have a role to play in protecting us all from artificial intelligence gone awry.

One solution doesn't fit all

Automated technologies are incredibly diverse and span a range of use cases. As a result, it's quickly apparent that there isn't one simple answer to ensuring the safety of AI. Matt Jones, lead analytics strategist at technology consultancy Tessella, says keeping AI safe comes down to the data a business possesses. "It's important for businesses to remember that there is never a 'one-size-fits-all' solution. It all depends on the data at the company's fingertips; this will influence the risk involved, and therefore how dangerous the wrong decision can be," he says.

"For instance, using AI to spot when a plane engine might fail is a very different matter to trying to target consumers with an advert for shoes. If AI for the latter goes wrong, you may lose a few potential customers, but the damage isn't long term. However if the former goes wrong, it could lead to fatal consequences.

"There is however a series of steps businesses can take to ensure that AI works for the specific application it is required for. This includes having access to the right people to initially turn the data you're using into organised and correctly structured data that will help avoid issues once the AI platform is up and running."

To get the most out of data and analytics, Jones explains that companies need to invest in the right talent. By doing this, companies can avoid disaster scenarios and reduce human error. "It's about understanding the risks involved and partnering with AI experts to define basic governance processes that ensure safe decisions are continuously made. Human oversight of any decision an AI makes is vital, as it's this oversight that will determine if corrective measures are required, such as retraining or remodelling. For example, a company might take random samples of AI outcomes and cross-reference them against the corresponding human decision in order to keep the system in check," he explains.
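What such a sampling check might look like in practice is sketched below; the function names, the ten per cent sample rate and the five per cent disagreement threshold are all illustrative assumptions rather than anything Jones prescribes:

```python
import random

def audit_sample(ai_decisions, human_review, sample_rate=0.1, seed=42):
    """Cross-reference a random sample of AI outcomes against human judgment.

    ai_decisions: list of (case_id, ai_outcome) pairs the model has produced.
    human_review: callable returning the human decision for a given case_id.
    Returns the disagreement rate, a simple trigger for retraining or remodelling.
    """
    rng = random.Random(seed)
    sample = rng.sample(ai_decisions, max(1, int(len(ai_decisions) * sample_rate)))
    disagreements = sum(
        1 for case_id, ai_outcome in sample if human_review(case_id) != ai_outcome
    )
    return disagreements / len(sample)

# Dummy data: 1,000 AI outcomes, with a stand-in reviewer for illustration.
decisions = [(i, i % 2) for i in range(1000)]
rate = audit_sample(decisions, human_review=lambda case_id: 0)
if rate > 0.05:
    print(f"Disagreement rate {rate:.1%}: corrective measures may be required")
```

The point of the design is that the model never audits itself: the sample is drawn at random and the verdict comes from the corresponding human decision, exactly the oversight loop Jones describes.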

Security is vital

Another big fear surrounding artificial intelligence is the potential for cyber crooks to break into systems. Car manufacturers have learned very public lessons on this issue, particularly at the hands of Charlie Miller and Chris Valasek, two security researchers who demonstrated it was possible to remotely take control of vehicles from various manufacturers through their onboard computers. These systems, of course, host increasingly complex AI software, not just in fully self-driving vehicles but also in those with semi-autonomous systems such as Tesla's Autopilot feature, presenting a different kind of AI threat.

Ross Thomson, principal consultant at security firm Amethyst, says businesses must implement sufficient safeguards to stop attackers gaining access to AI systems.

"While the threat of AI killing machines has hit the headlines, we should not forget about industrial robots. AI will increasingly be used in workplaces, heightening security risks that are already more serious than most industrial robot users realise. Operators must make security a key factor when sourcing new industrial robots, selecting a manufacturer that shows commitment to the issue and provides frequent software updates with security patches," he tells IT Pro.

Companies should also control who can actually use AI systems, or at least regulate usage. "Limiting who has access to robots and segmenting machines from networks where possible can reduce the risk from hackers, though AI adds a new dimension to the problem. Ultimately, one of the most effective precautions is also one of the most prosaic, and may also comfort those who fear their jobs will be stolen by robots. It's hard to imagine a time when we dare leave robots to get on with it, so until and unless that day comes, we need humans to keep watch on robots at work," adds Thomson.
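A minimal sketch of the access-limiting Thomson describes might look like the following; the operator accounts and command set are hypothetical, and in a real deployment this gate would sit behind proper authentication and network segmentation rather than in-process sets:

```python
# Hypothetical allowlists for a robot control endpoint (illustrative only).
AUTHORISED_OPERATORS = {"alice", "bob"}             # vetted operator accounts
PERMITTED_COMMANDS = {"status", "pause", "resume"}  # safe command subset

def dispatch(operator: str, command: str) -> bool:
    """Forward a command to the robot only if both the operator and the
    command are on the allowlist; refuse and log everything else so a
    human can review the attempt."""
    if operator not in AUTHORISED_OPERATORS or command not in PERMITTED_COMMANDS:
        print(f"REFUSED: {operator!r} attempted {command!r}")
        return False
    print(f"OK: forwarding {command!r} for {operator!r}")
    return True

dispatch("alice", "pause")        # allowed
dispatch("mallory", "shutdown")   # refused and logged for human review
```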

New regulations

Paul Auffermann, vice president of corporate strategy at DataRobot, agrees with Matt Jones that artificial intelligence technologies offer many possibilities. "Machine learning and AI will have a transformational impact on every industry and business function. The stakes are high - huge gains will accrue to the leaders that successfully implement large-scale machine learning programs, while those that fail to do so risk being rendered obsolete," he says.

He adds, though, that AI and machine learning technologies will need to be regulated properly if they're to succeed in the business world. He points to GDPR, which is set to come into force in just a few months. "It's certain that in some areas, regulations will have to be written or amended to consider the implication of AI and machine learning. In fact, the European Union has approved the General Data Protection Regulation (GDPR), which is aimed at protecting all EU citizens from privacy and data breaches in an increasingly data-driven world, and it is probable that other nations will follow," he says.

When it comes to complying with these regulations, he says that firms must be transparent about their AI activities. "One certainty is that organisations that maintain transparency, interpretability and control in their machine learning programs will be best suited to navigate the regulatory waters. These key characteristics will help organisations understand how their models work, explain the decisions to constituents, and efficiently iterate or change course as necessary," he concludes.
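As a rough sketch of that interpretability in practice, the example below surfaces per-feature importances from a simple model so its decisions can be explained and audited. It uses scikit-learn on a public dataset and is an illustration of the general idea, not DataRobot's own tooling:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple, inspectable model on a public dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Surface which inputs drive the model's decisions, so the organisation
# can explain them to constituents and, if needed, to regulators.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```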

It couldn't be clearer that artificial intelligence will be one of the defining technologies of the coming years. Not only is AI capable of speeding up business processes, but it's also a helpful tool around the home. At the same time, there are potential dangers, and firms need to be sure they have the systems in place to ensure this technology remains helpful, not harmful.

Nicholas Fearn is a freelance technology journalist and copywriter from the Welsh valleys. His work has appeared in publications such as the FT, the Independent, the Daily Telegraph, the Next Web, T3, Android Central, Computer Weekly, and many others. He also happens to be a diehard Mariah Carey fan. You can follow Nicholas on Twitter.