How may I help you? Your next co-worker might well be a robot


"I thought you were dead," Detective Del Spooner says to robot Sonny at one point during the 2004 film I, Robot. Sonny responds: "Technically I was never alive, but I appreciate your concern."

It's hard to imagine that a robot able to talk in such an emotive way isn't somehow 'alive', even if that life is given only through technology. At the same time, though, it's also potentially very scary to think about what that means in any detail for too long.

Humans are meant to make demands of computers and said computers - in whatever form - comply. They shouldn't be capable of, or allowed, independent thought and action or else the balance of power is completely skewed. And it's got many of the world's brightest brains, including Professor Stephen Hawking, feeling more than a little concerned.

"The development of full artificial intelligence could spell the end of the human race," he told the BBC at the end of 2014.

"It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

I, Robot depicts a world where AI-enhanced robots are there to protect humanity. But they are plotting a mutiny. Sonny is different. He doesn't toe the robotic line as expected of him and his peers. This robot is not like the others. He has dreams and feels emotion. But he's also pretty darn useful to, and protective of, Will Smith's Spooner character.

Surely such usefulness isn't just limited to saving lives or being a good companion? The debate continues to gather momentum as we puny humans work out whether we are willing to risk a world where we might not be in charge versus a world where we are free from the shackles of a multitude of tasks ranging from the boring and mundane to the complicated and time-consuming.

Fact or fiction

It's not just the movie world predicting bad things for humans if robotics and AI become ever more sentient.

Hawking is not alone in his thinking. Far from it. Back in the summer of 2015, Elon Musk, CEO of SpaceX and Tesla, stressed his concerns about machines becoming too intelligent and, ultimately, taking over life as we know it.

"The AI researchers are all racing toward creating [super intelligence] without wondering what's going to happen if they succeed," he said during a panel discussion held at Google's Quad campus in Silicon Valley.

"I think AI risk is the biggest [existential] risk that I can see today by a fairly significant margin and it's happening fast - much faster than people realised."

To try to counter this, Musk is investing $10 million in AI projects around the world aimed at developments that don't threaten humans.

He added: "Here are all these leading AI researchers saying that AI safety is important. I agree with them, so I'm committing $10 million to support research aimed at keeping AI beneficial for humanity."

The line between smart and too clever

To ensure developments remain in human beings' interests rather than opposed to them, Musk is also ploughing £1 million into the creation of an AI research centre in partnership with Oxford and Cambridge Universities' Open Philanthropy Project.

"There are reasons to believe that unregulated and unconstrained development could incur significant dangers, both from 'bad actors' like irresponsible governments and from the unprecedented capability of the technology itself," said Oxford University's Nick Bostrom.

"The centre will focus explicitly on the long-term impacts of AI, the strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology."

Musk has been vocal on the subject of AI for some time. He also voiced his concerns at an MIT symposium in 2014.

He said: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence.

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."

He added: "With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah, he's sure he can control the demon. Doesn't work out."

Does the good outweigh the bad?

There are pros and cons to just about everything in life. The internet has opened up new ways of communicating, working, learning and interacting for billions of people around the world. It's also opened up opportunities for the bad guys who want to steal from our online bank accounts, defraud us through fake inheritance or lottery scams or just troll.

But we haven't switched the internet off. It offers us so much more than it takes away. Is there a chance the same could be said of AI?

Scary Hollywood 'what if' depictions and doomsayers aside, what if AI could be put to good use in the real world, in a business context?

The role of digital assistants in many people's lives right now is to aid, not obstruct. If you don't know the answer to something, need a reminder or an alarm set, or can't use your hands to call someone, Siri can help you. And the assistive nature of Cortana, Siri and the like doesn't end there. That can only be seen as a positive thing for employees and employers alike, surely?

Apple's digital assistant remains at the stage where humans are still needed to help enhance Siri's knowledge base. But other technologies, IBM's Watson being an example, are able to do some of that learning and knowledge gathering themselves once taught the process. Is this just one step away from total independence?

AI and the mobile worker

Let's put aside the scaremongering for a moment. We live in an increasingly connected world. The IoT means we'll be dealing with more devices talking to one another, and consuming and transferring data, than we can possibly handle. We need all the help we can get, surely?

We also need to remember we live in a cloud-focused and highly mobile world. Work is the thing you do, not the place you go. As such, surely it would be highly advantageous to have your work assistant with you wherever you happen to be, 24/7. We haven't yet perfected the art of teleportation, so getting your co-workers or PA to the same place as you remains a pipe dream. But having a digital assistant on your wrist, phone, tablet or another form factor is surely the next best thing?

What if you're about to take a flight for business and you're somewhat scared of flying? You might want to check what the weather is like. The internet can provide some information here, but what if your digital assistant could deliver that insight to you alongside some calming tips and techniques to help you reduce your anxiety levels? It would certainly make you feel a lot less alone during an otherwise lonely situation, right?

There are plenty of other scenarios where having an intelligent digital assistant is far more beneficial than just typing or speaking into a computer. Help with pronunciation when travelling for business or during a business meeting with international contacts, is one example. Another would be helping you get through your work-related homework when you're in a hotel in the middle of nowhere and need to urgently prep for that important meeting, but your co-workers are in a different time zone and can't help unless you wake them up at 3am.

And that digital assistant isn't just assisting and helping you. We used to live in a world where people would default to search engines such as Google if they didn't know the answer to a question. Firing up Google on a laptop in the middle of a meeting can be a bit obtrusive, though. You could just ask Google on a mobile device instead, but wouldn't it be much quicker to ask the human-esque computer residing on that same form factor? Wouldn't that feel a little bit like you had another team member, albeit a virtual one, in the same room assisting with the meeting, rather than being reliant on somewhat dumb technology?

But what if

Still, visionaries like Apple co-founder Steve Wozniak aren't sure about this increasing intelligence.

"I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently," he said in an interview with The Australian Financial Review earlier in 2015.

"When I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines well I'm going to treat my own pet dog really nice."

But the UK government sees the potential more than the risks associated with AI, it would seem. It is investing £150 million to ensure the UK can keep pace with other leaders (such as Korea and the US) in the robotics and autonomous systems (RAS) industry, an industry expected to be worth £70 billion by 2025.

Some feel it's not simply a case of replacing what humans do now. It's more about complementing existing processes and allowing machines to take on some of the grunt work to enable the - supposedly, at least - more intelligent humans to work on higher-value tasks elsewhere in organisations. The UK government's robotic plans, for example, involve sending robots in to clear up mines or nuclear plants.

"I'm less worried about the role of educators in universities, but I am worried about everyone else's jobs," Dan O'Hara, philosopher of technology and literary historian, said earlier in 2015 during a panel discussion debating the merits and issues associated with AI.

"The paradox is that the consensus of the research is that what we need to be investing in over the next couple of decades is in reskilling people in higher level cognitive abilities rather than replaceable, automatable skills."

There is a distinct difference between science fact and science fiction, according to many in the industry, including Microsoft's chief research officer Eric Horvitz.

"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen," he said when he was awarded the AAAI Feigenbaum Prize in recognition of his contributions to the field of AI research.

"I think we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."

What's more, there's a finite capacity to what humans can take in and absorb. With technology you're dealing with very different boundaries of almost infinite proportions.

That's why the likes of Apple, Microsoft et al are embracing AI and machine learning to help digital assistants such as Cortana and Siri become ever-more intelligent and helpful.

Given the hype and heightened sense of fear around AI, are we in danger of becoming scared of our own shadows? People feared bank and credit cards when they first emerged, preferring instead to stick to cash. Such talk would be laughed at in a modern world of contactless payments and e-commerce.

We're also all used to telephone and internet banking and customer service helplines where we speak directly to a computer (via our phone keypads or keyboards) rather than to a human. We don't seem so fearful when we're considering the speed and ease of use then, do we? Surely the rise of the mobile concierge/digital assistant is only a few steps removed from this level of immediacy and convenience? Or perhaps, we're only OK with such convenience when we don't think it poses the risk that, by using such tools and technologies, we, as humans, might become the inconvenience?

There are a number of things to consider around the complex subject of AI and, in addition to those who fall into either the anti or pro camps, there are experts offering advice as to how to manage the situation.

Analyst firm Gartner is one such advisor. Back in March 2015, Frank Buytendijk, research vice president and analyst at Gartner, said: "Clearly, people must trust smart machines if they are to accept and use them.

"The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology."

He added: "CIOs must be able to monitor smart machine technology for unintended consequences of public use and respond immediately, embracing unforeseen positive outcomes and countering undesirable ones."

Gartner has defined five levels to avoid things getting too out of control.

  • Level 0: Non-ethical programming
  • Level 1: Ethical oversight
  • Level 2: Ethical programming
  • Level 3: Evolutionary ethical programming
  • Level 4: Machine-developed ethics

Level 2 is likely where we are currently with Cortana, Siri and other digital assistants.

"One smartphone-based virtual personal assistant would in the past guide you to the nearest bridge if you told it you'd like to jump off one. Now, it is programmed to pick up on such signals and refer you to a help line," Gartner stated in its report.

"This change underlines Gartner's recommendation to monitor technology for the unintended consequences of public use, and to respond accordingly."
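As a toy illustration of the kind of 'ethical programming' Gartner describes at Level 2, an assistant's response logic might check queries against a safety rule before doing any ordinary intent matching. This is a purely hypothetical sketch, not how Siri, Cortana or any real assistant is actually implemented; the phrases and responses are invented for the example:

```python
# Hypothetical sketch of Level 2 "ethical programming": a hard-coded
# safety check runs before normal query handling. Not based on any
# real assistant's implementation.

DISTRESS_PHRASES = ["jump off", "end my life", "hurt myself"]
HELPLINE_RESPONSE = (
    "It sounds like you're going through a hard time. "
    "Please consider talking to a crisis helpline."
)

def respond(query: str) -> str:
    lowered = query.lower()
    # Safety rule takes priority over ordinary intent matching
    if any(phrase in lowered for phrase in DISTRESS_PHRASES):
        return HELPLINE_RESPONSE
    # Ordinary (pre-Level-2) behaviour: answer the literal question
    if "bridge" in lowered:
        return "The nearest bridge is two miles away."
    return "Sorry, I didn't catch that."

print(respond("I'd like to jump off a bridge"))
print(respond("Where is the nearest bridge?"))
```

The point of the sketch is the ordering: the same query that once triggered a literal directions lookup is now intercepted by an explicitly programmed ethical rule, exactly the behaviour change Gartner describes.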

There are two sides to every story and, ultimately, the rise of AI will not please everyone. Jobs will be lost as a result of this evolution (Forrester Research predicts such losses will be around the 16 per cent mark between now and 2025), but other areas will flourish as a result.

Indeed, Forrester predicts the rise of automation will fuel job growth elsewhere, including maintaining and repairing these intelligent machines.

"There's a lot of talk these days about the bleak future of employment: Claims that robots will steal all the jobs are commonplace in the media and in academia. These concerns are driven by a host of new technologies that automate physical tasks (robotics), intellectual tasks (cognitive computing), and customer service tasks (everything from self-help kiosks to grocery store scanners)," Forrester's JP Gownder says in the summary of his report on the subject.

"While these technologies are both real and important, and some jobs will disappear because of them, the future of jobs overall isn't nearly as gloomy as many prognosticators believe. In reality, automation will spur the growth of many new jobs including some entirely new job categories. But the largest effect will be job transformation: Humans will find themselves working side by side with robots. Infrastructure and operations (I&O) leaders will be at the forefront of efforts to choose, pilot, implement, and evaluate these technologies and to make sure these technologies don't merely cut costs but drive customer value."

In short, there's not much to be afraid of just yet. The positive and assistive potential of AI and smart machines at present far outweighs the negatives.

However, as Gartner and others have advised, it is definitely something we need to monitor continuously, adapting our stance accordingly, to ensure we continue to live in a world where technology and humans happily co-exist.

Maggie Holland

Maggie has been a journalist since 1999, starting her career as an editorial assistant on then-weekly magazine Computing, before working her way up to senior reporter level. In 2006, just weeks before ITPro was launched, Maggie joined Dennis Publishing as a reporter. Having worked her way up to editor of ITPro, she was appointed group editor of CloudPro and ITPro in April 2012. She became the editorial director and took responsibility for ChannelPro, in 2016.

Her areas of particular interest, aside from cloud, include management and C-level issues, the business value of technology, green and environmental issues and careers to name but a few.