As artificial intelligence (AI) technology becomes more widespread, it's increasingly useful to understand its pros and cons. The technology was long seen as something futuristic, especially given its prominence in popular science fiction, where it either revolutionises the way we live or takes the form of robots and systems like HAL 9000. With AI now entering everyday use, it's worth examining its use cases and what it offers for the future.
Although the technology is taking its tentative first steps into the limelight, it isn't fully developed and leaves plenty of room for improvement. That isn't to say the use of AI hasn't increased in recent years: it can now carry out a range of basic functions, and its use is becoming common across many sectors, from smart home assistants to industrial applications.
Business use of AI has been both impressive and worrying, offering examples of the technology being applied in positive and negative ways alike.
What are use cases of AI?
Well-known examples of AI include systems such as AlphaGo and Watson, famous for beating professional human players at specific games. In one case, an AI defeated five expert poker players in the same game simultaneously.
Most AI technology in use today, however, tends to get on with its functions quietly in the background of processes, rather than performing the flashy feats that attract news headlines.
In Gmail, for example, text may pop up as you type, suggesting what to include in your email. On a mobile device, you might also see buttons offering short canned replies to emails. You can tap one of these suggestions and edit it if needed.
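As a rough sketch of how such canned suggestions might be surfaced (a toy keyword-based example for illustration only, not Google's actual approach, which relies on trained neural models):

```python
# Illustrative sketch only: real smart-reply systems use machine-learned
# models, not keyword rules. This toy version maps simple cues in an
# incoming email to short canned replies the user can tap and then edit.

def suggest_replies(email_text: str) -> list[str]:
    text = email_text.lower()
    suggestions = []
    if "?" in text:
        suggestions += ["Yes, that works for me.", "Let me check and get back to you."]
    if "thank" in text:
        suggestions.append("You're welcome!")
    if "meeting" in text or "call" in text:
        suggestions.append("Sounds good, see you then.")
    return suggestions[:3]  # show at most three buttons, as in the mobile UI

print(suggest_replies("Thanks for the update - can we meet tomorrow?"))
```

The real system's advantage is that it learns which replies people actually send, rather than relying on hand-written rules like these.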
A key use case of AI is mining data to help businesses, NGOs, governments, and others make informed decisions on everything from strategy to product development, and to do so much more quickly than ever possible before.
But that's just scratching the surface of AI's potential. Indeed, it's being used in myriad sectors and scenarios, many of which are explored below.
Like all technologies, however, it's not a neutral force, and there is potential for harm as well as benefit. The late theoretical physicist and cosmologist Stephen Hawking famously believed AI presents an existential threat to humans, and many experts have voiced concerns over the severe risks of improper use.
So is AI a force for good? Or is it something we should be inherently distrustful of?
What are the pros of AI?
AI is going to proliferate over the next few years and likely beyond that, too. The applications of the technology are far too impressive, efficient, and cost-effective for businesses to ignore, which means the amount of AI we interact with daily will increase in all areas of life. What's more, it's becoming safer. Algorithmic biases are still problematic, for example, but innovative work is being done to make AI better for everyone.
Data is now as important to business as oil once was, and it needs to be processed accurately and quickly to deliver real-time results. A great example of this kind of artificial intelligence is DeepMind's work to diagnose sight-threatening eye conditions with the same level of accuracy as the world's top clinicians.
Carried out alongside UCL's Institute of Ophthalmology and London-based Moorfields Eye Hospital, the research could lead the way for the rollout of AI systems in hospitals throughout the UK. Thanks to the AI system, doctors spend less time studying thousands of eye scans and can help diagnose patients within seconds.
Eradicating human error
Even the best of us are prone to errors, whether it's a lapse in concentration or a simple mistake. However, an artificially intelligent machine built to carry out a specific task does not display these idiosyncrasies.
Technology giant Amazon has recently begun to roll out fully autonomous robots in its fulfilment centres, which can work alongside humans to perform physically difficult manual labour and package sorting. This could make Amazon warehouse work considerably safer. A 2021 study by the union coalition the Strategic Organizing Center (SOC) found that 5.9 out of every 100 Amazon warehouse workers experienced serious injuries in 2020, a rate just under 80% higher than at other warehousing employers.
As at Amazon, AI will be used to power many of our automated services in the future, whether in smart cities designed to improve our environments or in self-driving cars that use AI to navigate roads and assess obstructions.
An AI machine's ability to process large data sets quickly and accurately will be vital for many smart technologies and environments to operate. An example is already running on many top-range smartphones, where AI works in the background, constantly tweaking the phone's settings for maximum performance or battery life.
What are the cons of AI?
It’s natural to be fearful of powerful technology. Recent history with data scandals, malware, and social media has made that clear. AI is no different and many of the concerns held by onlookers are, in some areas, justified. But that doesn’t mean great work isn’t being done to mitigate the drawbacks.
In 2016, an industry-wide organisation including five Silicon Valley giants was formed: the Partnership on Artificial Intelligence to Benefit People and Society. The body works to promote the fair and ethical development of AI technologies that could bring as much disruption as benefit.
Decision-making AI in the workplace
The speed and efficiency of certain AI applications make them appealing to executives looking to find more value across their organisation.
IBM's Watson has been used to decide whether employees deserve a pay rise, a bonus, or a promotion, analysing their experience and past projects to gauge the qualities and skills they could bring to the company in future.
Decision-making software used in this way has caused some concern. The Trades Union Congress (TUC), the federation that represents the majority of trade unions in the UK, called for legislative changes last year to safeguard employees against this kind of technology. It also recommended that employers consult trade unions before deploying such systems.
"Our prediction is that left unchecked, the use of AI to manage people will also lead to work becoming an increasingly lonely and isolating experience, where the joy of human connection is lost," TUC general secretary Frances O'Grady said.
The potential for human job losses is widely regarded as the number one downside to AI, the implementation of which could set in motion a wave of lay-offs as employees struggle to outperform machines.
However, while this scary scenario is often presented as just over the horizon, AI is expected to create more jobs than it takes. The World Economic Forum's (WEF) 2020 Future of Jobs Report predicts that by 2025, automation will have displaced 85 million jobs around the world, but that 97 million new jobs will be created in industries such as artificial intelligence.
“No matter what prediction you believe about jobs and skills, what is bound to be true is heightened intensity and higher frequency of career transitions, especially for those already most vulnerable and marginalized,” stated FutureFit AI CEO Hamoon Ekhtiari.
The WEF report also stated that, although the pandemic has accelerated the automation of many repetitive and dangerous tasks, "around 40% of workers will require reskilling of six months or less". One area of skills worth developing in time for the AI-based future is data, but soft skills shouldn't be ignored either. John Whittingdale OBE, former minister of state for media and data, described soft skills as "hugely important", adding that, "without them, there is the potential for data to be misread or miscommunicated, which can have significant implications for businesses and the decisions they make".
Although AI can virtually remove human error from processes, its output is still subject to bias and prejudice. Because the technology is built on algorithms and training data chosen by humans, it can knowingly or unknowingly discriminate against minorities or fail to cater to groups its programmers failed to consider.
If security measures are not followed carefully, bad actors can also exploit AI systems that collect public data. Microsoft's ill-fated chatbot Tay, for example, had to be taken down after only 16 hours, having started to tweet racist and inflammatory content driven by input from other Twitter users.
Importantly, Tay was purposefully fed hateful content in an effort by Twitter and 4chan users to break it. But other examples of AI going astray have come despite the best efforts of its developers.
For example, in 2018 Amazon decided to retire a recruitment algorithm after it was discovered to discriminate against non-male candidates. The system was intended to provide hiring recommendations and had been trained on ten years of application data. However, because the majority of those applications had been submitted by men, the AI concluded that men were the preferred candidates.
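The mechanism can be shown with a deliberately crude sketch (the numbers and the scoring rule are invented for illustration; this is not Amazon's data or method). A naive model that scores candidates by historical hire rates directly inherits the skew of its training data:

```python
# Hypothetical illustration of bias inherited from skewed training data.
# The "history" below is invented: most past hires are men, so a naive
# model that scores candidates by group hire rates learns that skew.

history = ([("male", "hired")] * 80 + [("male", "rejected")] * 20
           + [("female", "hired")] * 5 + [("female", "rejected")] * 15)

def hire_rate(records, group):
    """Fraction of past applicants in `group` who were hired."""
    outcomes = [outcome for g, outcome in records if g == group]
    return sum(o == "hired" for o in outcomes) / len(outcomes)

# A crude stand-in for a trained recommender: it simply replays the
# historical rates, so the skew in the data becomes a skew in the scores.
print(f"male score:   {hire_rate(history, 'male'):.2f}")    # 0.80
print(f"female score: {hire_rate(history, 'female'):.2f}")  # 0.25
```

A real system learns subtler proxies for gender, such as wording or extracurriculars on a CV, which is why simply removing the gender field from the data rarely fixes the problem.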
Responsible use of AI
There is a great deal to be positive about when it comes to AI, but any emerging technology with the power to disrupt the existing structures of individuals and organisations must be assessed for its potential risks.
But being mindful of the downsides does not mean becoming blinkered to the benefits. Indeed, decision-makers have been warned against doing just this, or else risk losing out on the clear improvements that careful use of AI can bring.
"Look at how you are using technology today during critical interactions with customers - business moments - and consider how the value of those moments could be increased. Then apply AI to those points for additional business value," said Whit Andrews, distinguished vice president analyst at Gartner.
"AI projects face unique obstacles due to their scope and popularity, misperceptions about their value, the nature of the data they touch, and cultural concerns. To surmount these hurdles, CIOs should set realistic expectations, identify suitable use cases and create new organisational structures."
Gartner advises that business and IT leaders should endeavour to separate AI hype from reality by carefully weighing the opportunities against the risks. Obsessively focusing on automation, rather than the bigger picture, will only obscure the wider benefits, the analyst firm warns.
In July 2022, the UK government and the Alan Turing Institute jointly announced the establishment of the Defence Centre for AI Research (DCAR). Its goal is to develop areas of AI research that are currently proving challenging, such as training without the need for large data sets, AI ethics, and war gaming.
"Everything we love about civilisation is a product of intelligence," said Max Tegmark, president of the Future of Life Institute.
"Amplifying our human intelligence with artificial intelligence has the potential of helping civilisation flourish like never before as long as we manage to keep the technology beneficial."
Organisations can also check whether their use of AI systems breaches data protection laws using a risk assessment toolkit launched by the Information Commissioner's Office (ICO). The AI and Data Protection Risk Assessment Toolkit, available in beta, draws on the regulator's previously published guidance on AI, as well as publications from the Alan Turing Institute.
It contains risk statements that organisations can use while processing personal data to understand the implications this can have for the rights of individuals. Based on an auditing framework developed by the ICO’s internal assurance and investigation teams, the toolkit also provides suggestions for best practices that companies can put in place to manage and mitigate risks.
Bobby Hellard is ITPro's Reviews Editor and has worked on CloudPro and ChannelPro since 2018. In his time at ITPro, Bobby has covered stories for all the major technology companies, such as Apple, Microsoft, Amazon and Facebook, and regularly attends industry-leading events such as AWS Re:Invent and Google Cloud Next.
Bobby mainly covers hardware reviews, but you will also recognise him as the face of many of our video reviews of laptops and smartphones.