Do we have too much faith in technology?

(Image credit: Future / Unsplash - Laura Ockel)

Computers and technology have well and truly permeated our professional and private lives. While this has led to great strides in efficiency, opened up new opportunities for businesses and individuals, and helped us become more connected than ever before, there are also downsides.

As the world becomes more digital and even menial tasks are increasingly outsourced to computer systems, a simultaneous shift in accountability and oversight needs to take place. Trust in computers is all too often automatic, with the pitfalls of this brought to life through the Horizon scandal in the UK. As we move to a future where AI is ubiquitous and computing permeates every level of life, how do we avoid a repeat of this kind of tragedy and move forward using AI in the most ethical way possible?

In this episode, Jane and Rory discuss the fallibility of tech and why business leaders would do well to approach the data they receive from computer systems with a healthy dose of skepticism.

Highlights

“If you were in a self-driving car, a lot of car companies have said, ‘Oh, but there's still someone behind the wheel.’ Well, you'd hope that the person behind the wheel knows how to drive, so that if they veer off course, they can take control. And it's the same thing when it comes to AI systems. The person there, who's approving this, flagging this: the buck has to stop with them, at least when it comes to accountability. And for that to be the case, they have to know what they're doing.”

“People who are perhaps a little bit more into this might know that generative AI isn't prone to hallucinations, and that calling these false outputs hallucinations is kind of erroneous in its own right. They are false outputs. But what generative AI does isn't hallucinate.”

“It's a genuine concern in academia, a genuine concern in journalism, that somebody might not be doing their own research, that this might not be their own work… But ironically, these professors are now putting their full faith in something that alleges it can detect this stuff, even when they themselves, by the way, normally can't detect anything.”


Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.