Is AI the future of everything?

“Can you tell this influencer selling you a cosmetic product is not real, but an AI-generated avatar?”

This was the question posed by Zeyi Yang, a writer at MIT Technology Review, at the top of my feed on X this morning.

I watched the video carefully and there were some clear giveaways: the background was blurred, but not in a way you would get from natural depth of field or from a Zoom or Google Meet background effect.

Her hair didn’t move even a millimeter when she bent forward. Her eyes were a bit dead and didn’t seem to be focusing on anything in particular. When I focused on her mouth, it was a bit “off” in a way I can’t quite put my finger on. Finally, and perhaps most obviously, while her eyes and mouth were emoting, her skin and eyebrows weren’t moving.

There was a niggling question I couldn’t shake, though: would I have noticed any of these things if I hadn’t been told this was an AI-generated avatar first?

Now let’s be clear. Most of the influencer world is largely smoke and mirrors anyway. The clothes they wear are loaned, as are the cars. The trips to exotic or ‘exclusive’ destinations are either sponsored or paid for in kind. I think we’ve all also seen examples of influencers going begging to independent cupcake or accessory makers asking for free samples and offering to pay in ‘exposure’ (and then getting stroppy if they’re refused).

This also isn’t the first attempt at an AI or avatar influencer. Imma Gram burst onto the Instagram scene in 2018 and has subsequently appeared in real fashion campaigns – well, as real as they can be when the face of the campaign is computer generated. However, being a ‘virtual girl’ has always been part of Imma Gram’s schtick, and that of her equally CGI brother and friend, Zinn and Ria.

This is more insidious, though. The deepfake woman in the video Yang posted is posing (or being posed) as a real human being and it’s this element of advancing AI that I feel is most concerning. How can you know if anything is real if you don’t see it in the flesh for yourself?

Sure, there are reliable media outlets such as NBC and NPR in the US, the BBC, ITN and Channel 4 in the UK, and ABC in Australia – not to mention ITPro, of course. But at what point can we no longer rely on alleged first-person footage or audio if (or when) this technology becomes widespread?

Progress can’t be stopped

This isn’t an original observation, I’ll admit, but it reflects concerns that apply more generally to generative AI.

The part of the human brain that trusts whatever the eyes can see, and that has been conditioned to put its faith in technology – particularly information retrieved through search engines – quickly becomes a stumbling block in this new age. The story of the lawyer who cited a historical case as legal precedent, only to find that no such case existed, is well known, but the problem has reared its head elsewhere, too.

The confidently incorrect nature of many generative AI answers led Stack Overflow to temporarily ban the submission of answers created by the technology in December 2022. It’s unlikely those submitting error-ridden answers were being malicious – in fact, they probably thought they were being helpful – but that doesn’t make the outcome any less damaging.

As time passes, however, these technologies will likely become more refined and less prone to such errors. That’s handy for someone not wishing to waste time writing an original essay or doing their own research, but it opens the door to almost everything you see, read or hear being fake unless you witness it in person. The delightfully named meatspace becomes the only reliable source of information.

The relentless march of generative AI may even force a step backwards in technology. An academic recently told me that enforcing a return to handwritten work is likely “the only way” to avoid ChatGPT-facilitated cheating.

Which brings me back to the title of this article. Is AI – specifically generative AI – the future of everything? If it can take the role of an influencer, a model, a programmer, a journalist, is it the ultimate destination for everything? 

Some answer no, because it will increase the value of ‘artisanal’ products in the same way that the explosion of Starbucks et al led to a ‘real coffee’ pushback, or the way handcrafted furniture and decorations are more highly prized than a Kallax bookcase from Ikea. This is a fine idea, but we can’t all be artisans producing bespoke or small-run pieces with a long lead time.

Maybe we should pull back from our current path instead? After all, the creative, problem-solving, and pattern-recognizing skills of the human brain are unmatched in nature or computing, and probably always will be.

Why create a potentially dangerous technology that’s a pale shadow of the real thing? Those in the technology industry who I have spoken to on this subject not only say no, but declare that it’s impossible. The machine never stops and can never be stopped. Progress, they argue, is inevitable. I’m not sure I agree.

AI can be useful; thanks to my iPhone’s built-in capability to analyze photos I take, I know the creature sitting on my lap as I write this is indeed a Felis catus and the creature in my kitchen cupboard is, more concerningly, a false widow.

Autocomplete, whether on my phone, in my email client, or in the software I’m using to write this, can help speed up tasks. It can also change “Felis catus” to “Felis cactus” and introduce other silly but easily overlooked mistakes, however.

Putting our faith, fortune, and future in generative AI – treating it as some kind of faceless electronic oracle or savior – is a mistake. Generative AI is the epitome of the saying “If your only tool is a hammer, everything looks like a nail”. With this tool, we may be about to bring our hammers down gleefully on reality itself.

Jane McCallion
Managing Editor

Jane McCallion is ITPro's Managing Editor, specializing in data centers and enterprise IT infrastructure. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.

Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.