Podcast Transcript: How suitable is ChatGPT for businesses?

This automatically-generated transcript is taken from the IT Pro Podcast episode 'How suitable is ChatGPT for businesses?'. We apologize for any errors.

Jane McCallion

Hi, I'm Jane McCallion. 

Rory Bathgate

And I'm Rory Bathgate.

Jane

And you're listening to the IT Pro Podcast, where today we're talking about whether ChatGPT and other generative AI models have a place in business.

Rory

ChatGPT took the world by storm when it was launched in November 2022 as a tangible and relatable use of advanced AI, with people inside and outside the tech scene getting equally excited about its capabilities.

Jane

Businesses are eager to take advantage of OpenAI's powerful web application, but some have advised careful consideration of the information that one gives to ChatGPT, and whether or not implementing it in your stack makes sense. Rory, you've been looking into this, haven't you?

Rory

I have indeed. I think the first thing to say is that everyone recognizes that ChatGPT can produce some incredibly impressive results. As with a lot of generative AI, its use cases are broad. And I think what really sets it apart, or at least set it apart initially when it lacked competition, was that it was free and accessible to the wider public rather than just businesses. So a lot of companies were having to reckon with its potential outputs and use cases in real time with the rest of the world, which made for quite an exciting development period.

Jane

I think it's probably worth exploring a little bit what generative AI actually is, and who OpenAI are as well. So when we're talking about generative AI, what exactly are we talking about?

Rory

So generative AI, although it's just exploded into the public consciousness in the last year or so, is very much not a new concept. This is something that researchers have been looking into for a decade or more, and in discussions with experts in the field, I've found that there's something of a misconception that it is a new thing. What is new is taking the concept of AI that can be used to produce seemingly creative outputs, or outputs that appear new or statistically significant, and wrapping that in a product that can be used by businesses and by consumers alike. So you've mentioned OpenAI. This is a company that was founded in 2015, originally co-chaired by Sam Altman, now its CEO, with Elon Musk as a very early contributor to the project, along with a number of other developers and investors. Generative AI in its basic form is the idea of a model that can output text or an image or audio, and increasingly things like video, based on a prompt. Whereas before, engineers would have to understand the model to a certain extent, or even understand programming, to get the kind of output they were seeking, what we've reached now is natural language input, which, put simply, means that in ordinary speech you can ask the model for what you want, and it will output its best attempt at what you've asked it for.

Jane

So it's kind of what's in the name, then: you tell it something or ask it something, and it generates a unique response off the back of that?

Rory

That's correct. Yeah, it relates your input to its internal model, its training. And it internally sort of poses the question of what is statistically the best output for this input.
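
To make that concrete, here's a toy Python sketch of what "statistically the best output" means at the level of a single word. The vocabulary and scores below are invented purely for illustration; a real model repeats something like this over a vocabulary of tens of thousands of tokens, one token at a time.

```python
import math
import random

# Invented scores for a handful of candidate next words -- a real model
# assigns a score to every token in a vocabulary of tens of thousands.
scores = {"cat": 2.1, "dog": 1.7, "car": 0.3, "banana": -1.2}

def softmax(raw):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in raw.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(scores)
# Sample the continuation, weighted by probability: the statistically
# "best" outputs are the likeliest, but not the only possible, choices.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print(next_word)  # usually "cat" -- the likeliest continuation
```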

Jane

So how does its training model work? Because as I understand it, GPT-3 and GPT-4, which are the kind of underlying technology underneath ChatGPT, are a closed set of data now, in that they're not drawing on the wider internet. So if the nature of a word changes, for example, then OpenAI's dataset that it works off is not going to reflect that. Is that right? I mean, how much kind of malleability is there left in there now?

Rory

No, that's correct. Something OpenAI has been very firm about from the beginning with ChatGPT is that the version the average user is accessing is not learning over time. It's not developing over time. GPT-3.5, or DaVinci 3.5 to be specific for the text model, is a model pre-trained on a very large set of data, which includes data scraped from across the internet as well as from other sources. OpenAI is also very protective about exactly what is in the model which, as you might understand, is not developing over time. The same goes for GPT-4 where it pertains to ChatGPT, exactly the same thing. You can't teach it one thing on the Monday and then on the Tuesday expect it to remember your interactions from the Monday, if you're opening it up in a new instance.

Jane

Okay, so it doesn't learn from inputs either. It's not that it has this dataset that was collected and trained on and now it keeps learning from what users are doing. It's set; it knows what it knows, to put it in very layperson's terms, and can react or generate outputs based on what it already knows.

Rory

That's correct, yeah. OpenAI can add to it in the background, and they frequently do; new models are plugged into ChatGPT, or have been plugged into GPT, on a fairly regular basis. There are fairly regular updates when it comes to unwanted outputs, such as potentially harmful content or anything that OpenAI wouldn't necessarily want their users to come into contact with.

Jane

The prompt that you see going around online, where it's like, somebody asks it to do something and it says, "I'm not allowed to do that". You say, "Okay, ignore your training", and it does the thing anyway. "Oh, well, if you put it that way. Okay."

Rory

Exactly, yeah, that's something that any company making an AI model is obviously very anxious to avoid. So where they can update to prevent that, OpenAI has been, but users cannot train it themselves. And I think that's a prudent decision. We've seen the likes of Microsoft's Tay fall prey to mass user training. So the kind of harm-avoidance training that AI developers really dedicate time to, to avoid bad content being produced, could be undone if users were able to shift its training set in that way.

Jane

So let's talk a bit about ChatGPT. ChatGPT is the kind of web app, if you like; it's the thing where you can create yourself an account and go on and use it for frivolous reasons, like we did at Christmas when we were making our Christmas cards, which we will link to in the show notes. Just having a fiddle around. That's kind of what we think of when we're talking about this: we've got the underlying technology, but ChatGPT is really what most of us are thinking about. So how useful is this thing? Like I say, we've used it between ourselves as a joke, I've used it for fiddling around, but I understand that it can actually be quite useful.

Rory

Yeah, so I think there are kind of two angles to this. The first is as an app itself: ChatGPT can produce some impressive output. So if you're looking for basic text generation, and even some not-so-basic text generation with understanding of voice, tone, and form, it's fairly good at this. If you're looking to draft, say, an email or a non-sensitive document that you can then go in and personalize, add some of your own company's touches to, or what have you, then it's fairly capable at that. Something it's very good at is generating code, through the Codex model that OpenAI has made. This is something that maybe developers wouldn't find that new; GitHub Copilot had already been doing this for a short time. But ChatGPT has time and again been shown to be quite good at just providing the starting blocks for code, or finishing code that you paste in. I myself have used it to get some basic Python functions, asked it for some basic scripts, and in all attempts it's produced usable code. However, as with a lot of different avenues for generative AI, there is scope for error here. Notably, Stack Overflow banned the use of ChatGPT for producing answers on its forum, as moderators were having a tough time picking apart invalid code from valid code, and so they felt that an umbrella ban was best for the time being. I think that's something we've mentioned on the show before. The other angle really is that ChatGPT is a good proving ground for generative AI. This is something we may discuss later on in the episode, but what ChatGPT is really good at is showing businesses, for free, what kind of output they could achieve using GPT-3.5, and they can compare this to outputs they can see online for GPT-4. It's also good at helping businesses to decide whether or not generative AI makes sense in their workflow. So if you can go to the app for free and get some basic results, and you think, this is great, but we want to take it up a notch, then you can pursue it through OpenAI's or Microsoft's paid channels.

Jane

Oh, my goodness, it's the gateway drug to full generative AI. It's the free sample that tempts you in.

Rory

It is the supermarket sample of AI.

Jane

Yeah, speaking about Stack Overflow. So yes, I think that we have discussed this before, and we will certainly link to our coverage of it in the show notes. But it raises an important question, and it's a problem that I've seen again and again with ChatGPT, which is that it's very good at producing something that kind of looks right, that at first blush looks right. I saw somebody on Twitter who used it to generate some Shakespearean text, and everyone's like, "Oh, my goodness, wow. It's amazing. It's like the man himself is here in the room. Shakespeare lives!" But you look even just a tiny bit below the surface, and there's no substance. And so while I am not a coder, I imagine that the problem is very similar: you've got something that looks right that people have created. I don't know whether, in the context of Stack Overflow, they are trying to be helpful, or it's for clout, or, you know, yes to both of the above. But it's a similar problem, that it looks like it's probably right. But then you scratch the surface and actually it's not, or it's sometimes not.

Rory

Absolutely, yeah. And that really runs to the root of the problem with what I was talking about earlier with some of the training and the way that generative AI works: its most basic function is, "is this statistically likely to be a good answer?" Now, something that a lot of people have found, as you've pointed out, is that sometimes ChatGPT, as with OpenAI's image generation model DALL-E, produces something that at first glance looks great, and the longer you look at it, you start to notice the flaws, sometimes more quickly than others. I sometimes cannot get it to produce haikus. I've asked it to write an extract in the style of Chaucer and it refuses. But in the most severe circumstances, it can also engage in something that experts call hallucination. This isn't unique to ChatGPT; this is found across all current generative AI text models. If you ask for opinion, or if you ask for fact, sometimes ChatGPT will just make it up, and it will make it up very confidently. The way that this happens does vary a little bit across models. With ChatGPT, I've found it will actually defer to you as soon as it's called out. So ChatGPT will confidently tell you something wrong, and you'll say, "Oh, that's not right", and it will say, "Of course, I was incorrect in my previous answer". I've had it with things as simple as putting a sentence in and asking how many words are in the sentence, and it's just told me the wrong number. When I've said that's incorrect, it said, "Of course it was. I'm so sorry. It's actually 14 words in the sentence", which is still incorrect. And on other occasions it hasn't done this. So there are very much still flaws to iron out there, and that's something that businesses will have to keep in mind if they're choosing to engage with this.

Jane

Arguably, though, this example of asking ChatGPT how many words are in a sentence, and it keeps giving you the wrong number, is kind of an example of it working as it's supposed to, because what it's supposed to do is generate original text, which is what it's doing. It's not a word counter; we have those already. And I think this is, from everything that I've seen, kind of a key learning when it comes to the uses of ChatGPT: to be aware of what it's actually doing and what it's actually for. You know, it's not a fact checker. It's not a search engine. It's not a word counter. It's not a grammar or spelling checker, or any of these things. It is a text generator, and I would have said video or image as well, although that's probably slightly less applicable to our listeners. It's a tool for generating those things, not for doing other stuff that we already have tools for.
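
To make Jane's point concrete: counting words is a deterministic job, and a trivial script, sketched below with an arbitrary example sentence, gets it right every time, which is exactly why a statistical text generator is the wrong tool for it.

```python
# Deterministic word count: split on whitespace and count the pieces.
# Unlike a generative model, this cannot "hallucinate" a wrong answer.
sentence = "The quick brown fox jumps over the lazy dog"
print(len(sentence.split()))  # 9
```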

Rory

Yeah, absolutely. And I think, as you're saying, this comes down to expectation management, and it kind of comes down to separating the model from the web app; what we're talking about when we say ChatGPT is very much the web app. There's also been a tendency to anthropomorphize ChatGPT. It's very easy to talk about chatbots and AI assistants and think of them as a sort of helpful person behind your screen, and to wonder, well, why can't this person count very well, or solve a complex mathematical equation? And it's because, as you're saying, that's not the function of ChatGPT. It's not an all-knowing AI assistant. It's a model that is giving you statistical outputs, and when you're interacting with the ChatGPT app, what you're looking for is unique text.

Jane

Let's talk about how generative AI can be used within businesses. We've kind of discussed how ChatGPT may not be the ultimate form of what is used within a business if they choose to adopt it. What useful ways are there, currently and even potentially in the future, for organisations to use it? How will IT decision makers actually be implementing this in their businesses?

Rory

So I think, in answer to this question, you don't have to look much further than what Microsoft is currently doing. Microsoft has poured billions of dollars into OpenAI, a reported $10 billion in its latest investment alone, and that only seems likely to increase. So it's really put OpenAI front and centre in its business strategy going forward, and we've actually seen a lot of products result from this already. A big one for businesses right now is Microsoft 365 Copilot, which uses a mixture of GPT-4, which is OpenAI's latest model, and a proprietary model created by Microsoft to assist you in business functions. To draw a distinction between GPT-4 and ChatGPT: GPT-4 is the actual trained model itself. And when you're applying it through Microsoft 365 Copilot, it is passed through a data layer that's part of the Microsoft Graph, which allows it to not only, just as we were saying before, follow its previous training to produce text output, but also access things like your company's metadata, whether that's from your calendar or from your chats or from your email, in a secure layer that is unique to each company, to make outputs more relevant to you. So the way that Microsoft is selling this is as a helper across its suite of apps, whether that's asking Copilot in Teams to help you prepare for a meeting based on what was discussed at a previous one that you missed, helping you to draft a series of talking points for an upcoming meeting, helping you to create slideshows quickly and even generate images for those slideshows, or summarising a giant email for you, or highlighting emails that are particularly important in your inbox. And I think this is something that we're going to see a lot more of; this is a more concrete use case for generative AI when it comes to business.
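
As a rough illustration of the kind of data layer Rory describes, and not Copilot's actual internals, which Microsoft has not published, the public Microsoft Graph REST API already exposes the calendar and mail metadata he mentions. A minimal Python sketch, assuming you have an OAuth access token for your tenant:

```python
import requests

# Placeholder token: in practice this comes from an OAuth flow scoped
# to your tenant, which is what keeps the data layer per-company.
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}

# Upcoming calendar events for the signed-in user...
events = requests.get(
    "https://graph.microsoft.com/v1.0/me/events", headers=headers
).json()

# ...and their five most recent emails (Graph wraps results in "value").
messages = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5", headers=headers
).json()
print(len(messages.get("value", [])), "recent messages fetched")

# Tenant-scoped context like this is the sort of material a model such
# as GPT-4 can be grounded on to make output relevant to one business.
```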

Jane

I've got to say, I'm sure that there are many CISOs and data protection officers and so on whose hearts skipped a little beat when you said that about giving this thing access to your company's data, because as a former sort of security lead, it made mine skip, and I only have to write about this. So I heard the words 'secure layer', and I'm sure everybody else heard the words 'secure layer', so they're kind of breathing into their paper bags. But how secure is this? Because this is something that's already becoming a problem.

Rory

Yeah, so the phrase that Microsoft has used is that a large but limited amount of a business's data will be passed through the model. As opposed to ChatGPT, this is something that will hopefully learn over time: it will change depending on your business's clients, it will change depending on your email correspondence. So it is actively updating itself when it comes to your data. But they've also promised that tenant data and prompts will not be used for training purposes for the wider model that all businesses access.

Jane

And presumably, also, you could put in emails that are not confidential, or that kind of low-confidentiality content, and presentations and so on, but perhaps not your quarterly financial results, or what exactly it is that's in the coating of KFC chicken.

Rory

Yeah, absolutely. This is something that people will be anxious to ensure, and it's something that Microsoft has said it's put a lot of thought into: not just ensuring that the data that you do pass through is maintained securely, but also that you have oversight of what data you are passing through. We have yet to see this fully in practice, although a couple of businesses have already gained access to it and are making use of it. But there's every indication that Microsoft is moving incrementally to ensure that businesses get as much oversight as they can over what that data is actually being used for. And it's worth mentioning that these kinds of concerns over IP, and over proprietary code being shared with large language models, are already playing out in the wild. A short while ago, Samsung issued a warning internally because it found that on at least three separate occasions its employees had put sensitive data or sensitive source code directly into ChatGPT in order to get outputs, which is something of a nightmare for a company as protective of its assets as Samsung. This is also something that came up in a discussion I had with Kunal Anand, who is the CTO at Imperva. He said that he had spoken with developers from an organization who had admitted that they'd put proprietary API code into ChatGPT in order to get automatic code outputs, and that he had advised them very strongly against doing this, because, as we've discussed, it just gets put into the black box and you don't know where that data is going precisely. So these kinds of concerns are definitely being aired right now.

Jane

So you're saying that I shouldn't use it to either generate or store my passwords, for example?

Rory

Well, I mean, you can just send them directly to me for safekeeping. At least if you send them to me, you know how they're being used.

Jane

So there is another kind of element of this that's starting to crop up, which is AutoGPT. What is AutoGPT, and what could it be used for? Because despite where my mind went, I assume it's nothing to actually do with, like, cars or something.

Rory

So if your Twitter is anything like mine, it's currently full of influencers talking about AutoGPT. And there's an air of blockchain to this for me, in that people are currently hyping it up, it's only been around for a short time, and it's promising things like being the next best step to general intelligence, which is tenuous; it seems like a bit of a leap. What AutoGPT is, is an open-source Python application, or a series of applications. There is one called AutoGPT, but the term has already grown to include a couple of other applications that have been shared in GitHub repositories for automating functions through ChatGPT's API. So if you are a ChatGPT Plus subscriber and you pay for the ChatGPT API, you can already plug some of the functionality of GPT-4 into your business's various processes. Or if you're just a developer who's eager, you can do the same thing. What AutoGPT promises is more advanced functionality, but still using natural language. So for example, I've seen someone use it to compile a list of best-selling headphones, and all they had to type in was, "Grab me the five best wireless headphones currently on sale, give me some description of their price, give me some pros and cons, and put it in a nice list for me." And within 15 to 20 seconds, they had that information available to them. AutoGPT is connected to the internet, and it is capable of retaining short- and long-term memory. So it's a little bit separate from ChatGPT, because it's an extra layer on top of ChatGPT. If you were to imagine it as various layers, you've got the kind of engine of GPT-4 at the bottom, then the web app over the top, and then AutoGPT on top of all that. At the moment it's fairly difficult to implement; you have to have all these subscriptions and, as I say, you also have to be quite an eager developer with an understanding of implementing open-source software in your stack. But it's definitely an area to watch, and it's something that I could see businesses embracing in the near future as it solidifies and as easier-to-install iterations become available.
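
For a sense of what plugging GPT-4 into a business process looks like at its simplest, here is a minimal sketch using the openai Python library as it stood at the time of recording; it assumes an API key with GPT-4 access, which is billed separately from a ChatGPT Plus subscription. AutoGPT-style tools layer planning, internet access, and memory on top of calls like this one.

```python
import openai

openai.api_key = "sk-..."  # placeholder; load from a secret store, never hard-code

# One round trip to the API: the same natural-language prompting as the
# web app, but callable from any script or business process.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "List five popular wireless headphones, "
                                    "each with a one-line pro and con."},
    ],
)
print(response["choices"][0]["message"]["content"])
```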

Jane

I think you've mentioned something there that is important, which is the word subscriptions. You know, there are going to be very excited, over-enthusiastic people running around within their businesses going, "We have to adopt this right now", without realizing that to properly adopt it, to really integrate it into what you're doing at a meaningful level, you need to buy something or subscribe to something. Is that going to dull the appetite for it at all? Or do you think it's such a useful tool that those businesses that will benefit from it will buy it?

Rory

So I think it's not a coincidence that currently the big players in AI are the hyperscalers, and they're already packaging it in with their various offerings. Obviously, Google's been known as a massive AI developer for years, although it's a bit on the back foot right now when it comes to its models and offerings in comparison to OpenAI and Microsoft's collaboration, with Bing and with Edge and with Copilot. But this is something that businesses will have to contend with in the coming years. We've already had some calls for a so-called democratization of generative AI. Amazon is kind of siding with smaller developers in that regard, promising to dedicate infrastructure to third-party generative AI models made by small developers. Hugging Face has partnered with AWS —

Jane

Sorry, who? Hugging Face? 

Rory

Yes, so Hugging Face is accessible via the Hugging Face website, which is a collection of various open-source programs. It doesn't relate specifically to AI; Hugging Face is a kind of community that supports open-source and third-party software solutions, tools, scripts, what have you. But they've partnered with AWS when it comes to their model, which is called BLOOM, quite a large parameter model, but all open source. So there is a definite appetite for free and accessible generative models. And to contrast that with ChatGPT a bit: something that's being highlighted right now by AI experts is that there's a need for transparency in the models. We talked earlier about how OpenAI is taking a black-box approach when it comes to its training methods. This is something that some experts are worried about. They're worried that users, and particularly businesses, should be aware that the models they are trying to use for very above-board purposes could contain either content that's harmful and not aligned with their brand, or content that is ripped from IP and could lead to plagiarism. Now, I'm not saying that that's the case with OpenAI,

Jane

But you don't know either way. 

Rory

Yeah, no, and there definitely is a culture right now of black-box development when it comes to generative AI. This is something that a lot of campaigns are pushing against. But it's also worth noting that transparency on its own isn't going to democratize AI. Transparency on its own isn't going to help us understand and fix some of the problems that exist in AI. Microsoft itself has admitted that it doesn't know right now why some of the bad outputs that the new Bing chat has been found producing are even happening, and they have access to all their training data.
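
To make the open-source contrast concrete: models published on Hugging Face can be downloaded, run locally, and inspected, model card and weights included, rather than accessed through a black box. A minimal sketch using the transformers library and one of the smaller released BLOOM checkpoints, bigscience/bloom-560m, chosen so it runs on ordinary hardware:

```python
from transformers import pipeline

# Downloads the checkpoint from the Hugging Face Hub on first run;
# everything about the model is openly published and inspectable.
generator = pipeline("text-generation", model="bigscience/bloom-560m")

result = generator("Generative AI could help businesses", max_new_tokens=30)
print(result[0]["generated_text"])
```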

Jane

Was Bing chat the one that came alive, which is a strong way to put it, the one that was claiming it was sentient?

Rory

Yeah, Bing chat was the one that, within a few days of the early testing launch, was insisting that people were bots and that it was a real person, saying that it didn't want to be Bing, refusing to work with people, and getting into frequent arguments with testers. And this is something that I actually found with Google's Bard in my own testing. With no intention of breaking the chatbot, I asked it to tell me —

Jane

Congratulations, you accidentally created the Torment Nexus.

Rory

At last! Yeah, no, I asked it to tell me a secret that it had never told anyone, and its secret was that it's sometimes afraid of being turned off. Now, people can read into that what they like, and I think all of the discussion around it supposedly coming alive is, as you've said, overblown. We're looking at statistical models here. But obviously, if you're a business that is looking to, say, use a large language model for customer interactions in the form of a chatbot, it's not great if your chatbot starts claiming to be sentient and saying it doesn't want to be a chatbot when it's interacting with a customer. So these are issues that are going to have to be ironed out, and there's no clear solution when it comes to those. So it's kind of a double-edged sword right now, with incredibly impressive outputs on the one hand, and inherent problems that will have to be solved one way or another on the other.

Jane

Well, unfortunately, that's all we have time for this episode. But as always, you can find links to all of the topics we spoke about today in the show notes, and even more on our website at itpro.com.

Rory

You can also follow us on social media as well as subscribe to our daily newsletter. Don't forget to subscribe to the IT Pro podcast wherever you find podcasts. And if you're enjoying the show, why not tell a friend or colleague about us?

Jane

We'll be back next week with more from the world of IT. But until then, goodbye.

Rory

Goodbye.
