
ChatGPT and my grandfather

What are the ethical pitfalls of ChatGPT?

Simon Kavanagh / March 24, 2023

My grandfather was a great storyteller, but not always a truthful one. His mantra was “never let the truth get in the way of a good story”.

Figure 1: Not actually my grandfather. Actually not anyone. Generated by Deepai.org

I’m reminded of him as I’m talking to ChatGPT on Openai.com. It seems the essence of my grandfather has been condensed and fed into this strange machine. ChatGPT also wants to tell good stories and also isn’t so worried about their veracity or accuracy. Indeed on the landing page, it admits exactly that:

Figure 2: Ain’t that the truth!

Still, interacting with ChatGPT is fun. I ask it to explain String Theory in language a ten-year-old would understand. It does well. I still don’t really get it, but at least I get it as much as a ten-year-old does.

An impossible task. What now?

Then I ask it to explain the Weak Electromagnetic Force in language which Shakespeare would understand. That doesn’t work out so well. Apart from a lot of doth and ’tis, the explanation isn’t very different from the language we use today, over 400 years after Shakespeare died.

Of course, it’s an unfair task, as the concepts needed in any discussion of modern physics wouldn’t have been available in the late 1500s. So then why try? ChatGPT could have just said “No, that’s not possible.” and I would have been fine with that, even a little impressed at the honesty.

But this is the issue with both ChatGPT and my grandfather: they both hate saying “I don’t know”.

Figure 3: Rare portrait of Shakespeare Googling modern physics (https://hotpot.ai/)
 

Instead, in the absence of the correct answers, both will not only ‘generate incorrect information’, they will also generate complete rubbish.

An easy way to test this is the field of etymology – word roots and their relations. I ask ChatGPT to show me some examples of Finnish words which are related to Hungarian words. It first dutifully tells me that these two languages are related as they are both Finno-Ugric languages. I didn’t ask that, but thanks. I guess. Then it gives me some examples of related words, like the word for garden which is kert (Hungarian) and puutarha (Finnish).

This is odd as these words don’t seem related at all. So I ask ChatGPT to show me how the words are related. It then tells me: “The Hungarian word ‘kert’ comes from the Old Turkic word ‘ker’, which means ‘to enclose’ or ‘to fence in’. The Finnish word ‘puutarha’, on the other hand, comes from the Swedish word ‘botar’, which means ‘cure’ or ‘remedy’.”

But that answer just makes them look even less related! So I persist. I ask ChatGPT to show me exactly the relationship between these words. ChatGPT answers: “the words themselves are not related, as they have different roots”. I make a facepalm emoji, with my face and my palm.

So, what is ChatGPT?

It doesn’t really seem to know itself. It’s kind of like a search engine, kind of like a chatbot, kind of like an encyclopedia. But under the hood, it’s a text prediction engine with some fancy lossy compression. It doesn’t understand anything about the concepts it is talking about. That is really important to understand.

ChatGPT identifies patterns but has no idea what the patterns mean. In overly simple terms, when you ask ChatGPT a question, it tries to produce an answer that matches the pattern of your question. ChatGPT is ALL about satisfying you. And this is why it very rarely says “I don’t know”.
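The pattern-matching idea is easy to demonstrate with a toy. The sketch below is plain Python and nothing like OpenAI’s actual architecture — it just counts which word follows which in a ten-word corpus and always emits the strongest pattern. Even at this crude scale it fluently produces a “sentence” that was never in its training data; unlike ChatGPT, though, it at least stays silent when it has no pattern to follow.

```python
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always emit the most frequent follower. ChatGPT uses a neural network
# trained on a vast corpus, but the core idea is the same: predict the next
# token from statistical patterns, with no grasp of what the words mean.
corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    follower_counts[word][nxt] += 1

def predict_next(word):
    followers = follower_counts.get(word)
    if not followers:
        return None  # a pattern matcher with no pattern has nothing to say
    return max(followers, key=followers.get)

# Generate a "plausible" continuation by always following the strongest pattern.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # fluent-looking, yet a sentence nobody ever wrote
```

The output reads like language because it copies the shape of language — which is exactly the point: fluency and truth are entirely separate properties.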

This is a problem?

Well, there is an ethical question about how much responsibility ChatGPT has for the answers it gives.

How can you know what is made up and what is based on real learning? And does ChatGPT itself even know the difference between the two?

Then there are the issues of bias and veracity. I ask ChatGPT how it handles bias in its training data and how it ensures the veracity of its answers. ChatGPT’s views on ethics and responsibility are well articulated but complete nonsense. It first says: “As an AI language model, I do not have the ability to make moral or ethical judgments”.

OK, but does that mean ChatGPT considers itself free of any ethical responsibility? “No, I am programmed to comply with ethical standards”.

So how exactly does it manage to comply with ethical standards when it cannot make ethical judgements?


This answer comes very quickly:

Figure 4: Well played ChatGPT, well played.

It is obvious when interacting with ChatGPT, and when reading about it on OpenAI’s webpage, that its creators believe it will have a positive impact. Of course. You need to believe that what you create is positive, otherwise, no reasonable person would create it. But hubris can lead to blind trust and when that happens you miss potential risks, or maybe you stop caring about them.

The sad reality of human nature is that if a technology can be misused then it will be. How will ChatGPT be misused? All sorts of fantastic ways. Whatever ChatGPT is today, with all its potential benefits and even with its current limitations, it’s not going to be this way in 3 years’ time. If ChatGPT can be successfully monetized (and make no mistake, that’s the only thing that counts) then it’s going to become so pervasive we won’t even notice we’re using it.

The same can be said for image generation AIs like Dall-E or music generation like MuseNet (both from OpenAI, by the way). Having that technology everywhere isn’t necessarily a bad thing but if we don’t get the ethics right today then we’re sleepwalking into all sorts of risks.

And that’s surely something we should try to avoid. Right?


I like to think my grandfather knew when he was stretching the truth. And I’m fairly sure his captive audience did too. But ChatGPT doesn’t know when it’s lying, and it takes no responsibility for spitting out lies anyway. The consequences of this right now are not so serious. ChatGPT is a toy. It’s funny when it lies, maybe a little frustrating when it argues with you. But what about the scenario where ChatGPT is baked into loads of our online interactions? How funny will it be then?

UPDATE: In an interview with ABC News on March 16th, 2023 the CEO of OpenAI admitted that we are right to be scared about this technology as it comes with real dangers and the impact on society is still unknown. You can FIND IT HERE.

Simon Kavanagh
Chief Designer, Tietoevry Banking

With over 20 years’ experience working in healthcare IT, Simon now heads innovation in Tietoevry’s innovation unit, d|lab. He’s passionate about raising awareness around the ethics of technology and the impact technology has on people’s lives. He’s originally from Ireland but has been living in Oslo for the past ten years.
