While I might be a little slow on the draw, and I'm adding this ChatGPT post to the proverbial Great Pacific Garbage Patch of social media content (Google it, it's a problem!), I've concluded that if you can't beat them, join them. So here's my tuppence worth on the ChatGPT debate!

By now you likely know what ChatGPT is, but for those of you who don't, it's an artificial intelligence (AI) chatbot that can respond to user input almost as if it were human. You can use it to ask questions, generate text, translate languages, and even assist in writing or composing content, which I promise I have not done here! (Or have I?) This brings me to one of the questions I think is at the crux of the matter:

“Should I care whether or not this content was generated by AI?”

I'll cut to the chase and explain why I believe you shouldn't care who, or what, wrote what you're reading.

Undoubtedly, by this point you (yes, YOU) have consumed content generated by AI, whether it was produced by ChatGPT or one of the many other AI writing tools out there.

If you can’t tell whether the creator was human, machine, or a combination of both, then it shouldn’t matter.

There's a caveat, though: I think there is a right way and a wrong way to use these AI writing tools.

We already live in a world where misinformation created and distributed by humans is rife. When it comes down to it, what's essential is that content is engaging, original, and factually correct. People may see using AI as cutting corners, but I would wager that 90% of readers couldn't tell the difference, and you can't judge what you're not aware of.

Just chucking in a few words and questions, copying and pasting whatever comes out the other end, and then slapping your name on it is, in my eyes, not the way to do things.

Primarily, these tools should be used to help refine your ideas or as a starting point when you're suffering from writer's block. For amplifying our creativity and problem-solving, they are astonishing. But used maliciously, these tools have the potential to cause societal harm through disinformation. I understand the concerns raised in the open letter signed by Musk and Wozniak, why people are nervous about this technological leap forward, and why others may even be against using tools like ChatGPT altogether.

It's easy to get hung up on things that you don't understand. For example, I still get headaches when I think about the ending of Fight Club. But that doesn't mean Fight Club was a bad film that should be feared and over-scrutinised! It just means that I don't have all the answers, and I'm fine with that.

The point I'm trying to make with this laboured analogy, using one of David Fincher's greatest hits (sorry, not sorry), is this: when used properly and with human input, AI writing tools can be leveraged to everyone's benefit. It's time to accelerate international collaboration to deepen our understanding of these tools' capabilities and to determine safe, appropriate parameters for how AI can be deployed.

I’m convinced this technology is the natural progression of things.

Imagine if these tools could be used to identify and eradicate misinformation. Ignoring technology as exciting as this and viewing it as Skynet (from The Terminator, another cult film I'd recommend watching) might land people behind the curve, and I for one will not allow myself to become like my grandad, every Christmas, trying to convince people that keeping your phone in your pocket makes you infertile.