Co-writing an article with ChatGPT
A first conversation with AI, exploring collaborations, some use cases, and some risks to address.
Hello there!
I was busy towards the end of the year, but I decided to start 2023 by writing more often. At least once a week. I mean, once a month. …and there goes my New Year’s resolution…
I’ve been experimenting with ChatGPT for a few weeks now and find it captivating. People are coming up with new ways to leverage the ability to generate content in creative (sometimes downright funny) ways.
Experimenting with AI — so convenient!
Although some are using it just as a toy, it’s a tool, and a very powerful one at that. And what better way to show it than to ask ChatGPT for 10 creative uses? Here’s what it came up with:
Virtual assistant: ChatGPT could be used as a virtual assistant to handle tasks such as scheduling appointments, sending emails, and managing calendars.
Human’s note: Yes! This sounds like a very efficient use of AI, taking care of those little tasks that we always forget to do.
Language translation: ChatGPT could be used to translate text or spoken words in real-time, making it easier for businesses to communicate with clients and customers in different languages.
Human’s note: You can always use Google Translate, but this is great to streamline the process, and also be able to make the translation a bit more contextually accurate. Of course, if you are translating something into a language you don’t have a clue about, you’re likely to end up with the equivalent of the ubiquitous Chinese tattoo fail.
Content generation: ChatGPT could be used to generate content for websites, social media, or marketing materials by prompting it with a topic or theme.
Human’s note: This sounds great. However, we already suspect the majority of the information online is either obsolete or false. While AI can help you create more content, faster, this doesn’t always equate to better, more relevant (or even well-intended) content. Imagine this prompt: “write 5 examples of Nigerian prince scam letters” (Note to the note: OpenAI DOES give you a notice that “This content may violate our content policy.”, although it also generates the content for you.) An additional risk is that AI can sound convincingly authoritative while feeding you a load of baloney. So we’ll need to be extra cautious when consuming information online and not make the slogan true: “Artificial Intelligence. Putting the I.T. in bull**it since 2022.”
Social media management: ChatGPT could be used to manage a company’s social media accounts by responding to comments, messages, and reviews in real time.
Human’s note: Social media is the first experience most people will have of your brand. While it is convenient to be able to respond quickly, it can also be a fast way for the response to go viral, for all the wrong reasons.
Virtual reality customer service: ChatGPT could be integrated into a virtual reality environment to provide customer service in a more immersive and interactive way.
Human’s note: Sure, whenever the Metaverse becomes a thing, AI will be there to assist. It could happen.
Chatbot therapy: ChatGPT could be used to provide therapeutic support to individuals by generating responses that are empathetic and encouraging.
Human’s note: Ermmm…we already know that AI can be sexist and racist and lack empathy, providing a synthetic, uncanny valley version of it at best. So I would probably be eeeextra cautious with it, lest we end up with Kevorkian-esque AI bots.
Personalized recommendations: ChatGPT could be used to provide personalized recommendations to customers based on their past purchases and browsing history.
Human’s note: Yes! Amazon has been doing it for years with mixed success, and so has Netflix. Granted, they have been improving over time. Although sometimes you still get a recommendation to buy a product you’ve just bought or watch a movie you’ve just finished. Hopefully, as AI gets smarter, recommendations become more accurate.
Personalized education: ChatGPT could be used to provide personalized education and training to employees or students by adapting its responses to the user’s needs and learning style.
Human’s note: Yes! Helping people learn is a great use for this. Mind you, this may also result in essays becoming a thing of the past. But this is part of a larger evolutionary process: Socrates famously complained about writing being the end of memory (as recorded in his student Plato’s books, in a blatant act of pedagogical treason), and I am sure some teachers and professors have complained about typewriters instead of handwriting, and of computers instead of typewriters, and so on. AI-assisted writing is here to stay, so we’ll have to deal with it.
Personalized nutrition: ChatGPT could be used to provide personalized nutrition recommendations to users based on their dietary needs and preferences.
Human’s note: Have you ever been trapped in this conversation? “Honey, what type of food should we order?” “I don’t know, anything you want, I don’t care.” “OK, how about Chinese?” “Nah, too fatty.” “Indian?” “Nah, too spicy.” “Italian?” “Nah, we had it last night.” Etc., etc. Enter AI, and you can have a third party handle the selection process. I’d say it’s a win.
The downside of convenience is atrophy
I will let ChatGPT explain this one:
Convenience can lead to atrophy in the sense that when we have access to things that make our lives easier or more convenient, we may be less inclined to use our physical or mental capabilities to the fullest. For example, if we have a car that can take us anywhere we want to go, we may be less likely to walk or ride a bike, leading to a decline in physical fitness. Similarly, if we have access to information at our fingertips through search engines or other online resources, we may be less likely to engage in activities that require us to use our memory or problem-solving skills. Over time, this lack of use can lead to a decline in physical or mental abilities, a phenomenon known as atrophy.
That pretty much sums it up. Oddly enough, that prompt (”explain how convenience can lead to atrophy”) also results in a warning that “This content may violate our content policy”. Hmmm…it’s almost as if ChatGPT suspected I was trying to make it look bad…
I don’t know… just in case, I’ll leave this here…