Learning to Collaborate with Artificial Intelligence
Artificial Intelligence (AI) has been rapidly advancing in recent years, and it is now an integral part of many businesses and industries. From chatbots and virtual assistants to self-driving cars and medical diagnosis, AI is becoming ubiquitous. It is inevitable that AI will become even more prevalent and that you will encounter it in your work, so it is crucial that individuals and organizations learn to leverage its potential instead of fearing it.
Playing with, not playing against 💪🦾
Working with AI requires a change of mindset about work and the value that humans bring to the table. AI can automate repetitive tasks, process large amounts of data, and provide insights that can inform decision-making. However, humans still have unique strengths that AI cannot replicate, such as creativity, empathy, and critical thinking. Humans and AI can work together to achieve better outcomes than either can achieve alone.
But this is about more than identifying which skills each side brings and what humans or AI can do better. It requires a good understanding of how to optimize the collaboration, from both a technical and a behavioral perspective.
This is similar to the story of how Betty Crocker’s cake mix was first marketed. When the cake mix was first introduced in the 1950s, it included powdered eggs and milk, requiring only water to be added to make a cake. However, it didn’t sell well because it was too easy to make, and people felt like they weren’t contributing to the process. Betty Crocker then removed the powdered eggs and milk, requiring people to add their own eggs and milk, which gave them a sense of ownership and participation in the process. Similarly, working with AI requires people to find the right balance between automation and human involvement to ensure that people feel valued and engaged.
Parlez-vous prompt? 🗣️🤖
An essential skill for working with AI is prompt engineering. In a nutshell, it is sort of a mix between language learning and coaching. You need to learn to craft the most insightful questions, in a language and tone that the AI understands, so that the AI algorithms generate great answers. By asking the right questions, setting the right context for the responses, and refining the answers through further questioning, you can gain valuable insights that can inform decision-making and improve outcomes. Otherwise, it’s garbage in, garbage out.
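To make this concrete, here is a minimal sketch of what "setting the context and refining through follow-up questions" can look like in code, using OpenAI's Python client. The analyst role, the model name, and the questions are purely illustrative assumptions, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Set the context: tell the model who it should be and how to answer.
# The "supply-chain analyst" role and the questions below are made up for illustration.
messages = [
    {"role": "system",
     "content": "You are a supply-chain analyst. Answer concisely and state your assumptions."},
    {"role": "user",
     "content": "Which three risks should we monitor in our European logistics network this quarter?"},
]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# Refine the answer: keep the conversation history and ask a sharper follow-up question.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Rank only the risks we could mitigate within 90 days, and explain the ranking."})

second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

The point is not the specific library or model, but the pattern: context first, then iterative refinement, with a human judging whether the answers are actually useful.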
Sauce, please 🤓🍝
Also, although AI-generated answers may look great and sound authoritative, it is always important to double-check the sources, as AI can hallucinate responses. In essence, AI has a hard time saying “I don’t know” and tends to work too hard to provide an output that sounds plausible, with little concern for (or understanding of) the truth. AI can help with that by providing links and references, but humans must still curate the information and evaluate its quality. You should remember not to abdicate your responsibility to AI. It is still your job, and you will remain ultimately accountable for the results. AI should be your co-pilot, rather than your autopilot.
A word of caution (or three) ⚠️🚨
A critical aspect of working with AI is being mindful of data governance and privacy. Organizations must establish clear AI usage rules for their teams and educate them about the risks of misusing it.
Yes, AI can make your organization more productive, but if that productivity comes at the expense of trust, privacy, compliance, and even ethics, is the risk worth it?
Vanderbilt University learned this the hard way when a communication to students about a shooting bore clear marks of generative AI tools.
OpenAI has a very detailed privacy policy, which asks users: “Please don’t share any sensitive information in your conversations.” Everything entered into a conversation can also be used to train ChatGPT, which means that anything you or anyone on your team provides as input to the AI has potentially compromised privacy.
It is also important to note that the AI’s responses will be as biased as the information it was trained on. And even that training raises intellectual-property concerns if the AI creators do not disclose what information was used for training, how the model was trained, and whether they had permission to use that information in the first place.
Finally, there are a number of open questions and concerns about the impact of AI use on human neuroplasticity (will it help our brains develop, or will it cause atrophy?), on career development (if AI can do ALL the lower-level tasks, does that leave enough room for junior employees to join the workforce and grow? Will the roles lost in recent layoffs simply be handed to AI?), and even on human creativity in general (what happens when most art and most content is generated by AI? What original work will future AI be trained on?).
Summary
AI is a force to be reckoned with and can be used for good, but it can also be misused, or forcefully implemented without enough consideration for the long-term consequences. It is important to develop measures, policies, and governance to protect personal data and ensure that AI is used safely, ethically, responsibly, and sustainably. Only then will we fully realize the benefits of this collaboration.
(Note: This article was built in collaboration with OpenAI’s ChatGPT and Google Bard, with a human carefully crafting the prompts, refining and curating the results, proofreading the output, and checking all the sources. Used like that, AI is an invaluable tool for content creation.)