Not all Generative AI tools are created equal
A peek into my GenAI workflow and tool stack
Generative AI is not generic
There are so many GenAI tools out there.
As of this writing, Futurepedia has 5850 tools in 8 categories, and There’s an AI for That (TAAFT) lists 11,656 AIs for 16,604 tasks and 4,847 jobs.
Some would say that’s too many. Some would say it’s not enough. I think both camps have a valid point.
Most tools are useful, but not in the same way or for the same purposes: each was designed with specific strengths in mind.
Anthropic's Claude.ai, for example, excels at summarizing a piece of text and extracting insights from it, compared with OpenAI's ChatGPT.
Case in point: I recently used the exact same prompt on different GenAI tools, applied to a 40+ page document:
"Please help me summarize this document, highlighting the obvious and non-obvious qualitative and quantitative insights that would be relevant and interesting to an audience of HR professionals interested in technology and the future of work."
ChatGPT gave me a rather lazy block of 133 words, summarizing the document with no real insight.
Claude, instead, gave me some 400 words of highly organized content, including both quantitative and qualitative insights (which I, of course, had to validate, and found to be quite accurate).
But when I need to brainstorm an idea to expand one fuzzy concept into a more organized set of concepts (to write about, to teach, etc.), my go-to tool is Inflection AI's Pi. It excels at understanding what you're trying to say and asking questions that help you gain more clarity about the concept. Inflection clearly put a lot of thought into building safety into the tool, and in the process achieved one of the most amenable AI agents out there. You can even download an app that lets you talk to it, making the brainstorming process even closer to a conversation.
Then, after I have summarized with Claude and brainstormed with Pi, I usually go to ChatGPT to expand, validate, and organize the content.
Another valuable ChatGPT feature is the recently added ability to tag GPTs: I can invoke specialized agents (created by myself and others) to tackle specific tasks without the need for intensive reprompting.
But ChatGPT is known for being remarkably unreliable at accurately citing the sources of its information, so when I need hard sources, I use Perplexity.ai, which combines the best of traditional search and AI. Some people in my network tell me it has replaced Google search for them, as you can provide more context in the prompt than you would in a search box, resulting in a more focused set of results. I find myself using it instead of Google search quite frequently. Additionally, each response links to its sources, which makes it much easier to verify that the AI is not hallucinating an answer.
When the research involves looking at scientific papers, Consensus and Scite (https://scite.ai/) are among the best tools, as they are trained specifically on articles, papers, and books.
There are some situations where you need to focus not just on the generated content, but on the relationships between the different concepts covered. This is an excellent use case for tools like ChatMind, which allows you to present the results of your prompt as a mind map.
And, of course, there are lots of other use cases for multimodal generation (e.g., text-to-image, voice-to-text, text-to-video), but that's beyond the scope of this article (I will write more soon about multimodal GenAI and how it can help increase the productivity of HR teams).
❓What does YOUR AI tool stack look like?
Tell me about it in the comments!