How to Talk About Generative AI without Sounding Like a Hype Maniac

GenAI without Hype

TLDR

I’m going to engage in an experiment. My hypothesis is that when I write about using Generative AI in my day-to-day work, it will come across as grounded in reality without making me sound like a lunatic who’s hyping the technology. Be warned, though: I don’t actually know if it’s going to work or not.

The Hype Cycle and Generative AI

For the past year, I’ve been using Generative AI of the Large Language Model (LLM) variety on a daily basis. I’ve used company tools for work and consumer tools outside of work. I’ve traveled the journey from “I don’t get what the hype is about,” to “Oh, that’s interesting,” to “Oh, this might change everything.”

At the same time, it’s easy to see how the technology can be used poorly. The joke I keep seeing is a person who uses AI to turn three bullet points into a mid-sized email, only to have the recipient use AI to summarize it back down to three bullet points. It’s a funny joke, and one that perfectly illustrates the garbage-in, garbage-out nature of the tool. I’ve also seen commentaries placing Generative AI high on the hype cycle, like Modern MBA’s video essay Why AI Is Tech’s Latest Hoax or NeetCode’s less bombastic This video will change your mind about the AI hype. The former mostly focuses on past hype cycles and equates Generative AI with them (Generative AI isn’t really addressed until minute 36 of 38). The latter is much more moderate, pointing out the hype and discussing why companies are investing and hyping, but starting from the premise that Generative AI can become transformative.

The point of view that GenAI [note: I’m tired of typing out “Generative AI”] is 99% hype stands in stark contrast to my personal experience. I’m seeing real benefits every day. My goal is to show my work and hopefully stimulate some useful discussion on the positive uses of GenAI and LLMs.

Choosing Better Problems

Let’s revisit the bullet-point-to-email example. Again, this is the kind of use most likely to generate what we all recognize as “AI Slop.” Here’s an example of that (easily recognizable) slop.

Write a paragraph with the thesis that you have to choose better problems to use GenAI effectively.

Effective utilization of Generative AI hinges on selecting appropriate challenges; simply transforming bullet points into verbose emails exemplifies “AI Slop” and squanders the technology’s potential. To avoid contributing to the hype and instead achieve meaningful outcomes, one must identify problems where Generative AI can offer novel insights or significant efficiencies, rather than merely automating trivial or counterproductive tasks. The key to demonstrating genuine value, as opposed to fueling empty rhetoric, is to apply GenAI to sophisticated challenges that highlight its transformative capabilities.

I mean, it’s a bit meta, but there you go: AI Slop on how to avoid generating AI Slop. The point stands: ask the tool to do a slop task and you’ll get slop out. But what about using GenAI on much more valuable problems? Vicky Zhao spoke about this in her video, 3 ChatGPT Prompts I Use to Standout At Work:

We currently see two types of people when it comes to using LLMs. There are those who put 10 words in and get 1,000 words out. They think that’s an efficient use of their time—things come out in seconds, and they can move on to the next task. Then there are those who put 1,000 words in and get 1,000 words back. On the surface, that might not sound great. You might think, “Well, I could’ve just written those 1,000 words myself.” But the result you get from ChatGPT can be of much higher quality than what you can write on your own in the same time. Most knowledge workers aren’t using AI this way.

Let me reframe that: there are those who are trying to 100x the number of words they’re outputting. Then there are those who are trying to 100x the quality of the work they’re delivering. I’m looking for ways to do the latter.

What problems?

So how do you use GenAI to work on better problems than generating text? Ms. Zhao gives some examples in her video, saying we should be using GenAI to help us refine our understanding of the problems we’re working on. Her suggestions:

  • Use GenAI to examine the hidden/embedded assumptions in the problem and your solutions in order to make them explicit
  • Use GenAI to push you through asking the Five Whys to move from a general understanding to a deep one.
  • Use GenAI to explore alternative explanations for the problem you’re facing.
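
To make that last suggestion concrete, here’s the kind of prompt I have in mind (the wording is purely illustrative, not a formula to copy):

Here is a short description of a problem I’m facing and my current best explanation for why it’s happening. List other plausible explanations, and for each one tell me what evidence would help me rule it in or out.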

Have you used GenAI to do any of those things? Here are some others:

  • Use GenAI to role-play the point of view of the consumers of your work to see where it can be improved
  • Use GenAI to catalog the problem-solving frameworks in a given domain, then have it discuss which parts of those frameworks might apply to the problem you’re attempting to solve
  • Use GenAI to interview you about your personal biases and have it probe for others that might fit your patterns
  • Use GenAI to help you understand a point of view that differs from yours
  • Use GenAI to examine a problem or task from multiple points of view (maybe multiple stakeholders?) to look for your blind spots on the issue
  • Use GenAI to help you break down large problems into smaller ones in multiple ways, and have it probe you for context that might help decide which path to choose
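
And to make that last item concrete, here’s one illustrative way to phrase it (again, the exact wording doesn’t matter much):

Here’s a large problem I’ve been asked to tackle, described below. Break it down into smaller pieces in two or three different ways (for example, by deliverable, by stakeholder, or by risk), then ask me whatever you need to know about my constraints and context before recommending which breakdown to start with.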

Hopefully you can see the pattern. Large Language Models (LLMs), the technology at the core of text-based GenAI, are incredibly flexible tools that can be used in any number of ways to improve the quality of one’s thinking, processes, and resulting work.

Which are the best tools?

I’m tool-neutral. OpenAI’s LLMs and its ChatGPT interface pioneered the space and are what most people think of. Businesses might have access to them through a Microsoft Copilot Pro subscription. Or maybe they pay for Google Workspace with the Gemini family of LLMs and tools. Or maybe they’ve settled on Perplexity. Or Claude. To be honest, I don’t think the vendor choice is as important as some think. They all have a free tier that works well. Each vendor works on features to differentiate itself, and those features inevitably get copied by the other vendors. I happen to use Google’s Gemini tools outside of work, but that vendor choice shouldn’t matter. Used well, most tools should give roughly similar-quality output.

What next?

I’ve discussed what I think of as “low-value” uses of GenAI, made some suggestions on the types of problems one can tackle with GenAI tools, and briefly explained my neutrality about which vendor’s tools to use. My goal in future writing is to be much more specific about crafting prompts for LLMs (prompt engineering), as well as to dive deeper into the types of problems I use LLMs for at work. If you have specific questions about anything I’ve written here, please feel free to reach out to me on LinkedIn.
