
How to avoid that your post about AI helps the hype

If we're not cautious, we may accidentally feed the AI hype by talking about it in specific ways.

When we hype up the technology, we mostly help the people who put money into it. This post isn't about those people or that money, maybe they could use the help… my point is that they are irrelevant when we want to understand the merits of AI. They muddy the waters and overshadow the important questions.

There are plenty of questions to consider. Are LLMs helpful, can they solve specific problems well, should we use them? Sometimes the answer is yes, sometimes it is no. There are grey areas: some find use and others don't.

Do they increase productivity, can they do what humans do? It really depends. And that means we should weigh the options before we hype.

When we hype up and discuss merely what is or seems great, we help the powerful billionaires who consistently pour money into the technology. In addition, we might forget to do justice to the many ethical problems inherent to the technology, especially around the implementers and implementations it has today. There are problems ranging from sourcing rare metals for chips to traumatising human data classifiers, from an enormously larger climate footprint during training and use to the mass theft of people's creative works.

So, when do we risk accidentally overhyping AI?

Forget that it's a machine

We might say things like ‘I'll ask [tool]’, ‘he/she said’, ‘he/she came up with’, ‘he/she told me’. Or ‘he/she thinks’.

Such phrases humanise the machine. When we humanise our pets, that's cute (and not just cute: animal cognition is a genuine field of philosophical enquiry). When we humanise machines, we help the billionaires.

This is too important to not be pedantic: an LLM can respond to words with words based on statistical likelihood, and while that's sometimes incredibly impressive and can seem human-like, any intelligence that reveals itself is an illusion. It's unlikely to let billionaires make scientific discoveries in fields they don't have a background in.

The term “artificial intelligence” was made up as a way to make a branch of scientific research more attractive to potential funders. A lot of the tech we see today is neither artificial nor intelligent. It's powerful and impressive technology, sure, but it's machines.

Say “it is inevitable”

Those who've put endless amounts of cash into the tech, like Microsoft, which put 100 billion into OpenAI, may feel AI is inevitable. They invested, they need returns, and they use everything in their power to get there, including their dominance in the market.

Inevitability suggests some kind of universal appetite for the tech. And there's appetite, for sure. But the fact that a lot of software today is begging users to start using its AI features suggests otherwise.

Like Google Workspace, which will not let you have any smart features if you don't also use its AI.

[Screenshot of a Slack upsell dialog: “Unlock AI for 50% off. Slack's Pro plan now includes AI features. Upgrade by July 18th to get 50% off your first 3 months.”] Slack offers a 50% discount if you enable AI.

AI is not inevitable for us, the people. Not at home, and not at work either, when we're making decisions about technology.

Again, AI could be helpful. Granted, AI could even be the only way to achieve something. But AI could also be an unnecessary, unsuitable or needlessly extravagant solution to a specific problem. We've seen a lot of that too.

Mandate the use (without qualifying how or why)

Increasingly, the C-suite is demanding AI use, without qualifying how or why (maybe they never hear no). Without substantiation and a proper case-by-case analysis of AI versus non-AI usage, this is merely hype.

At Shopify, developers must use AI. Its CEO said:

Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI

(From Shopify CEO Tobi Lütke's memo “Reflexive AI usage is now a baseline expectation at Shopify”, posted on 7 April 2025, on a social media site I won't link to)

The CEO of Axel Springer, the company that owns Politico and Business Insider, said his employees need to explain when they don't use AI.

Julia Liuson, Microsoft's President of the Developer Division and GitHub, said “AI is no longer optional” and that it should factor into performance evaluations, emailing top management to say:

AI should be part of your holistic reflections on an individual's performance and impact.

Anecdotally, I'm hearing from friends all over big tech that they are rewarded if they do more with AI. Not doing so could be seen as bad performance and therefore threaten their jobs, especially in countries where employees aren't protected well.

Predict that “it saves time”

You'll only really know whether it saved time afterwards. Predicted time savings are purely marketing when applied to real-world scenarios.

Clearly, writing a 1000-word essay takes longer than asking a chatbot to generate it, but most organisations would require a lot of editing and review before they can publish the end result. Vibe-coding a business-critical app may take a few days instead of months or years, but cleaning up the (security) bugs could take longer, and cost more.

An experiment from METR, a non-profit founded by a former OpenAI researcher, showed that developers who thought AI was saving them 24% of their time actually took 19% longer when using AI. Simon Willison suspects it may be due to a learning curve; it will be interesting to see their next findings.

The prospect of time saving may well warrant the time and effort spent experimenting, and hearing about actual savings from organisations that did experiment seems valuable. Claiming time savings based on predictions alone, however, merely adds to hype.

And even with time saved, “productivity isn't value”, as Salma explains in her post The promise that wasn't kept; real value is where it's at.

“You'll stay behind”

Some AI marketing suggests that those who don't use it (or not a lot) will be left behind and miss the boat. While the rest of the world moves on and enjoys technological bliss, you'll struggle without it.

First, it's doubtful that the technology is just bliss, or that missing out is a struggle. Salma's post explains that, and so does Heather Buchel's thoughtful reply, in which she asks when she can move on to the more creative and fulfilling parts of her job.

Second, financially, an organisation could ‘win’ by avoiding AI (if we want to go so far as to see the world as a tournament). Third-party AI prices are likely to go up, as those who invested billions will want returns.

It's unclear what level of AI adoption will get folks to stay ahead or behind. Time will have to tell. Before we know, these suggestions mostly help the hype.

Wrapping up

I agree with Declan Chidlow that we need constructive AI criticism. I'm hoping to offer that here, and, as always, I am very much open to hearing what others have to say.


Originally posted as How to avoid that your post about AI helps the hype on Hidde's blog.

Reply via email