Reading List
We tend to overestimate computing
I love computers, and I love what they can do. The possibilities are endless. At the same time, tech leaders like to make big promises, and they don't always come true. As humanity, we tend to overestimate computing, while arguably underestimating things that matter more.
In 2013, Elon Musk said he'd build a hyperloop connection between two US cities within five years. “If it was my top priority, I could probably get it done in one or two years”, he added. His hubris isn't unique. Also in 2013, Jeff Bezos promised same-day package delivery by autonomous drones, in “four, five years”. In 2004, Bill Gates promised to end spam “in two years”. Nike promised NFTs that would last forever. Ten years ago, Geoffrey Hinton said AI would replace radiologists within five years. If you've worked in tech for a while, you've probably seen your fair share of technology that was overpromised by its creators.
More than a decade later, we don't have hyperloops or autonomous drone delivery. Spam still exists. Nike's NFTs return an error message, as the project switched to a cheaper hosting plan. We are facing history's largest shortage of radiologists, and they have not been replaced by machines. Even though some of the most resourceful people and companies on the planet tried.
Overpromising is part of the toolkit of business people. It can serve purposes like marketing and fundraising. Business goals aren't bad per se, but we owe it to our human intelligence to judge merits with a wider lens. Or we risk undervaluing what makes life special, including creativity and the arts. We're capable of more than transactions.
The undervaluing of the arts is often monetary. When Mark Zuckerberg was asked whether he thought creators should be paid for their work, he said:
I think [creators] tend to overestimate the value of their specific content in the grand scheme of this.
(From: Why Mark Zuckerberg thinks AR glasses will replace your phone (The Verge, 2024))
And sometimes it all seems a big misunderstanding. In an interview, Sam Altman said “creativity has been easier for AI than people thought”.
What could he mean by that? And what output did he see… the “creative” things I've seen LLMs produce, I found dull, bland, unsurprising and not creative. Which made me think… what exactly has been easier for AI? Do we have a different definition of creativity?
In the interview, Altman continued:
You can see Dall-E generate amazing images, write creative stories with GPT-4, whatever…
(From: interview with Sam Altman at WSJ Tech Live (2025))
Ah, okay. This positive remark isn't so much about AI in principle, or about creativity. It's about the output of the products he sells. Fair enough. But do you see how, for his purposes, he diverges quite a bit from what creativity really is?
We'll talk more about creativity in the other posts; let's focus on computing first. What is computing?
Computing numbers
For a long time, “computing” merely meant the manipulation of numbers. The activity goes back to around 2000 BC, when calculation in Babylonia (roughly present-day Iraq) happened on clay tablets, like this one:
If that one looks like a pie chart, this one kind of resembles an Excel sheet:
This comparison isn't far-fetched: computer science started as mathematics, which started as philosophy. Today, computing is much more about the manipulation of data. Computing is often about completing tasks automatically. Computing is, more generally, throwing technology at a problem.
Computing provability
Throughout the history of computer science, people have set out to compute a lot of things. Charles Babbage wanted to build an engine to compute mathematical equations and, after that, a machine that could also store information: the Analytical Engine. They existed only as concepts in essays, and were later documented and use case-ified by Ada Lovelace. She warned:
it is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine
Years later, people tried to solve more mathematical problems with machines. Alan Turing wanted to make a machine that could decide for any mathematical statement whether it was provable or not (see Turing's Vision for more on that).
About two decades later, John McCarthy, then an assistant professor at Dartmouth College in the US, proposed a summer project on a new field that he, in that proposal, called “artificial intelligence”, allegedly because that would be more intriguing than “automata studies”. A marketing phrase, in other words, as Karen Hao explains in Empire of AI.
McCarthy suggested 10 men should get together (yeah, I know). They got together for six weeks, though some didn't even stay the whole time. The plan was to solve computer science, or at least some of its major problems. They listed 7 in the proposal, the 7th being “creativity” (true story).
They made some progress, but of course the field wasn't done; decades more research would follow. Grace Hopper invented the compiler and laid the groundwork for COBOL, Karen Spärck Jones invented inverse document frequency (still used by search engines), and we're still in the middle of it.
Throughout this history, there's been plenty of overestimation. Maybe it is
Today, when some claim artificial intelligence is going to do this or that, I think about what Ada Lovelace said: ideas about capabilities may be exaggerated. This still happens.
Originally posted as We tend to overestimate computing on Hidde's blog.