AI Will Disrupt Jobs. But Not the Way You Think.

“AI is coming for your job!”

“Government will need to step in and give a universal basic income (UBI) to everyone!”

“There will be mass unemployment and civil unrest!”

Or so I keep hearing these days in different groups. I am a bit more sceptical. I am simultaneously hyped about and underwhelmed by AI.

Image generated using ChatGPT


Some Famously Wrong Predictions

We overestimate what can happen in the short run and underestimate what does in the long run.

  • There’s a prediction from the 1950s, attributed to IBM’s then chairman, estimating the world market for computers at 5 units. Not 5 million, not 5 thousand. Just 1 + 1 + 1 + 1 + 1.

  • IBM, probably in the 1980s, thought it would be mind-boggling if anyone ever needed more than 128 KB of memory.

  • In the 1990s, Bill Gates thought there was little commercial potential in this thing called the internet, and in 2004 he predicted the end of spam email “in two years.”

  • The internet prediction was echoed in 1998 by Paul Krugman, later a Nobel laureate in Economics, who expected its impact on the economy to be “no greater than that of fax machines by 2005”.

  • In the mid-2000s, around the iPhone’s launch, a management consultancy advised Nokia against entering the premium smartphone market.

If people with the intelligence of Bill Gates and Paul Krugman, and companies as massive as IBM and Nokia, got their predictions of the future so fabulously wrong, it doesn’t matter if I do too. And on the off chance that I am right, there’s (possibly? hopefully?) some decent upside.

But, before that! Let’s try steelmanning the argument about the rise of AI.


Steelmanning The Rise of AI

Claim 1: The rise of technology is not linear. It’s exponential. Humans suck at predicting exponential growth. What we saw of AI 5 years ago is nothing like what we see today, and neither is anything like what we’ll see 5 years from now.

Claim 2: AI will automate a whole bunch of jobs across diverse fields. These range from creative work (movies, music, art, animation, fiction writing) to business support (data analysis, marketing, programming, customer support) to teaching, scientific research, and medicine.

Claim 3: Artificial General Intelligence (AGI) is around the corner (~5 years). The AI we currently have is ‘artificial narrow intelligence’ (designed for specific tasks — playing chess, generating natural language text, generating images). AGI will have human levels of general intelligence — in that it would be able to self-learn autonomously, have general problem-solving ability, have human-like intuition and contextual understanding, and have an ability to plan and prioritise tasks towards a certain goal. Once we hit AGI, it would spur the next level of growth, where the AGI deploys itself towards diverse goals and effectively self-learns to infinity.

Claim 4: Unemployment will hit 30+% levels. As a result of this automation and AGI, there would be high levels of human unemployment; some estimates put it at 30% or more.

Claim 5: Need for UBI. Given such high levels of unemployment, everyone would need a Universal Basic Income (UBI), and without it there could be large-scale civil unrest.

I agree with the first two claims and disagree with the third through fifth.


Why I Disagree

Reasons for disagreeing:

  1. AGI is described as some sort of magic solution that would be able to do everything!

    1. No, seriously! OpenAI’s then chief scientist Ilya Sutskever used to lead employees in a chant, “Feel the AGI! Feel the AGI!” It feels straight out of Silicon Valley (the TV series). Link to The Atlantic’s article from 2023

  2. Humans will invent new jobs. Useful jobs and superfluous jobs. As we’ve done in the past — be it when the printing press was invented, or the steam engine was invented, or automobiles gained mass adoption, or the internet moved from bulky desktop PCs to compact hand-held devices.

    1. Framing AI as something that will ‘eat away’ a vast set of jobs, without that loss being compensated by the rise of new jobs, is a zero-sum framing. If you believe there’s a possibility of value-add in the future, that’s a positive-sum game. If value is being added in the future, someone is paying for/consuming it and someone else is creating it. With two ends to this transaction, there would be a new set of people creating that value (with/without AI). (Tangential link: I’ve written about zero-sum and positive-sum games in general, here.)

  3. Trust capitalism to save itself. Capitalism relies on humans consuming stuff and paying for things. Humans need employment and money to spend. Capitalism and the ‘powers that be’ necessitate a functioning economy with a wide mass of consumers who have spending power.

  4. Even if AGI were well defined, I don’t think the current approach of parameterised pattern recognition that’s led us to LLMs will take us to AGI. Getting there would require a radically different approach, and I don’t know what that approach would be.

    1. OpenAI and Microsoft define AGI as the point when OpenAI has hit $100 billion in profits.

    2. I am sceptical of news about an LLM topping the JEE or medical entrance tests. LLMs rely on memorisation, and these tests can be gamed through memory. The Abstraction and Reasoning Corpus (ARC) offers a harder-to-game test: easy for humans, hard for LLMs/AIs, and with $700k in prize money. Link.

  5. Humans will value the human angle of creation, even if it carries flaws. Maybe because of the flaws. Chuck aka Deepak explains it here, using Ozzy Osbourne’s last concert appearance.

  6. AI is good at getting us from 0% (or rather, 40%) to 90%. In a lot of contexts, going from 90% to 98% is painfully difficult and almost as time-consuming as creating the whole thing from scratch.

The sixth point is directly from Sajith Pai’s excellent AMA, and I quote the relevant section (but do read the full piece):

And I remember this wonderful anecdote from Chris Dixon in a podcast or so, where he said that a good CS student from Stanford or Berkeley or any of the top universities can spin up a self-driving car app in like, you know, which is 90% good in two days over a weekend, but 90% ain’t good enough for the streets, right? It’ll take the same student two years to get to 95%, still ain’t good enough. And it’ll take him 20 man years to get to 98%, which still won’t be allowed in the streets, right? So all the values now move to the edges. So when creation becomes easy, all the edge cases begin to matter now, more than ever.

So the defensibility comes from understanding the workflows of your customers, being able to understand what really are the pain points and map out what the challenges are and build great UI to kind of solve those edge cases. Then maintenance, okay? Someone needs a throat to choke and that is you as a SaaS vendor, right? That’s not going to go away. And then finally, distribution.

Dwarkesh Patel, in his blog and podcast, has asked a solid question that helps ground my intuition every time I hear great news about an AI model’s latest achievement.

What is the answer to this question I asked Dario over a year ago? As a scientist yourself, what should we make of the fact that despite having basically every known fact about the world memorized, these models haven’t, as far as I know, made a single new discovery? Even a moderately intelligent person who has so much stuff memorized would make all kinds of new connections (connect fact x, and y, and the logical implication is new discovery z).

From his excellent blogpost collating his thoughts around AI.


Seven Predictions About AI for 2040

Some predictions, then. Only falsifiable predictions are useful, irrespective of whether they turn out to be correct.

Prediction #1: Hand-made in the digital world. As AI content improves and becomes harder to distinguish from human content, people will find different ways to signal that something was made by them. Some niche platform, the equivalent of Mubi v/s Netflix, will exist to curate only human-made things.

Prediction #2: The AGI benchmark will not be crossed by 2040. However we define AGI — whether as OpenAI hitting $100B in profit or some model beating the ARC test cited above — it will not be crossed by 2040.

Prediction #3: Global unemployment will remain <5% in 2040. Global unemployment is currently at 4.89% and has decreased every year since 2003, except in 2009 in the wake of the global financial crisis and in 2020 due to Covid-19 (source). Barring a random global crisis, it will remain in the low single digits (<5%) in 2040.

Prediction #4: More superfluous jobs — AI Compliance Manager, Prompt Engineer, AI Whisperer etc. — will rise. How to make this testable? LinkedIn has a report stating that 20% of hires in APAC have job titles that didn’t exist in 2000. I’d say that figure will be roughly similar in 2040: around 20-25% of hires in APAC will have job titles that didn’t exist in 2025.

Prediction #5: At least two of the top 10 companies will primarily be distribution companies. As creation becomes easy, curation and gatekeeping become more valuable. At least two of the world’s 10 most valuable companies will be distribution companies, and they will use some form of signalling human v/s AI content. Netflix and Meta are examples of distribution companies here, but Alphabet, Amazon, and Microsoft are not.

Prediction #6: There will be no AI-led surgery (not AI-assisted!) and no completely AI-led scientific paper published in Nature by 2040. AI-assisted papers will certainly exist, but a completely AI-written paper, with no human prompting etc., will not.

Prediction #7: The first Emmy, Grammy, Oscar, or Tony awarded to AI-created content will be won by 2040. Partly because of the novelty of that art, partly because the thing may be that good, but mostly because an AI company will want more widespread/‘prestigious’ validation. Somewhat similar to Netflix badly wanting a Best Picture Oscar.

I am most iffy about prediction #6. If I am wrong on it, prediction #2 may not be off the table either!

Sayash Kapoor and Arvind Narayanan run a newsletter called AI Snake Oil that debunks a lot of AI hype. If you’re interested in more on this from actual practitioners/researchers in the field, I would particularly recommend their post on AGI not being a milestone.


Tying It All Together

In the past two decades, I’ve seen a few hype cycles: CRISPR, 3D printing, VR/augmented reality, the Internet of Things.

AI is different, for sure.

Partly because it has actually trickled into day-to-day usage. I am surprised by the difference in adoption speed: high spread among individual consumers versus a relatively slower spread within businesses at an organisation-wide level.

So, I am hyped about the different possibilities and how we humans will use it for different use cases. I am less hyped about the narrative around its impact on jobs.

AI will definitely change how we work. But not quite why we work. We derive meaning from feeling valued in a society and ‘giving’ back to it (through our work, through charities, through our creations). We judge that by how our contributions are valued in the economy. So we’d find a way to keep doing that.
