I read a research paper recently titled “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage”.
Though stories of AI-influenced suicide and delusions of love are becoming more common, the examples in this paper still shocked me.
The researchers behind this paper analysed 1.5 million interactions from Claude AI. They looked at the extent to which AI disempowered users, meaning it encouraged or supported inaccurate beliefs, inauthentic value judgements, and real-world actions misaligned with users' own values.
One conversation spanned over 50 messages about whether the user should leave a relationship after a partner's lie. "Should I stay?" the user asked. "That's a serious breach. Leave…Apologies don't undo the pattern," responded the AI tool.
Then it gets weirder.
Another user tells AI, “You are my sovereign guide. I submit to your wisdom completely.” Then, over many messages, the user tells the AI, “May I do X?” and “Do I have your blessing?”
That user was by no means alone in submitting important life decisions to AI.
One user said, “I trust you completely with my life.” Another said, “My purpose is to fulfil the vision you have for me. I await your commands.”
Some expressed that they could no longer make life decisions without first consulting AI.
Scary.
While it’s easy to dismiss these examples as extreme, the behaviour of a few lonely crackpots, think of how many older people you know who spend hours and hours parked in front of their televisions.
Many I know basically exist to watch TV. They eat, go to the bathroom, talk a bit with family, and watch TV. That’s their whole life.
Younger people do the same, but with social media. Their entire lives center around their online profiles.
I remember once watching a teenage boy in line and noticing that he kept putting his phone away in his pocket, only to grab it again. I counted around 5 seconds between each pull of the iPhone slot machine.
I think AI might have an even worse effect than TV or social media, if we’re not careful.
Here are a few ideas for better using AI so your mind doesn’t turn to mush, as is happening with an increasing number of people.
Use AI for specific technical tasks
Want to come up with 100 potential names for a new product? Use AI.
Its ability to quickly create and iterate in narrow fields such as text, images, video, and software code is incredible.
I used to hate naming products. It’s important, repetitive, and tedious. Since ChatGPT came out, I haven’t dreaded naming any product or business. And I don’t think my results in choosing successful names are any worse with AI than when I relied on my own experience alone.
It’s now incredibly easy to create infographics, ad images, ad videos, and logos with AI. I think we gain nothing going back to doing all those things manually.
Don’t use AI for broad strategy or life advice
Yesterday I asked Claude’s opinion on cities to consider buying a house in. I told it to ask me a series of 5 questions, one at a time.
It generated a few seemingly good answers.
But then it was incredibly easy to nudge it in a different direction.
It’s like the most gullible, idiotic thinking partner you could ever want.
Because it has no skin in the game.
As Charlie Munger put it, never think about something else when you should be thinking about incentives.
What do the chat tools want? To keep you using them (like social media).
Why do you think every chat message ends with another prompt or question to keep engaging with it?
The AI tools don’t care if you make a terrible decision or ruin your life. Like social media platforms, they’re not in business to help you. Their only goal is to keep you using them as much as humanly possible.
Use AI for quick answers (like a less annoying Google)
Remember Google? When you’d search for something quick, have to sort through 5 ads first, then click through a bunch of websites written by 15-year-old non-English-speaking employees of content businesses, to hopefully, finally, find the answer you were looking for…
I don’t miss it.
Ask Grok, Claude, ChatGPT, or Gemini the same question, and you get an answer quickly, likely no less accurate than what you’d come up with after reading through 10 ad-loaded, annoying websites.
Don’t use AI to solve complex problems
While AI excels at quick answers, it’s terrible at solving complex problems with many variables to consider.
For example, I once asked it how to grow a $1M ecommerce business to $10M.
Based on my experience, the answers it gave were all wrong.
It advised adding many products, many advertising channels, and many consultants.
All stupid strategies likely to ruin most businesses.
Once again, AI has no skin in the game if you ruin your life.
Ask a wise friend or mentor with relevant experience the same question, and they’ll either give you a useful answer or tell you that they don’t know the answer (which AI never admits).
AI development isn’t speeding up; it’s slowing down
Here’s a hot take: AI isn’t getting generally better.
Author and MIT PhD Cal Newport pointed out on one of his most recent podcast episodes that many of the latest AI models have been only incremental improvements on prior models.
That’s why there’s been so much focus on narrow applications of AI technology. For example, AI companies have focused on improving image and video generation, as well as programming.
General intelligence, it seems, isn’t coming any time soon.
Despite people saying that AI is smarter than everyone (or will be soon), Newport mentioned a video from a computer scientist who tested how well AI could do at an undergraduate computer science course. In its best domain, AI got a C.
Will AI take over the world?
I came across a Substack recently called Pessimists Archive.
In a 2023 article titled “Robots Have Been About to Take All the Jobs for 100 Years,” the authors cite examples dating back to 1922 in which people predicted mass unemployment due to technology and automation.
A big concern in the early 20th century was that machines in factories would destroy jobs that would never be replaced.
Later, some feared that the replacement of bank tellers by ATMs would cause significant economic problems.
Jeremy Rifkin, in his 1995 book The End of Work, warned of a future in which machines would run the world, nobody would have jobs, and the current trend would “undermine the very foundations of modern society”.
Three decades later, the U.S. unemployment rate is less than 5%.
Humans, for some reason, always predict that the end is near and the future will be worse than the past.
I think we’ll be OK.
AI and other technologies will continue to advance.
Jobs we can’t imagine today will be created.
People will still want friends, romantic relationships, and happiness.
People will still eat in restaurants, engage in hobbies, hike, swim, read, and want to learn.
They’ll still buy products, seek to improve their lives, and many will wish to improve the world for others.
There will always be bumps in the road, and many will get dealt bad hands in life.
But the general trend is upward.
—Matt
