The Problem With Big Tech



How should media organizations cover artificial intelligence and the giant technology companies that have hitched their wagons to it? More interrogatively, according to Ed Zitron.

As an Englishman who lives in Las Vegas and runs his own public relations firm, he is an unusual candidate for becoming one of the internet’s most popular A.I. skeptics. But Zitron established himself as one of the most pugnacious critics of Big Tech after he penned a 2023 newsletter about tech products’ drift from quality toward mindless growth. Headlined “The Rot Economy,” the piece quickly went viral. Zitron’s newsletter now has more than 50,000 subscribers. More than 125,000 accounts follow his posts on Bluesky, plus 90,000 on X. He hosts Better Offline, an iHeart podcast that questions “the growth-at-all-costs future that tech’s elite wants to build.”

Oftentimes Zitron takes aim not just at the tech companies trafficking in an A.I.-focused vision for the future but also at the media organizations and star technology reporters that cover them. Some journalists believe in covering A.I. as an ongoing and potentially enormous breakthrough with profound, dangerous ramifications for society and enormous profit potential for tech companies. Then there is a sizable camp, of which Zitron is one of the most prominent members, that reacts with deep skepticism and hostility to the tech industry’s embrace of A.I. and messaging around it.

While A.I. boosters jostle with each other for the levers of federal power over the next four years, I anticipate that intense outside criticism of the whole sector will only grow. And in that world, few commentators have found a more engaged audience than Zitron.

Recently, Zitron and I talked about the state of tech coverage, the questions he doesn’t think enough people consider, and an ongoing fight over A.I. skepticism. Our conversation has been edited and condensed for clarity.

Alex Kirshner: I don’t like to ask questions about “the media,” because we are not a monolith—but I wonder what the most frequent bad practices are that you see in coverage of A.I.

Ed Zitron: It starts with the will of the markets. There’s a worryingly large number of reporters who write with the immediate acceptance that A.I. will become artificial general intelligence, or that A.I. will be good, or that this stuff is already proven and already powerful, because there’s so much money behind it. This is a huge mistake, because it assumes the premise that anything OpenAI or Anthropic says is important.

On top of that, I don’t know if people really do enough work to fully understand how much these models can do. These reporters understand what they’re talking about, but they don’t go to the next step to say, “Is this actually important?”


But I think that the biggest thing is that it feels like we’re living in this insane cognitive dissonance. We have the largest generative A.I. company burning $5 billion or more a year for a product that is yet to really prove itself. Now, some would argue, “Well, ChatGPT has proven itself; 200 million people use it a week.” That doesn’t mean it’s proven. It’s the most prevalently discussed tech right now in every outlet everywhere all the time. A bunch of people are being driven by a media campaign. Of course ChatGPT would have that many users. It’s still not that useful.

You wrote recently about how Microsoft predicts A.I. will become a $10 billion annual business, but there’s no “A.I.” business unit on Microsoft’s earnings reports, so measuring that is hard. Along that line, I saw Jeff Bezos speaking to Andrew Ross Sorkin last month, and he called A.I. a “horizontal enabling layer.” It strikes me that this lack of a dedicated division might make it easier for companies to get away without showing their work. Why do you think they present the issue this way?

Well, I’m going to use a technical term: Jeff Bezos is talking bollocks.

His statement was, “Modern A.I. is a horizontal enabling layer. It can be used to improve everything. It will be in everything.” Jeff Bezos, what the fuck are you talking about? It doesn’t mean anything. A horizontal enabling layer “like electricity” means nothing. The actual thing to say about this is “Jeff Bezos says complete nonsense on stage.” A.I. is like electricity? Electricity has immediate use cases.

On top of that, we are years into generative A.I. Where is the horizontal enablement? Where is the thing it’s enabling? Two years. Show me one thing which you use that you go, “Oh, damn, I’m so glad I have this.” Show me the AirPlay; show me the Apple Pay. Show me the thing that you’re like, “Goddamn, I’m glad this is here.”

I can’t think of one, and I’m exactly the little pig that would want it. I am an enthusiast. I love this crap.

Do you worry that you’re missing someone’s use case, even if the product at issue sounds downright freakish to you or me? For example, what if there’s a person who struggles to talk to people and might gain confidence to be more social in the future from chatting with bots?

Sure. And I hedge my bets fairly clearly. The problem with the “I might be missing something” thing is that you can acknowledge that, but the thing you’re not missing is how much these companies are making. And also, how there’s not a thing yet. Nobody using ChatGPT can pretend that this is the future. Basic utility isn’t there. People like it, fine, but it’s not this revolution.

If the immediate thing you do when you think “Maybe I’m missing something” is to say, “Well, I best trust what the tech company is saying,” you’re failing your readers. By all means, go and talk to some of the many, many experts out there who will explain these things to you.
But also, find the product. Find the thing. If you can’t find the thing, say you couldn’t find the thing. Don’t do the work for the companies.

When you say that we need to “find the product,” when it comes to generative A.I. companies like OpenAI, what do you mean?

Find the actual thing that genuinely changes lives, improves lives, and helps people. Though Uber as a company has horrifying labor practices, you can at least look at them and go, “This is why I’m using the app. This is why this is a potentially world-changing concept.” Same with Google search and cloud computing.

With ChatGPT and its ilk—Anthropic’s Claude, for example—you can find use cases, but it’s hard to point to any of them that are really killer apps. It’s impossible to point to anything that justifies the ruinous financial cost, massive environmental damage, theft from millions of people, and stealing of the entire internet. Also, on a very simple level, what’s cool about this? What is the thing that really matters here?

A few weeks ago, Casey Newton, the prominent tech journalist at Platformer, wrote critically about A.I. skepticism in a piece called “The Phony Comforts of AI Skepticism.” It made a lot of social media waves for describing a prominent school of thought about A.I. as being that it “is fake and sucks” without thinking about A.I.’s potential, for good or ill. Do you think that is an accurate description of a school of thought? If so, are you in it?

There is nothing comfortable about saying A.I. is going to collapse. There is going to be a meaningful hurt put on the stock market. Tens of thousands of people in these tech companies will lose their jobs. There will be a contraction in tech valuations and likely a depression that comes as a result of this. There is no “phony comfort.” What I and others are talking about is bordering on apocalyptic.

I’m interested in media coverage of this sector because I’m media myself and can’t look away. But you’ve gained a big audience talking about this stuff, and I bet most of your readers aren’t reporters. Why do you think this beat has compelled them?

So a lot of my listeners and readers are not tech people. I have people who are from all sorts of walks of life, and everyone is being told artificial intelligence is the future. It’s gonna do this, it’s gonna do that. People are aware that this term is being drummed into them repeatedly.

I think everyone, for manifold reasons, is currently looking at the cognitive dissonance of the A.I. boom, where we have all of these promises and egregious sums of money being put into something that doesn’t really seem to be doing the things that everyone’s excited about.

We’re being told, “Oh, this automation’s gonna change our lives.” Our lives aren’t really being changed, other than our power grids being strained, our things being stolen, and some jobs being replaced. Freelancers, especially artists and content creators, are seeing their things replaced with a much, much shittier version. But nevertheless, they’re seeing how some businesses have contempt for creatives.

“Why is this thing the future? And if it isn’t the future, why am I being told that it is?” That question is applicable to blue-collar workers, to hedge fund managers, to members of the government, to everyone, because this is one of the strangest things to happen in business history.

You have a football in a case behind you. I write and podcast a lot about football, and sometimes I think about applying a football coverage lesson to A.I. Every coach in the NFL forgets more about football in a week than I will know in my life. But they can’t all be good coaches, let alone be right all the time. I accept that the CEOs of Nvidia and Apple and Microsoft and OpenAI know more about A.I. than I do, but they can’t all be right all the time. How should the media balance a goal to not assume we know more about these companies’ plans than the companies themselves do with the possibility that a CEO could, in fact, just be selling you a bad idea?

A big starting point for me has been to ask: What is it I don’t know about these people? Is there some greater design, some genius, some intricacy behind their businesses and the people they talk to, or their companies’ structure? There must be something.

After probably a good hundred hours of looking through stuff, there isn’t. These are regular people. They are people that are well accomplished in the sense that they’ve been heads of businesses or units. But behind the curtain, from what I’ve seen, there is no intricacy. There’s nothing hiding. There’s nothing magical about Satya Nadella, Sundar Pichai, or any of these people. If there was, they’d be showing it by now.

There are good football coaches who understand personnel and know how to use the things and get the most out of them because they truly understand football. The worst coaches in the NFL are usually the ones that believe they’re mega-geniuses that have seen everything. I think we’re in the Josh McDaniels era of tech CEOs.

These are people who, because they’ve been part of massive legacy systems, and because of things that have happened around them, say, “I understand how greatness works.” No. Josh McDaniels was the offensive coordinator for Bill Belichick in a nearly perfect system. And I think you can tell, looking at Bill Belichick at this point, it might have just been Tom Brady.

Who is the Tom Brady in this situation? Is it just the markets?

That’s a great question. The answer is multiple Tom Bradys. It’s Azure, Google Cloud, and smartphones.

Everything that has driven tech stocks up and up forever.

Yes. There may just be no more lands to conquer. There may just not be things to draw value out of. These people are all Josh McDanielsing, in the sense that they found all the easy stuff. They found all the stuff that they could pull out just by throwing more money at it. Zoom is a company that grew based on the fact that, “Hey, I want to easily talk to someone on video and audio.” Now they’re adding A.I. bullshit because they don’t know what else to do because they have to grow forever. That’s where they all are.

These aren’t companies run by people that build products. These aren’t companies that win markets by making a better thing than the competition. These people are monopolists. They’re management consultants. They’re people that only know how to extract value by throwing money at stuff. Except now we’re at the end of that.

This is actually genuine sympathy for the media: How the fuck do you cover that? Even when I write this stuff, I feel a little insane. Because you have to look out the window and be like, “Hey, A.I. is the future. It’s going to be amazing.” Then you look at the numbers and products, and it’s not remotely like this. Everyone’s kind of in on this and trying to keep it going, because once everyone doesn’t, everyone knows there’s a collapse. Everyone knows deep down that something’s wrong.




