After a lot of turmoil, with their most vocal user base protesting how Mozilla keeps pushing “AI” into the browser in many weird ways, they have now released the “AI Killswitch” they had been talking about for a while.
Which is good. Those features should have been opt-in from the start, and it’s kinda weird how users of a browser that frames itself as the resistance against “Big Tech” need to fight tooth and nail to keep it from pushing Big Tech’s vision of a slop future onto them. But I digress.
Mozilla’s blog post reminded me of why I always put “AI” in scare quotes: I do not think that “AI” means any specific or defined technology or type of artifact; it is mostly an empty signifier. It means whatever you want it to mean at any given point. An LLM. An Excel macro. A bunch of people in a call center in India. A bunch of slides in a slide deck full of false promises.
(If “AI” means anything, it means the assignment of agency to a supposedly existing piece of technology. So, mostly, a disenfranchisement of human beings.)
Mozilla outlines the different kinds of features (all called “AI”) that the kill switch allows you to disable:

- Translations
- Alt text in PDFs
- AI-enhanced tab grouping
- AI link previews
- AI Chatbot in the Sidebar
And this made me think, because these features are very much not the same. Sure, all of them might, technology-wise, share a similar basis in neural networks, but from the user’s perspective they are qualitatively different.
“Translations” and even “Alt text in PDFs” are basically accessibility features: they try to empower you, the user, to access information you otherwise might not be able to, at the cost of it probably being a low-quality version. I think there is a good case to be made for integrating that kind of functionality into a browser: browsers are tools for information retrieval. (Let’s leave aside for a second the question of whether it is possible to build these systems on LLMs or similar models ethically. Even though I don’t believe it is possible without exploiting the work of many Internet users against their consent.) But is that “AI”? The “wonderful” future of the “agentic web” and all the retro-futurist leanings about smart fridges that it entails? It feels smaller, less grandiose. As I said, it’s a feature that in general makes sense. A button you can click to get some version of the web page in front of you that you otherwise wouldn’t have had. And it saves you from having to paste the URL into one of the many translation engines we’ve had on the web for decades.
Let’s jump to the last point: “AI Chatbot in the Sidebar”. Not much I need to tell you there; that is what many people will call “AI” and “intelligence”, because the biggest marketing campaign ever has turned a stochastic word generator into the avatar of what many people consider intelligence. There are very different statistics about how popular these things actually are and how much people really use them, but let’s even say that they are very popular and used a lot (which I am not convinced of, looking at how people at work or in my social circles use “AI” – if they do at all): this is not a lot of work for Mozilla. Extensions that integrate other tools into the Firefox sidebar have existed for a long, long time. This is just another one of those, just way more insecure than what’s usually going on there.
But “AI-enhanced tab grouping” and “AI link previews” are a bit of a different beast IMO. They are more deeply integrated into the browser itself and want to shape more of how you use it in general. But I wonder: who actually uses that kind of stuff? Who uses tab groups? It’s a very advanced feature that’s also not very discoverable. I use it, and I know a few very technical people who do, but most do not even know of its existence. And sure, you can try to make it a bit easier to slap a label on a tab group, but is that worth the effort? For that small a user base? I also ran a quick poll on Mastodon (a very tech-savvy and -interested crowd) asking how many people even know that “AI link previews” exists, and more than 60% had no idea.
So of the three “AI-ish” features, one is basically just an embedded external tool, while two are something Mozilla actually works on, doing product development and implementation itself. But those are features that feel like they only target very small groups. And not in a positive “this is a marginalized group that we are trying to support” way but in a “a few power users might even be aware that this exists” kind of way.
Now, features sometimes grow and take time to find an audience. I still remember how long it took for people to embrace tabs in browsers, for example. (And many people still have horrible workflows with them, where the tab bar just grows with the same tabs; if I have to watch some people use a browser, I need a few rounds of therapy afterwards.)
But I wondered why all these things are being summarized as “AI” when they are so radically different. Firefox having a sidebar that allows you to interact with Mastodon does not make Firefox a “Fediverse browser”, for example, so why does a chatbot sidebar get to define what the browser is?
Of course it’s a bit of marketing. For a long time now, Mozilla hasn’t been able to stand firm against hype and focus on normal engineering and development. It’s FOMO to a degree.
But it also reminded me of the way conversations with Mozilla about their “AI” focus keep going: whether it’s on Mastodon or Reddit or in some other very Firefoxy/open-source-aligned community, the polls always show users predominantly not wanting Mozilla to focus on “AI” but on improving the browser, on picking up the policy work that they dropped a few years ago, on fighting for the open web and the people living in it. The Mozilla response is usually: but the users want “AI”.
But do they? And if they do, what “AI”?
Do most users really ask Mozilla to build an “AI” label generator for a feature they don’t even know exists? Is that what their data shows?
Or is it that some people use the built-in translations, and by labeling those “AI” Mozilla claims that people really want “AI link previews”?
This is another example of the term “AI” hiding more than it explains. But it is also a pattern I see more and more, where “AI” companies point at one specific feature people actually might use (for better or worse) and use that to legitimize all kinds of other shit that has nothing to do with it except maybe implementation details.
If we want to have useful conversations about systems, features, their uses and their impacts, we should just drop the “AI” label. It’s not useful; it’s only poisoning discourse and making us all dumber – even if we do not use chatbots. (But doubly so if you do.)