Thinking loudly about networked beings. Commonist. Projektionsfläche. License: CC-BY
2362 stories · 127 followers

Report: Roblox Is Somehow Even Worse Than We Thought, And We Already Thought It Was Pretty Fuckin’ Bad

1 Comment

'Moderators described being paid $12 a day to review countless instances of child grooming and bullying'

The post Report: Roblox Is Somehow Even Worse Than We Thought, And We Already Thought It Was Pretty Fuckin’ Bad appeared first on Aftermath.



Read the whole story
tante
8 hours ago
"Anyone [...] probably already has a dim view of Roblox. Whether it's for the child labour stuff [...] or a host of other issues--from customer service to loot boxes to child predators--I think we'd all agree that it's a pretty shitty platform run by a pretty shitty company. But not even I, an avowed hater, was prepared for the depths of Roblox's reported shittiness until I read through a paper released by Hindenburg Research earlier today."
Berlin/Germany

Lilium: Volker Wissing pushes for quick state aid for air taxi company

1 Comment
The air taxi start-up Lilium is calling for state loans, otherwise it could face bankruptcy. According to SPIEGEL information, Transport Minister Volker Wissing is demanding approval from parliament. But there are considerable reservations there.

Read the whole story
tante
2 days ago
If you ever wonder what Volker Wissing actually does all day: pushing state loans for air taxi companies.

The man apparently has nothing else to do all day.
Berlin/Germany

Five qualities of A.I. apps

1 Comment

Greetings from Read Max HQ! I was on NPR last week discussing Zyn. And if you haven’t read it yet, let me re-plug my New York magazine feature on A.I. slop.

In today’s newsletter:

  • Assessing Google’s cool new A.I. product NotebookLM;

  • creating A.I.-generated podcasts out of your group chats;

  • the problem with A.I.-generated summaries;

and more!

A reminder: Read Max is 99.5 percent funded by the generosity of paying readers. I treat this newsletter as my full-time job, and spend dozens of hours every week researching, reading, reporting, thinking, procrastinating, and writing, in the hopes of creating something that helps people understand the world, or at least helps them kill 10-15 minutes entertainingly. If you gain something from this newsletter, and if you want to support its weekly public availability, please consider upgrading your subscription for the price of about one-third of a fancy cocktail ($5) a month, or three or four fancy cocktails ($50) a year.


This week’s hot A.I. app--among both the kinds of people who have “hot A.I. apps,” and among college kids on TikTok--is NotebookLM, a Google Labs product to which you can upload “sources”--links, PDFs, MP3s, videos--that then form the basis of a narrowly specific LLM chatbot. For example, if I upload the complete works of a great author of fiction, I can ask questions about characters and themes that span the entire oeuvre, and its answer will come with clear citations:

I can also use it to generate a graduate-level study guide, FAQs, a “briefing,” etc.:

And so on.
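To make the mechanics a little more concrete, here is a purely illustrative sketch of the general pattern behind this kind of source-grounded chatbot: retrieve the uploaded sources most relevant to a question, then constrain the model to answer only from those excerpts and cite them. All of the names below (Source, retrieve, build_prompt) are hypothetical; this shows the generic retrieval-and-grounding idea, not Google's actual NotebookLM implementation or API.

    # Hypothetical sketch of a source-grounded notebook (not NotebookLM's real
    # internals or API). Sources are ranked by naive keyword overlap (real
    # systems use embeddings), then packed into a prompt that tells the model
    # to answer only from those excerpts and to cite them by title.
    from dataclasses import dataclass


    @dataclass
    class Source:
        title: str  # e.g. the name of an uploaded PDF, link, or transcript
        text: str


    def retrieve(question: str, sources: list[Source], k: int = 3) -> list[Source]:
        """Rank sources by crude keyword overlap with the question."""
        words = set(question.lower().split())
        ranked = sorted(sources, key=lambda s: -len(words & set(s.text.lower().split())))
        return ranked[:k]


    def build_prompt(question: str, picked: list[Source]) -> str:
        """Ground the model: answer only from the excerpts, cite bracketed titles."""
        excerpts = "\n\n".join(f"[{s.title}]\n{s.text}" for s in picked)
        return (
            "Answer using ONLY the excerpts below, and cite the bracketed titles.\n\n"
            f"{excerpts}\n\nQuestion: {question}"
        )


    question = "Which recurring themes span these stories?"
    sources = [Source("Uploaded source", "text of an uploaded PDF, link, or transcript")]
    prompt = build_prompt(question, retrieve(question, sources))
    # `prompt` would then be handed to whatever LLM backs the notebook.

In a toy setup like this the citations fall out of the grounding step, which is roughly why answers bounded to your own sources feel more trustworthy than open-ended chatbot replies, even though the model underneath is the same kind of thing.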

NotebookLM has been available for a year or so now, but what’s made it suddenly popular over the last week or so is the discovery of its “audio overview” feature, which creates a short, fully A.I.-generated podcast, in which two realistic A.I.-generated “hosts,” speaking in the chipper and casual tones we associate with professional podcasters, cover whatever’s in your notebook. Here, e.g., is the audio overview for my Curious George notebook:

I like NotebookLM, or, at least, I don’t hate it, which is more than I can say for a lot of A.I. apps. It has a fairly clear purpose and relatively limited scope; its interface is straightforward, with a limited emphasis on finicky “prompting,” and you can imagine (if maybe not put into practice) a variety of productive uses for it. But even if it’s a modest and uncomplicated LLM-based app, it’s still an LLM-based app, which means its basic contours, for better and for worse, are familiar.

The five common qualities of generative-A.I. apps

By this I mean that NotebookLM shares what I think of as the five qualities visible in all the generative-A.I. apps of the post-Midjourney era. NotebookLM, for all that it represents a more practical and bounded LLM experience than ChatGPT or Claude, is in a broad sense not particularly different:

  1. Its popular success is as much about novelty and entertainment value as about actual utility, and often more.

  2. It’s really fun to use and play around with.

  3. Its product is compellingly adequate but noticeably shallow.

  4. It gets a lot of stuff wrong.

  5. It’s almost immediately being used to create slop.

Let me try to explain what I mean, one at a time.

Its popular success is as much about novelty and entertainment value as about actual utility, and often more.

Generative-A.I. apps are almost always promoted as productivity tools, but they tend to go viral (and gain attention and adopters) thanks to entertaining examples of their products. This is not to say that NotebookLM is useless, but I think it’s telling that the most viral and attention-grabbing example of its use so far was the Redditor who got the “podcast hosts” to “realize” that they’re “A.I.,” especially once it was shared on Twitter by Andreessen Horowitz V.C. Olivia Moore.


My hunch in general is that the entertainment value of generative A.I.--by which I just mean the simple pleasure of using and talking to a computer that can reproduce human-like language--is as underrated as the productivity gains it offers are overrated, and that often uses that are presented as “productive” are not actually more efficient, just more fun:

it seems pretty clear to me that these apps, in their current instantiation, are best thought of, like magic tricks, as a form of entertainment. They produce entertainments, yes--images, audio, video, text, shitposts--but they also are entertainments themselves. Interactions with chatbots like GPT-4o may be incidentally informative or productive, but they are chiefly meant to be entertaining, hence the focus on spookily impressive but useless frippery like emotional affect. OpenAI’s insistence on pursuing A.I. that is, in Altman’s words, “like in the movies” is a smart marketing tactic, but it’s also the company meeting consumer demand. I know early adopters swear by the tinker-y little uses dutifully documented every week by Ethan Mollick and other A.I. influencers, but it seems to me that for OpenAI these are something like legitimizing or world-building supplements to the core product, which is the experience of talking with a computer.

Is “generating and listening to a ten-minute podcast about an academic paper” a more efficient way to learn the material in that paper? I would guess “no,” especially given the limitations of the tech discussed below. But is it more entertaining than actually reading the paper? Absolutely, yes.

It’s really fun to use and play around with.

The first thing I did with NotebookLM was, obviously, upload the text of the group chat for my fantasy football league, in order to synthesize the data and use the power of neural networks to understand how bad my friend Tommy’s team is this year:

I can’t say the podcast hosts fully understood every dynamic, but they were very clear that Tommy’s team is bad:

Fantasy sports are obviously fertile territory for NotebookLM, but if you want to really fuck up some friendships, I highly recommend uploading as much text as possible from a close-friends group chat and unleashing the power of large language models to analyze the relationships, settle disputes, and just generally wreak havoc on the delicate dynamics of a long-term friend group:

Its product is compellingly adequate, but it’s shallow and gets a lot of stuff wrong.

The answers and syntheses that NotebookLM creates are legible and rarely obviously wrong, and the verisimilitude of the podcast is wild, even if you can hear small glitches here and there.

But the actual quality of NotebookLM’s summaries (both audio and text) is--unsurprisingly if you’ve used any other LLM-based app--inconsistent. Shriram Krishnamurthi asked co-authors to grade its summaries of papers they’d written together; the “podcasters” mostly received Cs. “It is just like a novice researcher: it gets a general sense of what's going on, doesn't always know what to focus on, sometimes has a fairly good idea of the gist (especially for ‘shallower’ papers), but routinely misses some or ALL of what makes THIS paper valuable,” he concludes.

Henry Farrell, who was also unimpressed by the content of the “podcasts,” has a theory about where they go wrong:

It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality. A moderately unusual argument about tariffs and sanctions (it got into the FT after all) was replaced by the generic criticism of sanctions that everyone makes. And so on for everything else. The large model had a lot of gaps to fill, and it filled those gaps with maximally unsurprising content.

This reflects a general problem with large models. They are much better at representing patterns that are common than patterns that are rare.

This seems intuitively right to me, and it’s reflected in the podcasts, which not only summarize shallowly, often to the point of inaccuracy, but draw only the most banal conclusions from the sources they’re synthesizing. For me, personally, the possibility that I’m consuming either a shallow or, worse, completely incorrect summary of whatever it is I’ve asked the A.I. to summarize all but cancels out the purported productivity gains.
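Farrell's claim about common versus rare patterns can be made concrete with a toy example. The sketch below is invented purely for illustration (the claim strings and counts are made up): a gap-filler that always picks the most frequent claim in its data will reproduce the generic take and erase the unusual argument every single time.

    # Toy illustration of "common patterns beat rare patterns" (invented data).
    # A gap-filler that picks the most frequent claim always produces the
    # generic take and never the rare, surprising one.
    from collections import Counter

    claims = Counter({
        "sanctions rarely work": 97,             # the generic criticism everyone makes
        "sanctions reshape tariff politics": 3,  # the moderately unusual argument
    })


    def fill_gap(counts: Counter) -> str:
        """Return the single most probable claim, i.e. maximally unsurprising content."""
        return counts.most_common(1)[0][0]


    print(fill_gap(claims))  # -> "sanctions rarely work"

An LLM is vastly more sophisticated than a frequency table, but the direction of pull is the same: wherever the model is unsure, probability mass drags the summary toward the most common version of the argument, which is exactly the regression to banality described above.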


And yet, for a while now it’s seemed like “automatically generated summaries” will be the first widespread consumer implementation of generative A.I. by established tech companies. The browser I use, Arc, has a feature that offers a short summary of a link when you hover and press the shift key, e.g.:

Gmail, of course, is constantly asking me if I want to “summarize” emails I receive, no matter how long; Apple’s new “Apple Intelligence” is touting a feature through which it “summarizes” your alerts and messages, though screenshots I’ve seen make it seem of … dubious worth, at best:

Setting aside the likelihood that the A.I. is getting these summaries wrong (which it almost always will with the kinds of socially complex messages you get from friends), is reading an email or a text or even a whole article really that much of a burden? Is replacing human-generated text with a slightly smaller amount of machine-generated text actually any kind of timesaver? Seeing all these unnecessary machine summaries of communications already smoothed into near-perfect efficiency, it’s hard not to think about this week’s Atlantic article about college kids who have apparently never read an entire book, which suggests we’re mostly training kids to be human versions of LLMs, passable but limited synthesists, unable to handle depth, length, or complexity:

But middle- and high-school kids appear to be encountering fewer and fewer books in the classroom as well. For more than two decades, new educational initiatives such as No Child Left Behind and Common Core emphasized informational texts and standardized tests. Teachers at many schools shifted from books to short informational passages, followed by questions about the author’s main idea—mimicking the format of standardized reading-comprehension tests. Antero Garcia, a Stanford education professor, is completing his term as vice president of the National Council of Teachers of English and previously taught at a public school in Los Angeles. He told me that the new guidelines were intended to help students make clear arguments and synthesize texts. But “in doing so, we’ve sacrificed young people’s ability to grapple with long-form texts in general.”

Mike Szkolka, a teacher and an administrator who has spent almost two decades in Boston and New York schools, told me that excerpts have replaced books across grade levels. “There’s no testing skill that can be related to … Can you sit down and read Tolstoy? ” he said. And if a skill is not easily measured, instructors and district leaders have little incentive to teach it. Carol Jago, a literacy expert who crisscrosses the country helping teachers design curricula, says that educators tell her they’ve stopped teaching the novels they’ve long revered, such as My Ántonia and Great Expectations. The pandemic, which scrambled syllabi and moved coursework online, accelerated the shift away from teaching complete works.

As Krishnamurthi puts it: “I regret to say that for now, you're going to have to actually read papers.”

It’s almost immediately being used to create slop.

Yes, there are fantasies of productivity, and experiments in shitposting. But all LLM apps trend very quickly toward the production of slop. This week, using NotebookLM, OpenAI co-founder Andrej Karpathy “curated a new Podcast of 10 episodes called ‘Histories of Mysteries,’” which he generated out of Wikipedia articles about historical mysteries, and uploaded it to Spotify. Moore, the a16z partner, “uploaded 200 pages of raw court documents to NotebookLM [and] created a true crime podcast that is better than 90% of what's out there now...” Enjoy discovering new podcasts? Not for long!

Read the whole story
tante
2 days ago
"My hunch in general is that the entertainment value of generative A.I.--by which I just mean the simple pleasure of using and talking to a computer that can reproduce human-like language--is as underrated as the productivity gains it offers are overrated, and that often uses that are presented as “productive” are not actually more efficient, just more fun"
Berlin/Germany

Mark Zuckerberg’s rebrand is a master class in distraction

1 Comment and 2 Shares

In October 2021, Facebook was mired in controversy. Weeks earlier, the Wall Street Journal had begun publishing stories based on leaked documents Frances Haugen had provided that showed how the company ignored Instagram’s harmful impacts on teens and Facebook’s contribution to violence around the world. CEO Mark Zuckerberg was not happy and was determined not to let there be a repeat of the scandal that consumed the company in 2018.

Ahead of a keynote presentation at the company’s annual Facebook Connect, Zuckerberg addressed the leaks, but not in the way many might have imagined. He acknowledged the concerns they presented, then turned the tables, claiming his critics were people who would never find it to be “a good time to focus on the future.” Instead, he was representing those doing the hard work, saying the future will only be built “by those who are willing to stand up and say, ‘This is the future we want and I’m going to keep pushing and giving everything I’ve got to make this happen.’” He clearly saw himself as part of that group, even though the future in question was the metaverse.

That moment represented an important shift, one that’s become much more apparent three years later. In the past twelve months, Zuckerberg has started working out, changed his hairstyle, and taken to wearing gold chains, but most importantly, he’s given up on trying to seriously respond to his critics. Instead, as one of the richest men in the world, he wants to do what he pleases regardless of the consequences and be praised for those endeavors. That doesn’t mean the harms created by Zuckerberg’s companies have lessened, but his public relations team are working hard to direct people’s attention away from them — and, shamefully for the media who cover the company, it’s working.

Remaking Mark Zuckerberg’s image

As recently as five years ago, Zuckerberg was arguably the most hated executive in Silicon Valley. Facebook was accused of helping enable Brexit in the United Kingdom and Donald Trump’s election in the United States (often inaccurately, to be fair), but more broadly, the company was rightfully skewered for its mass data collection and the poor decisions it was making on content moderation. Facebook’s neglect contributed to the genocide against the Rohingya people in Myanmar, and despite those issues, several years later former Facebook data scientist Sophie Zhang leaked documents showing its poor moderation in many parts of the world was a factor in political destabilization. Even the company’s US content moderators faced poor working conditions, let alone the much more poorly staffed teams around the world working in other languages.

But there was a political angle to this too. Conservatives have long claimed the mainstream press holds a “liberal bias,” and they’ve used that claim as a cudgel to consistently push major outlets to legitimize more and more extreme right-wing perspectives over the course of several decades. As social media rose and became politically important, they saw the opportunity to use that same strategy once again, claiming the platforms suppressed conservative voices and used the combined power of right-wing media and congressional authority to push executives to make decisions that benefited right-wing users and narratives.


In the 2010s, a more politically naive Zuckerberg began hiring more Republican advisors and executives, who helped shape the policies of the platform to ensure extreme right-wing media was legitimized and that users espousing those views would not be overly moderated or penalized. Chief among them was Joel Kaplan, who worked in the George W. Bush administration before later heading up Facebook’s global public policy team. Employees at the company described how he crafted policies — with Zuckerberg’s approval — that delayed the suspension of extremist figures like Alex Jones and ensured groups like the Oath Keepers could continue organizing on the platform ahead of January 6, 2021.

An internal memo from December 2020 claimed Kaplan would protect “powerful constituencies” by allowing right-wing pages to get away with spreading misinformation, limiting enforcement against conservative accounts, and shaping decisions about what would appear in people’s news feeds. Setting up a more lenient enforcement mechanism for conservatives was how Zuckerberg believed he would get out of their crosshairs, but in fact it showed them their strategy was working. That right-wing pressure only escalated after Facebook started doing the bare minimum during the early part of the pandemic to try to rein in misinformation about Covid-19 and the vaccines.

Embracing right-wing politics

Whereas the Zuckerberg of the past tried to placate Republican content pressures and Democratic investigations of the company’s wider social harms, he’s begun taking a very different stance — and that’s been reflected in his personal politics. A recent story in the New York Times explained that while Zuckerberg used to present himself as a supporter of liberal causes — and engaged in philanthropic efforts to that effect — he’s more recently started identifying as a libertarian or “classical liberal.” For Zuckerberg, that means an opposition to regulation, an embrace of free markets, and an openness to social justice, as long as it doesn’t involve calling out Israel for its war crimes in Gaza — and surely not taxing people like him more either. Let’s be clear: it’s nothing more than a billionaire embracing right-wing politics that are designed to serve his interests.

In recent months, Zuckerberg has called Donald Trump a “badass” and there have been reports they’ve spoken on the phone on more than one occasion. He gave a win to Republican Congressman Jim Jordan, saying Facebook was “wrong” to “censor certain covid-19 content” after he was supposedly “repeatedly pressured” to do so by the Biden administration. He also confirmed he will not donate to support local election offices around the United States in this cycle, after his donation to that effect in 2020 was portrayed as “Zuckerbucks” by Republicans trying to claim the election had been stolen by Democrats. But that political shift, along with his outward makeover, has corresponded to a change in how he approaches his platforms too.


The days of placating are over; now Zuckerberg just wants to be done with the complaints and move on to other things, while still enjoying the ad profits from his legacy platforms. After Elon Musk overhauled Twitter/X’s approach to moderation and suspensions, Zuckerberg took the opportunity to start making his own changes. Meta has downgraded “political” content across its platforms (even as right-wing misinformation continues to spread through dedicated pages) and lifted the limits placed on Donald Trump’s accounts several years ago. The company has also made it harder for researchers to see what’s happening on the platform by shutting down a tracking tool called CrowdTangle, and it claims it wants to allow users to choose the type of content they see — effectively shifting responsibility to users and allowing people to see vile, hate-filled garbage if they so choose.

Make no mistake: these decisions are a prime example of conservatives getting their way and succeeding in reshaping the platforms that we use to communicate and find out what’s happening in the world around us. Zuckerberg surely wants the political pressure — especially that coming from Republicans — to go away, but he also doesn’t really care about the impacts of his decisions as long as he can play at being the great future-builder he sees himself to be. The problem is that whereas these decisions would’ve received ample press a few years ago, today they pass as just another story in the news cycle, showing how successful Meta’s public relations team has been.

Reshaping the corporate narrative

The metaverse effort Zuckerberg unveiled in 2021 wasn’t just about his personal desire to go big on virtual reality — the company had bought Oculus for $2 billion in 2014 — it was also about trying to turn the page and write a new narrative. To some degree, that worked. The scrutiny on Facebook didn’t fully go away, but Zuckerberg was no longer just the evil social media baron — he was increasingly the nerd king with a cartoonish avatar taking a photo in front of a virtual Eiffel Tower and expecting us all to want to join in with our own legless virtual selves. It was better to be laughed at than roundly reviled.

That was the first stage of the rebrand, and over the past year we’ve seen the second stage, which has been far more effective, as the Meta Connect presentation at the end of September put on full display. Coming out of that showcase, few people were talking about social media — even though that’s the company’s main business and, as I’ve described, the company has been reshaping its platforms in unsavory ways. Instead, the talk and press coverage was all about AI (given the current hype cycle) and even more so a set of surveillance glasses (my words, not Meta’s) called Orion that are more of a tech demo than a real product.

In the days that followed, we were treated to glowing write-ups about the glasses that were long on praise and short on context: namely, the ongoing problems with the company’s business, the longstanding privacy concerns with camera-enabled glasses, or the fact Meta has spent a decade and tens of billions of dollars just to build this product that still looks quite bad, has a paltry battery life, and costs $10,000 to make. Like with the metaverse, the idea that Zuckerberg was building the next big platform was everywhere — with no real proof we’re ever going to abandon our smartphones for glasses, and certainly not in the near future.


When Zuckerberg sat down with The Verge for one of their regularly scheduled softball interviews, he did make some surprising statements, but they were treated as secondary to his vision for the next big thing. On the AI front, he made the argument that stealing data to train AI models should be considered fair use and that “individual creators or publishers tend to overestimate the value of their specific content.” Like Meta has done with news, Zuckerberg said they’d simply remove content if its makers demand payment — but that doesn’t mean Meta is going to open up a way for people to ask their data be removed from the company’s datasets. Even more importantly, he disputed the growing controversy over the effects of social media on the mental health of teens.

At another time, those statements would have received a lot more scrutiny, because they play into broader issues with key parts of the company’s business. But because they came at the same moment Meta was unveiling a flashy tech demo and in the broader context of Zuckerberg’s personal rebrand, the news cycle quickly moved on and placed its focus on what Zuckerberg wanted the focus to be on: his grand vision for the future. But that future is a distraction that may never arrive, designed to shift the spotlight away from the present and allow Zuckerberg to evade the accountability he deserves.

Mark Zuckerberg has changed, but not in the way the narratives disseminated by his PR team suggest. He is not done with politics; he’s simply adopting a more right-wing worldview that’s more aligned with his interests and that he hopes will lessen the scrutiny applied to his company. But more than anything, he’s decided he doesn’t have to answer to critics anymore. He’s the second richest person in the world and can’t be dislodged from his company. His platforms will keep causing harm, but as long as he pushes a story about the future he hopes he won’t be held to account like he was in the past — and so far, that gamble is working.

Read the whole story
tante
2 days ago
Zuckerberg's rebrand works right now, but I wonder how well it would stick if Elon Musk hadn't shat the bed this badly.
Berlin/Germany

Remind me later

1 Comment

New Secret Knots comic: “Remind me later”.
Read the whole story
tante
5 days ago
"Remind me later"
Berlin/Germany

Meta wants to kill the social web

1 Comment

Meta recently had their big “this is what we are doing” conference and in all the noise was one very interesting fact. As the Verge writes:

If you think avoiding AI-generated images is difficult as it is, Facebook and Instagram are now going to put them directly into your feeds. At the Meta Connect event on Wednesday, the company announced that it’s testing a new feature that creates AI-generated content for you “based on your interests or current trends” — including some that incorporate your face.

Now let’s set aside the questions around how messed up it is, from a privacy perspective, to use your face (and your friends’ and family’s faces). What’s more interesting is the shift it shows in how Meta sees their product.

Meta’s “mission” is (in the usual corporate bullshit phrasing):

Meta’s mission is to give people the power to build community and bring the world closer together.

Which is a strange way of saying: We want to build tools that allow people to connect to one another (I mean the whole Meta conference is called “Connect”). This is the good old “social media” story.

But Meta clearly no longer wants to do that. What is the use of generating some content for you that has nothing to do with what the people you care about or are interested in are doing? It is an empty reference: it says nothing about anything. It doesn’t “build community” in any way, shape, or form; it does the opposite.

Posts are no longer about something someone wants to say (even if just to sell you something); they are just there to keep you occupied. It’s not about giving you opportunities to engage with other people and their experiences and thinking (“bring the world closer together”); it’s about making you a passive consumer of slop that ads can be put next to.

This is a qualitative shift in how Meta presents themselves: they shifted from “enabling connections” to “being a media company” in the most cynical way possible. As messed up as many (if not all) social media platforms are, they do present their users as actors. The term “content creator” makes my blood boil, but at its core social media’s narrative (even when it was just blogs) was about giving people the opportunity to publish something they care about. That included the whole promise that “the world is your possible audience”.

When algorithmic sorting became more dominant, this narrative continued: shape your posts the way the algorithm likes and you will get an audience. Of people. With all the opportunities this brings.

Meta shattered that covenant. “We’ll just generate something to keep your eyes busy” discards the promise of you connecting to people. Of you being able to use social media to gain new insights into the world and the human condition. It turns the users into livestock. Into a mere resource.

And sure, that’s digital capitalism for you, but the mask has really been taken off.

Read the whole story
tante
13 days ago
Meta wants to bury the social web and replace it with a zombie
Berlin/Germany