Thinking loudly about networked beings. Commonist. Projektionsfläche (projection surface). License: CC-BY

Bundesagentur für Arbeit wants to spend 19 million euros on AI


The Bundesagentur für Arbeit fears that 35 percent of its own workforce could retire by 2032, and is arming itself with AI from Aleph Alpha.

tante, 21 hours ago:
The Agentur für Arbeit wants to buy 19 million euros' worth of "AI" stuff from Aleph Alpha to close the staffing gap caused by retirements.

But if you just hassled people who have to apply for ALG (unemployment benefits) with less bullshit and gave them the help they need, maybe you wouldn't need that many people in the first place?

Does Open Source AI really exist?


The Open Source Initiative (OSI) released the RC1 (“Release Candidate 1”, meaning: this thing is basically done and will be released as such unless something catastrophic happens) of the “Open Source AI Definition”.

Some people might wonder why that matters. So some people came up with a bit of writing on AI – what else is new? That’s basically LinkedIn’s whole existence currently. But the OSI has a very special role in the Open Source software ecosystem. Open Source isn’t just about whether you can see the code but also about the license that code is covered under: you might get code that you can see but are not allowed to touch (think of the recent WinAmp release debate). The OSI basically took on the role of defining which of the different licenses that were being used all over the place actually are “Open Source” and which come with restrictions that undermine the idea.

This is very important: picking a license is a political act with strong consequences. It can allow or forbid different modes of interaction with an object, or impose certain requirements on its use. The famous GPL, for example, allows you to take the code but forces you to also open up your own changes to it. Other licenses do not enforce this demand. Choosing a license has tangible effects.

Quick sidebar: “Open Source” is already a bit of a problematic term; it’s (in my opinion) a way to depoliticise the idea of “Free Software”. Both share certain ideas, but where “Open Source” frames things in a pragmatic “corporations want to know which code they can use” kind of way, Free Software was always more of a political movement, arguing from a standpoint of user rights and liberation. An idea that was probably damaged most by the most visible figures in that space, who should probably just walk into the sea.

So what makes a thing “Open Source”? Well, the OSI has a brief list. You can read it quickly, but let’s focus on point 2, “Source Code”:

The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
Open Source Initiative

To be Open Source, a piece of software needs to come with its sources. Okay, that’s not surprising. But the writers have seen some shit, so they added that obfuscated code (meaning code that has been mangled to be unreadable) and intermediate forms (meaning you don’t get the actual sources but something that has already been processed) are not allowed. Cool. Makes sense. But why do people care about sources?

Sources of Truth

Open Source is a relatively new mass phenomenon. We had software before, even some we didn’t have to pay for. We called it “Freeware” back then. Freeware is software you can use without cost but that you don’t get any source code for. You cannot (legally) change the program, you cannot audit it, cannot add to it. But it’s free of charge. And there was a lot of that back in my younger days. WinAmp, the audio player I talked about above, used to be Freeware, and basically everyone used it. So why even care about sources?

For some it was about being able to modify the tools more easily, especially if the maintainer of the software didn’t really work on it any more or started adding all kinds of stuff you didn’t agree with (think of all those proprietary software packages today that you have to use for work and that get AI stuffed in behind every other button). But there is more to it than just feature requests. There’s trust.

When I run software, I need to trust the people who wrote it. Trust them to do a good job, to build reliable and robust software. To add only the features in the documentation and nothing hidden, potentially harmful.

Especially with such large parts of our real lives running on digital infrastructures, questions of trust become more and more important. We all know that we want fully open-sourced, peer-reviewed and battle-tested encryption algorithms in our infrastructures so that our communication is safe from harm.

Open Source is – especially for critical systems and infrastructures – a key part of establishing that trust: because you want (someone) to be able to verify what’s up. There has been a long push for more reproducible builds. Those build processes basically guarantee that, given the same code input, you get the same compiled result. Which means that if you want to know whether someone really delivered exactly what they said they would, you can check: your own build process would create an identical artifact.
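To make that concrete, here is a minimal sketch of what such a check boils down to: comparing checksums of the shipped artifact and your own rebuild. The file paths are hypothetical, and real-world verification uses project-specific tooling, but the principle is the same.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the binary the vendor shipped vs. the one you
# rebuilt yourself from the published sources.
shipped = sha256_of("vendor-release/tool-1.2.3.bin")
rebuilt = sha256_of("my-build/tool-1.2.3.bin")

# With a reproducible build process, the two digests must be identical.
print("verified" if shipped == rebuilt else "mismatch: do not trust blindly")
```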

Not everyone does this level of analysis, of course. And even fewer people only use software from reproducible build processes – especially with a lot of software not being compiled today. But relationships are more nuanced than code, and trust is a relationship: you being a fully open book about your code and how exactly the binary version was built makes it a lot easier for me to trust you. To know what is in the software I am running on the machine that also has my bank statements or encryption keys on it.

What does this have to do with AI?

AI systems and 4 Freedoms

AI systems are a bit special, because – especially the big ones everyone is so fascinated by – they don’t really consist of a lot of code in comparison to their size. A neural network implementation is a few hundred lines of Python, for example. An “AI system” consists not just of code but of a whole lot of parameters and data.

A modern LLM (or image generator) consists of some code. You also need a network architecture, meaning the setup of digital neurons that are used and how they are connected. This architecture is then parameterized with the so-called “weights”, which are the billions of numbers you need to get the system to do anything. But that is of course not all.
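To get a feeling for the proportions, here is a toy calculation. The dimensions are hypothetical, picked only to be roughly GPT-3-shaped; the point is that a handful of lines of code describes billions of parameters.

```python
# Toy illustration: the *code* for a neural network layer is tiny,
# while the *parameters* are where the bulk of an "AI system" lives.

def linear_params(n_in: int, n_out: int) -> int:
    """Parameter count of one fully connected layer: weights + biases."""
    return n_in * n_out + n_out

# Hypothetical dimensions, roughly GPT-3-shaped for scale only:
# hidden size 12288, 4x MLP expansion, 96 layers.
hidden, layers = 12_288, 96
mlp_block = linear_params(hidden, 4 * hidden) + linear_params(4 * hidden, hidden)
print(f"MLP blocks alone: {mlp_block * layers:,} parameters")
# ~116 billion numbers, produced by a few lines of code like these
```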

In order to translate syllables or words into numbers for an “AI” to consume, you need an embedding – sort of a lookup table that tells you what “token” the number “227” stands for. If you took the same neural network but applied a different embedding than the one it was trained with, everything would fall to pieces. The structures wouldn’t match.
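A toy illustration of that mismatch, with entirely made-up token tables:

```python
# Two hypothetical token tables for the same model weights.
vocab_trained_with = {227: "cat", 228: "sat", 229: "mat"}
vocab_swapped_in   = {227: "stock", 228: "crashed", 229: "again"}

token_ids = [227, 228, 229]  # what the network actually sees: just numbers

print(" ".join(vocab_trained_with[t] for t in token_ids))  # cat sat mat
print(" ".join(vocab_swapped_in[t] for t in token_ids))    # stock crashed again
# Same weights, same numbers: with a different embedding/token table,
# the "meaning" of every number no longer matches what was trained.
```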

Then there is the training process, meaning the process that created all the “weights”. In order to train an “AI” you feed it all the data you can find, and over millions and billions of iterations the weights start to emerge and crystallise. The training process – which data was used and how – is key to understanding the capabilities and issues a machine learning system has: if you want to reduce harm in a network, you need to know whether it was trained on the Daily Stormer or not, just to give an example.
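Even a toy training loop shows how directly the weights are a function of the data that went in. A minimal gradient-descent sketch, nothing LLM-sized:

```python
# Minimal sketch: fit y = w * x with gradient descent.
# Swap the training data and you get different weights out.
def train(data, lr=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

data_a = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # consistent with w = 2
data_b = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # consistent with w = 3
print(train(data_a))  # ~2.0
print(train(data_b))  # ~3.0
```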

And here’s the catch.

The OSI’s “The Open Source AI Definition – 1.0-RC1” demands that an Open Source AI provide four freedoms to its users:

  1. Use the system for any purpose and without having to ask for permission.
  2. Study how the system works and inspect its components.
  3. Modify the system for any purpose, including to change its output.
  4. Share the system for others to use with or without modifications, for any purpose.

So far so good. That looks reasonable, right? You can inspect and modify and use and all that. Awesome. Nothing bad could happen in the fine print, right? Let’s just quickly look at what an AI system needs to offer. Code: Check. Model parameters (weights, configs): Check! We’re on a roll here. What about data?

Data Information: Sufficiently detailed information about the data used to train the system so that a skilled person can build a substantially equivalent system. Data Information shall be made available under OSI-approved terms.

In particular, this must include: (1) a detailed description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures and data cleaning methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.
Open Source Initiative

What does “sufficiently detailed information” mean? The Open Source definition never talks about “sufficiently detailed source code”. You need to get the source code. All of it. And not in obfuscated or mangled form. The actual thing. Because otherwise it doesn’t mean much; it doesn’t help you build trust.

The OSI’s definition of “Open Source AI” pokes a big hole into the idea of Open Source: by making a core part of the model – the training data – special in this weird wibbly-wobbly way, they bless all kinds of things as “Open Source” that really are not, based on their own definition of what Open Source is and what it’s for.

An AI system’s training data is for all intents and purposes part of its “code”. It is as relevant to the way the model functions as literal code is – for AI systems probably even more so, because the code is just generic matrix operations with delusions of grandeur.

The OSI puts another cherry on top: users merely deserve a description of the “unshareable data” that was used to train a model. What is that? Let’s apply it to code again: if a software product gave us a core part of its functionality only as a compiled artifact and then described that it’s all totally cool and above board, but that the code wasn’t “shareable”, we would not call that piece of software open source. Because it does not open all the source.

Does a “description” of partially “unshareable” data help you reproduce the model? No. You can try to rebuild the model, and it might look a bit similar, but it will be significantly different. Does it help you “study the system and inspect its components”? Only on a superficial level. But if you really want to analyse what’s in the magic statistics box, you need to know what went into it. What was filtered out exactly, what went in?
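You can see the problem even in miniature. Reusing the toy trainer from above: rebuild a model from only the “shareable” part of its training data and you get something similar-looking but measurably different. A purely hypothetical sketch:

```python
def train(data, lr=0.01, steps=500):
    """Same toy trainer as above: fit y = w * x by gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

# The "real" training set: mostly slope-2 pairs plus some unshared outliers.
full_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (1.0, 5.0), (2.0, 10.0)]
# What a mere "description" lets you rebuild: the shareable part only.
shareable = full_data[:3]

print(train(full_data))  # pulled away from 2.0 by the unshared examples
print(train(shareable))  # ~2.0 – similar-looking, but a different model
```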

This definition seems very weird coming from the OSI, right? It very obviously goes against core ideas of what people think Open Source is and should be. So why do it?

(Un)Open AI

Here’s the thing: at the scale at which we talk about those statistical systems as “AI” today, Open Source AI cannot exist.

Many smaller models have been trained on explicitly selected, curated, public datasets. Those can provide all the data, all the code, all the processes, and can be called Open Source AI. But those are not the machines that make NVIDIA’s stock go WEEEEEEEEEEEEE.

Those big systems that are called “AI” – whether they are for image generation, text generation or multi-modal – are all based on illegally acquired and used material. The data sets are simply too big to actually filter and to ensure their legality. It’s just too much.

Now the more naive among you might wonder, “Okay, but if you cannot do it legally, how can you claim this is a legitimate business?”, and you’d be right – but we’re also living in a weird world where we hope that some magic innovation and/or money coming from reproducing Reddit posts will save our economy and progress.

“Open Source AI” is an attempt to “openwash” proprietary systems. In their paper “Rethinking open source generative AI: open-washing and the EU AI Act”, Andreas Liesenfeld and Mark Dingemanse showed that many “Open Source” AI models offer hardly more than open model weights. Meaning: you can run the thing, but you don’t actually know what it is.

Sounds like something we’ve already had: it’s Freeware. The “Open Source” models we see today are proprietary freeware blobs. Which is potentially marginally better than OpenAI’s fully closed approach, but really only marginally.

Some models offer model cards or other docs, but most leave you in the dark. Which stems from the fact that most of these models are being developed by VC-funded companies that need some theoretical path towards monetization.

“Open Source” has become a sticker like “Fair Trade”, something to make your product look good and trustworthy. To position it outside of the evil commercial space, giving it some grassroots feeling. “We’re in this together” and shit. But we’re not. We’re not in this with Mark fucking Zuckerberg, even if he gives away some LLM weights for free because it hurts his competition. We, as normal people living on this constantly warming planet, are not with any of those people.

But there is another aspect to it, beyond doing an image makeover for tech bros and their corporations. It’s about legality. At least in Germany there are exceptions to some laws that would normally concern LLM makers: if you do it for research purposes, you are allowed to scrape basically anything. You can then train models and release those weights, and even though there’s Disney stuff in there, you are in the clear. And this is where the whole Open Source AI thing plays a relevant role: it is a wedge to legitimise probably illegal behavior through openwashing. As a corporation, you take some “Open Source AI” that is based on all the stuff you wouldn’t legally be allowed to touch and use it to build your product – do some extra training with licensed data, for example.

The Open Source Initiative has caught FOMO – just like the Nobel prize jury. They also want to be a part of the “AI” craze.

But for the systems that we are talking about today as “AI”, Open Source AI isn’t practically possible. Because we’ll never be able to download all the actual training data.

“But tante, then we will never have Open Source AI.” Exactly. That’s how reality works. If you can’t fulfil the criteria of a category, you are not in that category. The fix is not to change the criteria. That’s playing pigeon chess.

tante, 1 day ago:
The new OSI "Open Source AI" definition is bad and shows that Open Source AI probably does not (and will not) really exist for larger models.

AI will never solve this


Greetings all — hell of a week here. As always, thanks to everyone who reads, supports, and shares this stuff. Paid subscribers, you are the very best. Gonna try a thing where I put the week’s tech and labor headlines and additional commentary below a paywall, who knows. So sign up or chip in here if you get value out of this work, and cheers to all.


It was one of those weeks laden with so many compounding crises that you don’t really know where to start, so I guess I’ll start with the hurricane that looked so ominous in the modeling forecasts that it made a career weatherman weep on the air. Hurricane Milton started gathering strength just as the extent of the wreckage of Hurricane Helene—which left over two hundred dead and is now the second deadliest hurricane to hit the United States in the last 50 years, after Katrina—was beginning to be understood.

Both storms stunned meteorologists with their ferocity—Helene with the *40 trillion gallons* of water it dumped, Milton with its rapid growth and intensity. As the Orlando-based meteorologist Noah Bergren wrote in a viral X post, “This is nothing short of astronomical… This hurricane is nearing the mathematical limit of what Earth's atmosphere over this ocean water can produce.”

Of course, we have climate change to thank for the warmer, storm-friendlier conditions that fueled both monster storms, these deadly juggernauts affirming that we are living in an age of crisis. So it was jarring, if not particularly surprising, to hear former Google CEO Eric Schmidt argue that we should spare no expense in ramping up and running energy-intensive AI systems, since “we’re not going to hit the climate goals anyway” while thousands of hurricane survivors were mourning the loss of loved ones, millions were still without power, and millions more braced for a potentially even more brutal storm.

Hurricane Milton, NOAA.

Schmidt was speaking at a summit in Washington DC when he was asked about the energy demands of AI, and whether that was a concern. All of the incremental progress we’ve made as a nation to reduce carbon emissions, he said, “will be swamped by the enormous needs of this new technology… we may make mistakes with respect to how it's used, but I can assure you that we're not going to get there through conservation." Schmidt continued: "We're not going to hit the climate goals anyway because we're not organized to do it… Yes, the needs in this area will be a problem, but I’d rather bet on AI solving the problem than constraining it.”

The clip of the talk, which you might have seen floating around, went viral, as the sentiment was expressed rather bluntly and callously. But it’s a pretty commonly held view among the tech set, and an increasingly popular one outside it, too; Bill Gates shares it, so do droves of AI influencers on social media, and so, to some extent, do the World Economic Forum and even the UN.

But we should be extremely clear about this, because it is an inane and even maybe dangerous notion: AI will never “solve” climate change. Even if OpenAI successfully builds an AGI tomorrow, it will never, under any circumstances, produce any kind of magic bullet that will “fix” the climate crisis.

Look, this is not that hard. Even without AGI, we already know what we have to do. We do not need a complex and all-knowing artificial intelligence to understand that we generate too many carbon emissions with our cars, power plants, buildings, and factories, and that we need to use less fossil fuel and more renewable energy.

The tricky part—the only part that matters in this rather crucial decade for climate action—is implementation. As impressive as GPT technology or the most state-of-the-art diffusion models may be, they will never, god willing, “solve” the problem of generating what is actually necessary to address climate change: political will. Political will to break the corporate power that has a stranglehold on energy production, to reorganize our infrastructure and economies accordingly, to push out oil and gas.

Even if an AGI came up with a flawless blueprint for building cheap nuclear fusion plants—pure science fiction—who among us thinks that oil and gas companies would readily relinquish their wealth and power and control over the current energy infrastructure? Even that would be a struggle, and AGI’s not going to do anything like that anytime soon, if at all. Which is why the “AI will solve climate change” thinking is not merely foolish but dangerous—it’s another means of persuading otherwise smart people that immediate action isn’t necessary, that technological advancements are a trump card, that an all-hands-on-deck effort to slash emissions and transition to proven renewable technologies isn’t necessary right now. It’s techno-utopianism of the worst kind; the kind that saps the will to act.


Now, this is pointedly not to say that AI systems cannot be useful in research or in improving clean energy at all—AI has been used for things like identifying the optimal way to place solar panels to maximize the sunlight they receive, or locating and analyzing which glaciers are shrinking fastest, and so on. And that’s not to discount it—those would all be great things if they were happening in a vacuum, and they are genuinely useful. And yet they are also being used to justify both the ideology outlined above and further investment in the technology itself—which is, ironically, an increasingly potent contributor to climate change. The rush to adopt AI, as readers of this newsletter know, has done nothing less than help revitalize the gas industry in the United States.

The big tech companies, once proudly committed to sustainability—and some really were, this is not to be snide about it; Google and Facebook were huge purchasers of solar power, and for a long time made sure to run their servers on clean energy—are now left either to adopt something resembling Schmidt’s attitude, that AI’s steep carbon costs will be worth it because those costs will eventually come down and AI will unleash unimaginable advances in clean tech, or to ignore the contradiction altogether. This is especially apparent in those tech companies, such as Microsoft and Amazon, that are selling AI tools—the same ones touted for their ability to fight climate change—to oil companies to help them locate and extract fossil fuels faster and more efficiently.

The idea that AI can “solve climate change” is what the critic Lewis Mumford would have called a magnificent bribe—a lofty promise or function that encourages us to adopt a tech product despite its otherwise obvious harmful costs. It is one of AI’s greatest predicted benefits, held up to help us overlook its proven harms, to paraphrase Dan McQuillan. Because right now, on net, it’s clear that AI is only adding to our already significant carbon burden.

That AI will “solve climate change” is nonsense—a quasi-religious mantra repeated by tech executives and AI advocates to help them make the case for their products, which happen to consume tremendous amounts of energy. And I get it. Like so many similarly shaped pitches for AI, it’s easy to see the appeal. We’re all exhausted and anxious here—sure, it’d be nice if some all-powerful sentient mass of data could just fix everything for us. But you might as well be praying for divine intervention.

There’s just something uniquely dark about surveying the state of play, as folks like Schmidt and Bill Gates have surely done, and saying, ah well, let’s just build more data centers and hook them up to more gas plants and hope for the best. It’s another instance of Silicon Valley’s halo era wearing off—where once it was at least easy to believe the tech companies’ stories about building a better future, now they’re not even bothering to tell them. Instead of ‘we’re part of the solution’ now it’s ‘well, it’s complicated’—at best. Schmidt’s vision is even more dire: we’re never going to address climate change anyway, so we might as well set the controls for the heart of the sun, full steam ahead.


Anyway! It was yet another major week in AI news on a number of different fronts, starting with…


tante, 3 days ago:
"But we should be extremely clear about this, because it is an inane and even maybe dangerous notion: AI will never “solve” climate change. Even if OpenAI successfully builds an AGI tomorrow, it will never, under any circumstances, produce any kind of magic bullet that will “fix” the climate crisis. "

Interneting Is Hard



tante, 6 days ago:
Really cool tutorials on HTML and CSS for complete beginners

‘The Community Is In Chaos:’ WordPress.org Now Requires You Denounce Affiliation With WP Engine To Log In


WordPress.org users are now required to agree that they are not affiliated with website hosting platform WP Engine before logging in. It’s the latest shot fired by WordPress co-creator Matt Mullenweg in his crusade against the website hosting platform.

The checkbox on the login page for WordPress.org asks users to confirm, “I am not affiliated with WP Engine in any way, financially or otherwise.” Users who don’t check that box can’t log in or register a new account. As of Tuesday, that checkbox didn’t exist. 

Since last month, Mullenweg has been publicly accusing WP Engine of misusing the WordPress brand and not contributing enough to the open-source community. WP Engine sent him a cease and desist, and he and his company, Automattic, sent one back. He’s banned WP Engine from using WordPress’ resources, and as of today, some contributors are reporting being kicked out of the community Slack they use for WordPress open-source projects. 

A screenshot of the WordPress.org login page as it appears on Oct. 9 at 1:50 p.m. EST

Among WordPress community contributors, who keep the open-source project running, this checkbox added to the organization’s site is an inflection point in the story of a legal battle that they’ve been mostly isolated from until today. 

“Right now the WordPress community is in chaos. People don’t know where they stand legally, they are being banned from participating for speaking up, and Matt is promising more ‘surprises’ all week,” one WordPress open-source community member who has contributed to the project for more than 10 years told me. They requested to speak anonymously because they fear retribution from Mullenweg. “The saddest part is that while WordPress is a tool we use in our work, for many of us it is much more than a software. It is a true global community, made up of long-time friends and coworkers who share a love for the open-source project and its ideals. We are all having that very abruptly ripped away from us.” 

In a Slack channel for WordPress community contributors, Mullenweg said on Wednesday that the checkbox is part of a ban on WP Engine from using WordPress.org’s resources.

Screenshot via @JavierCasares on X

Mullenweg explained the ban in a blog post published on the WordPress.org site in September, saying it’s “because of their legal claims and litigation against WordPress.org.” (WP Engine named Automattic and Mullenweg as defendants in its lawsuit, which we’ll get to in a moment, but not WordPress.org or the WordPress Foundation.)

“WP Engine is free to offer their hacked up, bastardized simulacra of WordPress’s GPL code to their customers, and they can experience WordPress as WP Engine envisions it, with them getting all of the profits and providing all of the services,” Mullenweg wrote in the blog. “If you want to experience WordPress, use any other host in the world besides WP Engine.” 

WP Engine is an independent company and platform that hosts sites built on WordPress. WordPress.org is an open-source project, while WordPress.com is the commercial entity owned by Automattic, which funds development of, and contributes to, the WordPress codebase. Last month, Mullenweg—who also co-founded Automattic—wrote a post on the organization’s blog, calling WP Engine a “cancer to WordPress” and accusing WP Engine of “strip-mining the WordPress ecosystem, giving our users a crappier experience so they can make more money” because the platform disables revision history tracking.

Mullenweg also criticized WP Engine for not contributing enough to the WordPress open source project, and its use of “WP” in its branding. “Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion,” he wrote. “WP Engine needs a trademark license to continue their business.” He also devoted most of a WordCamp conference talk to his qualms with WP Engine and its investor Silver Lake.

WP Engine sent Automattic and Mullenweg a cease and desist letter demanding that he “stop making and retract false, harmful and disparaging statements against WP Engine,” the platform posted in September. 

The letter claimed that Mullenweg “threatened that if WP Engine did not agree to pay Automattic—his for-profit entity—a very large sum of money before his September 20th keynote address at the WordCamp US Convention, he was going to embark on a self-described ‘scorched earth nuclear approach’ toward WP Engine within the WordPress community and beyond.”

Automattic lobbed its own cease and desist back. “Your unauthorized use of our Client’s trademarks infringes on their rights and dilutes their famous and well-known marks. Negative reviews and comments regarding WP Engine and its offerings are imputed to our Client, thereby tarnishing our Client’s brands, harming their reputation, and damaging the goodwill our Client has established in its marks,” the letter states. “Your unauthorized use of our Client’s intellectual property has enabled WP Engine to compete with our Client unfairly, and has led to unjust enrichment and undue profits.” 

The WordPress Foundation’s Trademark Policy page was also changed in late September to specifically name WP Engine. “The abbreviation ‘WP’ is not covered by the WordPress trademarks, but please don’t use it in a way that confuses people,” it now says. “For example, many people think WP Engine is ‘WordPress Engine’ and officially associated with WordPress, which it’s not. They have never once even donated to the WordPress Foundation, despite making billions of revenue on top of WordPress.”

WP Engine filed a lawsuit against Automattic and Mullenweg earlier this month, accusing them of extortion and abuse of power, TechCrunch reported.

Last week, Mullenweg announced that he’d given Automattic employees a buyout package, and 159 employees, or roughly 8.4 percent of staff, took the offer. “I feel much lighter,” he wrote.

According to screenshots posted by WordPress project contributors, there’s a heated debate happening in the WordPress community Slack at the moment—between contributors and Mullenweg himself—about the checkbox.

One contributor wrote that they have a day job as an agency developer, which involves working on sites that might be hosted by WP Engine. “That's as far as my association goes. However, ‘financially or otherwise’ is quite vague and therefore confusing,” they wrote. “For example, people form relationships at events, are former employees, collaborate on a project, contribute to a plugin, or have some other connection that could be considered impactful to whether that checkbox is checked. What's the base level of interaction/association that would mean not checking that box?” 

Mullenweg replied: “It’s up to you whether to check the box or not. I can’t answer that for you.” 

At least two WordPress open-source project contributors—Javier Casares and Andrew Hutchings—posted on X that they’ve been kicked out of the WordPress community Slack after questioning Mullenweg’s actions.

“A few of us asked Matt questions on Slack about the new checkbox on the .org login,” Hutchings posted. “I guess we shouldn't have done that.”

“In today's case, somebody changed the login and disconnected everybody, so, without explanation on the check, if you need to contribute to WordPress and access the website, you need to activate it,” Casares told me in an email. “In my case, this morning, I had to publish a post about a Hosting Team meeting this afternoon.” He had to check the box, he said, because without it he couldn’t access the platform to post it, but the vagueness of the statement concerned him.

He said the people banned this morning included contributors who have been contributing to the WordPress project for more than 10 years, as well as people involved in other open-source projects.

“Why? Only Matt knows why he is doing everything he is doing. I really don't know,” Casares said. 

“Matt’s war against WP Engine has been polarizing and upsetting for everyone in WordPress, but most of the WP community has been relatively insulated from any real effects. Putting a loyalty test in the form of a checkmark on the WordPress.org login page has brought the conflict directly to every community member and contributor. Matt is not just forcing everyone to take sides, he is actively telling people to consult attorneys to determine whether or not they should check the box,” the anonymous contributor I spoke to told me. “It is also more than just whether or not you agree to a legally dubious statement to log in. A growing number of active, dedicated community members, many who have no connection with WP Engine, have had their WordPress.org accounts completely disabled with no notice or explanation as to why. No one knows who will be banned next or for what... Whatever Matt’s end goal is, his ‘tactics,’ especially this legally and ethically ambiguous checkbox, are causing a lot of confusion and mental anguish to people around the world.”

Based on entries to his personal blog and social media posts, Mullenweg has been on safari in Africa this week. Mullenweg did not immediately respond to a request for comment. 



tante, 7 days ago:
"At least two WordPress open-source project contributors—Javier Casares and Andrew Hutchings—posted on X that they’ve been kicked out of the WordPress community Slack after questioning Mullenweg’s actions."

Matt Mullenweg is not fit to lead anything WordPress-related
1 public comment
fxer, 7 days ago:
Mullenweg has always been a twat

Report: Roblox Is Somehow Even Worse Than We Thought, And We Already Thought It Was Pretty Fuckin’ Bad


'Moderators described being paid $12 a day to review countless instances of child grooming and bullying'




tante, 8 days ago:
"Anyone [...] probably already has a dim view of Roblox. Whether it's for the child labour stuff [...] or a host of other issues--from customer service to loot boxes to child predators--I think we'd all agree that it's a pretty shitty platform run by a pretty shitty company. But not even I, an avowed hater, was prepared for the depths of Roblox's reported shittiness until I read through a paper released by Hindenburg Research earlier today."