Thinking loudly about networked beings. Commonist. Projection surface. License: CC-BY

Hundreds of code libraries posted to NPM try to install malware on dev machines


An ongoing attack is uploading hundreds of malicious packages to the open source node package manager (NPM) repository in an attempt to infect the devices of developers who rely on code libraries there, researchers said.

The malicious packages have names that are similar to legitimate ones for the Puppeteer and Bignum.js code libraries and for various libraries for working with cryptocurrency. The campaign, which was active at the time this post was going live on Ars, was reported by researchers from the security firm Phylum. The discovery comes on the heels of a similar campaign a few weeks ago targeting developers using forks of the Ethers.js library.
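The campaign leans on typosquatting: package names one slip away from ones developers already trust. As a minimal sketch of the idea (not Phylum's tooling; the known-good set and dependency list below are invented for illustration), a fuzzy string comparison can flag lookalike names in a dependency list:

```python
# Toy typosquat check: flag dependency names that are suspiciously
# close to, but not identical to, well-known packages. Illustrative
# only; KNOWN is a made-up sample, not a real audit list.
from difflib import SequenceMatcher

KNOWN = {"puppeteer", "bignum", "ethers"}

def flag_lookalikes(deps, threshold=0.8):
    for dep in deps:
        name = dep.lower()
        for known in KNOWN:
            score = SequenceMatcher(None, name, known).ratio()
            if name != known and score >= threshold:
                print(f"suspicious: {dep!r} resembles {known!r} ({score:.2f})")

# Hypothetical dependency list: "puppeter" and "bignum.js" get flagged.
flag_lookalikes(["puppeter", "bignum.js", "left-pad"])
```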

Beware of the supply chain attack

“Out of necessity, malware authors have had to endeavor to find more novel ways to hide intent and to obfuscate remote servers under their control,” Phylum researchers wrote. “This is, once again, a persistent reminder that supply chain attacks are alive and well.”

tante
1 day ago
It's cute that this software supply chain attack on NPM directly targets Ethereum users, who are supposed to check every smart contract they want to interact with to protect themselves, but who don't seem to apply the same rigour when checking the code they include.

Guy makes “dodgy e-bike” from 130 used vapes to make point about e-waste


Disposable vapes are indefensible. Many, or maybe most, of them contain rechargeable lithium-ion batteries, but manufacturers prefer to sell new ones. More than 260 million vape batteries are estimated to enter the trash stream every year in the UK alone. Vapers and vape makers are simply leaving an e-waste epidemic to the planet's future residents to sort out.

To make a point about how wasteful this practice is—and to also make a pretty rad project and video—Chris Doel took 130 disposable vape batteries (the bigger "3,500 puff" types with model 20400 cells) found littered at a music festival and converted them into a 48-volt, 1,500-watt e-bike battery, one that powered an e-bike for more than 20 miles with almost no pedaling. You can see the whole build and watch Doel zoom along trails in his YouTube video.
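For a rough sense of the electrical math, here's a back-of-the-envelope sketch. It assumes a typical 3.7 V nominal lithium-ion cell; the 13S10P layout is inferred from the article's 130-cell and 48-volt figures, not confirmed by the video:

```python
# Napkin math for the pack described above. CELL_NOMINAL_V is an
# assumed typical li-ion value, not a figure from Doel's video.
CELL_NOMINAL_V = 3.7
TOTAL_CELLS = 130
TARGET_PACK_V = 48
MOTOR_W = 1500

series = round(TARGET_PACK_V / CELL_NOMINAL_V)  # 13 cells in series
parallel = TOTAL_CELLS // series                # 10 parallel groups
pack_v = series * CELL_NOMINAL_V                # ~48.1 V nominal

current = MOTOR_W / pack_v                      # ~31 A at full power
print(f"{series}S{parallel}P pack, {pack_v:.1f} V nominal")
print(f"~{current:.0f} A total at {MOTOR_W} W, ~{current / parallel:.1f} A per cell")
```

Thirteen cells in series by ten in parallel uses exactly the 130 cells Doel collected, which is presumably why the pack is sized the way it is.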

A pile of empty aluminum vape shells, and the juice and batteries that came out of them, on Chris Doel's workstation. Credit: Chris Doel
Vape batteries, put into group cases, wired together in batches, and then wired in series into two stacks, next to a multimeter. Credit: Chris Doel
How the battery fits onto the bike that Chris Doel powers with vape batteries: a big bag, ratchet straps, and wiring to a rear hub motor. Just, one more time, folks: do not do this at home. Credit: Chris Doel

To be clear: Do not do this. Do not put disposable vape cartridges in a vise clamp to "pop out" their components. Do not desolder them from vape cartridges that have a surprising amount of concentrate still in them. Do not wire them together using a balance board, group them using 3D-printed cell holders, and then wire them in series. Heck, do not put that much power into a rear hub on a standard bike frame, at least more than once. Doel has a fire extinguisher present and visible on his workbench, and he shows you what happens when two of the wrong batteries happen to make momentary contact—smoke, coughing, and strong warnings.

tante
2 days ago
"Disposable vapes are indefensible. Many, or maybe most, of them contain rechargeable lithium-ion batteries, but manufacturers prefer to sell new ones. More than 260 million vape batteries are estimated to enter the trash stream every year in the UK alone. Vapers and vape makers are simply leaving an e-waste epidemic to the planet's future residents to sort out."

Perplexity AI Offers to Help New York Times With Tech Union Strike


On Monday, a day before Election Day, tech workers for the New York Times went on strike seeking to secure a contract with fairer pay and just-cause job protections. In response, Aravind Srinivas, the CEO of AI search engine Perplexity, tweeted that A.G. Sulzberger, chairman of The New York Times Company, should contact him for assistance during the strike. It's unclear exactly what services Srinivas is offering Sulzberger, but it appears that the CEO of an AI company is trying to help the Times bypass its human workers, who are currently in the middle of an authorized labor strike.

The offer is especially ironic given Perplexity’s repeated cases of lifting and regurgitating human journalists’ work without credit. Earlier this year Forbes found that the AI service was using much of its original investigative reporting without credit. And last month Dow Jones and the New York Post sued Perplexity, alleging “massive” copyright infringement.

“Hey AG Sulzberger @nytimes - sorry to see this,” Srinivas tweeted in response to a Sulzberger email saying the strike would likely continue through the election. “Perplexity is on standby to help ensure your essential coverage is available to all through the election. DM me anytime here.”

Perplexity offers an AI-powered search engine which, like many others, is built in part by scraping information and material from the web. WIRED previously found that Perplexity was scraping sites without permission, and plagiarizing multiple articles.

In a statement published on Monday, the NewsGuild of NY and the Times Tech Guild said that the latter “has walked off the job in a ULP strike that threatens Election Day.” The Times Tech Guild is the union “that powers the technology behind election coverage at The New York Times,” the statement added.

Throughout Monday, members have been picketing outside The New York Times building. The statement asked New York Times readers to honor the digital picket line and not to play New York Times-owned games such as Wordle.

“Throughout the bargaining process, Times management has engaged in numerous labor law violations, including implementing return-to-office mandates without bargaining and attempting to intimidate members through interrogations about their strike intentions,” the statement continued. “The NewsGuild of NY has filed unfair labor practice charges against The Times on these tactics as well as numerous other violations of labor law.”

On Monday, The New York Times announced it had passed 11 million subscribers.

Perplexity did not immediately respond to a request for comment. Neither did the NewsGuild of NY.



tante
2 days ago
Perplexity are not just pushing their stochastic parrots into newsrooms; they are scabs, helping publishers disenfranchise their workers.

If you are not a multi-millionaire, they are not on your side and you are a sucker for using their product and paying for it.

Perplexity debuts an AI-powered election information hub

Vector collage of the Perplexity logo. Credit: The Verge

AI search company Perplexity is putting to the test whether it’s a good idea to use AI to serve crucial voting information with a new Election Information Hub it announced on Friday. The hub offers things like AI-generated answers to voting questions and summaries of candidates, and on November 5th, Election Day, the company says it will track vote counts live, using data from The Associated Press.

Perplexity says its voter information, which includes polling requirements, locations, and times, is based on data from Democracy Works (the same group that powers similar features from Google), and that its election-related answers come from “a curated set of the most trustworthy and informative sources.”

Perplexity spokesperson Sara Plotnick...


tante
2 days ago
Perplexity starts doing election information with their stochastic parrot. What could go wrong?

"The AI summaries when I clicked on candidates had some errors, like failing to mention that Robert F. Kennedy, who’s on the ballot where I live, had dropped out of the race. It also listed a “Future Madam Potus” candidate that, when clicked, led me to the above summary of Vice President Kamala Harris’ candidacy, except with some meme pictures that aren’t in her normal summary."

The Cult of Microsoft


Soundtrack: EL-P - Flyentology

At the core of Microsoft, a three-trillion-dollar hardware and software company, lies a kind of social poison — an ill-defined, cult-like pseudo-scientific concept called "The Growth Mindset" that drives company decision-making in everything from how products are sold to how your on-the-job performance is judged.

I am not speaking in hyperbole. Based on a review of over a hundred pages of internal documents and conversations with multiple current and former Microsoft employees, I have learned that Microsoft — at the direction of CEO Satya Nadella — has oriented its entire culture around the innocuous-sounding (but, as we’ll get to later, deeply troubling) Growth Mindset concept, and has taken extraordinary measures to institute it across the organization.

One's "growth mindset" determines one’s success in the organization. Broadly speaking, it includes attributes that we can all agree are good things. People with growth mindsets are willing to learn, accept responsibility, and strive to overcome adversity. Conversely, those considered to have a "fixed mindset" are framed as irresponsible, selfish, and quick to blame others. They believe that one’s aptitudes (like their skill in a particular thing, or their intelligence) are immutable and cannot be improved through hard work.

On the face of things, this sounds uncontroversial. The kind of nebulous pop-science that a CEO might pick up at a leadership seminar. But, from the conversations I’ve held and the internal documents I’ve read, it’s clear that the original (and shaky) scientific underpinnings of mindset theory have devolved into an uglier, nastier beast at Redmond. 

The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work. Nadella even launched his own Bible — Hit Refresh — in 2017, which he claims has "recommendations presented as algorithms from a principled, deliberative leader searching for improvement." 

I’ve used the terms “messianic,” “Bible,” and “divine” for a reason. This book — and the ideas within — have taken an almost religious-like significance within Microsoft, to the point where it’s actually weird. 

Like any messianic tale, the book is centered around the theme of redemption, with the subtitle mentioning a “quest to rediscover Microsoft’s soul.” Although presented and packaged like any bland business book that you’d find in an airport Hudson News and half-read on a red eye to nowhere, its religious framing extends to a separation of dark and enlightened ages. The dark age — Steve “Developers” Ballmer’s Microsoft, stagnant and missing winnable opportunities, like mobile — is contrasted against this brave, bright new era where a newly-assertive Redmond pushes frontiers in places like AI.

Hit Refresh became a New York Times bestseller likely due to the fact that Microsoft employees were instructed (based on an internal presentation I’ve reviewed) to "facilitate book discussions with customers or partners" using talking points provided by the company around subjects like culture, trust, artificial intelligence, and mixed reality.

Side note: Hey, didn’t Microsoft lay off a bunch of people from its mixed reality team earlier this year?

Nadella, desperate to hit the bestseller list and frame himself as some kind of guru, attempted to weaponize tens of thousands of Microsoft employees as his personal propagandists, instructing them to do things like...

Use these questions to facilitate a book discussion with your customers or partners if they are interested in exploring the ideas around leadership, culture and technology in Hit Refresh...

Reflect on each of the three passages about lessons learned from cricket and discuss how they could apply in your current team. (pages 38-40)

"...compete vigorously and with passion in the face of uncertainty and intimidation" (page 38)

"...the importance of putting your team first, ahead of your personal statistics and recognition" (page 39)

"One brilliant character who does not put team first can destroy the entire team" (page 39)

Nadella's campaign was hugely successful, generating years of fawning press about him bringing a "growth mindset" to Microsoft and turning employees from "know-it-alls" into "learn-it-alls." Nadella is hailed as "embodying a growth mindset," with claims that he "pushes people to think of themselves as students as part of how he changed things," the kind of thing that sounds really good but is difficult to quantify.

This is, it turns out, a continual problem with the Growth Mindset itself.


If you're wondering why I'm digging into this so deeply, it's because — and I hate to repeat myself — the Growth Mindset is at the very, very core of Microsoft's culture. It’s both a tool for propaganda and a religion. And it is, in my opinion, a flimsily-founded kind of grift-psychology, one that is deeply irresponsible to implement at scale.

In the late 1980s, American psychologist Carol Dweck started researching how mindsets — or, how a person perceives a challenge, or their own innate attributes — can influence outcomes in things like work and school. Over the coming decades, she further refined and defined her ideas, coining the terms “growth mindset” and “fixed mindset” in 2012, a mere two years before Nadella took over at Microsoft. These can be explained as follows:

  • A "fixed" mindset, where one believes that our intelligence and skills are innate, and cannot be significantly changed or improved upon.
    • To quote Microsoft's training materials, "A fixed mindset is an assumption that character, intelligence, and creative ability are static givens that can't be altered."
  • A "growth" mindset where you believe that your intelligence and the things that you can do can be improved with enough effort.
    • To quote Microsoft's training materials, "A growth mindset is the belief that abilities and intelligence can be developed through perseverance and hard work."

Mindset theory itself is incredibly controversial for a number of reasons, chief of which is that nobody can seem to reliably replicate the results of Dweck's academic work. For the most part, research into mindset theory has been focused on children, with the idea that if we believe we can learn more we can learn more, and that by simply thinking and trying harder, anything is possible.

One of the weird tropes of mindset theory is that praise for intelligence is bad. Dweck herself said in an interview in 2016 that it's better to tell a kid that they worked really hard or put in a lot of effort rather than telling them they're smart, to "teach them they can grow their skills in that way." 

Another is that you should say "not yet" instead of "no," as that teaches you that anything is possible, as Dweck believes that kids are "condition[ed] to show that they have talents and abilities all the time...[and that we should show them] that the road to their success is learning how to think through problems [and] bounce back from failures."

All of this is the kind of Vaynerchuckian flim-flam that you'd expect from a YouTube con artist rather than a professional psychologist, and one would think that it'd be a bad idea to talk about it if it wasn't scientifically proven — let alone shape the corporate culture of a three-trillion-dollar business around it.

The problem, however, is that things like "mindset theory" are often peddled with little regard for whether they're true or not, selling concepts that make the reader feel smart because they sort of make sense. After all, being open to the idea that we can do anything is good, right? Surely having a positive and open mind would lead to better outcomes, right?

Sort of, but not really. 

A study out of the University of Edinburgh from early 2017 found that mindset didn't really factor into a child's outcomes (emphasis mine).

Mindset theory states that children’s ability and school grades depend heavily on whether they believe basic ability is malleable and that praise for intelligence dramatically lowers cognitive performance. Here we test these predictions in 3 studies totalling 624 individually tested 10-12-year-olds.

Praise for intelligence failed to harm cognitive performance and children’s mindsets had no relationship to their IQ or school grades. Finally, believing ability to be malleable was not linked to improvement of grades across the year. We find no support for the idea that fixed beliefs about basic ability are harmful, or that implicit theories of intelligence play any significant role in development of cognitive ability, response to challenge, or educational attainment.

...Fixed beliefs about basic ability appear to be unrelated to ability, and we found no support for mindset-effects on cognitive ability, response to challenge, or educational progress

The problem, it seems, is that Dweck's work falls apart the second that Dweck isn't involved in the study itself.

In a September 2016 survey by Education Week's Research Center, 72% of teachers said the Growth Mindset wasn’t effective at fostering high standardized test scores. Another study (highlighted in this great article from Melinda Wenner Moyer) run by Case Western Reserve University psychologist Brooke MacNamara and Georgia Tech psychologist Alexander Burgoyne published in the Psychological Bulletin said that “the apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.”

In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:

Dr. MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and a half times as likely to report significant effects compared with studies in which authors had no financial incentives.

Wenner Moyer's piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people's data. Again, from Wenner Moyer:

For you data geeks out there, I’ll note that this growth mindset controversy is a microcosm of a much broader controversy in the research world relating to meta-analysis best practices. Some researchers think that it’s best to lump data together and look for average effects, while others, like Dr. Tipton, don’t. “There's often a real focus on the effect of an intervention, as if there's only one effect for everyone,” she said. She argued to me that it’s better to try to figure out “what works for whom under what conditions.” Still, I’d argue there can be value to understanding average effects for interventions that might be broadly used on big, heterogeneous groups, too.

The problem, it seems, is that a "growth mindset" is hard to define, the methods of measuring someone's growth (or fixed) mindset are varied, and the effects of each form of implementation are also hard to evaluate or quantify. It’s also the case that, as Dweck’s theory has grown, it’s strayed away from the scientific fundamentals of falsifiability and testability. 

Case in point: In 2016, Carol Dweck introduced the concept of a “false growth mindset.” This is where someone outwardly professes a belief in mindset theory, but their internal monologue says something different. If you’re a social scientist trying to deflect from a growing corpus of evidence casting doubt on the efficacy of your life’s work, this is incredibly useful.

Someone accused of having a false growth mindset could argue, until they’re blue in the face, that they genuinely do believe all of this crap. And the accuser could retort: “Well, you would say that. You’ve got a false growth mindset.”

To quote Wenner Moyer, "we shouldn't pretend that growth mindset is a panacea." To quote George Carlin (speaking on another topic, although pertinent to this post): “It’s all bullshit, and it’s bad for you.”

In Satya Nadella's Hit Refresh, he says that "growth mindset" is how he describes Microsoft's emerging culture, and that "it's about every individual, every one of us having that attitude — that mindset — of being able to overcome any constraint, stand up to any challenge, making it possible for us to grow and, thereby, for the company to grow."

Nadella notes that when he became CEO of Microsoft, he "looked for opportunities to change [its] practices and behaviors to make the growth mindset vivid and real." He says that Minecraft, the game it acquired in 2014 for $2.5bn, "represented a growth mindset because it created new energy and engagement for people on [Microsoft's] mobile and cloud technologies." At one point in the book, he describes how an anonymous Microsoft manager came to him to share how much he loved the "new growth mindset," how much he wanted to see more of it, and that he "knew these five people who don't have a growth mindset." Nadella adds that he believed the manager in question was "using growth mindset to find a new way to complain about others," and that this was not what he had in mind.

The problem, however, is that this is the exact culture that Microsoft fosters — one where fixed mindsets are bad, growth mindsets are good, and the definition of both varies wildly depending on the scenario. 

One employee related to me that managers occasionally tell them that they "did not display a growth mindset" after meetings, with little explanation of what that meant or why it was said. Another said that "[the growth mindset] can be an excuse for anything, like people would complain about obvious engineering issues, that the code is shit and needs reworking, or that our tooling was terrible to work with, and the response would be to ‘apply Growth Mindset’ and continue churning out features."

In essence, the growth mindset means whatever it has to mean at any given time, as evidenced by internal training materials that suggest that individual contributions are subordinate to "your contributions to the success of others," the kind of abusive management technique that exists to suppress worker wages and, for the most part, deprive them of credit or compensation.

One post from Blind, an anonymous social network where you're required to have a company email to post, noted in 2016 that "[the Growth Mindset] is a way for leadership to frame up shitty things that everybody hates in a way that encourages us to be happy and just shut the fuck up," with another adding it was "KoolAid of the month."

In fact, the big theme of Microsoft's "Growth Mindset" appears to be "learn everything you can, say yes to everything, then give credit to somebody else." While this may in theory sound positive — a selflessness that benefits the greater whole — it inevitably, based on conversations with Microsoft employees, leads to managerial abuse. 

Managers, from the conversations I've had with Microsoft employees, are the archons of the Growth Mindset — the ones that declare you are displaying a fixed mindset for saying no to a task or a deadline, and frame "Growth Mindset" contributions as core to their success. Microsoft's Growth Mindset training materials continually reference "seeing feedback as more fair, specific and helpful," and "persisting in the face of setbacks," framing criticism as an opportunity to grow.

Again, this wouldn't be a problem if it wasn't so deeply embedded in Microsoft's culture. If you search for the term “Growth Mindset” on the Microsoft subreddit, you’ll find countless posts from people who have applied for jobs and internships asking for interview advice, and being told to demonstrate they have a growth mindset to the interviewer. Those who drink the Kool Aid in advance are, it seems, at an advantage. 

“The interview process works more as a personality test,” wrote one person. “You're more likely to be chosen if you have a growth mindset… You can be taught what the technologies are early on, but you can't be taught the way you behave and collaborate with others.”

Personality test? Sounds absolutely nothing like the Church of Scientology.

Moving on.

Microsoft boasts in its performance and development materials that it "[doesn’t] use performance ratings [as it goes] against [Microsoft's] growth mindset culture where anyone can learn, grow and change over time," meaning that there are no numerical evaluations of what a growth mindset is or how it might be successfully implemented.

There are many, many reasons this is problematic, but the biggest is that the growth mindset is directly used to judge your performance at Microsoft. Twice a year, Microsoft employees have a "Connect" with managers where they must answer a number of different questions about their current and future work at Microsoft, with sections titled things like "share how you applied a growth mindset," with prompts to "consider when you could have done something different," and how you might have applied what you learned to make a greater impact. Once the form is filled out, your manager responds with comments, and then the document is finalized and published internally, though it's unclear who is able to see them.

In theory, they're supposed to be a semi-regular opportunity to reflect on your work and think about how you might do better. In practice? Not so much. The following was shared with me by a Microsoft employee.

First of all, everyone haaaaates filling those out. You need to include half-a-year worth of stuff you've done, which is very hard. A common advice is to run a diary where you note down what you did every single day so that you can write something in the Connect later. Moreover, it forces you into a singular voice. You cannot say "we" in a Connect, it's always "I". Anyone who worked in software (or I would suspect most jobs) will tell you that's idiotic. Almost everything is a team effort. Second, the stakes of those are way too high. It's not a secret that the primary way decisions about bonuses and promotions are done is by looking at this. So this is essentially your "I deserve a raise" form, you fill out one, max two of those a period and that's it.

Microsoft's "Connects" are extremely important to your future at the company, and failing to fill them in in a satisfactory manner can lead to direct repercussions at work. An employee told me the story of Feng Yuan, a high-level software engineer with decades at the company, beloved for his helpful internal emails about working with Microsoft's .NET platform, who was deemed as "underperforming" because he "couldn't demonstrate high impact in his Connects."

He was fired for "low performance," despite the fact that he spent hours educating other employees, running training sessions, and likely saving the company millions in overhead by making people more efficient. One might even say that Yuan embodied the Growth Mindset, selflessly dedicating himself to educating others as a performance architect at the company. Feng's tenure ended with an internal email criticizing the Connect experience.

Feng, however, likely needed to be let go for other reasons. Another user on Blind related a story of Feng calling a junior engineer's code "pathetic" and "a waste of time," spending several minutes castigating the engineer until they cried, relating that they had heard other stories about him doing so in the past. This, clearly, was not a problem for Microsoft, but filling in his Connect was.

One last point: These “Connects” are high-stakes games, with the potential to win or lose, depending on how compelling your story is and how many boxes it ticks. As a result, responses to each of the questions invariably take the form of a short essay. It’s not enough to write a couple of sentences, or a paragraph. You’ve really got to sell yourself, or demonstrate — with no margin for doubt — that you’re on-board with the growth mindset mantra. This emphasis on long-form writing (whether accidental or intentional) inevitably disadvantages people who don’t speak English (or whatever language is used in their office) natively, or have conditions like dyslexia.


The problem, it seems, is that Microsoft doesn't really care about the Growth Mindset at all, and is more concerned with stripping employees of their dignity and personality in favor of boosting their managers' goals. Some of Microsoft's "Connect" questions veer dangerously close to "attack therapy," where you are prompted to "share how you demonstrated a growth mindset by taking personal accountability for setbacks, asking for feedback, and applying learnings to have a greater impact."

Your career at Microsoft — a $3 trillion company — is largely defined by the whims of your managers and your ability to write essays of indeterminate length, based on your adherence to a vague, scientifically-questionable "mindset theory." You can (and will!) be fired for failing to express your "growth mindset" — a term as malleable as its alleged adherents — to managers who are also interpreting its meaning in real time, likely for their own benefit.

This all feels so distinctly cult-y. Think about it. You have a High Prophet (Satya Nadella) with a holy book (Hit Refresh). You have an original sin (a fixed mindset) and a path to redemption (embracing the growth mindset). You have confessions. You have a statement of faith (or close enough) for new members to the church. You have a priestly class (managers) with the power to expel the insufficiently-devout (those with a sinful fixed mindset). Members of the cult are urged to apply its teachings to all facets of their working life, and to proselytize to outsiders.

As with any scripture, its textual meanings are open to interpretation, and can be read in ways that advantage or disadvantage a person.

And, like any cult, it encourages the person to internalize their failures and externalize their successes. If your team didn’t hit a deadline, it isn’t because you’re over-worked and under-resourced. You did something wrong. Maybe you didn’t collaborate enough. Perhaps your communication wasn’t up to scratch. Even if those things are true, or if it was some other external factor that you have no control over, you can’t make that argument because that would demonstrate a fixed mindset. And that would make you a sinner.  

Yet there's another dirty little secret behind Microsoft's Connects.

Microsoft is actively training its employees to generate their responses to Connects using Copilot, its generative AI. When I say "actively training," I mean that there is an entire document — "Copilot for Microsoft 365 Performance and Development Guidance" — that explains, in detail, how an employee (or manager) can use Copilot to generate the responses for their Connects. While there are guidelines about how managers can't use Copilot to "infer impact" or "make an impact determination" for direct reports, they are allowed to "reference the role library and understand the expectations for a direct report based on their role profile."

Side Note: What I can't speak to here is how common using Copilot to fill in a Connect or summarize someone else's Connect actually is. However, the documents I have reviewed, as I'll explain, explicitly instruct Microsoft employees and managers on how to do so, and frame doing so positively.

In essence, a manager can't say how good you were at a job using Copilot, but they can use Copilot to see whether you are meeting expectations using it. Employees are instructed to use Copilot to "collect and summarize evidence of accomplishments" from internal Microsoft sources, and to "ensure [their] inputs align to Microsoft's Performance & Development philosophy."

In another slide from an internal Microsoft presentation, Microsoft directly instructs employees how to prompt Copilot to help them write a self-assessment for their performance review, to "reflect on the past," to "create new core priorities," and find "ideas for accomplishments." The document also names those who "share their Copilot learnings with other Microsoft employees" as "Copilot storytellers," and points them to the approved Performance and Development prompts from the company.

At this point, things become a little insane.

In one slide, titled "Copilot prompts for Connect: Ideas for accomplishments," Microsoft employees are given a prompt to write a self-assessment for their performance review based on their role at Microsoft. It then generates 20 "ideas for success measurements" to include in their performance review. It's unclear if these are sourced from anywhere, or if they're randomly generated. When a source ran the query multiple times, it hallucinated wildly different statistics for the same metrics. 

Microsoft's guidance suggests that these are meant to be "generic ideas on metrics" which a user should "modify to reflect their own accomplishments," but you only have to ask it to draft your own achievements to have these numbers — again, generated using the same models as ChatGPT — customized to your own work.

While Copilot warns you that "AI-generated content may be incorrect," it's reasonable to imagine that somebody might use its outputs — either the "ideas" or the responses — as the substance of their Connect/performance review. I have also confirmed that when asked to help draft responses based on things that you've achieved since your last Connect, Copilot will use your activity on internal Microsoft services like Outlook, Teams and your previous Connects.

Side note: How bad is this? Really bad. A source I talked to confirmed that personalized achievements are also prone to hallucinations. When asked to summarize one Microsoft employee’s achievements based on their emails, messages, and other internal documents from the last few quarters, Copilot spat out a series of bullet points with random metrics about their alleged contributions, some of which the employee didn’t even have a hand in, citing emails and documents that were either tangentially related or entirely unrelated to their “achievements,” including one that linked to an internal corporate guidance document that had nothing to do with the subject at hand.

On a second prompt, Copilot produced entirely different achievements, metrics and citations. To quote one employee, “Some wasn't relevant to me at ALL, like a deck someone else put together. Some were relevant to me but had nothing to do with the claim. It's all hallucination.”

To be extremely blunt: Microsoft is asking its employees to draft their performance reviews based on the outputs of generative AI models — the same ones underpinning ChatGPT — that are prone to hallucination. 

Microsoft is also — as I learned from an internal document I’ve reviewed — instructing managers to use it to summarize "their direct report's Connects, Perspectives and other feedback collected throughout the fiscal year as a basis to draft Rewards/promotion justifications in the Manage Rewards Tool (MRI)," which in plain English means "use a generative AI to read performance reviews that may or may not be written by generative AI, with the potential for hallucinations at every single step."

Microsoft's corporate culture is built on a joint subservience to abusive pseudoscience and the evaluations of hallucination-prone artificial intelligence. Working at Microsoft means implicitly accepting that you are being evaluated on your ability to adhere to the demands of an obtuse, ill-defined "culture," and the knowledge that whatever you say both must fit a format decided by a generative AI model so that it can be, in turn, read by the very same model to evaluate you.

While Microsoft will likely state that corporate policy prohibits using Copilot to "infer impact or make impact determination for direct reports" or "model reward outcomes," there is absolutely no way that instructing managers to summarize people's Connects — their performance reviews — as a means of providing reward/promotion justifications will end with anything other than an artificial intelligence deciding whether someone is hired or fired. 

Microsoft's culture isn't simply repugnant, it's actively dystopian and deeply abusive. Workers are evaluated based on their adherence to pseudo-science, their "achievements" — which may be written by generative AI — potentially evaluated by managers using generative AI. While they ostensibly do a "job" that they're "evaluated for" at Microsoft, their world is ultimately beholden to a series of essays about how well they are able to express their working lives through the lens of pseudoscience, and said expressions can be both generated by and read by machines.

I find this whole situation utterly disgusting. The Growth Mindset is a poorly-defined and unscientific concept that Microsoft has adopted as gospel, sold through Satya Nadella's book and reams of internal training material, and it's a disgraceful thing to build an entire company upon, let alone one as important as Microsoft.

Yet to actively encourage the company-wide dilution of performance reviews — and by extension the lives of Microsoft employees — by introducing generative AI is reprehensible. It shows that, at its core, Microsoft doesn't actually want to evaluate people's performance, but to see how well it can hit the buttons that make managers and the Senior Leadership Team feel good, a masturbatory and specious culture built by a man — Satya Nadella — that doesn't know a fucking thing about the work being done at his company.

This is the inevitable future of large companies that have simply given up on managing their people, sacrificing their culture — and ultimately their businesses — to as much automation as is possible, to the point that the people themselves are judged based on the whims of managers that don't do the actual work and the machines that they've found to do what little is required of them. Google now claims that 25% of its code is written by AI, and I anticipate Microsoft isn't far behind.

Side note: This might be a little out of the scope of this newsletter, but the 25% stat is suspect at best.

First, even before generative AI was a thing, developers were using autocomplete to write code. There are a lot of patterns in writing software. Code has to meet a certain format to be valid. And so, the difference between an AI model creating a class declaration and an IDE doing it is minimal. You’ve substituted one tool for another, but the outcome is the same.

Second, I’d question how much of this code is actually… you know… high-value stuff. Is Google using AI to build key parts of its software, or is it just writing comments and creating unit/integration tests? Based on my conversations with developers at other companies that have been strong-armed into using Copilot, I’m fairly confident this is the case.

Third, lines of code is an absolute dogshit metric. Developers aren’t judged by how many bytes they can shovel into a text editor, but by how good — how readable, efficient, reliable, secure — their work is. To quote The Zen of Python, “Simple is better than complex… Sparse is better than dense.” (A toy illustration follows at the end of this side note.)

This brings me on to my fourth, and last, point: How much of this code is actually solid from the moment it’s created, and how much has to get fixed by an actual human engineer? 

At some point, these ugly messes will collapse as it becomes clear that their entire infrastructure is written upon increasingly-automated levels of crap, rife with hallucinations and devoid of any human touch.
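To make the third point concrete, here's a toy pair of functions (names and code mine, purely illustrative, not from Google or Microsoft). Both behave identically; a lines-of-code metric simply counts the sparse one as several times more "output":

```python
# Two behaviorally identical implementations of the same function.
# By a lines-of-code metric the sparse version is ~5x the "output"
# of the one-liner, yet the count says nothing about readability,
# efficiency, reliability, or security.
def mean_dense(xs): return sum(xs) / len(xs) if xs else 0.0

def mean_sparse(xs):
    """Arithmetic mean of xs, or 0.0 for an empty sequence."""
    if not xs:
        return 0.0
    total = sum(xs)
    count = len(xs)
    return total / count

assert mean_dense([1, 2, 3]) == mean_sparse([1, 2, 3]) == 2.0
```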

The Senior Leadership Team of Microsoft are a disgrace and incapable of any real leadership, and every single conversation I've had with Microsoft employees for this article speaks to a miserable, rotten culture where managers castigate those lacking the "growth mindset," a term that oftentimes means "this wasn't done fast enough, or you didn't give me enough credit."

Yet because the company keeps growing, things will stay the same.

At some point, this house of cards will collapse. It has to. When you have tens of thousands of people vaguely aspiring to meet the demands of a pseudoscientific concept, filling in performance reviews using AI that will ultimately be judged by AI, you are creating a non-culture — a company that elevates those who can adapt to the system rather than service any particular customer.

It all turns my fucking stomach.

sarcozona
9 hours ago
It’s interesting how many corporations turn into cults
tante
2 days ago
"The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work."

Zuckerberg: The AI Slop Will Continue Until Morale Improves


During my year-long odyssey into the world of AI-generated slop on Facebook and other Meta platforms, I had come to the conclusion that Meta does not mind—and actively likes—the bizarre AI spam that has taken over its platforms. Wednesday, in a call with investors, Mark Zuckerberg made this clear: The AI-generated content will continue until morale improves.

In a quarterly earnings call that was overwhelmingly about AI and Meta’s plans for it, Zuckerberg said that new, AI-generated feeds are likely to come to Facebook and other Meta platforms. Zuckerberg said he is excited for the “opportunity for AI to help people create content that just makes people’s feed experiences better.” Zuckerberg’s comments were first reported by Fortune.

“I think we’re going to add a whole new category of content, which is AI generated or AI summarized content or kind of existing content pulled together by AI in some way,” he said. “And I think that that’s going to be just very exciting for the—for Facebook and Instagram and maybe Threads or other kind of Feed experiences over time.”

Zuckerberg said this would continue to be an evolution of traditional feeds on Meta products. As we have previously reported, the virality of AI-generated slop made and posted by people trying to make money on Facebook has been powered by Meta’s “recommendation” algorithm, which boosts content that was not posted by your friends or anyone you know—and which is often engagement bait—into feeds because it increases engagement and time on site. Wednesday, Zuckerberg explained this strategy in the investor call, and said the new AI feeds would be built with the success of the recommended feed in mind.

“If you look at the big trends in Feeds over the history of the company, it started off as friends, right?” he said. “So all the updates that were in there were basically from your friends posting things. And then we went into this era where we added in creator content too, where now a very large percent of the content on Instagram and Facebook is not from your friends. It may not even be from people that you’re following directly. It could just be recommended content from creators that we can algorithmically determine is going to be interesting and engaging and valuable to you.”

What he is describing, of course, are social media networks that are not even remotely social and which may increasingly not even feature much human-made content at all. Both Facebook and Instagram are already going this way, with the rise of AI spam, AI influencers, and armies of people copy-pasting and clipping content from other social media networks to build their accounts. This content and this system, Meta said, has led to an 8 percent increase in time spent on Facebook and a 6 percent increase in time spent on Instagram, all at the expense of a shared reality and human connections to other humans. 

In the earnings call, Zuckerberg and Susan Li, Meta’s CFO, said that Meta has already slop-ified its ad system, with more than 1 million businesses now creating more than 15 million ads per month on Meta platforms using generative AI.

“The Gen AI tools that we have built here that will help us enable businesses to make ads significantly more customized at scale, which is going to accrue to ad performance,” Li said. “That’s a place where, again, we’re already seeing promising results in both performance gains and adoption. I think we shared that over a million advertisers use our Gen AI ad tools specifically.”

Like most every other company that has gone all-in on AI, Li said Meta trains its AI models “on content that is publicly available online, and we crawl the web for a variety of purposes.” 



tante
2 days ago
"During my year-long odyssey into the world of AI-generated slop on Facebook and other Meta platforms, I had come to the conclusion that Meta does not mind—and actively likes—the bizarre AI spam that has taken over its platforms. Wednesday, in a call with investors, Mark Zuckerberg made this clear: The AI-generated content will continue until morale improves."