Microsoft lays off its AI ethics and society team

November 15, 2024 · 9 Mins Read

Microsoft has laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design, at a time when the company is leading the charge to make AI tools available to the general public, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsible AI work is increasing despite the recent layoffs.

“Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this,” the company said in a statement. “Over the past six years, we have increased the number of people on our product teams and in the Office of Responsible AI who, along with all of us at Microsoft, are responsible for ensuring that we put our AI principles into practice. (…) We appreciate the pioneering work carried out by Ethics & Society to help us in our ongoing approach to responsible AI.”

But employees said the ethics and society team played a critical role in ensuring the company’s responsible AI principles were actually reflected in the design of the products it shipped.

“Our job was to … create rules in areas where there were none.”

“People would look at the principles coming out of the Responsible AI office and say, ‘I don’t know how this applies,'” says one former employee. “Our job was to show them and create rules in areas where there were none.”

In recent years, the team designed a role-playing game called Judgment Call that helped designers imagine the potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team published publicly.

Most recently, the team worked to identify the risks posed by Microsoft’s adoption of OpenAI technology across its product suite.

The ethics and society team was at its peak in 2020, when it had around thirty employees including engineers, designers and philosophers. In October, the team was reduced to around seven people as part of a reorganization.

In a meeting with the team following the reorganization, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move fast. “The pressure from (CTO) Kevin (Scott) and (CEO) Satya (Nadella) is very, very high to take these most recent OpenAI models and the ones that come after them and get them into customers’ hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.

Because of that pressure, Montgomery said, much of the team was going to be moved to other areas of the organization.

Some team members pushed back. “I’m going to have the audacity to ask you to reconsider this decision,” one employee said on the call. “While I understand there are business issues at play… what this team has always been deeply concerned about is how we impact society and the negative impacts we have had. And they are significant.”

Montgomery declined. “Can I reconsider? I don’t think I will,” he said. “Because unfortunately the pressures remain the same. You don’t have the view that I have, and you can probably be thankful for that. There’s a lot of stuff being ground up into the sausage.”

However, in response to questions, Montgomery said the team would not be eliminated.

“It’s not that it’s disappearing, it’s that it’s evolving,” he said. “It’s moving toward putting more of the energy within the individual product teams that build the services and software, which means the central hub that has been doing some of the work is delegating its capabilities and responsibilities.”

Most of the team members were transferred elsewhere within Microsoft. Afterward, the remaining members of the ethics and society team said the smaller crew made it difficult to implement their ambitious plans.

Decision Leaves Fundamental Void in Holistic AI Product Design, Employee Says

About five months later, on March 6, the remaining employees were told to join a Zoom call at 11:30 a.m. PT to hear a “critical business update” from Montgomery. During the meeting, they were told that their team was being eliminated after all.

One employee says the move leaves a fundamental void in terms of user experience and holistic design of AI products. “The worst part is that we exposed the company to risk and human beings to risk by doing this,” they explained.

The conflict highlights an ongoing tension for tech giants that are creating divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of technology and resolve any issues before they ship.

But they also have the job of saying “no” or “slow down” inside organizations that often don’t want to hear it, or of spelling out risks that could create legal exposure for the company if surfaced in legal discovery. And the resulting friction sometimes boils over into public view.

In 2020, Google fired AI ethics researcher Timnit Gebru after she published a paper critical of the large language models that would explode in popularity two years later. The resulting furor led to the departures of several more top leaders within the department, and diminished the company’s credibility on responsible AI issues.

Microsoft focused on delivering AI tools faster than competitors

Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft focused on shipping AI tools faster than its competitors, company leadership became less interested in the kind of long-term thinking the team specialized in.

This is a dynamic that deserves careful scrutiny. On one hand, Microsoft may now have a rare chance to gain significant ground against Google in search, productivity software, cloud computing, and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1 percent of market share it could take from Google in search would bring in $2 billion in annual revenue.

That potential explains why Microsoft has so far invested $11 billion in OpenAI, and is currently racing to integrate the startup’s technology into every corner of its empire. It appears to be having some early success: the company said last week that Bing now has 100 million daily active users, a third of whom are new since the search engine relaunched with OpenAI’s technology.

On the other hand, everyone involved in developing AI agrees that the technology poses potent and possibly existential risks, both known and unknown. Tech giants have been careful to signal that they take these risks seriously; Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem notable.

The elimination of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released OpenAI-powered tools to a global audience.

Last year, the team wrote a memo detailing the brand risks associated with Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.

Although text-to-image conversion technology has proven extremely popular, Microsoft researchers correctly predicted that it could also threaten artists’ livelihoods by making it easy for anyone to copy their style.

“In testing Bing Image Creator, it was discovered that with a simple prompt including only the artist name and a medium (painting, print, photograph or sculpture), the generated images were almost indistinguishable from the original works,” the researchers wrote in the memo.

“The risk of harm to the brand… is real and significant enough to require remedy.”

They added: “The risk of harm to the brand, both to the artist and their financial stakeholders, as well as negative public relations for Microsoft resulting from artists’ complaints and public backlash, is real and significant enough to require remedy before the Microsoft brand is damaged.”

Additionally, last year, OpenAI updated its terms of service to give users “full ownership rights to images you create with DALL-E.” This decision worried Microsoft’s ethics and society team.

“If an AI image generator mathematically reproduces images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights to the resulting image,” they wrote in the memo.

Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts, and creating a marketplace to sell an artist’s work that would surface if someone searched for their name.
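To make the first of those mitigations concrete, a prompt-level name filter could work roughly as sketched below. This is a minimal illustration only, not anything Microsoft is known to have built; the BLOCKED_ARTISTS set and the is_prompt_allowed function are hypothetical names invented for the example.

import re

# Hypothetical blocklist; a real system would draw on a maintained
# registry of living artists rather than a hard-coded set.
BLOCKED_ARTISTS = {"jane doe", "john roe"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions a blocked artist's name."""
    # Lowercase and strip punctuation so "Jane-Doe" or "jane doe," still match.
    normalized = re.sub(r"[^a-z0-9\s]", " ", prompt.lower())
    return not any(name in normalized for name in BLOCKED_ARTISTS)

# A prompt pairing an artist's name with a medium would be rejected:
print(is_prompt_allowed("a painting in the style of Jane Doe"))   # False
print(is_prompt_allowed("a painting of a quiet harbor at dawn"))  # True

Even a filter this simple illustrates the tradeoff such a mitigation faces: substring matching catches obvious prompts but is easily evaded by misspellings, and it does nothing about styles the model has already learned from an artist’s work.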

Employees claim that none of these strategies were implemented and that Bing Image Creator launched in test countries anyway.

Microsoft says the tool was modified before launch to address the concerns raised in the document, and that it prompted additional work from its responsible AI team.

But legal questions surrounding this technology remain unanswered. In February 2023, Getty Images filed a lawsuit against Stability AI, creators of the AI art generator Stable Diffusion. Getty accused the AI startup of inappropriately using more than 12 million images to train its system.

These accusations echo concerns raised by Microsoft’s own AI ethicists. “It is likely that few artists have agreed to have their works used as training data, and likely that many are still unaware of how generative tech can produce variations of online images of their work in seconds,” employees wrote last year.
