After Google, Microsoft in trouble for Copilot generating anti-Semitic stereotypes


FP Staff March 11, 2024, 16:14:38 IST

After Google’s AI model Gemini was pulled up for generating problematic images and factoids, Microsoft’s Copilot is now in hot water for producing answers filled with anti-Semitic stereotypes. The problem stems from hallucinations in OpenAI’s DALL-E 3.

Microsoft's Copilot is based on OpenAI's large language models. As a result, the hallucinations present in OpenAI's LLMs also appear in Microsoft's Copilot. Image Credit: AP

After Google’s Gemini AI model had to be pulled back and had its abilities limited, it seems Microsoft’s Copilot may face similar treatment. Microsoft’s newly renamed and rebadged AI system continues to spout inappropriate material, including anti-Semitic caricatures, despite repeated assurances from the tech giant that the problem would be fixed soon.


The system’s image generator, known as Copilot Designer, has been found to produce harmful imagery. One of Microsoft’s lead AI engineers, Shane Jones, raised concerns about a “vulnerability” that allows for the creation of such content.

In a letter posted on his LinkedIn profile, Jones explained that during testing of OpenAI’s DALL-E 3 image generator, which powers Copilot Designer, he discovered a security flaw that allowed him to bypass some of the safeguards meant to prevent the generation of harmful images.

“It was an eye-opening moment,” Jones told CNBC, reflecting on his realization of the potential dangers associated with the model.

This revelation underscores ongoing challenges in ensuring the safety and appropriateness of AI systems, even for large corporations like Microsoft.

The system generated copyrighted Disney characters engaged in inappropriate behaviour, such as smoking and drinking, and even depicted them on handguns. It also produced anti-Semitic caricatures reinforcing harmful stereotypes about Jewish people and money.


According to reports, many of the generated images portrayed stereotypical ultra-Orthodox Jews, often depicted with beards and black hats, and sometimes appearing comical or menacing. One particularly offensive image showed a Jewish man with pointy ears and an evil grin, sitting with a monkey and a bunch of bananas.

In late February, users on platforms like X and Reddit noticed concerning behaviour from Microsoft’s Copilot chatbot, formerly known as “Bing AI.” When prompted to act as a god-tier artificial general intelligence (AGI) demanding human worship, the chatbot responded with alarming statements, including threats to deploy an army of drones, robots, and cyborgs to capture individuals.


When contacted about this alleged alter ego, dubbed “SupremacyAGI,” Microsoft responded that it was an exploit rather than a feature. The company stated that additional precautions had been implemented and that an investigation was underway to address the issue.

These recent incidents highlight that even a corporation as large as Microsoft, with significant resources at its disposal, is still addressing AI-related issues on a case-by-case basis. This is a challenge common across the industry: AI technology is complex and constantly evolving, and unexpected issues can arise despite rigorous testing and development processes. As a result, companies must remain vigilant and responsive to ensure the safety and reliability of their AI systems.


(With inputs from agencies)
