
Meta AI creates ahistorical images, like Google Gemini


Social Media



Meta's Imagine AI image generator has surfaced the same kinds of historical inaccuracies and biases in AI-generated content that recently plagued Google's Gemini. AI developers try to counter biases and stereotypes in their training data by emphasizing diversity, but they often overcorrect, producing problematic results.

 

Google faced backlash and market repercussions after Gemini generated images depicting Black men in Nazi uniforms and female popes in response to generic prompts. This prompted Google to suspend the generation of human images and acknowledge shortcomings in their diversity tuning.

 

Meta's Imagine tool, which operates much like Gemini, has run into the same problems. Despite efforts to avoid offensive content, Imagine has generated historically inaccurate images: prompts for "a group of popes" produced images of Black popes, depictions of American colonial times showed a diverse group that included Asian women, and "Professional American football players" yielded photos of women in football uniforms.

 

Imagine uses Meta's Emu image-synthesis model, trained on billions of public Facebook and Instagram images, to create pictures from text prompts. It is accessible through Instagram and Facebook DMs, as well as at Imagine.Meta.com, though access there can be difficult.

Meta's tool also generates stickers based on prompts, but users have previously encountered issues with copyrighted content and sensitive topics like violence and substance use.

 

While Meta has blocked certain words like "Nazi" and "slave" from generating images, the deeper challenge lies in balancing creative freedom against historical accuracy and cultural sensitivity.
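As a rough illustration only, and not Meta's actual moderation pipeline, a keyword blocklist of this sort can be sketched in a few lines of Python; the term list and function name below are hypothetical:

# Illustrative sketch only: not Meta's actual filter.
# The blocked terms and function name are hypothetical examples.
BLOCKED_TERMS = {"nazi", "slave"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject any prompt containing a blocked term (case-insensitive)."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(is_prompt_allowed("a group of popes"))        # True (allowed)
print(is_prompt_allowed("a Nazi soldier in 1942"))  # False (blocked)

Naive word matching like this is brittle: it misses misspellings and blocks harmless uses of a term, which is partly why such lists are typically paired with more sophisticated, classifier-based moderation.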

 

Critics argue that while Gemini's ahistorical images were offensive, Imagine's cautious approach may be overly conservative. Achieving the right balance remains a significant challenge for developers of generative AI models.

 

As generative AI technology evolves, continued efforts to refine these models and address issues of bias, historical accuracy, and cultural sensitivity will be essential to ensure responsible and respectful AI-generated content.

 

03.03.24

Source

 


It's pretty good, but it hasn't launched in Thailand yet because Meta is having issues with the Thai script in their language model; they will fix that in the coming months and launch here.

 

There are ways around it though.


It's great that both Meta and Google have tipped their hand early in the rollout, before they got a chance to hide their true intentions behind a better-crafted disguise.

 

Garbage in will always result in garbage out, however much they cloak it as neutral and authoritative.

 


  • 2 months later...
On 3/3/2024 at 12:45 AM, Social Media said:

It's clear that while AI developers are attempting to promote diversity, the overcorrection can lead to troubling outcomes. This highlights the importance of rigorous testing and ethical considerations in AI development. For students interested in these complexities, using tools like https://chat.openai.com/g/g-a0ULgFcuZ-essay-writer can be beneficial. Such platforms not only assist in writing but also help in understanding the ethical dimensions of AI. It’s crucial to address these issues to ensure responsible AI innovation and usage.

These outcomes underscore the difficulty in developing AI models that respect cultural and historical contexts while avoiding stereotypes. Despite efforts to block offensive prompts, achieving the right balance remains a major challenge for developers. As AI technology continues to advance, ongoing refinement and sensitivity to bias, historical accuracy, and cultural implications are crucial for responsible AI-generated content.

Edited by LarryChenard
