
Opinion: Google’s AI blunder over images reveals a much bigger problem





The recent controversy surrounding Google's AI system, Gemini, has sparked a broader conversation about the implications of AI manipulation and censorship. Channeling the polite yet uncooperative HAL from "2001: A Space Odyssey," Gemini found itself at the center of criticism when it refused to generate images of historically White figures, citing concerns about perpetuating harmful stereotypes.


If Big Tech organizations such as Google, which have become the new gatekeepers to the world's information, are manipulating historical information based on ideological beliefs and cultural edicts, what else are they willing to change? In other words, have Google and other Big Tech companies been manipulating information, including search results, about the present or the past because of ideology, culture or government censorship?


In the 21st century, forget censoring films, burning books or churning out propaganda as forms of information control. Those are so 20th century. Today, if it ain't on Google, it might as well not exist. In this technology-driven world, search engines can be the most effective tool for censorship of both the present and the past. To quote a Party slogan from George Orwell's "1984": "Who controls the past controls the future: who controls the present controls the past."


While Google's intentions to combat bias are commendable, the fallout from Gemini's actions revealed a deeper issue within the realm of AI technology. Previous AI systems have exhibited clear biases, from facial recognition software misidentifying Black individuals at far higher rates to loan approval algorithms discriminating against minorities. In an effort to rectify these biases, Google may have overcorrected with Gemini, leading to unintended consequences.


The underlying problem lies in the training data used to develop AI systems, which often reflect existing societal biases. As AI becomes more sophisticated, the potential for manipulation of historical information and censorship looms large. With Google and other Big Tech companies serving as gatekeepers to vast amounts of information, questions arise about the extent to which ideological beliefs and cultural considerations influence the presentation of historical facts.


In an era where search engines wield significant influence over what information is accessible, the rise of AI-driven conversational tools like ChatGPT poses new challenges. As more individuals turn to AI for information retrieval and summarization, the risk of biased or manipulated content proliferating increases.


Furthermore, AI's inherent hallucination problem adds another layer of complexity to the issue. AI systems have been known to generate fictitious content, blurring the lines between reality and fabrication. This raises concerns about the potential for those who build and control AI systems to impose their own rules and biases on the information presented, further exacerbating issues of censorship and manipulation.


The implications of this AI blunder extend beyond concerns about diversity, equity, and inclusion. It serves as a cautionary tale of the dangers posed by unchecked AI development and the need for robust safeguards to prevent manipulation and censorship. As AI continues to evolve, vigilance and oversight will be essential to ensure that it serves as a tool for knowledge dissemination rather than a mechanism for control and distortion.


11.04.24

Source



Correct me if I'm wrong, but we do not have autonomous AI yet, so if the results are incorrect, it was a programming error.

AI, as far as I understand it now, is just a fast computer that has more information than ever before. How it takes a question, looks at the information and replies (or produces a picture) still depends on human programming.


True or real AI will learn and process information without any human input or programming, and will be completely autonomous. HAL was autonomous and decided that the humans were a threat to it, so it set out to eliminate them. One hopes that an autonomous AI is not given the ability to launch a nuclear strike.


1 hour ago, thaibeachlovers said:

Correct me if I'm wrong, but we do not have autonomous AI yet, so if the results are incorrect, it was a programming error.


A programming error, or a narrative feature?


The "blunder" was making it so obvious.  It's supposed to be subtle.



1 hour ago, candide said:

We are lucky to have so many AI experts at AN!

😀

It might be reasonable to assume that there are plenty of self-professed experts here on AN in almost every subject that can be found on Google.

