Social Media
Posted April 10

The recent controversy surrounding Google's AI system, Gemini, has sparked a broader conversation about the implications of AI manipulation and censorship. Like the polite yet uncooperative HAL from "2001: A Space Odyssey," Gemini found itself at the center of criticism when it politely refused to generate images of historically White figures, citing concerns about perpetuating harmful stereotypes.

If Big Tech organizations such as Google, which have become the new gatekeepers to the world's information, are manipulating historical information based on ideological beliefs and cultural edicts, what else are they willing to change? In other words, have Google and other Big Tech companies been manipulating information, including search results, about the present or the past because of ideology, culture, or government censorship?

In the 21st century, forget censoring films, burning books, or creating propaganda films as forms of information control. Those are so 20th century. Today, if it ain't on Google, it might as well not exist. In this technology-driven world, search engines can be the most effective tool for censorship of the present and the past. To quote a Party slogan from George Orwell's "1984": "Who controls the past controls the future: who controls the present controls the past."

While Google's intention to combat bias is commendable, the fallout from Gemini's actions revealed a deeper issue within the realm of AI technology. Previous AI systems have exhibited clear biases, from facial recognition software's failure to identify Black individuals to loan approval algorithms discriminating against minorities. In an effort to rectify these biases, Google may have overcorrected with Gemini, leading to unintended consequences. The underlying problem lies in the training data used to develop AI systems, which often reflects existing societal biases.
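The training-data point can be illustrated with a toy sketch (everything here is invented for illustration; this is not Google's actual pipeline or any real loan-approval system): a "model" that does nothing but learn label frequencies from its training set will faithfully reproduce whatever imbalance that set contains, with no malice required.

```python
from collections import Counter

class FrequencyModel:
    """Toy model: learns nothing but how often each label appears."""

    def fit(self, labels):
        counts = Counter(labels)
        total = sum(counts.values())
        # The model's "beliefs" are just the raw frequencies of its data.
        self.probs = {label: n / total for label, n in counts.items()}
        return self

    def predict_proba(self, label):
        # Any skew in the training set comes straight back out here.
        return self.probs.get(label, 0.0)

# A hypothetical 90/10 skewed training set.
training_labels = ["approved"] * 90 + ["denied"] * 10

model = FrequencyModel().fit(training_labels)
print(model.predict_proba("approved"))  # 0.9, mirroring the skew
print(model.predict_proba("denied"))    # 0.1
```

Real systems are vastly more complex, but the principle the article describes is the same: the model's outputs inherit the distribution of its inputs, which is also why heavy-handed corrections on top of skewed data can overshoot, as the Gemini episode suggests.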
As AI becomes more sophisticated, the potential for manipulation of historical information and censorship looms large. With Google and other Big Tech companies serving as gatekeepers to vast amounts of information, questions arise about the extent to which ideological beliefs and cultural considerations influence the presentation of historical facts.

In an era where search engines wield significant influence over what information is accessible, the rise of AI-driven conversational tools like ChatGPT poses new challenges. As more individuals turn to AI for information retrieval and summarization, the risk of biased or manipulated content proliferating increases.

Furthermore, the inherent hallucination problem in AI adds another layer of complexity. AI systems have been known to generate fictitious content, blurring the lines between reality and fabrication. This raises concerns about the potential for AI leaders to impose their own rules and biases on the information presented, further exacerbating issues of censorship and manipulation.

The implications of this AI blunder extend beyond concerns about diversity, equity, and inclusion. It serves as a cautionary tale of the dangers posed by unchecked AI development and the need for robust safeguards to prevent manipulation and censorship. As AI continues to evolve, vigilance and oversight will be essential to ensure that it serves as a tool for knowledge dissemination rather than a mechanism for control and distortion.

11.04.24 (Source)
thaibeachlovers
Posted April 10

Correct me if I'm wrong, but we do not have autonomous AI yet, so if the results are incorrect, it was a programming error. AI, as far as I understand it now, is just a fast computer that has more information than ever before. How it takes a question, looks at the information, and replies (or produces a picture) is still dependent on human programming. True or real AI will learn and process information without any human input or program, and will be completely autonomous. HAL was autonomous and decided that the humans were a threat to itself, so it decided to eliminate them. One hopes that an autonomous AI is not given the ability to launch a nuclear strike.
Skipalongcassidy
Posted April 10

AI defined... GARBAGE IN - GARBAGE OUT... as proven recently by Google.
thaibeachlovers
Posted April 11

25 minutes ago, Skipalongcassidy said:
"AI defined... GARBAGE IN - GARBAGE OUT... as proven recently by Google."

Couldn't have put it better myself. 🏆
impulse
Posted April 11

1 hour ago, thaibeachlovers said:
"Correct me if I'm wrong but we do not have autonomous AI yet, so if the results are incorrect it was a programming error."

A programming error, or a narrative feature? The "blunder" was making it so obvious. It's supposed to be subtle.
JonnyF
Posted April 11

Oh dear. The mask has slipped, and Google's ugly Woke/Leftist culture has been laid bare for all to see.
ukrules
Posted April 11

It wasn't an error; it was deliberate. We just don't know why they chose to do it.
candide
Posted April 11

We are lucky to have so many AI experts at AN! 😀
billd766
Posted April 11

1 hour ago, candide said:
"We are lucky to have so many AI experts at AN! 😀"

It might be reasonable to assume that here on AN there are as many self-professed experts, in almost every subject, as can be found on Google.