
ChatGPT owner in probe over risks around false answers



US regulators are probing artificial intelligence company OpenAI over the risks to consumers from ChatGPT generating false information.

The Federal Trade Commission (FTC) sent a letter to the Microsoft-backed business requesting information on how it addresses risks to people's reputations.

The inquiry is a sign of the rising regulatory scrutiny of the technology.

OpenAI chief executive Sam Altman says the company will work with the FTC.

ChatGPT generates convincing, human-like responses to user queries within seconds, instead of the series of links returned by a traditional internet search. It and similar AI products are expected to dramatically change how people find information online.

Tech rivals are racing to offer their own versions of the technology, even as it generates fierce debate, including over the data it uses, the accuracy of the responses and whether the company violated authors' rights as it was training the technology.

 

The FTC's letter asks what steps OpenAI has taken to address its products' potential to "generate statements about real individuals that are false, misleading, disparaging or harmful".

 

FULL STORY




I just used ChatGPT to resolve a problem with the date format in my Google Chrome results (Thai calendar rather than Western), and the answer it gave me was incorrect. I could not find the setting it told me to look for, or the place it said to look and adjust. This is not the first time it has given me false information, and when I asked for its sources so I could verify the answer (I forget exactly what it said), it would not give me one.
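For what it's worth, Thai-calendar dates in Chrome normally come from the browser or system locale rather than from any per-page option, which may be why the suggested setting could not be found. A minimal JavaScript sketch of the locale behaviour (the date is arbitrary; the locale tags are standard BCP 47):

```ts
// The th-TH locale defaults to the Buddhist era (Gregorian year + 543),
// which is why dates can render as e.g. 2566 instead of 2023.
const d = new Date("2023-07-14");

console.log(d.toLocaleDateString("th-TH"));              // "14/7/2566" (Buddhist era)
console.log(d.toLocaleDateString("en-GB"));              // "14/07/2023" (Gregorian)
console.log(d.toLocaleDateString("th-TH-u-ca-gregory")); // Thai locale, Gregorian calendar
```

In Chrome itself the usual fix is the language/region preference (chrome://settings/languages), not a hidden date-format option.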


It is quite entertaining to consider that, after years of in-depth scientific research, we still do not fully understand the workings of the human brain and cannot predict what it will do; yet we allow humans to programme the workings of A.I. What could possibly go wrong?


Gee, false information... well, where does AI get its data? From humans, who err and give false info... maybe Fox could have a custom AI based on their news...

The deal is "garbage in, garbage out".

Link to comment
Share on other sites

15 minutes ago, Emdog said:

Gee, false information... well, where does AI get its data? From humans, who err and give false info... maybe Fox could have a custom AI based on their news...

The deal is "garbage in, garbage out".

Not always true. 

AIs suffer from a condition called hallucinations. 

It is something that people in the field are trying to fix but don't really know why it is happening. 

Much like some posters in this forum.

"In the field of artificial intelligence, a hallucination or artificial hallucination is a confident response by an AI that does not seem to be justified by its training data. "

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)









