ChatGPT found to give better medical advice than real doctors in blind study: ‘This will be a game changer’


Recommended Posts

Posted

When it comes to answering medical questions, can ChatGPT do a better job than human doctors?

It appears to be possible, according to the results of a new study published in JAMA Internal Medicine, led by researchers from the University of California San Diego.

The researchers compiled a random sample of nearly 200 medical questions that patients posted on Reddit, a popular social discussion website, for doctors to answer. Next, they entered the questions into ChatGPT (OpenAI’s artificial intelligence chatbot) and recorded its response.

A panel of health care professionals then evaluated both sets of responses for quality and empathy.

  • Haha 1
Posted

Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms at all and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them yearly. The program is bound to be better than humans at applying them.
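The "if this, then do that" view of diagnosis can be sketched as a trivial rule engine. The conditions and thresholds below are invented purely for illustration, not real clinical guidance:

```python
# Toy sketch of a rule-based diagnostic flow. All rules and
# thresholds here are made up for the example.

def triage(temp_c: float, cough: bool, chest_pain: bool) -> str:
    """Apply a fixed diagnostic algorithm, exactly as written."""
    if chest_pain:
        return "refer: rule out cardiac cause"
    if temp_c >= 38.0 and cough:
        return "suspect respiratory infection"
    if temp_c >= 38.0:
        return "fever of unknown origin: run bloodwork"
    return "no acute findings"

print(triage(38.5, cough=True, chest_pain=False))
# prints "suspect respiratory infection"
```

The point of the post in code form: a program never forgets a branch or skips a step, and updating the algorithm is a code change rather than retraining every practitioner.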

  • Like 1
  • Sad 1
  • Thanks 1
Posted
2 hours ago, rabas said:

True, but when doctors graduate they have lots of book learning and little practical experience. Good doctors, if you can find them, learn from experience by treating thousands of patients. ChatGPT can't.

 

One of the biggest disruptions from programs like ChatGPT will be people believing they can do things that they can't.

 

Actually, the way AI learns is also from experience. The difference is that AI will have millions of cases to learn from.

  • Like 1
  • Confused 2
  • Haha 1
Posted

I tried it with a medical question a few days ago.  It was easy to follow the way the condition was explained, and the advice for dealing with it worked.  I'm impressed.

 

 

  • Thanks 1
Posted
46 minutes ago, placeholder said:

Actually, the way AI learns is also from experience. The difference is that AI will have millions of cases to learn from.

... from limited experience.

 

Forget ChatGPT; the chat layer is not relevant. Neural networks, and more powerful deep NNs, are commonly used for medical diagnosis and other recognition problems. These NNs must be trained on data sets collected from existing medical records, and they are limited by which variables those records happen to capture.

 

A NN trained on millions of ordinary cases would almost surely be correct in a higher percentage of ordinary cases. Beyond the ordinary, though, the NN will have trouble competing with a good doctor who has human intuition and unbounded experience.

 

For example, a Thai doctor notices over time that most Thais over 40 with H. pylori stomach infections are resistant to all but one drug regimen, which influences his decisions. It will be years before this is fully studied, written up, and becomes standard practice. (I learned this when I was recently diagnosed with long-term H. pylori.) Now multiply this by a million to account for everything humans observe and can reason about.

 

In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first.
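The limitation described above can be shown with a toy stand-in for a trained model, here a one-nearest-neighbour classifier over invented blood-pressure readings. Inside the range of its "ordinary" training cases it answers sensibly; far outside that range it still answers confidently, and wrongly:

```python
# Toy illustration: a model trained only on "ordinary" cases is
# confident but useless outside them. Data and labels are made up.

# Training set: (systolic_bp, label) from ordinary cases only
train = [(110, "normal"), (115, "normal"),
         (150, "hypertension"), (160, "hypertension")]

def predict(bp: float) -> str:
    """1-nearest-neighbour: answer with the closest training case."""
    return min(train, key=lambda case: abs(case[0] - bp))[1]

print(predict(155))  # inside the training range: "hypertension", sensible
print(predict(60))   # far outside it: still answers "normal",
                     # even though 60 mmHg would be a medical emergency
```

A real deep NN generalizes far better than this toy, but the shape of the failure is the same: it interpolates within the distribution of its training data and has no principled way to handle cases far outside it.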

  • Thumbs Up 1
Posted
1 hour ago, rabas said:

... from limited experience.

 

Forget ChatGPT; the chat layer is not relevant. Neural networks, and more powerful deep NNs, are commonly used for medical diagnosis and other recognition problems. These NNs must be trained on data sets collected from existing medical records, and they are limited by which variables those records happen to capture.

 

A NN trained on millions of ordinary cases would almost surely be correct in a higher percentage of ordinary cases. Beyond the ordinary, though, the NN will have trouble competing with a good doctor who has human intuition and unbounded experience.

 

For example, a Thai doctor notices over time that most Thais over 40 with H. pylori stomach infections are resistant to all but one drug regimen, which influences his decisions. It will be years before this is fully studied, written up, and becomes standard practice. (I learned this when I was recently diagnosed with long-term H. pylori.) Now multiply this by a million to account for everything humans observe and can reason about.

 

In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first.

First off, my comment was about AI. And your comment, even if it is valid about H. pylori, does little to address the massive issue of misdiagnosis:

How Common is Misdiagnosis - Infographic

https://www.docpanel.com/blog/post/how-common-misdiagnosis-infographic

 

In New Math Proofs, Artificial Intelligence Plays to Win

https://www.quantamagazine.org/in-new-math-proofs-artificial-intelligence-plays-to-win-20220307/

 

And the notion that AI cannot reason outside its training is false.

Mathematicians hail breakthrough in using AI to suggest new theorems

https://news.sky.com/story/mathematicians-hail-breakthrough-in-using-ai-to-suggest-new-theorems-12483934


 

  • Like 1
Posted
On 5/2/2023 at 8:24 PM, Pouatchee said:

Will bring ChatGPT to my next doctor's appointment then!

ChatGPT is v3.5 and already out of date.

Posted (edited)
4 hours ago, placeholder said:

First off, my comment was about AI. And your comment, even if it is valid about H. pylori, does little to address the massive issue of misdiagnosis:

How Common is Misdiagnosis - Infographic

https://www.docpanel.com/blog/post/how-common-misdiagnosis-infographic

 

In New Math Proofs, Artificial Intelligence Plays to Win

https://www.quantamagazine.org/in-new-math-proofs-artificial-intelligence-plays-to-win-20220307/

 

And the notion that AI cannot reason outside its training is false.

Mathematicians hail breakthrough in using AI to suggest new theorems

https://news.sky.com/story/mathematicians-hail-breakthrough-in-using-ai-to-suggest-new-theorems-12483934

"First off, my comment was about AI."

 

The NNs in my answer are AI; they're the core that learns and does things like medical analysis and driving cars. If you weren't aware of that, it's not surprising you misleadingly claim that "the notion that AI cannot reason outside their training is false." It is a generally accepted property, and certainly true in the context of my answer.

 

But since we're likely at different levels... let's ask ChatGPT! So I posed my initial statement to OpenAI's ChatGPT.

 

Rabas: Can neural networks reason outside of their training?

 

ChatGPT: Neural networks are typically not capable of reasoning outside of their training data. The ability of a neural network to generalize to new situations is largely dependent on the quality and diversity of the training data that it has been exposed to....

 

[Attached image: ChatGPT's full answer]

 

Edited by rabas
  • Like 2
Posted
14 minutes ago, rabas said:

"First off, my comment was about AI."

 

The NNs in my answer are AI; they're the core that learns and does things like medical analysis and driving cars. If you weren't aware of that, it's not surprising you misleadingly claim that "the notion that AI cannot reason outside their training is false." It is a generally accepted property, and certainly true in the context of my answer.

 

But since we're likely at different levels... let's ask ChatGPT! So I posed my initial statement to OpenAI's ChatGPT.

 

Rabas: Can neural networks reason outside of their training?

 

ChatGPT: Neural networks are typically not capable of reasoning outside of their training data. The ability of a neural network to generalize to new situations is largely dependent on the quality and diversity of the training data that it has been exposed to....

 

[Attached image: ChatGPT's full answer]

 

And yet I have produced evidence from scientists and mathematicians that says otherwise. AI is capable of making connections that humans cannot, because there is simply too much data for one person to absorb and correlate. And you'll note that ChatGPT qualifies its statement with "typically".

  • Confused 2
Posted
On 5/3/2023 at 3:06 PM, RobU said:

Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms at all and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them yearly. The program is bound to be better than humans at applying them.

A lot depends on what is input by real humans. If that info is not correct, then GIGO (garbage in, garbage out) follows.

 

Personally I would rather talk to a real doctor any time.

Posted
8 hours ago, placeholder said:

Actually, the way AI learns is also from experience. The difference is that AI will have millions of cases to learn from.

But that only works if all those millions of cases are input into the system.

 

All those inputs come from humans, and if they put in the wrong info, who puts it right?
