Thailand News and Discussion Forum | ASEANNOW

A full-screen app on your home screen with push notifications, badges and more.

To install this app on iOS and iPadOS
  1. Tap the Share icon in Safari
  2. Scroll the menu and tap Add to Home Screen.
  3. Tap Add in the top-right corner.
To install this app on Android
  1. Tap the 3-dot menu (⋮) in the top-right corner of the browser.
  2. Tap Add to Home screen or Install app.
  3. Confirm by tapping Install.

ChatGPT found to give better medical advice than real doctors in blind study: ‘This will be a game changer’

Featured Replies

When it comes to answering medical questions, can ChatGPT do a better job than human doctors?

It appears to be possible, according to the results of a new study published in JAMA Internal Medicine, led by researchers from the University of California San Diego.

The researchers compiled a random sample of nearly 200 medical questions that patients posted on Reddit, a popular social discussion website, for doctors to answer. Next, they entered the questions into ChatGPT (OpenAI’s artificial intelligence chatbot) and recorded its response.

A panel of health care professionals then evaluated both sets of responses for quality and empathy.

 

  • Popular Post

Well, I just asked it a question about a special type of blood test, and it didn't even answer it, just rabbited on about biopsies. I happen to know about this blood test, as the doctors have used it in my case a number of times, but ChatGPT doesn't seem to think it exists. Be very wary of AI. It might write pretty essays or poems for students, but its learning is constrained by what's been openly published on the internet during a certain time period, so it's very far from having access to all knowledge, and especially not the most recent information. Thank you, but I'll take a doctor's advice over a computer's.

Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them on a yearly basis. The computer program is bound to be better than humans at applying these algorithms.
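That "if this, then do that" idea is essentially a fixed decision rule set, which is trivially easy to put in a program. A minimal sketch of the point being made (the conditions, thresholds, and actions below are invented for illustration only; they are not real clinical criteria):

```python
# Minimal sketch of an "if this, then do that" diagnostic rule set.
# The thresholds and actions are invented for illustration only;
# they are NOT real clinical criteria.

def triage(temp_c: float, heart_rate: int, chest_pain: bool) -> str:
    """Apply a fixed rule set, top to bottom, and return the first action that fires."""
    if chest_pain and heart_rate > 120:
        return "refer to emergency care"
    if temp_c >= 39.0:
        return "test for infection"
    if temp_c >= 37.5:
        return "monitor and re-check in 24 hours"
    return "no action"

print(triage(38.2, 80, False))   # moderate fever -> "monitor and re-check in 24 hours"
print(triage(36.8, 130, True))   # first rule fires -> "refer to emergency care"
```

A program applies every rule, every time, in the same order, and updating the rule set is a code change rather than retraining thousands of doctors; whether the rules themselves are right is a separate question.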

  • Popular Post

In responses where ChatGPT uses the word 'amputate', I recommend a second opinion...

  • Popular Post
17 hours ago, RobU said:

Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them on a yearly basis. The computer program is bound to be better than humans at applying these algorithms.

True, but when doctors graduate they have lots of book learning and little practical experience. Good doctors, if you can find them, learn from experience treating thousands of patients. ChatGPT can't.

 

One of the biggest disruptions from programs like ChatGPT will be people believing they can do things that they can't.

 

2 hours ago, rabas said:

True, but when doctors graduate they have lots of book learning and little practical experience. Good doctors, if you can find them, learn from experience treating thousands of patients. ChatGPT can't.

 

One of the biggest disruptions from programs like ChatGPT will be people believing they can do things that they can't.

 

Actually, the way AI learns is also from experience. The difference is that AI will have millions of cases to learn from.

ChatGPT probably listens better than doctors ...

Could be a nightmare for the big private hospitals... people being given an honest diagnosis.

Would ChatGPT have dared to counter the prevailing COVID and jab narratives?

I tried it with a medical question a few days ago.  It was easy to follow the way the condition was explained, and the advice for dealing with it worked.  I'm impressed.

 

 

46 minutes ago, placeholder said:

Actually, the way AI learns is also from experience. The difference is that AI will have millions of cases to learn from.

... from limited experience.

 

Forget ChatGPT; the chat layer is not relevant. Neural networks, and more powerful deep NNs, are commonly used for medical diagnosis and other recognition problems. These NNs must be trained on data sets drawn from existing medical records, and are limited by which variables were collected, and so on.

 

An NN trained on millions of ordinary cases would almost surely be correct in a higher percentage of ordinary cases. Going beyond the ordinary, the NN will have trouble competing with a good doctor who has human intuition and unbounded experience.

 

For example, a Thai doctor notices over time that most Thais over 40 with H. pylori stomach infections are resistant to all but one drug regimen, which influences his decisions. It will be years before this is fully studied, written up, and becomes practiced medicine. (I learned this when I was recently diagnosed with a long-term H. pylori infection.) Now multiply this by a million to account for all that humans observe and can reason about.

 

In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first.
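The "bounded by its training data" point can be made concrete with the simplest possible learner, a one-nearest-neighbour classifier (the toy features, cases, and labels below are invented for illustration, not real medicine): whatever the input, the model can only ever return a label that already appears in its training set.

```python
# Toy 1-nearest-neighbour "diagnoser", illustrating that a trained model
# can only output labels present in its training data.
# All features, cases, and labels are invented for illustration.

TRAINING = [
    ((38.5, 90), "flu"),        # (temperature C, heart rate) -> label
    ((36.8, 70), "healthy"),
    ((39.2, 110), "infection"),
]

def diagnose(temp_c: float, heart_rate: int) -> str:
    """Return the label of the closest training case."""
    def dist(case):
        (t, hr), _ = case
        # Squared distance; heart rate scaled down so both features matter.
        return (t - temp_c) ** 2 + ((hr - heart_rate) / 10) ** 2
    return min(TRAINING, key=dist)[1]

print(diagnose(38.4, 92))  # near a training case -> "flu"

# A patient profile unlike anything in training (41 C fever, very low pulse)
# is still forced onto one of the three known labels; the model cannot say
# "this is something new" the way an observant doctor can.
print(diagnose(41.0, 40))  # -> "healthy"
```

A real deep NN interpolates far more cleverly than this, but the structural limit is the same: its output space is fixed by what was in the training set.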

1 hour ago, rabas said:

... from limited experience.

 

Forget ChatGPT; the chat layer is not relevant. Neural networks, and more powerful deep NNs, are commonly used for medical diagnosis and other recognition problems. These NNs must be trained on data sets drawn from existing medical records, and are limited by which variables were collected, and so on.

 

An NN trained on millions of ordinary cases would almost surely be correct in a higher percentage of ordinary cases. Going beyond the ordinary, the NN will have trouble competing with a good doctor who has human intuition and unbounded experience.

 

For example, a Thai doctor notices over time that most Thais over 40 with H. pylori stomach infections are resistant to all but one drug regimen, which influences his decisions. It will be years before this is fully studied, written up, and becomes practiced medicine. (I learned this when I was recently diagnosed with a long-term H. pylori infection.) Now multiply this by a million to account for all that humans observe and can reason about.

 

In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first.

First off, my comment was about AI. And your comment, even if it is valid about H. pylori, does little to address the massive issue of misdiagnosis:

How Common is Misdiagnosis - Infographic

https://www.docpanel.com/blog/post/how-common-misdiagnosis-infographic

 

In New Math Proofs, Artificial Intelligence Plays to Win

https://www.quantamagazine.org/in-new-math-proofs-artificial-intelligence-plays-to-win-20220307/

 

And the notion that AI cannot reason outside their training is false.

Mathematicians hail breakthrough in using AI to suggest new theorems

https://news.sky.com/story/mathematicians-hail-breakthrough-in-using-ai-to-suggest-new-theorems-12483934


 

On 5/2/2023 at 8:24 PM, Pouatchee said:

Will bring AN chatGpt to my next doctor's appointment then!

AN ChatGPT is v3.5 and out of date already.

4 hours ago, placeholder said:

First off, my comment was about AI. And your comment, even if it is valid about H. pylori, does little to address the massive issue of misdiagnosis:

How Common is Misdiagnosis - Infographic

https://www.docpanel.com/blog/post/how-common-misdiagnosis-infographic

 

In New Math Proofs, Artificial Intelligence Plays to Win

https://www.quantamagazine.org/in-new-math-proofs-artificial-intelligence-plays-to-win-20220307/

 

And the notion that AI cannot reason outside their training is false.

Mathematicians hail breakthrough in using AI to suggest new theorems

https://news.sky.com/story/mathematicians-hail-breakthrough-in-using-ai-to-suggest-new-theorems-12483934

"First off, my comment was about AI."

 

The NNs in my answer are AI; they're the core that learns and does things like medical analysis and driving cars. If you weren't aware of that, it's not surprising that you misleadingly claim "the notion that AI cannot reason outside their training is false." It is a generally accepted property, and certainly true in the context of my answer.

 

But since we're likely at different levels... let's ask ChatGPT! So, I posed my initial statement to OpenAI's ChatGPT.

 

Rabus: Can neural networks reason outside of their training?

 

ChatGPT: Neural networks are typically not capable of reasoning outside of their training data. The ability of a neural network to generalize to new situations is largely dependent on the quality and diversity of the training data that it has been exposed to....

 

[Image: ChatGPT's full answer]

 

14 minutes ago, rabas said:

"First off, my comment was about AI."

 

The NNs in my answer are AI; they're the core that learns and does things like medical analysis and driving cars. If you weren't aware of that, it's not surprising that you misleadingly claim "the notion that AI cannot reason outside their training is false." It is a generally accepted property, and certainly true in the context of my answer.

 

But since we're likely at different levels... let's ask ChatGPT! So, I posed my initial statement to OpenAI's ChatGPT.

 

Rabus: Can neural networks reason outside of their training?

 

ChatGPT: Neural networks are typically not capable of reasoning outside of their training data. The ability of a neural network to generalize to new situations is largely dependent on the quality and diversity of the training data that it has been exposed to....

 

[Image: ChatGPT's full answer]

 

And yet I have produced evidence from scientists and mathematicians that says otherwise. AI is capable of making connections that humans cannot because there is simply too much data for one person to absorb and correlate. And you'll note that ChatGPT qualifies its statement with "typically".
 

On 5/3/2023 at 3:06 PM, RobU said:

Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them on a yearly basis. The computer program is bound to be better than humans at applying these algorithms.

A lot depends on what is input by real humans. If that info is not correct, then GIGO (garbage in, garbage out) follows.

 

Personally I would rather talk to a real doctor any time.

8 hours ago, placeholder said:

Actually, the way AI learns is also from experience. The difference being that AI will have millions of cases to learn from..

But that only works if all those millions of cases are entered into the system.

 

All those inputs come from humans, and if they put in the wrong info, who puts it right?
