Social Media Posted May 2, 2023
When it comes to answering medical questions, can ChatGPT do a better job than human doctors? It appears to be possible, according to the results of a new study published in JAMA Internal Medicine, led by researchers from the University of California San Diego. The researchers compiled a random sample of nearly 200 medical questions that patients posted on Reddit, a popular social discussion website, for doctors to answer. Next, they entered the questions into ChatGPT (OpenAI's artificial intelligence chatbot) and recorded its responses. A panel of health care professionals then evaluated both sets of responses for quality and empathy.
Pouatchee Posted May 2, 2023
Will bring AN chatGpt to my next doctor's appointment then!
Popular Post Guderian Posted May 3, 2023
Well, I just asked it a question about a special type of blood test, and it didn't even answer it, just rabbited on about biopsies. I happen to know about this blood test, as the doctors have used it in my case a number of times, but ChatGPT doesn't seem to think it exists. Be very wary of AI. It might write pretty essays or poems for students, but its learning is constrained by what's been openly published on the internet during a certain time period, so it's very far from having access to all knowledge, and especially not the most recent information. Thank you, but I'll take a doctor's advice over a computer's.
RobU Posted May 3, 2023
Medicine isn't magic. Medics are taught strict algorithms of diagnosis (i.e. if this, then do that) at university and in their postgraduate training. Human beings are fallible: they forget, they can be lazy, and they often don't update the algorithms. Sometimes they don't even apply the algorithms and make 'educated' guesses, which can often be wrong. Take all those diagnostic algorithms, put them in a computer program, and update them on a yearly basis. The computer program is bound to be better than humans at applying these algorithms.
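RobU's "if this, then do that" picture of a diagnostic algorithm can be sketched as an ordered rule table. A minimal sketch, with invented symptoms, thresholds, and actions (none of these are real clinical guidelines):

```python
def triage(symptoms):
    """Toy rule-based triage: scan fixed rules in order, first match wins.
    Rules and thresholds here are invented for illustration only."""
    rules = [
        # (condition over the symptom dict, recommended action)
        (lambda s: s.get("temp_c", 37.0) >= 38.0 and s.get("stiff_neck", False),
         "urgent referral"),
        (lambda s: s.get("temp_c", 37.0) >= 38.0, "order blood test"),
        (lambda s: s.get("cough_weeks", 0) >= 3, "order chest X-ray"),
    ]
    for condition, action in rules:
        if condition(symptoms):
            return action
    return "watchful waiting"

print(triage({"temp_c": 38.5, "stiff_neck": True}))  # prints "urgent referral"
print(triage({"cough_weeks": 4}))                    # prints "order chest X-ray"
```

Updating the algorithms yearly, as RobU suggests, is just replacing the rule list; the human fallibility he describes is exactly a doctor skipping or mis-ordering these checks.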
Popular Post PETERTHEEATER Posted May 3, 2023
In responses where ChatGPT uses the word 'amputate', I recommend a second opinion...
Popular Post rabas Posted May 4, 2023
17 hours ago, RobU said:
"Medicine isn't magic. Medics are taught strict algorithms of diagnosis ... The computer program is bound to be better than humans at applying these algorithms."
True, but when doctors graduate they have lots of book learning and little practical experience. Good doctors, if you can find them, learn from experience treating thousands of patients. ChatGPT can't. One of the biggest disruptions from programs like ChatGPT will be people believing they can do things that they can't.
placeholder Posted May 4, 2023
2 hours ago, rabas said:
"True, but when doctors graduate they have lots of book learning and little practical experience. ... One of the biggest disruptions from programs like ChatGPT will be people believing they can do things that they can't."
Actually, the way AI learns is also from experience. The difference being that AI will have millions of cases to learn from.
Lee65 Posted May 4, 2023
ChatGPT probably listens better than doctors...
Will B Good Posted May 4, 2023
Could be a nightmare for the big private hospitals... people being given an honest diagnosis.
Lee65 Posted May 4, 2023
Would ChatGPT have dared to counter the prevailing covid and jab narratives?
bendejo Posted May 4, 2023
I tried it with a medical question a few days ago. It was easy to follow the way the condition was explained, and the advice for dealing with it worked. I'm impressed.
rabas Posted May 4, 2023
46 minutes ago, placeholder said:
"Actually, the way AI learns is also from experience. The difference being that AI will have millions of cases to learn from."
... from limited experience. Forget ChatGPT; the chat layer is not relevant. Neural networks, and more powerful deep NNs, are commonly used for medical diagnosis and other recognition problems. These NNs must be trained on existing data sets collected from existing medical records, limited by which variables are collected, etc. A NN trained on millions of ordinary cases would almost surely be correct in a higher percentage of ordinary cases. Going beyond the ordinary, the NN will have trouble competing with a good doctor who has human intuition and unbounded experience. For example, a Thai doctor notices over time that most Thais over 40 with H. pylori stomach infections are resistant to all but one drug regimen, which influences his decisions. It will be years before this is fully studied, written up, and becomes practiced medicine. (I learned this when I was recently diagnosed with long-term H. pylori.) Now multiply this by a million to account for all that humans observe and can reason about. In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first.
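rabas's point that a trained network can only echo its training distribution shows up even in the simplest "trained" model. A toy sketch, using a 1-nearest-neighbour classifier with invented data as a stand-in for a real diagnostic NN:

```python
def nearest_neighbour(train, query):
    """Predict the label of the closest training example (squared distance).
    Like any model fitted only to 'ordinary' cases, it can only hand back
    a label it has already seen."""
    best = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    return best[1]

# Features: (temperature C, age). Training data covers only ordinary cases.
train = [((37.0, 30), "healthy"), ((39.0, 30), "flu")]

# An extreme case unlike anything in training is still forced into a known label:
print(nearest_neighbour(train, (41.5, 85)))  # prints "flu": it has no way to say "unknown"
```

The model never flags the query as outside its experience; it silently maps novel cases onto whatever it was trained on, which is the failure mode rabas describes for beyond-ordinary patients.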
placeholder Posted May 4, 2023
1 hour ago, rabas said:
"... from limited experience. Forget ChatGPT; the chat layer is not relevant. ... In short, NNs cannot reason outside their training. They are useful in some situations, but I will still trust a good doctor first."
First off, my comment was about AI. And your comment, even if it is valid about H. pylori, does little to address the massive issue of misdiagnosis:
How Common is Misdiagnosis - Infographic
https://www.docpanel.com/blog/post/how-common-misdiagnosis-infographic
In New Math Proofs, Artificial Intelligence Plays to Win
https://www.quantamagazine.org/in-new-math-proofs-artificial-intelligence-plays-to-win-20220307/
And the notion that AI cannot reason outside its training is false:
Mathematicians hail breakthrough in using AI to suggest new theorems
https://news.sky.com/story/mathematicians-hail-breakthrough-in-using-ai-to-suggest-new-theorems-12483934
Lemsta69 Posted May 4, 2023
On 5/2/2023 at 8:24 PM, Pouatchee said:
"Will bring AN chatGpt to my next doctor's appointment then!"
AN ChatGPT is v3.5 and out of date already.
rabas Posted May 4, 2023
4 hours ago, placeholder said:
"First off, my comment was about AI. ... And the notion that AI cannot reason outside its training is false."
"First off, my comment was about AI." The NNs in my answer are AI; they're the heart that learns and does things like medical analysis and driving cars. If you weren't aware of that, it's not surprising you misleadingly claim "the notion that AI cannot reason outside their training is false." It is a generally accepted property and certainly true in the context of my answer. But since we're likely at different levels... let's ask ChatGPT! So, I posed my initial statement to OpenAI's ChatGPT.
Rabas: Can neural networks reason outside of their training?
ChatGPT: Neural networks are typically not capable of reasoning outside of their training data. The ability of a neural network to generalize to new situations is largely dependent on the quality and diversity of the training data that it has been exposed to...
placeholder Posted May 4, 2023
14 minutes ago, rabas said:
"The NNs in my answer are AI; they're the heart that learns and does things like medical analysis and driving cars. ... ChatGPT: Neural networks are typically not capable of reasoning outside of their training data..."
And yet I have produced evidence from scientists and mathematicians that says otherwise. AI is capable of making connections that humans cannot, because there is simply too much data for one person to absorb and correlate. And you'll note that ChatGPT qualifies its statement with "typically".
billd766 Posted May 4, 2023
On 5/3/2023 at 3:06 PM, RobU said:
"Medicine isn't magic. Medics are taught strict algorithms of diagnosis ... The computer program is bound to be better than humans at applying these algorithms."
A lot depends on what is inputted by real humans. If that info is not correct, then GIGO follows. Personally, I would rather talk to a real doctor any time.
billd766 Posted May 4, 2023
8 hours ago, placeholder said:
"Actually, the way AI learns is also from experience. The difference being that AI will have millions of cases to learn from."
But that only works if all those millions of cases are inputted into the system. All those inputs are from humans, and if they put in the wrong info, who puts it right?
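billd766's GIGO point can be made concrete: a system's answer is only as good as the record a human typed in. A toy sketch with an invented patient record (the field names and the typo are made up for illustration):

```python
# The system's "knowledge" is whatever a human entered.
records = {"patient_1": {"allergy": "penicillin"}}

def safe_to_prescribe(patient, drug):
    """Flags a drug only if the *recorded* allergy matches it exactly."""
    return records.get(patient, {}).get("allergy") != drug

print(safe_to_prescribe("patient_1", "penicillin"))  # prints False: correct data in, correct answer out

# Garbage in: the allergy was mistyped at data entry...
records["patient_1"]["allergy"] = "penicilin"

# ...garbage out: the safety check now silently passes.
print(safe_to_prescribe("patient_1", "penicillin"))  # prints True
```

The program applies its rule perfectly both times; as billd766 asks, the open question is who notices and corrects the bad input.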