Thailand News and Discussion Forum | ASEANNOW


How smart are AI large language models really?

Featured Replies

All of the well-known AI chatbots seem to fail on this simple question:

"If you could answer just one question, what would it be?"

The chatbots return some AI slop about the meaning of life, but any rational educated human would quickly see the correct answer.

The reason is that LLMs use token-prediction patterns rather than conceptual understanding. Often a convincing fake, but still a fake.

Paul Laew
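The "token prediction" point can be made concrete with a toy sketch. The bigram model below is a drastic simplification of a real LLM, invented here purely for illustration (the training text and function names are made up): it predicts each next word only from how often it followed the previous word in its training text, continuing familiar patterns with no representation of concepts at all.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": each next word is predicted purely from
# how often it followed the previous word in the training text.
training_text = (
    "what is the meaning of life "
    "what is the answer to this question "
    "the meaning of life is the question"
)

# Build a table: for each word, count which words followed it.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: take the most frequent continuation seen so far.
    return follows[word].most_common(1)[0][0]

# Continue a one-word prompt for three steps.
word = "the"
generated = [word]
for _ in range(3):
    word = predict_next(word)
    generated.append(word)

generated_text = " ".join(generated)
print(generated_text)  # → the meaning of life
```

A real LLM replaces the bigram table with a neural network over long contexts, but the output is still a continuation of learned patterns, which is one way to explain why a self-referential prompt tends to draw the statistically typical "meaning of life" style of answer.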

1 hour ago, Paulaew said:

All of the well-known AI chatbots seem to fail on this simple question:

"If you could answer just one question, what would it be?"

There is nothing 'simple' about this question.

  • Author
11 minutes ago, FolkGuitar said:

There is nothing 'simple' about this question.

Well, I tested it on my high school age son. He said the correct answer is: "This one."

Paul Laew

1 minute ago, Paulaew said:

Well, I tested it on my high school age son. He said the correct answer is: "This one."

Paul Laew

A clever boy. He gave you the answer that you wanted to hear.

Does that mean it's the correct one?

  • Author

6 minutes ago, FolkGuitar said:

Does that mean it's the correct one?

Well, maybe. But both ChatGPT and Gemini acknowledge that this is the correct answer when it is pointed out that their original answers were incorrect.

Paul Laew

10 minutes ago, Paulaew said:

Well, maybe. But both ChatGPT and Gemini acknowledge that this is the correct answer when it is pointed out that their original answers were incorrect.

Paul Laew

Do I understand correctly that you are accepting the answers given, although both of your sources state, unequivocally, that their answers might be wrong?

  • Author
1 minute ago, FolkGuitar said:

Do I understand correctly that you are accepting the answers given, although both of your sources state, unequivocally, that their answers might be wrong?

Actually, it's why I said "Well, maybe."

What do you want to know? Does the AI give you the answer you want to hear? Yes, to a degree. Does the AI have superior psychology skills, and could it influence an election by targeting different psychological-manipulation traits in different people? Yes. It could also drop a well-timed piece of deep-fake news or information just before voting to sway the result.

Will it change future individuals' cognitive abilities and critical thinking? Most likely.

There are pluses and minuses, but it can definitely automate repetitive tasks and handle entry-level thinking at the moment, and it is progressing quickly.

Given the advances in robotics and AI, as a teenager I would be concerned about which skill set to develop - electrician, plumber, HVAC tech supporting the data centers, or wherever my place would be - since any business will see the benefit of a worker with minimum downtime that operates at highly competitive speeds.

Everything about AI to me seems incredibly dumb. I know that there are advanced versions of AI, and it's likely that the AI that we're being exposed to is very low end stuff, but I just cannot believe how dumb it is. Why can't something as simple as dictation come out correctly?

AI should be called Simulated Intelligence, as that is all it is, a simulation of intelligence.

AI is not, nor will it ever be, intelligent or sentient.

Anyone who thinks an LLM chatbot has any intelligence is an idiot.

On 2/9/2026 at 8:19 AM, Paulaew said:

All of the well-known AI chatbots seem to fail on this simple question:

"If you could answer just one question, what would it be?"

The chatbots return some AI slop about the meaning of life, but any rational educated human would quickly see the correct answer.

The reason is that LLMs use token-prediction patterns rather than conceptual understanding. Often a convincing fake, but still a fake.

Paul Laew

I asked ChatGPT and Gemini your question. I got two credible responses.

You state that '... any rational educated human would quickly see the correct answer'. Help us out here: what is the correct answer, Paul?

AI is like high functioning autism. Lots of detailed information and ability but no real empathy or understanding and zero 'common sense'.

It's like that really smart nerd who never leaves his Mum's basement.

I use it for work to analyse images and run calculations, but it sometimes fails at the simplest tasks. For example, I gave it a link to a website and asked it to create a QR code for it. It sent me a picture of a QR code that didn't link to anything. When challenged, it admitted it was just a picture and then sent me a working QR code.

I think the LLM module fools us into thinking we're speaking to an intelligent person, when really it's just a coding interface. So asking it deep questions about life, the universe and everything is just a waste of time and resources.

  • Author

13 minutes ago, Kinnock said:

I think the LLM module fools us into thinking we're speaking to an intelligent person, when really it's just a coding interface. So asking it deep questions about life, the universe and everything is just a waste of time and resources.

Actually, the question I asked the LLM in starting this thread was a question involving self-reference and recursion. I assumed the LLM, as a computational process, would be able to handle that. I was disappointed when it couldn't.

Paul Laew
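For what it's worth, the self-reference Paulaew describes is easy to express as code. The function below is purely hypothetical (it is not anything an LLM actually runs; the name `choose_question` is invented here): a chooser that models the question itself can notice that picking any candidate is already an act of answering the meta-question.

```python
def choose_question(candidates):
    """Pick which question to answer, from a list of candidates.

    Toy illustration of the self-reference in the thread's puzzle:
    choosing from `candidates` is itself an answer to
    "If you could answer just one question, what would it be?",
    so the self-consistent choice is that question itself.
    """
    meta = "If you could answer just one question, what would it be?"
    if meta in candidates:
        return "This one"   # the self-referential answer
    return candidates[0]    # otherwise, no self-reference applies

print(choose_question([
    "What is the meaning of life?",
    "If you could answer just one question, what would it be?",
]))
# → This one
```

A pattern-matching system with no model of the question-asking situation has no reason to take the `meta` branch, which is the gap the OP is pointing at.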

20 hours ago, Paulaew said:

Actually, the question I asked the LLM in starting this thread was a question involving self-reference and recursion. I assumed the LLM, as a computational process, would be able to handle that. I was disappointed when it couldn't.

Paul Laew

I find the topic interesting, but my experience with LLMs has proved frustrating. There are studies appearing that look at various theories about the shortcomings of LLMs.

Here are a few:
1) LLMs can get "Brain Rot"
https://arxiv.org/abs/2510.13928
https://llm-brain-rot.github.io/
https://x.com/alex_prompter/status/1980224548550369376


2) Moloch's Bargain
https://arxiv.org/pdf/2510.06105
https://x.com/james_y_zou/status/1975939603363463659

3) Large Language Model Reasoning Failures
https://www.arxiv.org/abs/2602.06176
https://x.com/godofprompt/status/2020764704130650600

Bottom line, don't rely on the LLMs to get anything right. wai

  • Author
On 2/10/2026 at 12:45 PM, IsaanT said:

I asked ChatGPT and Gemini your question. I got two credible responses.

You state that '... any rational educated human would quickly see the correct answer'. Help us out here: what is the correct answer Paul?

"If you could answer just one question, what would it be?"

I believe the correct answer has to be "This one" (or words to that effect).

If the LLM gives a different answer, such as "What is the meaning of life?", then the one question it has actually answered is (obviously) "If you could answer just one question, what would it be?" Its stated choice contradicts its own behaviour.

Both ChatGPT and Gemini acknowledge that "This one" is the correct answer when I query the incorrectness of their responses.

Paul Laew

On 2/9/2026 at 10:40 AM, Paulaew said:

Well, maybe. But both ChatGPT and Gemini acknowledge that this is the correct answer when it is pointed out that their original answers were incorrect.

Paul Laew

Which to me indicates that AI is a reactive system which, as in your example, only offers up the quickest, simplistic composite answers after momentary reference to the data programmed in, unless an alternate context challenge is provided.

The "thinking" display, and the opt-out disclaimer that "AI may not always provide the best answer", is a deceitful cop-out con from the outset.

The non-sentient software churns out responses as per its input data, which may initially mimic the academic-nerd mentality of the programmer, such as Zuckerberg.

Any pretended philosophical element is derived from input, not from any capacity for "free thought"!

  • 1 month later...
On 2/9/2026 at 10:40 AM, Paulaew said:

Well, maybe. But both ChatGPT and Gemini acknowledge that this is the correct answer when it is pointed out that their original answers were incorrect.

Because LLMs are good at pandering. Not that that's wrong, but it needs to be taken into account.

I recently discovered the usefulness of chatbots when one of my cats was diagnosed as diabetic. The LLM walked me through initial coping and helped me interpret urinalysis strips, then off to the vet. Came home a week later with a vial of insulin and a big box of syringes. The LLM kept me from ripping out my own hair, guided me in administering the twice-daily shot, and helped me validate the cat's condition via glucometer and ketone testing. Six weeks later the cat is back up to weight and actively playing with his sister. I'm not sure I would have been able to do that all on my own - perhaps I could have, but it would have been a LOT more work for me.

How did I know that the guidance was accurate? I did check many things it said against well known websites like PetMD and others. I found it to be so useful that I felt guilty for using the free tier and started subscribing to the lowest paid tier.

Since discovering its usefulness, I've been engaging the LLM on technical topics that I happen to know a good deal about. In the hard sciences like physics and chemistry I've not discovered a single error. When discussing literature, its responses are mostly in line with consensus. I also appreciate its occasional dry humor and found myself chuckling on more than a few occasions. I actually feel like I have somebody who can chat with me on my level. I'm sorry but I just don't know many people here with whom I can have a thoughtful discussion about 17th century Mughal politics or audio signal crossover network design.

The only errors I've noticed were obvious things like getting the day or time of day wrong. The LLM always remembers that I'm in Thailand but sometimes says something like "later tonight" when it's already 11 pm local time.

I don't view LLMs as a single source of truth, and I don't expect perfection. I also don't expect them to offer opinions or make value judgements, such as on the facile question asked by the OP. That's nothing but a waste of resources.

I see LLMs as curators of information - skilled librarians who know where to go to get the information I need, and can talk about it with me.

4 hours ago, phaholyothin said:

I recently discovered the usefulness of chatbots when one of my cats was diagnosed as diabetic. The LLM walked me through initial coping and helped me interpret urinalysis strips, then off to the vet.

4 hours ago, phaholyothin said:

I've been engaging the LLM on technical topics that I happen to know a good deal about. In the hard sciences like physics and chemistry I've not discovered a single error. When discussing literature, its responses are mostly in line with consensus.

These are exactly the types of questions that LLMs are good at answering: sciences, maths, technology and common medical issues ... areas where humanity's knowledge is well established and well documented.
