Thailand News and Discussion Forum | ASEANNOW
Why so many conspiracy theorists and what to do about them

Featured Replies

Just now, Lacessit said:

AI is not a truth engine, it is a plausibility engine. It is based on a statistically significant mass of information which may be totally outdated.

Example: AI will tell you it is better to have a full tank of petrol than one only a quarter full, due to an increased risk of condensation of water in the headspace.

That risk no longer exists, because the ethanol content of most gasoline blends will easily accommodate any water derived from condensation.

If, however, you want to change the AI mind, it will stick with what it has, which is a mass of articles about condensation against one refutation.

It is like trying to turn an ocean liner 180 degrees with a dinghy.

Humans are capable of having new insights. AI isn't.

Yes, AI is not a truth engine. It's good for finding sources; then you check the source. I once asked AI about the last battleship-on-battleship combat of WW2, and it gave me the wrong answer.


8 minutes ago, Lacessit said:

Humans are capable of having new insights. AI isn't.

Indeed. Would AI be capable of inventing the wheel, if it were given the same parameters men had in the Neolithic age?

  • Author

Calling something “AI” isn’t an argument; it’s a mantra. It gets repeated like it proves something, but it doesn’t address facts, logic, or evidence. It just signals that the person saying it doesn’t really understand what AI is.

Sadly, some people don't know what AI actually is, or even the difference between a source and a search engine, or the difference between search and research. If they did, they probably would never have become conspiracy theorists.

Most people using “AI” as a dismissal seem to think it’s just a search engine that spits out answers. It isn’t. And ironically, they’re probably using AI all day without even realising it.

For example:

  • Face ID on your phone.
  • Google Maps routing your journey.
  • Netflix or Spotify recommendations.
  • Spam filters catching junk mail.
  • Social media feeds deciding what you see.

All of these are AI.

 

AI is just a tool — like a calculator, or a spellchecker. It can be used well or badly. The output still needs to be judged on its accuracy, not dismissed because of how it was produced.

 

I’m dyslexic, so I use tools like Grammarly. What’s your favourite?

 

So the question isn’t “is it AI?” The question is: is it correct?

 

If you disagree with something, explain why.
Point out the flaw.
Challenge the reasoning.

Because saying “that’s AI” isn’t debate.
It’s avoidance.

 

  • Author
2 hours ago, rattlesnake said:

Copy/pasting AI-generated arguments authoritatively is about as compelling and credible as using an Instagram filter:


Here's one with your filter removed...


  • Author
2 hours ago, rattlesnake said:

Indeed. Would AI be capable of inventing the wheel, if it were given the same parameters men had in the Neolithic age?

would you?

  • Author
14 hours ago, Fat is a type of crazy said:

One issue I have is that some of the conspiracy theorists seem like good guys, nice guys, even smart guys. I wonder why they would spend their time pondering the moon landing, flat earth, and many aspects of the vaccine issues when they could admit:

a. many of the arguments are extremely tenuous at best and often incredibly weak and often focus on some minor specific detail - a photo or a study by someone often not qualified in the area;

b. the gain to them if they do find some example of a corrupt study or a falsehood, of what had been considered an established fact, will likely only apply to the specific issue and will normally have limited or no broader implications;

c. there are so many more worthy issues of corruption and deception to look at in plain view that have much more significant implications to the world around us.

One doesn't have to look far to see the amazing work of scientists over recent decades and years - that fact alone may be difficult for them to accept when broadly accepted science is so linked in their minds with corruption and dark forces.

One wonders if they like the notoriety of taking a contrary position, and whether classifying others as sheep supports their sense of their own individuality and their idea of what intelligence is and its applicability to them.

Why Do Intelligent People Fall for Conspiracy Theories?

One of the biggest myths in this whole debate is that conspiracy theorists are simply “stupid.” They’re not. In fact, quite a few are highly intelligent and well-educated – they are often the ones who sow the seeds in the first place.

 

And that’s precisely the problem…

Research consistently shows that intelligence doesn’t make you immune to conspiracy thinking — it just makes you better at defending it. The issue isn’t a lack of brainpower, it’s how that brainpower is used.

 

1. Intelligence ≠ Objectivity
Highly intelligent people are often very good at something called motivated reasoning — starting with a conclusion and then using their intelligence to justify it. In other words, they don’t follow the evidence… they build a case around what they already want to believe.

 

2. Pattern-Seeking Gone Wrong
Smart people are good at spotting patterns. That’s useful — until it isn’t. The same ability can lead to seeing connections that simply aren’t there. Random events become “linked,” coincidence becomes “evidence,” and suddenly you’ve got a conspiracy.

 

3. The “I Know Something You Don’t” Effect
There’s a strong psychological pull in believing you’ve uncovered hidden knowledge. It feeds a sense of superiority — “I’ve worked it out, the rest are sheep.”
For some, that’s far more appealing than accepting boring, evidence-based explanations.

 

4. Control in a Chaotic World
Conspiracy theories simplify complex, messy reality. Instead of random events, uncertainty, or systemic problems, you get a clear villain and a neat explanation. That’s comforting — even if it’s wrong.

 

5. Identity and Belonging
Beliefs aren’t always about truth — they’re about tribe. Conspiracy theories often act as social glue, creating in-groups of “truth seekers” versus everyone else. Once identity is tied to the belief, changing your mind feels like losing your place in the group.

 

 

The Bottom Line

Intelligent people don’t fall for conspiracy theories because they can’t think.
They fall for them because they can think — and then use that ability to rationalise, defend, and entrench beliefs that aren’t supported by evidence.

In short:
They don’t lack intelligence — they misuse it.

And that’s a much harder problem to fix.

31 minutes ago, kwilco said:

Calling something “AI” isn’t an argument — it’s a mantra. - It gets repeated like it proves something… but it doesn’t address facts, logic, or evidence. It just signals that the person saying it doesn’t really understand what AI is.

I most probably understand it a lot better than you do. I am involved with people who chose to use it on a mass scale in the entertainment industry and are regretting it big time, because the deliverables are very far removed from the initial promise and the bubble is bursting.

34 minutes ago, kwilco said:

Most people using “AI” as a dismissal seem to think it’s just a search engine that spits out answers. It isn’t.

35 minutes ago, kwilco said:

AI is just a tool — like a calculator, or a spellchecker. It can be used well or badly. The output still needs to be judged on its accuracy, not dismissed because of how it was produced.

What you neglect to say or consider is that you can ask an AI tool to produce a whole argumentation for you, using a simple prompt.

I am posting an example below, where I copied the first two paragraphs of your post into Gemini and asked it to produce a counter-argumentation explaining why calling out AI is actually a valid approach. The result was produced in a split second; I am pasting it below in italics. All the characteristics of AI-generated argumentation are there, just as they are in the numerous diatribes you posted in this thread. No thought, no competence, no knowledge and, most importantly, no effort was required.

Actually copy/pasting this type of material in a debate and presenting it as your own output without mentioning that it is AI content is both very lazy and very dishonest. It is level 0 of intellectual interaction and ethics, and I will call it out as often as required… just as you like to call out 'misinformation' – each to their own, and if a few hypertrophied egos have to get bruised in the process, so be it.


Identifying the use of Artificial Intelligence is not a mere "mantra"—it is an act of intellectual transparency and epistemic rigor. Contrary to the original text’s suggestion, calling out AI is a valid critique of the reliability and nature of the information being presented.

Here is a counter-argumentation outlining why identifying AI usage is a legitimate and necessary part of modern discourse:

1. Distinguishing Probabilistic Output from Logical Reasoning

Generative AI does not process "facts" or "logic" in the human sense; it predicts the most statistically probable next token based on a massive dataset.

  • The Argument: Calling something "AI" highlights that the statement does not stem from an understanding of the world, but from a calculation of probability.

  • The Impact: This directly addresses the "logic" of the opponent. Since an AI can produce a perfectly structured but factually hallucinated response, identifying it as such is a valid challenge to the truth-value of the claim.

2. Accountability and the Burden of Proof

The original text conflates research (a human process of verification) with generation (a machine process of synthesis).

  • The Argument: Identifying AI points to the absence of a responsible author. A human source can be held accountable for biases or errors; a machine cannot.

  • The Impact: Knowing content is AI-generated helps distinguish between a primary source (actual data) and a synthetic engine that may smooth over nuances or repeat systemic biases without discernment.

3. Defense Against Information Overload (The "Bull<deleted> Asymmetry")

AI makes it incredibly easy to produce "evidence" at scale that looks like research but lacks depth.

  • The Argument: Saying "this is AI" is a defense against Brandolini's Law (the Bull<deleted> Asymmetry Principle). It is far easier to generate 1,000 pages of plausible-sounding nonsense than it is to debunk a single page of it.

  • The Impact: Calling out AI is not a sign of ignorance about technology; it is an expert recognition of AI’s capacity to saturate public discourse with superficial content, making genuine research more difficult.

4. Reclaiming the Definition of "Research"

The original text mocks those who don't know the difference between a search engine and a source.

  • The Argument: Ironically, AI is frequently used by those who don't want to do research, opting instead for a pre-digested summary.

  • The Impact: Pointing out AI usage reminds the speaker that automated synthesis is not a substitute for methodological rigor. It flags that the "research" presented may just be a mirror of the user's own prompts or a loop of existing internet consensus.

Summary

Calling out AI is not an "escape hatch" from an argument; it is a safety label. Just as consumers have a right to know the ingredients in their food, participants in a debate have a right to know the origin of a line of reasoning. It is not an attack on the technology itself, but a refusal to let machine-generated probability be passed off as human-verified truth.

  • Author
5 minutes ago, rattlesnake said:

I am involved with people who chose to use it

In other words, you don't!

  • Author
6 minutes ago, rattlesnake said:

I most probably understand it a lot better than you do. I am involved with people who chose to use it on a mass scale in the entertainment industry and are regretting it big time, because the deliverables are very far removed from the initial promise and the bubble is bursting.


This shows how you can't use AI, and that you know very little about making an argument – can't you see the difference?

Just now, kwilco said:

In other words, you don't!

I use it all the time, it is a great productivity tool. It is incapable of producing quality creative content, though (as compellingly demonstrated in this very thread), unlike what was fallaciously and unilaterally claimed over the past three years.

4 minutes ago, kwilco said:

this shows how you can't use AI and know very little about making an argument – can't you see the difference???

Thank you for your insights, kwilco.

8 hours ago, rattlesnake said:

I most probably understand it a lot better than you do.

This may be the biggest whopper you've ever produced.

Please enlighten us with a two-word synopsis of the fundamental algorithm/procedure/process underpinning the recent AI models?

This is a super easy question. It should take you 5 seconds.

9 hours ago, rattlesnake said:

unlike what was fallaciously and unilaterally claimed over the past three years.

Sounds like it is the same thing the flat-earthers do. Are you irritated because they stole a page out of your playbook?

14 hours ago, rattlesnake said:

Indeed. Would AI be capable of inventing the wheel, if it were given the same parameters men had in the Neolithic age?

It depends on whether AI would generate a convergence of trial-and-error simulations which ended up with a wheel as the most efficient way of moving stuff over land.

The answer is yes, probably.

AI is a tool. It does pretty well at certain tasks. It is probably designing new medications as I write.

If, however, you rely on it for information, you are getting a probabilistic outcome, not necessarily what is true.

Perhaps I should revise my previous post to say AI does not learn until the mass of new information outweighs the old.
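The "probabilistic outcome" point above can be sketched in a few lines of Python. This is a toy illustration only – a bigram counter over a small invented corpus, nothing like a real language model – but it shows the core behaviour being described: the model returns whichever continuation it has seen most often in its training data, with no regard for which statement is actually true.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the most probable next token": a bigram
# counter trained on a tiny invented corpus. Real models are vastly more
# complex, but the training objective is likelihood, not truth.
corpus = (
    "the earth is round . the earth is flat . the earth is round . "
    "the moon landing was real . the moon landing was filmed ."
).split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(token: str) -> str:
    """Return the continuation seen most often after `token` in training."""
    return following[token].most_common(1)[0][0]

# "round" follows "is" twice and "flat" once, so frequency wins:
print(most_probable_next("is"))   # round
# "real" and "filmed" tie after "was"; the pick is arbitrary, not truthful.
print(most_probable_next("was"))
```

Flooding such a model with more "condensation" articles than refutations simply shifts the counts, which is the ocean-liner problem in miniature.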
