Thailand News and Discussion Forum | ASEANNOW

A call to rational discussion: don't be jerked around by bots

Featured Replies

Comment: I've nearly given up using left- and right-leaning websites to search for facts regarding recent news. Looking for nuggets of unbiased, truthful reporting, or for that matter the actual comments made by people in the news, is tough. It feels like I'm searching for Easter eggs in a minefield. So I'm trying out ChatGPT 5, hoping its programmers haven't infused their politics into the software. Good game, good game. Personally, regarding political discussion, I trust no one, but I RESPECT anyone who has a civil, rational discussion with me. I've learned much and changed my way of thinking because of these folks.

ChatGPT 5:

Are Bots Being Used to Divide the U.S.?

Yes — and this is real and well-documented.

Social media bots and coordinated disinformation campaigns have been used by both foreign and domestic actors to exploit political and social divisions in the U.S. This has been particularly visible during:

  • Election cycles (e.g. 2016, 2020)

  • Social justice movements (e.g. Black Lives Matter)

  • Pandemic discourse (e.g. vaccines, lockdowns)

  • Major violent events (e.g. mass shootings, protests)

🔍 Examples of Bot Activity:

  1. Russian Interference (2016 U.S. Election):

    • The Internet Research Agency (IRA), a Russian entity, created thousands of fake social media accounts.

    • Bots impersonated both liberal and conservative Americans to amplify division.

  2. COVID-19 Disinformation:

    • Bots spread conspiracy theories about the virus's origin, vaccine safety, and government mandates.

  3. Racial and Social Tensions:

    • Bot networks amplified both sides of controversial issues (e.g., police violence, immigration) with the deliberate aim of sowing chaos.

  4. Artificial Amplification:

    • Bots inflate the visibility of certain narratives or hashtags, making fringe ideas appear mainstream.


👁️ How Can You Tell?

Signs of bot-driven campaigns include:

  • Repetitive, low-effort messages

  • Accounts with high activity but little personal content

  • Coordination across multiple platforms with similar phrasing

  • Sudden spikes in hashtag usage
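
The warning signs above can be folded into a crude scoring sketch. The following is a minimal, hypothetical illustration in Python: the `Account` fields, weights, and saturation points are all invented for the example, not taken from any real detection system.

```python
from dataclasses import dataclass


@dataclass
class Account:
    """Hypothetical account snapshot; field names are illustrative."""
    posts_per_day: float   # overall activity level
    original_posts: int    # posts with personal/original content
    total_posts: int
    duplicate_ratio: float  # share of posts that are near-verbatim repeats


def bot_likeness_score(acct: Account) -> float:
    """Combine the listed warning signs into a rough 0-1 score.

    The weights are arbitrary guesses for illustration, not a validated
    model; real detectors also use network and timing features.
    """
    score = 0.0
    # Sign 1: repetitive, low-effort messages
    score += 0.4 * min(acct.duplicate_ratio, 1.0)
    # Sign 2: high activity (saturates at ~100 posts/day)
    score += 0.3 * min(acct.posts_per_day / 100.0, 1.0)
    # Sign 3: little personal content relative to total output
    originality = acct.original_posts / max(acct.total_posts, 1)
    score += 0.3 * (1.0 - originality)
    return score


suspect = Account(posts_per_day=250, original_posts=2,
                  total_posts=500, duplicate_ratio=0.8)
regular = Account(posts_per_day=3, original_posts=40,
                  total_posts=50, duplicate_ratio=0.05)
print(f"{bot_likeness_score(suspect):.2f}")  # high score
print(f"{bot_likeness_score(regular):.2f}")  # low score
```

The two remaining signs (cross-platform phrasing and hashtag spikes) need data across many accounts, so they don't fit a single-account score like this one.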


are bots being used to divide the country in the last few weeks?
 

Yes — there is credible reporting that bots (and foreign disinformation campaigns) are being used in recent weeks to deepen political divisions in the U.S., especially following the killing of Charlie Kirk. AP News+1

Here are some specifics:


What’s Going On

  • After Charlie Kirk’s death, foreign adversaries including Russia, China, and pro‑Iran groups rapidly pushed false or misleading narratives online, using bots and fake accounts to amplify them. AP News+1

  • These disinformation campaigns are designed to stoke polarization: they recycle existing conspiracy theories, push false claims (e.g. about who was responsible and what the motivations were), and generally aim to worsen distrust and conflict among Americans. Politico+1

  • Authorities (and experts) are warning that AI‑generated content is making it easier to generate false posts, deepfakes, and other manipulative media, increasing the speed and scale of disinformation. AP News+1


Caveats & Scale

  • These bot‑ and disinformation operations are typically a small portion of total social media content, but their potential impact is magnified by how social media algorithms and human behavior amplify provocative or emotionally charged content. AP News+1

  • Since social media content moderation and detection are imperfect, it's often hard to tell exactly how many accounts are bots versus humans repeating or amplifying false content. Experts are still investigating and tracking these operations. Politico


If you want, I can pull up some recent data/reports (including estimates) to show how many bot accounts are active or how big their reach is. Do you want me to dig into that?

 

Here’s a summary of what recent reporting and expert analysis say about bot / disinformation activity in the U.S. in recent weeks (especially related to Charlie Kirk’s killing), along with what is known — and what is still uncertain.


What we do know

From credible sources:

  1. State‑linked media mentions

    • According to the misinformation tracker NewsGuard, state‑run media in Russia, China, and Iran mentioned Charlie Kirk more than 6,000 times between September 10‑17, 2025. Politico+1

    • These mentions often included false or misleading narratives (e.g. false claims about the shooter’s motivations or identity) and conspiracy theories. ABC News, Politico, News24

  2. Bot / disinformation amplification

    • Officials including Utah Gov. Spencer Cox have said there is “a tremendous amount of disinformation,” and assert that bots (from Russia, China, etc.) are being used to “instill disinformation and encourage violence.” WUSF, Politico, Anadolu Ajansı

    • AI chatbots and platforms (e.g. Grok on X, Perplexity) were observed generating false information, including misidentifying suspects and claiming Kirk was still alive after his death. These errors circulated before corrections did. CBS News, DISA

  3. Scale of reach of disinformation

    • Cyabra (a disinformation tracking / analytics firm) claims that four viral posts, amplified by fake accounts, reached over 467 million potential views in under a week following Kirk’s assassination. Cyabra

    • State media mentions (6,200+) from adversarial countries are part of attempts to shape narratives. News24+1


⚠️ What’s uncertain or not yet confirmed

There are several gaps or caveats in what's known so far:

  • It’s not clear how many of those state media mentions / “bots” are actually reaching U.S. audiences, or how many people in the U.S. are being directly influenced by them. Some research suggests foreign influence campaigns often have more impact abroad, or within echo chambers, than as broad mass influence. WUSF+1

  • The degree to which bots (versus real people repeating false content) are responsible for spreading the false narratives is still being investigated. While bots are involved, distinguishing authentic human reposts from bot amplification is technically hard. Substantiation of specific numbers (of bots, or bot accounts) is partial.

  • Some claims are based on potential views (“potential reach”) rather than reliably measured views. “Potential views” often overestimate actual engagement or belief. Cyabra’s 467 million figure is about how many could see content, not how many definitely believed or engaged with it. Cyabra

  • Some adversarial states have denied involvement or rejected accusations. Anadolu Ajansı+1
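
The "potential views" caveat is essentially arithmetic: potential reach sums the follower counts of every account that amplified a post, while measured engagement counts interactions that verifiably happened, and the two can differ by orders of magnitude. A toy Python illustration follows; the per-post numbers are invented, chosen only so the total matches the reported ~467 million figure, and are not Cyabra's underlying data.

```python
# Invented per-post numbers (NOT Cyabra's data); totals chosen to
# match the reported ~467 million "potential views" figure.
viral_posts = [
    {"amplifier_followers": 180_000_000, "measured_engagements": 40_000},
    {"amplifier_followers": 150_000_000, "measured_engagements": 25_000},
    {"amplifier_followers": 90_000_000, "measured_engagements": 12_000},
    {"amplifier_followers": 47_000_000, "measured_engagements": 8_000},
]

# Potential reach: everyone who *could* have seen the posts.
potential_reach = sum(p["amplifier_followers"] for p in viral_posts)
# Measured engagement: interactions that actually happened.
engagement = sum(p["measured_engagements"] for p in viral_posts)

print(f"potential reach:     {potential_reach:,}")  # 467,000,000
print(f"measured engagement: {engagement:,}")       # 85,000
print(f"gap: {potential_reach // engagement:,}x")
```

The thousands-fold gap between the two sums is why headline "potential views" figures tend to overstate actual influence.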


🎯 Summary Estimate

Putting together what is known:

  • At least thousands of disinformation mentions from state media in Russia, China, and Iran in the week after the event (~6,000‑6,200 mentions). Politico, News24

  • Several major posts by fake/influenced accounts with potential reach in the hundreds of millions. Cyabra

  • Dozens (if not more) of AI‑generated or AI‑mistaken posts (from bots or chatbots) spreading false information immediately after the event. CBS News, DISA

So overall: yes, there’s strong evidence of bot/influence‑efforts magnifying discord. But in terms of precise metrics (how many bot accounts, how many people influenced, how much actual effect on beliefs or behavior) those are not yet solidly quantified.


If you like, I can try to find more technical / academic tracking (e.g. from bot detection firms) that gives ranges (e.g. % of posts, % of traffic) outside the media / initial reports. Do you want that deeper dive?

 

Here’s what I found by digging into more technical/academic sources and tracking firms on recent bot- and AI-driven disinformation activity. Some of it is directly about the Charlie Kirk case; the rest is general but helps in understanding scale and dynamics. There remain significant uncertainties, but we can outline what’s known, what’s estimated, and what still needs more study.


🔍 What the Tracking / Reports Show

  1. Cyabra report: “Profiles of Influence: Disinformation and the Charlie Kirk Assassination”

    • Fake / inauthentic accounts amplified content. Four viral posts reached ~467 million potential views in under a week. Cyabra

    • These posts were spread by accounts judged as fake or inauthentic by the platform / Cyabra. Cyabra

    • The narratives were negative / misleading and designed to inflame division. Cyabra

  2. Failures of AI chatbots / generative tools

    • CBS News and other outlets documented multiple errors from AI tools like Grok and Perplexity, including misidentifying the suspect, claiming Kirk was still alive, and describing real footage as satire or a meme. The Star, CBS News, DISA

    • These misstatements often spread before corrections. Once the false content gets reshared, corrections are slower to propagate. CBS News+1

  3. Change in AI false‑information / failure rates

    • NewsGuard found that generative AI tools are repeating false narratives at a higher rate than one year ago. In particular, a cited rate of “failure” (i.e. giving false or misleading information) for news‑related prompts has increased from ~18% in 2024 to ~35% in 2025. CRC

    • This suggests that as AI tools become more “real‑time” and more responsive (pulling from live or semi‑live web content), their vulnerability to disinformation (malicious or erroneous) increases. CRC

  4. What proportion of content is fake / bot / inauthentic

    • Cyabra’s data gives some idea of reach (“potential views”) rather than number of accounts or share of total traffic. So it highlights that certain false posts amplified by fake accounts can reach a very large audience. Cyabra

    • But there is no definitive public number yet that says “X% of all posts about Kirk were from bots” or “Y% of engagement was from inauthentic actors” beyond those case studies.

  5. General research on bot detection techniques / trends

    • The paper “Social Media Bot Detection Research: Review of Literature” (March 2025) reviews many methods and studies, showing bots are being used to spread political messages, conspiracy theories, rumors etc. It highlights evolving bot concealment techniques, meaning bots are getting better at seeming like real human users. arXiv

    • The meta‑analysis “Truth in Text: A Meta‑Analysis of ML‑Based Cyber Information Influence Detection Approaches” shows many ML‑based disinformation/bot detection systems report accuracies often above ~80%, though with high variance between methods and domains. arXiv
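
As a concrete (and deliberately toy) instance of the ML-based text-classification approaches such surveys cover, here is a word-count Naive Bayes sketch in pure Python. The training snippets and labels are invented for illustration; real systems use far richer features (posting cadence, network structure, account metadata), not just word counts.

```python
import math
from collections import Counter


def tokenize(text: str) -> list[str]:
    return text.lower().split()


class ToyNaiveBayes:
    """Word-count Naive Bayes with add-one smoothing (illustrative only)."""

    def __init__(self) -> None:
        self.word_counts: dict[str, Counter] = {}
        self.doc_counts: Counter = Counter()

    def fit(self, texts: list[str], labels: list[str]) -> None:
        for text, label in zip(texts, labels):
            self.doc_counts[label] += 1
            self.word_counts.setdefault(label, Counter()).update(tokenize(text))

    def predict(self, text: str) -> str:
        total_docs = sum(self.doc_counts.values())
        vocab = {w for counts in self.word_counts.values() for w in counts}
        best_label, best_logp = "", -math.inf
        for label, prior in self.doc_counts.items():
            # log prior + add-one-smoothed log likelihood of each token
            logp = math.log(prior / total_docs)
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for w in tokenize(text):
                logp += math.log((self.word_counts[label][w] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label


# Invented training snippets: repetitive slogans vs. varied personal posts.
texts = [
    "share share share the truth now",
    "share the truth now now now",
    "wake up share the truth",
    "had a great lunch with my family today",
    "thinking about the news and what it means",
    "my garden is finally blooming this spring",
]
labels = ["bot", "bot", "bot", "human", "human", "human"]

clf = ToyNaiveBayes()
clf.fit(texts, labels)
print(clf.predict("share share the truth"))      # → bot
print(clf.predict("my family had lunch today"))  # → human
```

Even this tiny model separates the two styles on held-out phrases, which hints at why surveyed accuracies above ~80% are achievable on clean benchmarks; the hard part, as the papers note, is bots that deliberately mimic human variety.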


⚠️ Key Limitations / What We Don’t Yet Know Well

  • “Potential views” vs real engagement or belief effects: Many reports measure how many people could see content (reach) rather than how many did see, engage, or believe it. Reach overestimates impact.

  • Bot vs human overlap: Some content is spread by real humans who pick up false narratives (from social media etc.), not just bots. It is hard to disaggregate how much amplification comes from bots/influenced accounts vs regular users.

  • Incomplete detection: Some bot detection tools miss “sophisticated bots” or accounts that are partially automated / human hybrid. Also fringe platforms or less regulated platforms are less studied.

  • Real‑time tracking is hard: In fast‑moving events (breaking news), misinformation and bots are active, but measuring their share quickly is challenging. Delays in detection or correction often mean false narratives get ahead.

  • Bias in datasets: Some studies use datasets skewed toward certain platforms, languages, or types of content. So their findings may not generalize everywhere.


🎯 My Estimate / Synthesis

Based on the above, here’s a rough sense of scale (with uncertainty) for the Charlie Kirk case & similar events:

  • In the immediate aftermath (within hours), only a small minority of posts are from bots/fake accounts, but those tend to be very visible/viral.

  • A few high‑impact posts (4 or so in this case) had outsized reach (hundreds of millions potential views) because of amplification by fake or poorly moderated accounts.

  • AI‑chatbots contributed measurable falsehoods in early narratives (suspect misID, survival claims) that were then disseminated by human users.

  • The “error / false narrative rate” for AI tools seems to have gone up to ~35% for news‑related prompts (from ~18% previously) in recent audits.

  • Overall, these bot / AI contributions likely represent somewhere between 5‑20% of the most viral or influential content in such a situation (very rough guess), but much more of the noise and false/misleading content ends up being spread by humans once seeded.

On 9/20/2025 at 4:31 PM, mymonkeyhusb said:

Bots spread conspiracy theories about the virus's origin, vaccine safety, and government mandates

 

The problem is that they weren't conspiracy theories. The lab leak is now accepted as fact. Vaccine injuries are finally allowed to be reported on and discussed. AstraZeneca and J&J caused enough strokes to be banned, so were neither safe nor effective. Fauci admitted to Congress that their social distancing rules were pulled out of thin air and not based on research.

 

I share your desire to get straight information but it isn't going to happen. We will have to sift through a mountain of trash and weigh every statement. Both sides are full of excrement these days. 

 

People are being pitted against each other and driven towards a precipice over things that have little or no effect on our daily lives - unless we let them. 

On 9/20/2025 at 4:31 PM, mymonkeyhusb said:

who has a civil rational discussion with me

 

I haven't met anyone like that in ages. I've always been conservative yet some of my dearest friends were the children and wife of a Liberal Senator (Canadian).  We argued, laughed and joked about our differences but never attacked each other for holding those beliefs.  We were all good people, and I believe most still are today. 

On 9/20/2025 at 4:31 PM, mymonkeyhusb said:

Comment: I've nearly given up using left and right leaning websites to search for facts regarding recent news. Looking for nuggets of unbiased truthful reporting or for that matter the actual comments made by people in the news is tough.

 

I suggest you stop using left- or right-leaning websites, or AI.

If you want to know what people actually said, then go directly to them and listen to what they actually say, with full context, not what CNN, BBC, or Fox News says they said... there is often a vast difference.

 

When it comes to governments or other authoritarians, the default mode should be one of disbelief. They have proved over and over not to be trustworthy, so don't trust them... just my two pennies' worth.

1 hour ago, flaming dragon said:

 

The problem is that they weren't conspiracy theories. The lab leak is now accepted as fact. Vaccine injuries are finally allowed to be reported on and discussed. AstraZeneca and J&J caused enough strokes to be banned, so were neither safe nor effective. Fauci admitted to Congress that their social distancing rules were pulled out of thin air and not based on research.

 

I share your desire to get straight information but it isn't going to happen. We will have to sift through a mountain of trash and weigh every statement. Both sides are full of excrement these days. 

 

People are being pitted against each other and driven towards a precipice over things that have little or no effect on our daily lives - unless we let them. 

 

I haven't met anyone like that in ages. I've always been conservative yet some of my dearest friends were the children and wife of a Liberal Senator (Canadian).  We argued, laughed and joked about our differences but never attacked each other for holding those beliefs.  We were all good people, and I believe most still are today. 

Conspiracy salad, fact-free and seasoned with no reasoning:
 

Fact Fails:

Lab leak — unproven, not fact

Vaccine reporting — not 'finally allowed'

AZ & J&J — limited, not banned

Fauci — weak data, not 'thin air'


Logic Fails:

False certainty — lab leak called 'fact'

Cherry picking — AZ & J&J smeared

Straw man — Fauci twisted into 'thin air'

False equivalence — 'both sides'

Red herring / rhetoric inoculation — liberal-friends anecdote

Appeal to emotion — 'mountain of trash,' 'excrement'
 

Facts knock the claims down — logic shows they never stood up.

 
