Thailand News and Discussion Forum | ASEANNOW


Is AI lulling us into a false sense of security?


Is AI lulling us into a false sense of security by letting us do fun things until we really love it?

Then comes the surprise Skynet attack!!

Five minutes with free ChatGPT!

Punch and Judy go to the beach.

[attached image: original photo]

Becomes

[attached image: AI-generated cartoon version]

I'll be back!

 

"I don't want to know why you can't. I want to know how you can!"



 

 

Well, this is my take on it... I'm going to pose the same question to Co-Pilot and see what it says.

 

 

🧠 Is AI Lulling Us into a False Sense of Security?

 

This is a critical and timely question. As AI systems become more integrated into everyday life — from self-driving cars and medical diagnostics to hiring tools and national security — there’s growing concern that public trust in AI may outpace its actual reliability. This can lead to a false sense of security with wide-reaching consequences.

1. ⚙️ What Does “False Sense of Security” Mean in AI?

 

A false sense of security means people or institutions:

Overestimate AI’s accuracy, judgment, or capabilities

Underestimate risks, limitations, or failure modes

Delegate decisions to systems they don’t fully understand

Assume neutrality when biases persist in the data or algorithms

 

This becomes dangerous when systems are trusted in critical, high-stakes environments — like criminal justice, finance, warfare, or medicine — without proper oversight or fail-safes.

2. 🚗 Everyday Examples

 

➤ Autonomous Vehicles

Early enthusiasm suggested self-driving cars would be widespread by the early 2020s.

In reality, most systems are still in testing, and accidents still occur, especially when humans trust the car too much.

Tesla’s “Full Self-Driving” is not fully autonomous — yet many drivers behave as if it is.

 

➤ Medical Diagnosis

AI can outperform radiologists in narrow image-recognition tasks.

But if used blindly, it can miss rare cases, misclassify due to biased training data, or give overconfident outputs.

Human-AI collaboration is still vital, yet doctors are sometimes pushed to rely too heavily on algorithms due to time pressure or trust in the tech.

 

➤ Content Moderation and Misinformation

Platforms use AI to filter harmful content or detect fake news.

But the tools are imperfect, sometimes removing valid speech and missing nuanced or context-driven misinformation.

Users may assume platforms are “keeping things clean,” when they’re not — leaving them more vulnerable to manipulation.

3. 🧩 Why Are We So Willing to Trust AI?

 

a. Illusion of Objectivity

 

People often see AI as neutral, data-driven, and free from human emotion or prejudice — when in fact, biases in training data and design can be profound.

 

b. Overconfidence in Technology

 

We’re conditioned by decades of technological progress (smartphones, GPS, automation) to assume that if it’s “advanced,” it must be safe.

 

c. Black Box Problem

 

Most people — including many decision-makers — don’t understand how AI systems actually work. That creates a tendency to defer to them uncritically.

 

d. Marketing and Media Hype

 

AI is often portrayed as smarter, faster, and more capable than it really is. Companies have a financial incentive to encourage public confidence, even when reality lags.

4. ⚠️ What Are the Risks of This Complacency?

 

1. Systemic bias goes unchecked

 

If we assume AI is fair or infallible, we miss the deep ways it can amplify societal inequalities — e.g., in hiring, policing, or lending.

 

2. Human oversight erodes

 

The more we trust AI blindly, the less humans are likely to question, intervene, or second-guess — leading to worse outcomes when errors occur.

 

3. Security vulnerabilities

 

AI can be attacked or manipulated in subtle ways (e.g., adversarial images, prompt injection). If we assume these systems are secure “by default,” we open ourselves to exploitation.
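The adversarial-input point can be made concrete with a toy sketch. For a linear scorer, nudging every feature a tiny step against the weights (the idea behind FGSM, the fast gradient sign method) flips the decision even though no single feature changes by more than a small epsilon. The model, weights, and input below are invented purely for illustration; real attacks target trained neural networks in exactly the same way.

```python
import numpy as np

# Toy linear "classifier": label = 1 if w.x > 0 else 0.
# (A stand-in for a real model; w and x are made-up illustration values.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)                          # model weights
x = w * 0.02 + rng.normal(scale=0.01, size=100)   # input the model scores positive

def predict(v):
    return int(w @ v > 0)

assert predict(x) == 1

# FGSM-style attack: nudge every feature a tiny step against the score.
# For a linear score w @ x, the gradient with respect to x is just w.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the classification flips
print(np.max(np.abs(x_adv - x)))    # yet no feature changed by more than eps
```

The unsettling part is the second print: the perturbation is bounded at 0.05 per feature, imperceptible in a context like images, yet the decision reverses completely.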

 

4. Loss of skills and autonomy

 

If humans grow dependent on AI for navigation, diagnosis, or decision-making, they may lose their own ability to think critically or act confidently in unfamiliar situations.

5. 🛡️ How Can We Guard Against a False Sense of Security?

 

Educate the public (and professionals)

AI literacy needs to be part of school curricula, business training, and policymaker awareness.

People should understand both what AI can do — and what it can’t.

 

Maintain human oversight

Human-in-the-loop systems are critical in high-stakes areas (aviation, health, law).

We must resist efforts to “automate away” responsibility.
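The human-in-the-loop idea above can be sketched as a simple confidence gate: the model decides only when it is sure, and everything else is escalated to a person. The threshold, labels, and `Decision` type here are hypothetical, not any particular product's API.

```python
from dataclasses import dataclass

# Hypothetical confidence-gated triage: below the threshold, the model's
# output is kept only as a suggestion and a human makes the final call.
REVIEW_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str

def triage(label: str, confidence: float) -> Decision:
    if confidence >= REVIEW_THRESHOLD:
        return Decision(label, confidence, "model")
    # Low confidence: route to a human reviewer.
    return Decision(label, confidence, "human-review")

print(triage("benign", 0.97).decided_by)     # handled by the model
print(triage("malignant", 0.62).decided_by)  # escalated to a human
```

The design choice worth noting: the gate never discards the model's suggestion, it only withholds the authority to act on it, which is the essence of keeping responsibility with a person.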

 

Regulate and audit AI systems

Just like planes or medicines, AI systems should be independently tested, certified, and regulated.

We need algorithmic transparency, explainability standards, and regular reviews for harm.

 

Promote open-source alternatives and accountability

Allowing researchers and watchdogs to inspect AI models helps counter overconfidence, bias, and monopoly power.

🔚 Conclusion

 

AI holds immense potential, but overtrust is as dangerous as undertrust. A false sense of security isn’t just a technical flaw — it’s a psychological and social vulnerability. If we treat AI as infallible, neutral, or immune to error, we risk building systems that fail quietly, subtly — and at scale.

 

We shouldn’t reject AI — but we must learn to treat it like a tool, not a deity. That means building guardrails, not just algorithms.

 

8 minutes ago, Crossy said:

Is AI lulling us into a false sense of security by letting us do fun things until we really love it.

 

Then comes the surprise Skynet attack!!

 

Five minutes with chatGPT free!

Punch and Judy go to the beach.

 

[attached image: original photo]

 

Becomes

 

[attached image: AI-generated cartoon version]

 

I'll be back!

 

And that is fun? In general, I use my time doing things not related to kindergarten.

No offence, but it's not working for me. Same as pepping up old photos and having your Grandma sing "I can't get no satisfaction". But who likes it...?🤩

IMHO, AI should only be used in a sandboxed, chrooted jail,

or air-gapped, and never given control of critical equipment... but as we have been told (programmed), Skynet will come about one way or another, it's inevitable

now that the tech exists... goodbye and thanks for all the fish.

Nobody knows where AI is leading us ... 

 

But is the status quo that great? 

 

Let's bring it on .... unleash AI ... and see what happens. 

 

 

4 minutes ago, save the frogs said:

Nobody knows where AI is leading us ... 

 

But is the status quo that great? 

 

Let's bring it on .... unleash AI ... and see what happens. 

 

 

If the internet is any indication, then what will happen is that it will make things worse. Nothing was wrong with the 90s.

I have proved AI to be factually incorrect several times.

 

It's only as good as the data it gets.

 

I have corrected it with facts, and it does not learn from them.

11 hours ago, Lacessit said:

I have proved AI to be factually incorrect several times.

 

It's only as good as the data it gets.

 

I have corrected it with facts, and it does not learn from them.

 

The current versions of AI are pre-programmed, so yes they can be incorrect. 

 

But I'm wondering if future versions may change ... 

 

Current ChatGPT is considered "Level 4 AI". AGI is where it starts to think for itself more ..

 

[attached image]

 

Anybody who has seen the 2006 American science-fiction comedy should realise that AI means Allowing Idiocracy...

YES - currently I am using Google Photos' picture search to try to sort out more than 6000 photos from our US trip in 2015 - they all have the same date!! (My Thai lady did not think to reset the date on her Pentax after recharging.)

I have our itinerary, so I know where we were on a given date.

I use the AI part of Google Photos to try to locate where the picture was taken; then I can allocate it to the correct date.

80% of the time it is correct, BUT often it tells me the photo is in a place I know cannot be correct, because I have never been there.

And what I don't like is its insistence that it is right when I know it is wrong!!!

AI is a tool and not the be all and end all of digital life.
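The itinerary-matching workflow described above can be sketched in a few lines. The itinerary dates and places are invented, and `guessed_place` stands in for whatever location the photo tool reports (a hypothetical input, not a real Google Photos API call); the key point is that a guess is only trusted when it matches somewhere the traveller actually went.

```python
from datetime import date

# Invented itinerary: each date maps to the place visited that day.
itinerary = {
    date(2015, 6, 1): "San Francisco",
    date(2015, 6, 2): "Yosemite",
    date(2015, 6, 3): "Las Vegas",
}
# Invert it so a recognised place gives back the correct date.
place_to_date = {place: d for d, place in itinerary.items()}

def assign_date(photo_name, guessed_place):
    """Re-date a photo from the tool's location guess, if plausible."""
    if guessed_place in place_to_date:
        return photo_name, place_to_date[guessed_place]
    # The ~20% failure case: the tool insists on somewhere we never went.
    return photo_name, None  # leave for manual checking

print(assign_date("IMG_0001.jpg", "Yosemite"))  # matched to 2015-06-02
print(assign_date("IMG_0002.jpg", "Tokyo"))     # flagged: not on the itinerary
```

A place visited on more than one day would need extra disambiguation (this sketch keeps only the last date per place), but the itinerary acts exactly as the poster uses it: a ground-truth check on an overconfident tool.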

1 hour ago, Crossy said:

Is AI lulling us into a false sense of security by letting us do fun things until we really love it.

 

Then comes the surprise Skynet attack!!

 

I think it's more a case of making people think they can do things that they cannot, and probably failing to value things that they believe can be replaced by AI.

 

One of the biggest dangers is that people suddenly feel that doing something like writing a book or painting a picture is meaningless, since AI can do it in 5 minutes.

 

The phrase "AI generated slop" has luckily been coined, which I think will be really important in the coming years.

1 hour ago, Crossy said:

Is AI lulling us into a false sense of security by letting us do fun things until we really love it.

 

Then comes the surprise Skynet attack!!

 

Five minutes with chatGPT free!

Punch and Judy go to the beach.

 

[attached image: original photo]

 

Becomes

 

[attached image: AI-generated cartoon version]

 

I'll be back!

 

Well, when you're back, watch this: 

 

1 minute ago, BangkokReady said:

Interesting surfing on the left and the right.

 

And the number of straws 🙂

"I don't want to know why you can't. I want to know how you can!"

AI is nothing more than search engines with creative ancillaries and basic wordy analytics, and answers that are often incorrect; after all, it's the internet. Don't believe everything you see on the internet, as the saying goes.

I went on YouTube and was surprised how much AI-generated content there is, and how good it is. It has improved immensely from what it was just a few years ago. But they still haven't solved the 6 finger problem...

Internet AI attempts to tell the user the answers. Makes for a lazy mind.

Independent research with empirical thought gives actual insight and real knowledge.

AI on the everyday internet is a lazy person's way of trying to look accomplished, though it has its uses in production efficiency for industries; so in that manner, individuals applying it as such can benefit, so long as it's not used as a crutch.

19 minutes ago, SpaceKadet said:

I went on YouTube and was surprised how much, and how good, there is of AI generated content. It has improved immensely from what it was just a few years ago. But they still haven't solved the 6 finger problem....   

 

These people seem to have done a pretty good job with the 6 finger problem... and a lot more:

[attached screenshot: "The 1950s Like You've Never Seen Before" - AI short film]

 

 

 

AI has benefits and drawbacks. One drawback is that people are no longer doing what they used to, and that is using their brains. Feed something into AI and it does it for you. Many news articles are now examples of this.

We have already learned to 'trust' a device that starts out by telling us not to trust its answers. How many actually verify their results?

We're so fu(ked.

Warning: AI is nothing. Quantum computing chips are now being successfully produced. 

AI is not sustainable. It's going to collapse in a year or two, and we're going to have a repeat of the Great Depression. The stock market is way overvalued now, and it's being held up by half a dozen AI companies buying chips from Nvidia. Eventually they're going to complete their huge data centers and stop buying chips.

I think it is both a great asset and also a great danger.

If used to do good it can be fantastic but in the wrong hands or used for the wrong purpose it can be really dangerous.

It is already very hard to tell what is AI generated and what is real.

I think there should be a strong law making sure that all AI-generated content or products are labelled as such.

 

On 7/22/2025 at 11:01 AM, Will B Good said:

Well this is my take on it.....I'm going to pose the same question to Co-Pilot and see what it says.

🧠 Is AI Lulling Us into a False Sense of Security?

This is a critical and timely question. …

We shouldn’t reject AI — but we must learn to treat it like a tool, not a deity. That means building guardrails, not just algorithms.

You will probably find that Joe Public, or even casual users (take, for example, the processing of the two dogs in the picture, which ChatGPT changed into a cartoon), will think it is dangerous; politicians will say it is dangerous, as will film makers, newspapers, etc., as they are trying to make a profit out of the info for themselves.

 

The people who use it, software engineers like myself, know it does not actually exist; these are machine-learning tools connected to logic and databases, with high-bandwidth systems, and they are super fast. No AI.

 

I use ChatGPT on a daily basis for my software development; it takes out some of the tedious stuff, like finding a particular protocol or system implementation, a bit like an accountant with an accounts clerk.

 

They do save time, but any engineering company of any worth knows they are tools and will continue to treat them as such.

 

Have all the buildings in the world fallen down since we started using AutoCAD to design them 30 years ago? Erm, no. So why should it with this so-called 'AI'?

 

Ask ChatGPT if it is an AI machine and it will tell you no, they do not exist.

 

I have to double check and correct it on a daily basis and rewrite some stuff.

 

All engineers will take that approach.

 

So no panic, unless we are watching a film starring Arnold and do not realise it is fiction; but of course the public will panic and believe all of the hype they are being told.

 

 

5 hours ago, jvs said:

I think it is both a great asset and also a great danger.

If used to do good it can be fantastic but in the wrong hands or used for the wrong purpose it can be really dangerous.

It is already very hard to tell what is AI generated and what is real.

I think there should be a strong law making sure that all AI generated content or product should be labelled as such.

 

 

That applies to so many things.

 

Nuclear power, machinery, knives, fuel.

 

But as yet no AI exists, so it is not a threat.

Current LLMs are no threat at all. Stop anthropomorphizing these tools. They don't have any intent of their own. They are actually relatively simple tools, the essence of which hasn't changed since the 1980s. They've just finally become large enough to be useful. Nobody has ever created an AI that has any kind of independent agency, and unless we can actually build one based around quantum processes it is very unlikely ever to occur. There is absolutely zero credible data to support the idea that consciousness is an emergent property of the brain (or an LLM). That was simply a hypothesis advanced by Roger Sperry, adopted by a few neuroscientists and popularized by Hollywood.
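A toy illustration of the statistical core this poster is alluding to: a bigram model that predicts the next word purely from co-occurrence counts. Real LLMs learn vastly richer neural representations, but the training objective, predicting the next token, is the same idea at enormous scale. The corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Most frequent follower of `word` in the corpus, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' (follows 'the' twice; 'mat' and 'rat' once each)
```

Whether counting-at-scale amounts to "no AI", as the poster argues, is of course the contested question, but this is the kind of mechanism the "tools, not minds" view has in mind.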
