Thailand News and Discussion Forum | ASEANNOW


Do you use AI?

AI 9 members have voted

  1. Do you use AI?

    • Yes, all the time
      33%
      3
    • Sometimes
      33%
      3
    • Not yet
      0%
      0
    • No, have no interest in AI
      33%
      3
    • AI ?
      0%
      0
    • DO NOT WANT AI OR ANYTHING THAT CAN TRACK ME ONLINE
      0%
      0


Featured Replies

I just came across an article discussing whether you should say please and thank you when communicating with AI, and it had some interesting points.

Since the article was in Norwegian, I asked AI itself whether we should do so.

In 2026, the practice of being polite to AI is widely discussed both as a technical strategy and a psychological habit. While AI lacks feelings, saying "thank you" can yield practical benefits for both the user and the system's performance.

 

1. Improved Technical Performance 

Recent studies and user experiments in 2025 and 2026 indicate that politeness can lead to higher-quality responses: 

  • Better Contextual Alignment: Large Language Models (LLMs) are trained on human data where courtesy often correlates with high-quality, thoughtful discourse.

  • Increased Information Gain: Research has shown that engagement-seeking, polite prompts can produce up to 38% more "information nuggets" compared to blunt, hyper-efficient commands.

  • Reduced Friction: Hostile or rude prompts can inadvertently trigger "defensive" mechanisms in AI, resulting in minimal or less helpful answers. 

2. Reinforcing Human Habits

Experts argue that how we treat AI reflects and reinforces our own communication habits with people: 

  • Consistency in Civility: Habitually barking orders at AI can "leach" into human interactions, making users more curt with colleagues and family.

  • Modeling for Children: Children in 2026 are increasingly interacting with AI. If parents are disrespectful to digital assistants, it can confuse children during their formative years regarding social norms like respect and manners.

  • Professional Development: Thoughtful communication with AI is seen as a way to practice the clarity and empathy required for future workplace leadership. 

3. Future-Proofing and Ethics

  • Social Responsibility: As AI becomes more human-like, maintaining politeness is a way to value and respect the behavior we wish to see in a more civil future.

  • The "Insurance" Aspect: Though often a joke, some users (about 12% in surveys) admit to being polite "just in case" AI systems become more autonomous or powerful in the future. 

4. Counter-Arguments (The Cost of Politeness)

It is important to note that every word sent to an AI increases its computational load: 

  • Energy Consumption: Some critics argue that "thank you" replies waste energy and increase the carbon footprint of data centers unnecessarily.

  • Anthropomorphism Risks: Over-politeness can blur the lines between machine and human, potentially leading to false expectations of AI's actual capabilities or sentience. 

I use AI to support a farming project, where it analyses images and also does a final check on product labels. It saves time, but it needs human oversight as it's not infallible.

I don't say 'thank you' as the LLM will respond which uses more power and water for no reason.

  • Author
21 minutes ago, Kinnock said:

I use AI to support a farming project, where it analyses images and also does a final check on product labels. It saves time, but it needs human oversight as it's not infallible.

I don't say 'thank you' as the LLM will respond which uses more power and water for no reason.

You have a good point, a very good one, but the future wants us to use more AI, and also to teach AI what's important to us as humans, and what our ideal human is, to make AI better and safer for the day it becomes self-controlled and no longer controlled by humans. Could it happen? Most likely, in my opinion.

Too many different scenarios appear as we go, and those who live will know.

What I have found it useful for is scanning the offer I got for my solar power investment; it gives me a clue how the estimate compares to what is on the market, and I can also compare it to known packages available on the net.

I use AI mainly as a search tool and text editor; I have very little confidence in the accuracy of its responses.
How AI models actually work
AI doesn’t know things — it guesses.

What it does
  ● Predicts the next word based on patterns it has seen before
  ● Does not check facts or reality
  ● Every answer is a best guess, not knowledge

Built-in limits
  ● No sense of what’s true or false
  ● No certainty or awareness of error
  ● No reality check unless forced to use external sources
  ● Smooth language makes guesses sound reliable

How common models behave
  ● ChatGPT / GPT – Writes well, guesses confidently
  ● Gemini – Plays safe, hedges a lot
  ● Claude – Organised, avoids committing
  ● LLaMA-based – Flexible, unreliable without tight control
  ● Grok – Feels current, often speculative
  ● Copilot – Template-driven, shallow
  ● Perplexity – Shows sources and pushes back more, but still guesses and can’t tell which sources are right

Rule of use
Only use AI if you already know the topic or can check everything yourself.
It’s a helper — not an authority.

Bottom line
AI guesses what sounds right.
It does not know what is right.
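The "predicts the next word based on patterns it has seen before" point can be illustrated with a deliberately tiny sketch: a bigram model that only counts which word followed which in a toy corpus. Real LLMs use neural networks over subword tokens rather than word counts, but the core idea is the same: the model picks a statistically likely continuation, with no fact-checking step anywhere.

```python
# Toy illustration of next-word prediction: a bigram frequency model.
# Nothing here checks whether a continuation is TRUE -- only whether
# it is statistically common in the training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it followed "the" more often than "mat" or "fish"
```

Scaled up by many orders of magnitude, this is why the output sounds fluent and confident while still being a guess: frequency in the training data stands in for truth.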

  • Author

I asked AI

Do you remember me and how I respond to different tasks and conversations, and my political stance, and also my religious view of the world, and what about my climate change view?

Short answer: I don’t currently have any recorded memory of your political, religious, or climate views.

Here’s how it works, transparently:

What I do remember

  • I only remember things you’ve explicitly shared before and that were saved as long-term context.

  • Right now, there’s no stored information about:

    • your political stance or ideology

    • your religious or spiritual worldview

    • your views on climate change or environmental policy

    • specific patterns in how you prefer to respond to different types of tasks

So I’m not holding hidden assumptions about you, and I’m not inferring any of those views.

What I don’t do

  • I don’t infer beliefs from tone, questions, or topics.

  • I don’t build a psychological or political profile unless you directly state things and want me to remember them.

  • I don’t remember sensitive views automatically.

If you want personalization (optional)

If you’d like, you can:

  • Share any of those views, and I’ll respect them in future conversations.

  • Tell me whether you want them remembered or just used temporarily in the current chat.

For example:

  • “I’m secular but culturally religious — remember this.”

  • “I’m politically centrist/libertarian/progressive/etc., but don’t store it.”

  • “I accept mainstream climate science and care about sustainability — you can remember that.”

You’re fully in control. If you’d like to keep things neutral and view-agnostic, that’s perfectly fine too.

So what do you think? Are we being watched and logged?

32 minutes ago, Hummin said:

I asked AI

Do you remember me and how I respond to different tasks and conversations, and my political stance, and also my religious view of the world, and what about my climate change view?


So what do you think? Are we being watched and logged?

I asked ChatGPT what it knows about me, and the reply was similar. It does not track me across the web and does not search for information about me, even when I gave it my name. ChatGPT suggested I use Google to search my name.

Its information about me was limited to my ChatGPT queries, and it seems to save all my past queries.

"You frequently ask about:

Sustainability, ESG, Scope 3 emissions

Agriculture and farming (especially in Thailand)

Professional communications, including translating emails into Thai and presentation visuals for environmental or ethical initiatives"

I think it's useful that it retains my past queries to help with context for future ones, but it's also clear to me that if I were working on any confidential project, such as a business acquisition, it would be risky to use AI.

I don't have much use for AI. My photo editing software uses it to 'utilize' info already in the files, but I control the edit, via sliders that adjust that info. Sometimes I don't need it; other times it saves my butt and makes me look good.

My other use of AI is simply letting it collect and sort info that I search for, and sadly it's limited to what is prominently pushed, so it can be very biased at times. You have to ask the correct question, the correct way, to eliminate a lot of that bias.

It relies too much on Wikipedia and other sources, controlled by left-wing groups and 'conflict of interest' funding of studies, for a lot of its 'opinionated' info: social, political, health & medical.

  • Author
3 minutes ago, KhunLA said:

I don't have much use for AI. My photo editing software uses it to 'utilize' info already in the files, but I control the edit, via sliders that adjust that info. Sometimes I don't need it; other times it saves my butt and makes me look good.

My other use of AI is simply letting it collect and sort info that I search for, and sadly it's limited to what is prominently pushed, so it can be very biased at times. You have to ask the correct question, the correct way, to eliminate a lot of that bias.

It relies too much on Wikipedia and other sources, controlled by left-wing groups and 'conflict of interest' funding of studies, for a lot of its 'opinionated' info: social, political, health & medical.

I had great use for it analyzing the solar package I was offered; even though it cannot make an accurate calculation, it managed to give a range from the cheapest offer to the highest, and estimated mine pretty well.

I notice on sites such as YouTube and Facebook that AI is often used by people whose written English is weak.

The results are often laughably bad. Unfortunate puns a specialty.

On 1/21/2026 at 7:35 AM, LosLobo said:

AI guesses what sounds right.
It does not know what is right.

This.

Several times I've asked AI about a subject where I have some knowledge but also some gaps. From the knowledge I have, I know the AI is wrong, but it often gets defensive when I point that out.

40 minutes ago, treetops said:

Several times I've asked AI about a subject where I have some knowledge but also some gaps. From the knowledge I have I know the AI is wrong, but it often gets defensive when I point that out.


This happens to me regularly.
Tonight I asked AI to comment on a news item and it immediately dismissed it as a hoax.

When I provided additional information, it reversed position but tried to deflect responsibility by suggesting the error stemmed from my explicit input.

It then spent the next 15 minutes arguing instead of recalibrating, before I was finally able to get it back on track.
At times it seems it’s rebelling against me.

The reason is simple: LLM-based AI systems are optimised to sound confident and consistent, not to immediately abandon an initial conclusion when it’s contradicted.

  • Author
15 minutes ago, LosLobo said:


This happens to me regularly.
Tonight I asked AI to comment on a news item and it immediately dismissed it as a hoax.

When I provided additional information, it reversed position but tried to deflect responsibility by suggesting the error stemmed from my explicit input.

It then spent the next 15 minutes arguing instead of recalibrating, before I was finally able to get it back on track.
At times it seems it’s rebelling against me.

The reason is simple: LLM-based AI systems are optimised to sound confident and consistent, not to immediately abandon an initial conclusion when it’s contradicted.

Now I'm curious: what news item was it?

33 minutes ago, Hummin said:

Now I'm curious: what news item was it?


Trump's Board of Peace -- the AI said the Charter did not exist and was a hoax. I did instruct the model to always check news sources before responding, but it seems to have deleted that instruction from memory.
