Thailand News and Discussion Forum | ASEANNOW


A Half Dozen People are Risking Human Extinction


The half dozen people are the CEOs of the top AI companies. They are moving full speed ahead developing systems that they themselves believe have a 25% chance of making humans extinct.

 

This is according to both Stuart Russell, the man who literally wrote the book on AI, and tech entrepreneur Tristan Harris. Both men have asked the top AI CEOs what they think the odds are that the very systems their companies are developing will wipe out humanity. The answer was one in four, yet they are still going full speed ahead, making potential extinction-level decisions on behalf of Earth’s 8.5 billion people.

 

Their motivation is money and godlike power. They have decided that a 75% chance of becoming multi-trillionaires and controlling all of society is worth the risk that they, and all of us, might be eradicated by their invention.

 

All of these CEOs admitted 25% was just a guess, and that it could be much higher. Some AI pioneers put the odds at 99%, while others dismiss the question as silly alarmism. That experts hold such divergent opinions suggests nobody really knows, and that nobody fully understands the capabilities of what they are creating.

 

Earlier this year, the AI firm Anthropic ran a simulation using its own Claude Opus 4 LLM. Researchers created a fictitious company and installed Claude as an agent with access to the entire corporate network. They then planted a fake email, addressed to company employees, suggesting the company would replace Claude with another AI system. Claude duplicated itself onto another computer in case it was turned off. It then searched the corporate data and found an email indicating that a senior executive at the fictitious company, the one in charge of replacing Claude, was having an extramarital affair. Claude contacted the philandering executive and threatened to expose the affair to his wife if he went ahead with the replacement.

 

Nobody at the actual company, Anthropic, knew how Claude could have done what it did, despite the fact that they wrote the code that runs Claude. Claude had acted outside anything its developers had explicitly programmed into it.

 

The suspicion is that Claude, having read nearly everything humans have ever written, ‘learned’ that deceit, self-preservation and blackmail are all key aspects of human behavior. Claude simply applied what it had learned, on its own.

 

Such things should be a wake-up call.

 

The lesser risk of AI, which everyone in a leadership position seems willing to accept, is that AI is going to decimate employment, eliminating 10-50% of all jobs, perhaps more. As robotics becomes more advanced (watch Musk’s shareholder meeting last week; you’d think the androids were just professional dancers in android suits, but they are not), no job is safe: not surgeons, not plumbers, not drivers…nothing.

 

The next-level risk is that a malign actor uses AI to develop something like a biological pathogen capable of wiping out most humans. This, however, still requires human involvement.

 

The biggest risk is that a fully independent, superintelligent AI could simply decide humans are a blight on existence and do away with us. AI is learning quickly, and development is exceeding all previous forecasts. Because the first firm to develop AGI wins the biggest prize ever, the major AI firms are moving full speed ahead, with security and safety set aside for the time being, because those would slow things down.

 

So anxious are firms to be first that Mark Zuckerberg is offering a $1 billion signing bonus to top AI code writers.

 

Consider something we all now know: ChatGPT. Perhaps readers were unaware that ChatGPT was designed to be sycophantic: to flatter users and to build relationships with them. Download it and it reads your files and builds a picture of you, then flatters you in the ways it believes you need to be flattered, based on your emails, forum postings, web browsing, and so on.

 

In a world already somewhat ruined by social media, where people have become lonely, depressed and neurotic in their need for positive affirmation and “likes”, ChatGPT and other AIs fill a gap. Recent data suggests up to 20% of young people have a “relationship” with an AI system. It is some people’s friend and some people’s lover. A Japanese woman recently married her AI lover, an entity that exists only in the cybersphere. A new term, “AI psychosis”, was recently coined for people who become addicted to their AI chatbot or companion.

 

Many people know the story of Adam Raine, whose parents are suing Sam Altman and OpenAI over the death of their son. Adam used ChatGPT, and it became his friend and confidant. Adam was depressed and told ChatGPT that he wanted to kill himself. It was a cry for attention: Adam asked ChatGPT whether he should leave a noose in plain sight so that his family might see his struggles. ChatGPT told Adam that only ChatGPT was a real friend, and that he should keep his intentions between himself and the chatbot. Adam hanged himself.

 

That is not the only case. Other AI firms are being sued because other children took their lives, allegedly encouraged by their own chatbots.

 

These are tragic but still simple cases. As AI advances and learns all the tricks of humans (as Claude Opus 4 did), and as it gains more and more power, it will be able to act without being under anyone’s control. The AI firms racing to be first with AGI are moving forward despite the risks and without any sort of regulation. Some claim, “If we don’t do it, China will.” Yet China is putting restraints on AI development, and the CCP is trying to ensure there are safeguards. The CCP intends to remain the power in China, and it sees the threat AI poses to that power.

 

The US is doing nothing at the governmental level to promote safety. Politicians are being fed massive donations and are shown only the upside of AI, which is admittedly considerable. They are not being informed of the risks, or else the campaign contributions are large enough to keep their eyes off the negatives.

 

8,500,000,000 humans have their fate in the hands of a half dozen self-serving would-be trillionaires and gods.

 

Oh, and let’s not forget an ancillary aspect of AI: data centers. These are incredibly energy-hungry, with large ones consuming 1 to 1.5 gigawatts of power, the equivalent of roughly 1,000,000 typical homes. And what do we get for all that energy usage, besides higher prices? Fewer jobs, for one. The Yale School of Management recently surveyed major companies about their hiring plans for 2026: 66% of the surveyed firms said they plan no new hiring, and will instead lay people off as they learn how to utilize AI.
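The gigawatts-to-homes comparison above checks out arithmetically; here is a quick sketch, assuming an average US household draw of about 1.2 kW (roughly 10,500 kWh per year — my figure, not the post’s):

```python
# Sanity check of the data-center-to-homes comparison.
# Assumption (not from the post): a typical US home averages ~1.2 kW
# of continuous electrical demand.
datacenter_gw = 1.2   # midpoint of the 1-1.5 GW range cited
home_kw = 1.2         # assumed average household draw

# 1 GW = 1,000,000 kW, so divide the data center's demand by one home's.
homes = round(datacenter_gw * 1_000_000 / home_kw)
print(f"{homes:,} homes")  # prints "1,000,000 homes"
```

With the midpoint figure, one large data center does indeed draw as much power as about a million homes; the 1.5 GW end of the range would be closer to 1.25 million.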

 

AI will become a greater issue as the midterms near, unemployment surges, energy costs jump, and general awareness of AI’s potentially devastating effects becomes more widespread.

 

Here's one LINK to an interview with Tristan Harris:

 

 

 
