Thailand News and Discussion Forum | ASEANNOW

AI fakes swamp real air crashes and threaten investigations

Artificial intelligence is now distorting public understanding of aviation disasters, pumping out fake images, videos, and audio so quickly that even federal investigators admit they’re being briefly fooled. The problem erupted after January’s catastrophic mid-air collision over the Potomac, when a dramatic image of rescuers atop twisted wreckage went viral on X — despite bearing no resemblance to the real crash scene. AI-detection tools flagged it instantly, and the account was later suspended.

But similar fabrications are now appearing after nearly every high-profile transportation disaster. A POLITICO review found AI-generated clips circulating after a UPS cargo plane crash in Louisville that killed 14 people. Experts say the videos contain obvious AI artifacts: frozen aircraft, impossible shadows, “hallucinated” police lights, and nonsensical text. Some even splice synthetic 4-second clips together with text-to-speech narration to mimic official briefings.

Jennifer Homendy, chair of the National Transportation Safety Board, warned that AI could “sway the perception of the public and passengers,” undermining trust in investigators and delaying safety fixes. Former NTSB and FAA official Jeff Guzzetti called it a threat to the “integrity of the real investigation,” fearing that misinformation could distract agencies at crucial early stages.

The fakes are not limited to imagery. A supposed 911 call from last year’s Baltimore bridge collapse — featuring a curiously calm caller allegedly sinking in their car — is also suspected to be AI-generated.

Even seasoned investigators can be momentarily deceived. After a deadly Air India crash in June, Homendy said a staffer showed her a clip that looked convincing until she spotted telltale inconsistencies: warped backgrounds, anatomically distorted figures, and aircraft details that didn’t match any real model.

As platforms struggle to contain synthetic disaster content, experts warn the U.S. lacks any coherent government strategy to counteract AI-driven misinformation in the critical hours after a crash.

Key Takeaways:

  • AI-generated disaster imagery is now routine and often hyper-realistic.

  • Investigators fear fake content will distort public understanding and delay safety responses.

  • The U.S. has no unified strategy to combat AI misinformation after aviation crashes.

Source: POLITICO

One of the many benefits of Musk's absolutist approach to free speech, I guess.

The guy is a genius.
