I presume that your comment was addressed to Steve Kirsch, and not to me (I was just posting Steve's article on this sub-forum).

For what it's worth: I quickly learned that querying one of the mainstream AI engines on a controversial topic (covid, vaccines, alternative medicine, etc.) is an exercise in futility, as these engines are programmed to ignore information that is not on their 'accepted' sources list. As a result, by using, quoting and referring only to the mainstream narrative, they will indeed gaslight you on those issues.

I found the article interesting because it shows that when you challenge AI responses with factual data, they will ultimately concede when you are right and know your stuff. Obviously Steve Kirsch was well aware of the fallacies in the responses he got from the engine, and in that sense the article can be seen as a warning not to blindly trust the slick prose and conclusions the engine generates from its data.

Using a mainstream AI engine is actually no different from reading mainstream media like the New York Times, Guardian or Washington Post, which, under the guise of informing you, are also pushing the narrative that their owners want you to embrace on topics that advance their agenda.