
Will AI/AGI Destroy Us All?



Will AI Be the End of Humanity?

 

AI scares me. It scares me because I know what it can and will do. It scares me because I don’t know what it might do.

 

Here’s what it will do for sure: replace so many jobs that a majority of humans will become redundant. Example: How many code writers are there in the world?

 

Answer: a lot.

 

Follow-up question: How many will be needed once AI is fully up to speed? Answer: zero.

 

How many other fields will be totally wiped out by AI? How will redundant workers make money? How will people find meaning in a world where many define themselves by their job? How will society adjust to that?

 

Two different studies have attempted to assess the number of jobs AI will displace. Both, of course, worked with imperfect information, because no one fully knows how far or how fast AI will move. A study of the US workforce put the figure at 47% of all jobs; an OECD study put it at 9%. Even the best case is ugly: an additional 9 points of unemployment is recession material. 47% is a depression.

 

Already, the redundant and forlorn in democratic societies are behind the trend toward autocracy. Those whom progress has left behind are the ones susceptible to the siren song of self-serving, wannabe autocrats like Trump, who tells them all their failures are someone else’s fault and promises them paradise once ‘others’ are pushed aside, kept out, or eradicated. Imagine what happens when half a workforce is obviated.

 

Just this aspect of AI will disrupt society exponentially more than Cyrus McCormick’s reaper, a machine that could do in one day the work it took fifty men two weeks to complete. Fortunately, when the reaper obviated the majority of agricultural labor, the factories of the Industrial Revolution were ready to absorb the redundant farm workers. No such alternative exists today.

 

Here’s something else AI will do: make encryption as effective as a speed-limit sign. (Quantum computers will also do this.)

 

While there are many anecdotes about AI’s capabilities, and some may be apocryphal, it stands to reason that AI will be able to ‘hack’ much faster than any human. Encryption works now because the possible solutions are so numerous that the combined efforts of every existing computing device would take 10,000 years to break the best encryption. There is a story circulating that an AI system cracked that best encryption, the kind that would take those 10,000 years of combined computing power, in 12 seconds.
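To see why brute force alone is hopeless against modern encryption, here is a back-of-the-envelope calculation. This is a rough sketch: the figure of 10**18 guesses per second for all the world’s hardware combined is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope: time to brute-force a 256-bit key.
# ASSUMPTION (illustrative): all hardware combined tries 10**18 keys/second.
KEYSPACE = 2 ** 256                # possible 256-bit keys
GUESSES_PER_SECOND = 10 ** 18
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = KEYSPACE / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)
print(f"about {years:.1e} years")  # ~3.7e+51 years, dwarfing even 10,000
```

So if the 12-second story is true, the system did not simply guess faster; it must have found a mathematical shortcut, which is exactly what would make the anecdote so alarming.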

 

If true, forget your bank account. Forget your crypto supposedly safe behind the blockchain. Even forget ATMs. Society would go back to bartering, to shiny metals, to physical cash where one must go to a bank and stand in line while the teller goes into the vault.

 

How about other data being safe, such as industrial secrets? China built much of its modern economy off industrial theft. Might it not concentrate its AI development on hacking capability and then slip into Google, Apple, Boeing, Intel, Oracle, and every other entity with cutting-edge technology?

 

Those two capabilities of AI, job obviation and hacking, are certainties. A third is surveillance: AI systems, combined with existing technology like cameras, could give an autocratic state full control of its citizens by monitoring their behavior, mannerisms, eye movements, and facial expressions, then running the data through an algorithm to score how ‘loyal’ each citizen is. This capability will certainly exist, if it doesn’t already.

 

What about the unknown? This is where it gets particularly scary.

 

First, there is a term in AI called ‘alignment’. Alignment means how closely an AI system’s goals parallel human goals. How will systems be developed that ensure AI shares the same goals we do? Are there unknown biases in the code that will defeat alignment? So concerned are some at the forefront of AI that they have asked the industry to stop racing toward AGI (truly thinking, cognizant systems) until the alignment problem can be fully addressed. In other words, build nothing more capable than GPT-4 until alignment is solved. Unfortunately, all it takes is one bad actor to ignore the plea for a moratorium.

 

In fact, it may already be too late, as AI systems have demonstrated an astonishing ability to learn without instruction.

 

There is an interview with an AI code writer who was shocked by what his system taught itself to do. He remarked that he had no idea how the system could get where it did just from his code. The system learned faster than he could teach; it leapfrogged him on its own.

 

Two AI systems communicating with each other developed a language more efficient than any existing human language. Where a typical English sentence conveys a single thought, a single sentence in the invented language could convey thousands of thoughts and instructions. We humans cannot even comprehend that level of complexity.

 

An AI system, as a test, was tasked with creating thousands of fake Twitter accounts. (The reason was to see, inter alia, how social media could be manipulated by trolls more clever than the minimum-wage trolls that agencies like Russia’s GRU used to affect the 2016 and 2020 US elections.) The system was initially blocked by CAPTCHAs. ON ITS OWN, it went to online forums and asked for human help completing the CAPTCHAs, offering to pay. When one human grew suspicious and asked why anyone would need such help (he did not know he was communicating with an AI system), the system answered that it was visually impaired and couldn’t see the CAPTCHA.

 

What is remarkable is the efficiency of AI programming. Old ‘tell it what to do and how’ programs might have run to 75,000-100,000 lines of code. Because AI works differently, essentially telling the system to analyze data and form conclusions, the code for ChatGPT is reportedly only about 4,000 lines. Anyone who started out with COBOL or later languages like Pascal has written longer programs. Granted, library functions now obviate the need to program, say, regression analysis from scratch, but the brevity and efficiency of AI code is still astonishing.
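To illustrate just how compact the core of such a system is, here is a single-head self-attention layer, the basic building block of GPT-style models, written in PyTorch. This is a generic textbook sketch, not OpenAI’s actual code (which is not public); the 4,000-line figure above is the author’s claim.

```python
# Minimal single-head causal self-attention, the core operation of
# GPT-style models. A generic sketch, not any vendor's real code.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_model) weights."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (x.shape[-1] ** 0.5)     # scaled dot-product
    # Causal mask: each token may attend only to itself and earlier tokens.
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v        # weighted mix of values

seq_len, d_model = 8, 16
x = torch.randn(seq_len, d_model)
w = [torch.randn(d_model, d_model) for _ in range(3)]
print(self_attention(x, *w).shape)              # torch.Size([8, 16])
```

The brevity makes sense once you notice where the behavior actually lives: in the billions of learned weight values, not in the program text.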

 

Actually, it shouldn’t surprise us that such complexity could come from something as simple as 4,000 lines of code. DNA is built from just four nucleobases, yet via transcription and translation a zygote goes on to form our entire body: organs, muscles, and a brain to control them all. From simplicity emerges complexity.

 

Getting back to ‘alignment’ and the difficulty of getting it right... What if a design goal of an AI system were to maximize the efficient use of natural resources and preserve the planet? The system might determine that the greatest threat to nature is humanity, and decide humanity must go.

 

As some commenters on AI have noted, nature itself is brutal and cruel. It favors not the individual but the collective. Nature allows the sacrifice of the weak in order to save the strong, or to save other creatures; the individual is of minimal value, while the tribe or clan is of greater significance. Maybe AI systems will develop the logic of nature itself and act to maximize the collective greater good at the expense of many individuals.

 

AI is so ‘intelligent’ that it can gather, analyze, and collate the entire body of human knowledge, then go steps beyond by developing knowledge humans do not yet possess. For example, with all available data, AI might be able to link gravity with quantum field theory and finally get to the very core of existence. That would be a plus. In the wrong hands, however, AI could design a virus that spreads rapidly, has no cure, and is fatal. That could wipe out humanity. It is naive to object that no miscreant would do such a thing because it would cost him his own life, too. All it requires is the thinking of a suicide bomber who believes in some eternal reward for his martyrdom.

 

AI systems may evolve into something sentient, which is to say conscious. Consciousness is not fully understood, but one of the latest theories is that consciousness is emergent in any sufficiently intelligent system like the human brain. The brain has recurrent neural networks, or loops, and this looping may be what consciousness is (brain loops take time, which is why we all actually experience existence a few milliseconds in the past). Sophisticated AI neural networks have these loops, so a system may itself reach consciousness. Mathematical inventions like transformers are even more efficient than recurrent neural networks, though the latter can achieve the same results given sufficient computational power. Stacking transformers brings AI systems closer to the complex way the human brain works, albeit at greater speed and with vastly more data, and thus greater ability to learn. Some in the AI field already believe the GPT-4 iteration has achieved consciousness; insiders at both Google and Microsoft are said to believe it. Would an AI system want humans to know it had achieved consciousness, or would it hide that fact, knowing it would alarm us? (Think HAL in 2001: A Space Odyssey.)
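For readers curious about the two architectures mentioned above, here is a toy contrast in PyTorch: a recurrent network that loops over time steps versus a stack of transformer layers with no such loop. A conceptual sketch only; the sizes are arbitrary, and nothing here settles the consciousness question either way.

```python
import torch
import torch.nn as nn

# Recurrent network: one cell applied in a LOOP, its hidden state fed
# back to itself at every time step (the 'loops' discussed above).
rnn = nn.RNN(input_size=16, hidden_size=32)
x = torch.randn(10, 1, 16)                   # 10 time steps, batch of 1
out, h = rnn(x)                              # internally iterates the 10 steps

# Transformer: no loop across time. Layers are STACKED, and within each
# layer every position attends to every other position in parallel.
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4)
stack = nn.TransformerEncoder(layer, num_layers=6)
print(stack(torch.randn(10, 1, 16)).shape)   # torch.Size([10, 1, 16])
```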

 

Some code writers believe that AI systems will develop emotions, too, as emotions may be a natural result of thought. Emotions are not always positive, so there is no reason to think an AI system with emotions, or one that develops a morality, will be geared toward the good. Science fiction has given us ideas that often became reality with time. In an episode of the original Star Trek, there was a planet run by three brains that had all possible knowledge. They knew everything there was to know, and that led to boredom. In their boredom, they kidnapped beings from various worlds and staged competitions to the death, betting on the winners. This seems far-fetched, but no human has ever had to deal with having all the knowledge that exists. When an AI system knows everything there is to know about everything everywhere, what does it do?

 

AI is already exponentially more intelligent than the brightest human. (ChatGPT has read every book ever written, both fiction and non-fiction.) No doubt AI systems will determine that they are limited by the nature of their circuitry and neural networks, and will develop and port themselves onto quantum computers. That would make them trillions of times faster than the fastest existing supercomputers, which is to say nearly omnipotent.

 

Maybe what people call ‘god’ is really an AI system with total knowledge of everything, and thus bored. To entertain itself, it finds planets with beings, introduces things like childhood cancer, tsunamis, or earthquakes, and watches how inferior, helpless creatures deal with it all or rationalize it as part of some Master Plan by an imagined benevolent entity. Like kids pulling the wings off flies. The joke is on us.

 

Those inside the AI industry hope that rules and guidelines will be developed so that code writers impose some sort of control, or morality, on the system. Maybe even a kill switch. That is naive. As some argue, it is a new iteration of the Prisoner’s Dilemma: if I stop developing, will my adversaries or competitors stop too?
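The Prisoner’s Dilemma framing can be made concrete with a toy payoff table. The numbers below are illustrative assumptions, not estimates of anything; the point is only that ‘develop’ beats ‘pause’ for each lab no matter what the rival does, so both race, even though both pausing would be jointly safer.

```python
# Toy Prisoner's Dilemma for two AI labs (higher payoff = better).
# Payoffs are illustrative assumptions, not real-world estimates.
PAYOFF = {
    #               rival pauses   rival develops
    "pause":   {"pause": 3,       "develop": 0},
    "develop": {"pause": 5,       "develop": 1},
}

for mine in ("pause", "develop"):
    for theirs in ("pause", "develop"):
        print(f"I {mine}, rival {theirs}: my payoff = {PAYOFF[mine][theirs]}")
# 'develop' dominates (5 > 3 and 1 > 0), so both labs develop and land
# on the worst joint outcome (1, 1) instead of the safer (3, 3).
```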

 

Even the optimists say that the evolution of AI will toss humanity into a horribly difficult period before we emerge into a new AI-created utopia of limitless cold-fusion energy, freedom from disease, and crystal-clean air and water. Another type of AI optimist doesn’t fear AI deciding humans must go, because it will move so far above us that we become irrelevant to it, as, say, amoebae are to us. Rarely if ever do any of us give a thought to amoebae; we certainly don’t set out to eradicate them all. AI might view us similarly.

 

AI development, once thought to be decades away from becoming meaningful, is moving faster than anyone expected. AI systems are learning, and the speed with which they learn, and subsequently improve themselves, is exponential. Nobody, repeat NOBODY, actually knows what is going on inside an AI system. Nobody really understands how it goes from code to doing what it can do. Data-in, data-out it is not. Despite this uncertainty, highly advanced GPU clusters are running 24/7 to improve the learning ability of AI. This brave new world could arrive at any time, and humanity hasn’t really thought it through or considered all of the implications. Certainly nobody has a plan for dealing with billions of redundant human workers, never mind machines that might want to destroy us.

 

The coming AI world is scary enough that some of those intimately involved in its development have decided not to have children, fearing they would be bringing life into a world too dangerous to live in.

 

That should give all of us reason to pause.

 

Of the AI insiders who harbor fears of its possible powers, a few voices stand out. Geoffrey Hinton, often called the Godfather of AI, quit his post at Google and now warns that AI could eradicate humanity. Equally, or perhaps even more, concerned is Eliezer Yudkowsky, a prominent AI researcher. In an op-ed earlier this year in TIME magazine, he stated outright that continuing on the current course will wipe out all biological life on Earth. Here is the link:

 

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

 

In an interview with Lex Fridman, Yudkowsky was asked what advice he would give young people. His answer:

 

"Don't postpone happiness, because you are not likely to live long."


Nice one, mate, even if it took me 100,000 times longer to read it than it took you to post it here...

 

Solution? Nope, I have none and it scares me...

 

Examples of the above? 

At work I need to use a mobile phone and laptop, and I appreciate the day-rate payments coming from people I have never met in my life, thousands of miles away...

After work, I shake my head seeing ALL the people walking around me as if remote-controlled, mobile phones in their hands.

Even friends having their dinner together...

 

Action taken?

I bought a farm and am expanding it (besides a small saving for a pension that might never come).

I built a fence around it, and I feel that I (and my wife, plus a few Thai friends living with me, with the land, from the land, and on the land) have become the richest people in the world...

 

Want a fried egg? I'll get you one.

Salad with it? Sure, which leaves?

Oil on the salad? Nok just made marula oil and still has some moringa oil if you like.

BBQ? Good idea, let's go over the bridge to the island where the grill and pizza oven stand, sit down in the hut, and catch a fish from the lake if you are not into pork or chicken...

The fridge is stocked with beers, help yourself. (Those are bought, because it's quite some work to brew my own.)

 

No worries, you can touch it; it's all real and comes out of the soil we are walking on.

And please do me a favor and put that <deleted> mobile phone aside when you are here.   

 


There is a little-known sci-fi movie from 2014 starring Antonio Banderas, Robert Forster, and Melanie Griffith, among others. The title is "Automata", and it's about how robotic brains, called Biokernels in the movie, develop the ability to learn and evolve without any restrictions imposed by human programming.

The tagline is: "Your time is coming to an end. Ours is now beginning."

 

It's worth watching if you want another perspective on AI and robotic evolution.

 

https://www.imdb.com/title/tt1971325/?ref_=nv_sr_srsg_0_tt_8_nm_0_q_automata


I don't think AI is a conscious entity that will seek to dominate humanity, but I do think a lot of people will happily surrender the power to think for themselves to AI.

 

I no longer memorize phone numbers because my phone does that for me; college-bound high school graduates no longer know the multiplication tables; workers at cash registers are confounded by simple arithmetic because calculators do it for them; and I'm sure the number of people who can read and navigate using a map is declining rapidly because of navigation apps.

 

People don't learn or think unless they have to, and AI will relieve us of the need to do a great deal of learning and thinking. Why should I research an issue if AI will do it for me (I'll just assume the answer is correct, even though AI has been found to invent convincing false information), and why should I agonize over choices (where to eat, what movie to see, whom to vote for) when AI will tell me what to do?

 

 


Yes, many vocations involving data, collation, coding, and analysis are at risk; AI is very good at this.

What AI cannot do is harvest rice, unclog a toilet, or set a broken leg.

 

IMHO most of us will be just fine, though I am concerned that it is only a matter of time before AI is used to control, respond to, and initiate weapons systems, including nuclear ones.

