Benefits and Risks of Artificial Intelligence
 

 

WHY RESEARCH AI SAFETY?

 

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

 

AI Benefits and Difficulties

 

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

 

 

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

 


 

It may sometimes feel like AI is a recent development in technology. After all, it’s only become mainstream to use in the last several years, right? In reality, the groundwork for AI began in the early 1900s. And although the biggest strides weren’t made until the 1950s, it wouldn’t have been possible without the work of early experts in many different fields.  Knowing the history of AI is important in understanding where AI is now and where it may go in the future. In this article, we cover all the major developments in AI, from the groundwork laid in the early 1900s, to the major strides made in recent years.

 


 

The idea of “artificial intelligence” goes back thousands of years, to ancient philosophers considering questions of life and death. In ancient times, inventors made things called “automatons” which were mechanical and moved independently of human intervention. The word “automaton” comes from ancient Greek, and means “acting of one’s own will.” One of the earliest records of an automaton comes from 400 BCE and refers to a mechanical pigeon created by a friend of the philosopher Plato. Many years later, one of the most famous automatons was created by Leonardo da Vinci around the year 1495. So while the idea of a machine being able to function on its own is ancient, for the purposes of this article, we’re going to focus on the 20th century, when engineers and scientists began to make strides toward our modern-day AI.

 


Alan Turing - Father of Modern Computer Science

 

Often considered the father of modern computer science, Alan Turing was famous for his work developing the first modern computers, decoding the encryption of German Enigma machines during the Second World War, and detailing a procedure known as the Turing Test, which laid groundwork for the field of artificial intelligence (AI).


 

 

Alan Turing was one of the most influential British figures of the 20th century. In 1936, Turing invented the computer as part of his attempt to solve a fiendish puzzle known as the Entscheidungsproblem.  This mouthful was a big headache for mathematicians at the time, who were attempting to determine whether any given mathematical statement can be shown to be true or false through a step-by-step procedure – what we would call an algorithm today.  Turing attacked the problem by imagining a machine with an infinitely long tape. The tape is covered with symbols that feed instructions to the machine, telling it how to manipulate other symbols. This universal Turing machine, as it is known, is a mathematical model of the modern computers we all use today. 
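To make the tape-and-rules picture concrete, here is a minimal sketch of a Turing machine simulator in Python. The machine itself (its states, symbols, and transition table) is an illustrative invention for this article, not a historical artifact:

```python
# A minimal Turing machine simulator: an (effectively) unbounded tape of
# symbols, a read/write head, and a transition table that says, for each
# (state, symbol) pair, what to write, which way to move, and which state
# to enter next. The example machine below flips every 0 to 1 (and vice
# versa), then halts at the first blank.

def run_turing_machine(tape, rules, state="start", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")        # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol) -> (symbol to write, head movement, next state)
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", flip_rules))   # prints "1001_"
```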

 


Using this model, Turing determined that there are some mathematical problems that cannot be solved by any algorithm, placing a fundamental limit on the power of computation. US mathematician Alonzo Church independently reached the same conclusion, and the shared claim that such machines capture everything that is effectively computable is now known as the Church–Turing thesis. Turing would go on to study for his doctorate under Church at Princeton University in the United States.
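Turing’s impossibility result can be retold in modern code. Suppose, purely hypothetically, that someone handed us a function halts(program, argument) that always answered correctly; the sketch below (all names invented for illustration) shows how a deliberately contrary program turns that assumption into a contradiction:

```python
# A sketch of Turing's diagonal argument. Assume, for the sake of
# contradiction, that a perfect halting oracle existed:

def halts(program, argument):
    """Hypothetical: return True iff program(argument) eventually halts."""
    raise NotImplementedError("Turing proved no such function can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about the program
    # being applied to itself.
    if halts(program, program):
        while True:            # oracle says "halts" -> loop forever
            pass
    return "done"              # oracle says "loops" -> halt immediately

# Now ask whether contrary(contrary) halts:
#   if halts(contrary, contrary) is True,  contrary loops forever -> wrong;
#   if halts(contrary, contrary) is False, contrary halts at once  -> wrong.
# Either way the oracle is mistaken, so no always-correct halts() can exist.
```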

 

Birth of AI: 1950-1956

This range of time was when interest in AI really came to a head. Alan Turing published his paper “Computing Machinery and Intelligence,” whose imitation game eventually became known as the Turing Test, which experts used to measure computer intelligence. The term “artificial intelligence” was coined and came into popular use.

 

Dates of Note:

 

1950: Alan Turing published “Computing Machinery and Intelligence,” which proposed a test of machine intelligence called the imitation game.

 

1952: A computer scientist named Arthur Samuel developed a program to play checkers, the first program ever to learn a game independently.

 

1955: John McCarthy and colleagues wrote a proposal for a workshop at Dartmouth on “artificial intelligence,” the first use of the term; the workshop itself, held in the summer of 1956, brought the term into popular usage.

 

 

1958: John McCarthy created LISP (acronym for List Processing), the first programming language for AI research, which is still in popular use to this day.

 

1959: Arthur Samuel coined the term “machine learning” while describing his work on teaching machines to play checkers better than the humans who programmed them.

1961: The first industrial robot, Unimate, started working on an assembly line at General Motors in New Jersey, tasked with transporting die castings and welding parts on cars (work deemed too dangerous for humans).

 

1965: Edward Feigenbaum and Joshua Lederberg created the first “expert system” which was a form of AI programmed to replicate the thinking and decision-making abilities of human experts.

 

1966: Joseph Weizenbaum created the first “chatterbot” (later shortened to chatbot), ELIZA, a mock psychotherapist that used simple natural language processing (NLP) patterns to converse with humans; a minimal sketch of this pattern-matching approach appears just after this timeline.

1968: Soviet mathematician Alexey Ivakhnenko published “Group Method of Data Handling” in the journal “Avtomatika,” which proposed a new approach to AI that would later become what we now know as “deep learning.”

 

1973: An applied mathematician named James Lighthill delivered a report to the British Science Research Council arguing that progress in AI had not been as impressive as scientists had promised, which led to much-reduced support and funding for AI research from the British government.

 

1979: The Stanford Cart, built by James L. Adams in 1961, became one of the earliest examples of an autonomous vehicle when it successfully navigated a room full of chairs without human interference.

1979: The American Association of Artificial Intelligence, now known as the Association for the Advancement of Artificial Intelligence (AAAI), was founded.
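ELIZA’s “conversation” was driven by keyword patterns and canned templates rather than any understanding of language. Here is a minimal sketch in that spirit; the patterns and reflections below are illustrative inventions, not Weizenbaum’s original script:

```python
import re

# An ELIZA-style responder: match a keyword pattern, "reflect" pronouns,
# and slot the user's own words into a canned template. No understanding
# of language is involved; that is the whole trick.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # fallback when no pattern matches

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```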

 

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.  Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

 


 

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
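A toy illustration of that second scenario (the plans and scores below are invented purely for intuition): an optimizer given an objective that omits something we care about will dutifully pick the plan that tramples it, while a single added penalty term changes the answer entirely:

```python
# Each candidate plan for "take me to the airport" has a travel time and
# a recklessness score that the passenger cares about.
plans = {
    "sensible drive":    {"minutes": 40, "recklessness": 0},
    "aggressive drive":  {"minutes": 25, "recklessness": 7},
    "movie-chase drive": {"minutes": 15, "recklessness": 10},
}

def misspecified_score(plan):   # "as fast as possible" and nothing else
    return -plan["minutes"]

def aligned_score(plan):        # fast, but recklessness is heavily penalized
    return -plan["minutes"] - 10 * plan["recklessness"]

def best(score):
    return max(plans, key=lambda name: score(plans[name]))

print(best(misspecified_score))  # -> movie-chase drive: literally what we asked for
print(best(aligned_score))       # -> sensible drive: what we actually wanted
```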

 


 

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

 


  The Recent Interest in Artificial Intelligence Safety

 

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

 


 

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones that experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

 


 

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control? FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI’s position is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

 


The Top Myths About Advanced Artificial Intelligence

 

A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.

 


 

TIMELINE MYTHS:

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.   One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College [...] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

 


 

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

 


 

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

 


 

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.


Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

 


Myths About the Risks of Superhuman Artificial Intelligence

 

Many AI researchers roll their eyes when they see this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about.

 


 

That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.  If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

 


 

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

 


 

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

 

 

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding. The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

 


The Interesting Controversies of Artificial Intelligence 

 

Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way?

 

 


 

Futurist and former Google engineer Ray Kurzweil has said he believes humans will reach immortality by 2045, according to YouTuber Adagio, who discussed Kurzweil’s theory in a two-part YouTube series. Kurzweil believes that humanity is moving toward a singularity in 2045, powered by exponential growth: the idea that when a field becomes interesting, more resources are invested in it, so more research can be done and the field advances at ever greater speed. The singularity will come at the end of six epochs, according to Kurzweil, who says we have already passed through four: physics and chemistry, biology and DNA, brains, and technology. According to the theory, humanity is currently in the midst of the fifth epoch, the merger of human technology with human intelligence.

 


Should We Be Wary of the Risks That Powerful Artificial Intelligence Systems Could Pose to Humanity?

 

How will people become immortal? Kurzweil says that because of exponential growth in the fields of genetics, nanotech, and robotics, people will eventually be able to map the human brain at a high enough resolution that it can be mimicked exactly in artificial intelligence. As for humanity’s fear of AI so advanced that it is an exact copy of humans, Kurzweil believes that not only will people develop technology to safeguard their existence, but they will also accept the AI’s advanced human-like abilities. So while humanity is constantly advancing in technology, and in the AI field specifically, we will have to wait until 2045 to see whether humanity can indeed live forever.