Artificial Intelligence: Will Machines Be Smarter Than Us In The Future?

Introduction

Starting with the Turing test in 1950, Artificial Intelligence has been in the public eye for decades. It has flourished and stagnated over time, following the Gartner hype cycle. However, driven by the development of big data, machine learning and deep learning technology, Artificial Intelligence has returned to the stage in the 21st century and plays a growing role in all aspects of life. Millions of consumers interact with AI directly or indirectly on a day-to-day basis via virtual assistants, facial-recognition technology, mapping applications and a host of other software (Divine, 2019).

History and development

When talking about Artificial Intelligence, robots jump into most people’s minds first. However, robots are just one kind of application of Artificial Intelligence. Artificial Intelligence has a broad definition and refers to all intelligence demonstrated by machines. Accordingly, Artificial Intelligence is commonly divided into three categories: Artificial Narrow Intelligence, Artificial General Intelligence and Artificial Superintelligence.

Artificial Narrow Intelligence, which is also known as Weak AI, is Artificial Intelligence that implements a limited part of the mind and is focused on one narrow task. Artificial General Intelligence, which is also referred to as strong AI, is the intelligence of a machine that can understand or learn any intellectual task that a human being can. Artificial Superintelligence usually means a hypothetical system that possesses intelligence far surpassing that of the brightest and most talented human minds. However, most of the Artificial Intelligence we talk about nowadays is Artificial Narrow Intelligence.

By the 1950s, the British polymath Alan Turing had suggested that if humans use available information as well as reason to solve problems and make decisions, so can machines (Anyoha, 2017). Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. Through a text-only channel, such as a computer keyboard and screen, if the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.

Five years later, Allen Newell, Cliff Shaw, and Herbert Simon presented their proof of Turing’s concept at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956 (Anyoha, 2017). Although the conference fell short of McCarthy’s expectations, Artificial Intelligence was nonetheless founded there as an academic discipline, and John McCarthy was therefore honored as one of the “founding fathers” of Artificial Intelligence.

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem (Anyoha, 2017). However, the limitations of hardware soon became apparent: computers did not have enough memory or speed to handle the computations required. The development of AI stagnated for several years, until “deep learning” techniques and “expert systems” were popularized in the 1980s.

Limited by technology and funding, AI techniques did not grow much in the late 1980s and early 1990s. During the 1990s and 2000s, however, many of the landmark goals of Artificial Intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step towards artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward (Anyoha, 2017).

Today, we are living in the age of big data, and Artificial Intelligence applications are everywhere.

Risk and ethical issues

This development demonstrates how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decision-making within organizations, and improving efficiency and response times (West, 2018).
