
Commentary - International Journal of Adult and Non Formal Education (2021) Volume 2, Issue 4

A Brief History of AI and Education

Margie Meacham*
 
Department of NeuroScience, Learningtogo, USA
 
*Corresponding Author:
Margie Meacham, Department of NeuroScience, Learningtogo, USA, Email: margie@learningtogo.info

Received: 29-Jul-2021 Published: 19-Aug-2021

Commentary

Humans have been using machines to augment our capabilities for a long time, so it’s only natural that we have reached the point of trying to replicate our cognitive processes in some of those machines. What follows is a brief history of artificial intelligence. If you search for AI timelines online, each result will offer a slightly different list of key events; these are a few I’ve called out for our purposes as learning professionals.

1763

Mathematician Thomas Bayes develops Bayesian inference, a decision-making technique that is later adopted for teaching machines (and people) how to make decisions by recognizing patterns and weighing predictions by their probability.
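As a minimal sketch (not part of the original commentary), Bayes’ rule can be written in a few lines of Python. The scenario and the function name bayes_update are hypothetical, chosen to suit a learning context: we update our belief that a learner has mastered a topic after seeing one correct quiz answer.

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Hypothetical example: how much should one correct quiz answer raise
# our belief that a learner has mastered the topic?

def bayes_update(prior, likelihood, false_positive_rate):
    # P(E) = P(E | H) * P(H) + P(E | not H) * P(not H)
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    # Posterior: P(H | E)
    return likelihood * prior / evidence

# Assume 30% of learners have mastered the topic, masters answer
# correctly 90% of the time, and non-masters guess right 25% of the time.
print(bayes_update(prior=0.30, likelihood=0.90, false_positive_rate=0.25))
# -> about 0.61: one correct answer raises our belief from 30% to ~61%

The assumed numbers are illustrative; the point is that the same update rule, applied repeatedly to incoming evidence, is what lets machines make predictions from observed patterns.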

1837

Charles Babbage designs the Analytical Engine, a machine intended to perform mathematical calculations. The machine requires a set of instructions, a program, to perform this task. His colleague Ada Lovelace writes what many historians regard as the first program for the design, although the engine itself is never fully built. Many consider Babbage the inventor of what would later be called the computer, and Lovelace the first programmer.

1898

Inventor and electrical engineer Nikola Tesla suggests that it might be possible to build a machine that is operated through a program, using a “borrowed mind” and wireless communication.

1939

Westinghouse unveils Elektro, an early humanoid robot. The machine can deliver recorded responses to a limited number of questions, walk, smoke a cigarette, and blow up balloons, and it is accompanied by a robotic dog, Sparko. Elektro is an entertaining curiosity rather than a serious attempt at artificial intelligence, but it draws attention to the potential of what would eventually become known as robotics.

1943

Warren S. McCulloch and Walter Pitts propose that a network of artificial neurons could produce a machine that thinks, with each neuron modeled as a simple all-or-nothing switch whose on-or-off firing maps naturally onto binary logic.
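To make the idea concrete, here is a hypothetical sketch (not from the commentary) of a single McCulloch-Pitts unit in Python. The function name and the choice of weights are illustrative assumptions.

# A McCulloch-Pitts neuron: binary inputs, binary output. The unit
# "fires" (outputs 1) only when the weighted sum of its inputs reaches
# a fixed threshold; otherwise it stays off (0).

def mcculloch_pitts(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights of 1 and a threshold of 2, one unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], weights=[1, 1], threshold=2))
# Only the input (1, 1) fires: the all-or-nothing behavior McCulloch
# and Pitts likened to binary logic.

Networks of such units, chained together, are the conceptual ancestors of today’s artificial neural networks.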

1950

With the famous opening line of his paper, “I propose to consider the question, ‘Can machines think?’” Alan Turing predicts that machines might one day mimic the cognitive functions of humans. He proposes a test to identify this phenomenon, which later becomes known as the Turing Test. While the test sidesteps the definition of intelligence altogether, Turing argues that as long as we can be convinced we are communicating with a person, we can consider the machine “intelligent.”

1955

John McCarthy coins the term artificial intelligence in a proposal for a research conference at Dartmouth College in Hanover, New Hampshire (the conference itself convenes in 1956). The gathering is one of the first times that computing scholars contemplate the use of human language to program computers; the use of neural nets to simulate human thought processing in computers; machine learning, a “truly intelligent machine that will carry out activities which may best be described as self-improvement”; the ability of a computer to form abstract conclusions and engage in “orderly thinking”; and creativity. While the ambitious conference doesn’t achieve everything it sets out to do, it establishes the blueprint for progress in AI and machine learning from that point to the present day.

1997

IBM’s supercomputer Deep Blue becomes the first computer to defeat a reigning world chess champion, winning a match against grandmaster Garry Kasparov. Many doubt that a machine could really have performed so well and accuse IBM of cheating; to skeptics, the computer plays “too human” to be credible.

2011

IBM Watson defeats the best human players on the popular television game show Jeopardy! Although the victory is a stunning achievement, it is nowhere near as “intelligent” as it appears: Watson runs a program that searches a vast database and produces a response faster than the human competitors can. It is, however, one of the first times a computer is able to understand and respond to questions posed in natural language, paving the way for the many uses of natural language processing in later applications.

2014

“Eugene Goostman,” a computer program known as a chatbot, appears to have fooled enough judges to become the first AI to pass the Turing Test. Further scrutiny shows that although the program’s performance is interesting, it fools only a third of the judges and dodges some of the questions put to it, invalidating the result.

2016

Russian operatives deploy AI during the U.S. presidential election, using bots to post social media comments designed to mislead voters and suppress turnout among targeted groups. This is neither the first time nor the last that Russia and other actors use such tactics to influence the outcome of an election.

2017

DeepMind’s AlphaGo caps a series of victories over human players at Go, widely considered the most complex game in the world. In a three-game match, the machine defeats world champion Ke Jie, who comments, “I thought I was very close to winning the match in the middle of the game, but that might not have been what AlphaGo was thinking.”

Where Is AI in Education and Talent Development?

The International Data Corporation (IDC) forecasts that businesses worldwide will spend $77.6 billion on cognitive and AI systems in 2022. The highest anticipated spending is for:

• Automated customer service agents

• Automated threat intelligence and prevention systems

• Sales process recommendation and automation

• Automated preventive maintenance

• Pharmaceutical research and discovery

• Consumer shopping advisors and product recommendations

• Digital assistants for enterprise knowledge workers

• Intelligent data-processing automation

Conflict of Interest

The author declares no conflict of interest.

Acknowledgement

None.