Artificial intelligence – What is AI? In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success. The term “artificial intelligence” is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving.
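The agent definition above can be made concrete with a minimal sketch. Everything here is illustrative: the percept, action names, and reward function are hypothetical stand-ins invented for this example, not any standard API.

```python
# Minimal sketch of the "intelligent agent" idea: perceive the
# environment, then pick the action with the best estimated payoff.
# All names are hypothetical, chosen only for illustration.

def choose_action(percept, actions, estimate_reward):
    """Return the action whose estimated reward is highest for this percept."""
    return max(actions, key=lambda a: estimate_reward(percept, a))

# Toy environment: the percept is a number and the agent prefers
# to move toward a fixed target value of 10.
def toy_reward(percept, action):
    target = 10
    if action == "up":
        return target - percept   # moving up pays off when below the target
    return percept - target       # moving down pays off when above it

action = choose_action(percept=3, actions=["up", "down"], estimate_reward=toy_reward)
print(action)  # "up", since 3 is below the target of 10
```

The sketch compresses the definition to one line: an agent is whatever maps percepts to success-maximizing actions; real systems differ only in how elaborate that mapping is.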

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. It’s not as futuristic as it seems. In fact, artificial intelligence is already widely used in digital assistants like Apple’s Siri and its rivals built by Google, Microsoft and Facebook, as well as in smart-home products like Google Home and Amazon Echo.

The idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism. By the early 1980s, AI research had been revived by the commercial success of expert systems, and by 1985 the market for AI had reached over a billion dollars.
In the twenty-first century, AI techniques have experienced a resurgence following advances in computing power, the availability of large amounts of data, and improved theoretical understanding, and they have become an essential part of the technology industry.

By the mid-2010s, machine learning applications were in use throughout the world. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects using AI within Google grew from “sporadic usage” in 2012 to more than 2,700 projects. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.