
A Sneak Peek At: Artificial Intelligence, Machine Learning And Deep Learning

Murale Narayanan, Director - Global Competency, DELL EMC
A digital transformation leader with over 20 years of IT experience and over 10 years of experience presenting & implementing digital transformation strategies.

AI is a very recent field in the history of science and human civilization. Only about sixty years old, it started with a simple but fundamental question: can machines think? Though many leading scientists pondered this question, it was Alan Turing who postulated practical theories on programmable and ‘intelligent’ machines. In 1956, a group of pioneering computer scientists including Marvin Minsky and John McCarthy founded the field. In fact, it was John McCarthy, founder of the Stanford AI Lab, who coined the very term ‘artificial intelligence’, a stepping stone for the field’s goal - to build machines that think.

As a first attempt, scientists drew inspiration from human thinking and started building machines that resemble and, to an extent, replicate it. The idea of human intelligence includes the ability to reason, see, hear, speak, move around and make decisions, among others.

AI stemmed from a core and foundational dream many years ago – thinking machines. Since its inception, it has proliferated into multiple sub-fields such as robotics, computer vision, natural language processing and speech recognition, to name a few. Then a very important development happened around the 80s and 90s – a sister field called machine learning started to blossom; a field that combined statistics and mathematics with computer science to advance the era of ‘intelligent machines’, bringing us closer to AI. Fast forward to the 21st century: deep learning, with its roots in neuroscience and taking cues from how the human brain functions, has produced neural networks that surpass the accuracy of earlier machine learning algorithms, becoming the de facto choice for many applications.

Thanks also to the past decade, which witnessed three critical factors contributing to the exponential advancement of AI: availability of and access to huge amounts of data (contributed by big data powered by crowd sourcing), high-compute hardware (GPUs, cheaper cloud computing) and advances in commercial and academic research (deep learning papers).

The convergence of these factors brought us the AI boom that we witness today, and many organizations are investing heavily in this and related areas. Most of the efforts started in the early 2000s by companies that invested in AI-related work are now seeing research results move into products. The smartphone revolution, driven by wireless connectivity and the availability of cloud, has allowed for virtually unlimited storage of data and an unending ability to churn it. However, there do exist a few concerns about the impact of AI on our society and our future, the ‘technological singularity’. But as advancements and adoption of AI continue to accelerate, one thing is certain: the impact is going to be profound.

Machine learning and deep learning are tools and techniques under the umbrella of AI. To understand their application, let us look at self-driving cars.

Today, machine learning algorithms are extensively used in self-driving cars. Integrating sensor data processing into the car's ECU (Electronic Control Unit) allows machine learning to be used for various tasks. A few applications include evaluating the driver's condition, lane-keep assist, identifying and tracking pedestrians among other objects of interest, and analyzing the driving scenario. This is achieved by fusing data from different sensors such as radars, cameras and/or the IoT (Internet of Things).
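As a rough illustration of the sensor-fusion idea, the sketch below concatenates camera-derived and radar-derived features for each frame and feeds them to a single classifier. It is a minimal sketch only: the feature arrays, dimensions and labels are hypothetical placeholders rather than real automotive data, and scikit-learn's RandomForestClassifier is just one possible choice of model.

```python
# Minimal sketch of feature-level sensor fusion: camera and radar features per
# frame are concatenated and fed to one classifier, e.g. to flag whether a
# pedestrian is present. All data here is random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_frames = 1000
camera_features = rng.normal(size=(n_frames, 64))  # e.g. embeddings from an image model
radar_features = rng.normal(size=(n_frames, 8))    # e.g. range, velocity, azimuth statistics
labels = rng.integers(0, 2, size=n_frames)         # 1 = pedestrian present, 0 = not

# Fuse the two sensor streams by simple concatenation per frame
fused = np.hstack([camera_features, radar_features])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused[:800], labels[:800])
print("held-out accuracy:", clf.score(fused[800:], labels[800:]))
```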

Broadly, machine learning algorithms come in two types – supervised and unsupervised. The difference is that supervised learning requires labelled training data, from which the algorithm learns until it reaches the desired level of confidence (the minimization of error). Supervised learning algorithms can be sub-categorized into regression, classification and anomaly detection, supported by dimensionality reduction. Unsupervised learning algorithms, on the other hand, are more complex in the sense that there is no training/ground-truth data, and their results require interpretation by humans (subject matter experts).
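The toy sketch below contrasts the two families using scikit-learn: a supervised classifier trained on labelled examples and scored on held-out data, and an unsupervised clustering algorithm that only groups similar samples. The Iris dataset and the specific algorithms are chosen purely for illustration.

```python
# Supervised vs unsupervised learning in a few lines, using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model sees labelled training data and minimizes its error on it
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised (classification) test accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels are given; the algorithm only groups similar samples,
# and a subject matter expert must interpret what each cluster represents
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("unsupervised (clustering) cluster sizes:",
      [int((clusters == k).sum()) for k in range(3)])
```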

Deep learning (DL) is part of an extended family of machine learning techniques based on learning representations of data, as opposed to task-specific algorithms. Deep learning architectures include recurrent neural networks, deep belief networks and convolutional neural networks, which are applied in areas such as computer vision, NLP, speech recognition and machine translation, where they have produced results comparable to, and in some cases better than, those of human experts.
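To make the convolutional architecture concrete, here is a minimal sketch of a tiny CNN image classifier. It uses PyTorch purely as one common choice of framework (the article does not prescribe one), and the input shape (3-channel 32x32 images) and number of classes are illustrative assumptions.

```python
# A tiny convolutional neural network for image classification (illustrative only).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN()
dummy_batch = torch.randn(4, 3, 32, 32)  # four fake images
print(model(dummy_batch).shape)          # torch.Size([4, 10]) -> one score per class
```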

DL is extensively used in self-driving cars to process sensory data and make informed decisions. These systems are used to detect roads, footpaths, signs, traffic lights, pedestrians, cars, obstacles, the environment and human actions, among others. DL systems have proved to be powerful tools, but they have some properties that may affect their practicality, especially when it comes to autonomous cars.

The two major concerns are unpredictability and vulnerability to being fooled.

Despite their accuracy (they outperform humans in many cases), they are still unable to generalize to unseen situations, making them less trustworthy for full autonomy. Since we do not yet fully understand how they work, it can be challenging to diagnose and immediately correct their poor performance in novel/unseen conditions. Thus, for now, DL-driven autonomous-car research is limited to the experimental phase. I believe more advancements are needed before DL or other algorithms can move from the labs to the real world.
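The "being fooled" concern can be illustrated with adversarial examples. The sketch below shows the mechanics of the fast gradient sign method (FGSM), which nudges the input pixels in the direction that increases the model's loss; with a trained network, such an imperceptible perturbation is often enough to flip the prediction. The model and input here are random stand-ins, purely to show the mechanics.

```python
# FGSM adversarial perturbation, shown with a placeholder model and input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.randn(1, 3, 32, 32, requires_grad=True)  # stand-in "camera frame"
true_label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels
loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03                                           # small perturbation budget
adversarial = (image + epsilon * image.grad.sign()).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after :", model(adversarial).argmax(dim=1).item())
```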

Having said that, deep learning algorithms have revolutionized many real-world use cases by enabling them with AI. Self-driving cars, healthcare AI and movie recommendations have all improved across various horizons. AI turns science fiction into reality, with deep learning helping it move from its present state to its future one.