August 28, 2019

Rise of Model-Based Reasoning

By: Adel Elmessiry

Artificial intelligence (AI), sometimes known as machine intelligence, has been rapidly invading every aspect of our lives. But what is AI? AI is defined as intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans.

However, the term “artificial intelligence” is increasingly used to describe machines that mimic “cognitive” functions we associate with the human mind, such as “problem-solving.” A major segment of recent AI development has been “sentiment” analysis. Training AI to “read” and interpret human emotion from facial features and from the words used to communicate is not science fiction, but reality.

The first generation of rudimentary AI depended on compiling a profile from collected data, representing the world of human knowledge, in order to answer new questions. IBM’s Watson won the Jeopardy! challenge in 2011 by accessing an immense amount of information as well as by learning how its world-class competitors played the game.

IBM now predicts that by the end of 2020, the global creation of digital data will double every day. As data generation achieves exponential scale, data truly has become the world’s most valuable commodity.

As more and more data becomes available, a different approach is being considered: model-based reasoning. Model-based reasoning is primarily an inference method used in expert systems that is built on a model of the physical world. The model is the lens through which the data is interpreted.

With this approach, the main focus of application development shifts to building the model. At run time, an “engine” then combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.

Adopting such an approach requires training the AI model and then using it as an “engine” for our applications. This shift in implementation forces us to think about the ramifications of how the model is taught.

The same data set can be used to teach the same AI construct yet yield two different models. How so? Because the biases involved in selecting the training subset will be reflected in the construction of the model and will thus metastasize into the predictions it provides.
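A toy illustration of this effect, using entirely synthetic numbers rather than any real system: the same learning procedure, applied to two differently biased subsets of one data set, produces models that disagree on the same prediction.

```python
# Toy illustration of selection bias (synthetic data): two training subsets
# drawn from the SAME data set, but sampled with different biases, yield two
# different "models" and therefore different predictions.
import random

random.seed(42)

# Full data set: 1,000 measurements, half clustered low and half high.
population = [random.gauss(50, 5) for _ in range(500)] + \
             [random.gauss(80, 5) for _ in range(500)]

def fit_mean(training_subset):
    """The simplest possible 'model': predict the mean of the training data."""
    return sum(training_subset) / len(training_subset)

# Selection bias A: the training subset mostly contains low-valued records.
subset_a = [x for x in population if x < 65] + random.sample(population, 50)
# Selection bias B: the training subset mostly contains high-valued records.
subset_b = [x for x in population if x > 65] + random.sample(population, 50)

model_a = fit_mean(subset_a)
model_b = fit_mean(subset_b)

# Same underlying data, same learning procedure, two models that disagree.
print(f"model A predicts {model_a:.1f}, model B predicts {model_b:.1f}")
```

Nothing about the learning procedure changed between the two runs; only the selection of training examples did, and that alone is enough to move the model’s predictions.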

It is now more important than ever to understand how the lens of model-based reasoning is being created. When identical data sets can generate different interpretations depending on the model used, bias (intended or not) becomes a critical concern.

In the end, we need to be very aware of the context in which the model has been created, not just the data it was fed.

For more information see:

Artificial Intelligence

Model Based Reasoning

__________________________________________________________

Dr. Adel Elmessiry is a computer engineer and data scientist holding a doctorate in artificial intelligence and machine learning. A successful serial entrepreneur, Adel has created tech platforms with AI in health care, education and law.