25.05.2023 13:00


Scientists are working to make artificial intelligence understandable

Researchers have developed an innovative method for evaluating how artificial intelligence (AI) understands data, improving transparency and trust in AI-based diagnostic and predictive tools.
Photo: Unsplash

This approach helps users understand the inner workings of the "black boxes" of AI algorithms, especially in medical applications and in the context of the upcoming European AI Act.

A team of researchers from the University of Geneva (UNIGE), the University Hospitals of Geneva (HUG) and the National University of Singapore (NUS) has developed a new approach for assessing the interpretability of artificial intelligence technologies.

This breakthrough brings greater transparency and trustworthiness to AI-powered diagnostic and predictive tools. The new method sheds light on the workings of so-called "black box" AI algorithms, helping users understand what influences the results AI produces and whether those results can be trusted.

This is especially important in scenarios that have a significant impact on human health and well-being, such as the use of AI in a medical setting. The research has particular significance in the context of the upcoming European Union Artificial Intelligence Act, which seeks to regulate the development and use of AI in the EU.

The findings were recently published in the journal Nature Machine Intelligence.

Time series, i.e. data that track how a quantity evolves over time, are present everywhere: in medicine when recording heart activity with an electrocardiogram (ECG), in seismology when studying earthquakes, in meteorology when following weather patterns, or in economics when monitoring financial markets.

This data can be modeled using AI technologies to build diagnostic or predictive tools. Advances in AI and especially deep learning, which involves training a machine with the help of large amounts of data in order to interpret it and learn useful patterns, are opening the way to increasingly accurate tools for diagnosis and prediction. Still, the lack of insight into how AI algorithms work and what influences their results raises important questions about the reliability of "black box" AI technology.

"The way these algorithms work is at best non-transparent," says Professor Christian Lovis, Director of the Department of Radiology and Medical Informatics at the UNIGE Faculty of Medicine and Head of the Department of Medical Information Science at HUG and one of the authors of the study on understanding AI.

"Of course, the stakes, especially financial ones, are extremely high. But how can we trust a machine without understanding the basis of its reasoning? These questions are crucial, especially in sectors like medicine, where AI-driven decisions can affect people's health and even their lives, and in finance, where they can lead to huge capital losses.”

Interpretability methods seek to answer these questions by revealing why and how an AI arrived at a particular decision. "Knowing which elements tipped the scales for or against a solution in a given situation, thus allowing at least some transparency, increases our confidence in these tools," says Assistant Professor Gianmarco Mengaldo, director of the MathEXLab laboratory at the Faculty of Design and Engineering, National University of Singapore.

“However, the interpretability methods currently in wide use in practical applications and industrial workflows produce very different results, even when applied to the same task and data set. This raises an important question: which method is correct, if there is supposed to be a single correct answer? Evaluating interpretability methods is therefore becoming just as important as interpretability itself.”


Differentiating between important and irrelevant information

Being able to tell which parts of the data matter is key to fully understanding AI technologies. When an AI analyzes images, for example, it focuses on a few salient features.

Hugues Turbé, a PhD student in Professor Lovis' lab and the study's first author, explains: "For example, an AI can distinguish a picture of a dog from a picture of a cat. The same principle applies to the analysis of time series: the machine must be able to select the elements on which to base its reasoning. In the case of ECG signals, this means coordinating the signals from the different electrodes to assess possible fluctuations that would indicate a particular heart disease."
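
To make the idea concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of one common way to estimate which time steps a time-series classifier relies on: occlude short windows of the signal and watch how the prediction changes. The names occlusion_relevance and predict_proba, and the toy spike detector, are illustrative assumptions.

```python
import numpy as np

def occlusion_relevance(signal, predict_proba, window=20):
    """Score each time step by how much the predicted probability drops
    when a short window around it is replaced with a neutral value."""
    baseline = predict_proba(signal)
    relevance = np.zeros(len(signal))
    for start in range(0, len(signal), window):
        masked = signal.copy()
        masked[start:start + window] = signal.mean()      # hide this window
        relevance[start:start + window] = baseline - predict_proba(masked)
    return relevance

# Toy usage: a "classifier" that fires on a spike near t = 500.
signal = np.random.default_rng(0).normal(0, 0.1, 1000)
signal[500:520] += 3.0                                     # the feature the model reasons on
predict_proba = lambda x: 1.0 / (1.0 + np.exp(-(x[480:540].max() - 1.5)))
print(np.argmax(occlusion_relevance(signal, predict_proba)))  # a time step near 500
```

The window that, when hidden, causes the largest drop in the prediction is the one the classifier is "reasoning on"; for real ECG data the same logic would be applied per electrode channel.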

Choosing an interpretability method among all those available for a particular purpose is not easy. Different interpretability methods often deliver very different results, even when applied to the same data set and task.

To address this challenge, the researchers developed two new evaluation methods to help understand how AI makes decisions: one to identify the most important parts of the signal and the other to assess their relative importance to the final prediction.

To assess interpretability, they hid part of the data to check whether it was relevant to the AI's decision-making. Hiding data in this way, however, can itself distort the results, so they trained the model on an augmented data set that already includes such hidden data, which kept the model balanced and accurate. The team then devised two metrics for the performance of interpretability methods, showing whether the AI uses the right data to make decisions and whether all the data is weighed fairly.
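
As an illustration of this kind of evaluation, the following hypothetical Python sketch builds a toy labeled time-series set, uses a trivial stand-in classifier, and compares the accuracy drop when the supposedly relevant time steps are hidden versus when random time steps are hidden. The data, the predict function and the masking scheme are assumptions for illustration, not the published pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption, not the study's data): class 1 signals carry a
# bump at time steps 40-60, class 0 signals do not.
T, n = 100, 200
X = rng.normal(0, 0.3, size=(n, T))
y = rng.integers(0, 2, size=n)
X[y == 1, 40:60] += 2.0

def predict(X):
    """Trivial stand-in classifier: thresholds the mean of the informative window."""
    return (X[:, 40:60].mean(axis=1) > 1.0).astype(int)

def accuracy_after_masking(X, y, masked_steps):
    """Hide the selected time steps (replace them with the signal mean)
    and re-evaluate the classifier, as in perturbation-based checks."""
    X_masked = X.copy()
    for i, idx in enumerate(masked_steps):
        X_masked[i, idx] = X[i].mean()
    return (predict(X_masked) == y).mean()

k = 20
relevant = [np.arange(40, 60)] * n                            # what a faithful method should flag
random_steps = [rng.choice(T, size=k, replace=False) for _ in range(n)]

print("masking the flagged steps:", accuracy_after_masking(X, y, relevant))
print("masking random steps:     ", accuracy_after_masking(X, y, random_steps))
# A large gap between the two accuracies indicates the flagged steps
# really are the ones the model uses.
```

In this sketch, hiding the flagged window collapses accuracy while hiding random steps barely changes it; the size of that gap is one simple way to score an interpretability method.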

"Our method aims to evaluate the model that we will actually use in our operational area, thereby ensuring its reliability," explains Hugues Turbé. To continue the research, the team developed a synthetic data set that is available to the scientific community to easily evaluate any new AI designed to interpret sequences over time.

The future of artificial intelligence in medicine

The team plans to test their method in a clinical setting, where fear of AI is still widespread. "Building trust in the evaluation of AI is a critical step towards its adoption in clinical settings," explains Dr. Mina Bjelogrlic, who leads the machine learning team in Lovis' division and is a co-author of the study. "Our research focuses on evaluating AI based on time series, but the same methodology could be applied to AI based on other types of data, such as images or text. The goal is to ensure that AI is transparent, comprehensible and reliable for its users."

Understanding the inner workings of AI is key to building confidence in its use, especially in critical sectors such as medicine and finance. The research, conducted by a team from the University of Geneva, the University Hospitals of Geneva and the National University of Singapore, offers an innovative method for assessing the interpretability of artificial intelligence that helps users understand why and how its decisions are made. This approach is particularly important in medical applications, where AI-driven decisions can be a matter of life and death.

