For Artificial Intelligence (AI) to become more reliable and adaptable, it must develop a form of “understanding” of the world that allows it to make sound decisions. Dr Jesse Heyninck, an Honorary Research Associate in the Department of Computer Science at the University of Cape Town (UCT), has set his sights on developing a framework that integrates symbolic and sub-symbolic AI, creating a hybrid model that aims to improve trust in AI systems.
Dr Heyninck was recently awarded a prestigious P-rating by the National Research Foundation (NRF), a recognition reserved for promising young researchers who have held a doctorate or equivalent qualification for less than five years. The rating identifies those whose published work demonstrates exceptional potential to become future international leaders in their field. UCT is home to eight of the 13 P-rated researchers nationwide.
Since earning his PhD in Philosophy from Ruhr-Universität Bochum, Heyninck has held positions in computer science departments and worked with leading AI research groups around the world, including in his current role in UCT’s Department of Computer Science.
“It is important to understand why an AI-system came to a certain conclusion.”
His PhD focused on philosophical logic and formal argumentation, so the shift to AI, a field in which formal logic also plays an important role, was both timely and natural. His philosophical background also brings a fresh perspective to the field.
His research expertise lies in Knowledge Representation and Reasoning (KRR), a branch of AI focused on enabling machines to reason logically and transparently. This branch uses rule-based systems that give AI a structured framework for its reasoning and decision-making.
According to Heyninck, many popular AI systems (like ChatGPT and BERT) do not genuinely “understand” the world. Instead, they rely on identifying patterns in vast datasets without “knowing” the meaning behind the information. This can lead to errors and unpredictable behaviour. “Furthermore, in many fields of application it is important to understand why an AI-system came to a certain conclusion. Using KRR in AI-systems is thus crucial to ensure these systems are reliable, safe and transparent,” Heyninck added.
Hybrid methodology improves AI accessibility
To address this, the researcher aims to develop a hybrid AI methodology that combines the strengths of symbolic and sub-symbolic AI. Symbolic AI works from clear, logical rules, which makes it transparent and reliable, though it is often regarded as rigid. Sub-symbolic AI, on the other hand, is flexible and learns from large datasets, but its internal workings are opaque to the user, making it hard to interpret and verify.
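To make the contrast concrete, here is a small, purely illustrative Python sketch (not code from Heyninck’s research): a symbolic classifier whose every conclusion can be traced back to a named rule, next to a tiny learned perceptron whose decision lives in numeric weights that do not explain themselves. The rules, features and training data are all invented for the example.

```python
# Illustrative contrast: transparent symbolic rules versus an opaque learned model.

# --- Symbolic side: explicit rules, fully traceable ------------------------
RULES = [
    # (condition attribute, required value, conclusion)
    ("has_feathers", True, "bird"),
    ("lays_eggs", True, "bird"),
]

def symbolic_classify(facts):
    """Apply rules in order; return the conclusion and the rule that fired."""
    for attribute, required, conclusion in RULES:
        if facts.get(attribute) == required:
            return conclusion, f"because {attribute} is {required}"
    return "unknown", "no rule applied"

# --- Sub-symbolic side: a perceptron trained on toy data -------------------
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn numeric weights from examples; the weights carry no
    human-readable explanation of the decision."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    print(symbolic_classify({"has_feathers": True}))
    # -> ('bird', 'because has_feathers is True'): the "why" is explicit.

    # Made-up training data where the label simply follows the first feature.
    w, b = train_perceptron([(1, 0), (0, 1), (1, 1), (0, 0)], [1, 0, 1, 0])
    print("learned weights:", w, b)
    # The weights classify correctly, but they do not explain themselves.
```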
“Combining these approaches will improve the state-of-the-art algorithms and techniques as it allows for a plug-and-play application of high-quality hybrid AI, opening these techniques to researchers and AI developers without a need for deep theoretical expertise,” Heyninck said.
Some of his key contributions include developing methods to prioritise rules in argument-based AI while keeping the reasoning reliable. He has also developed novel ways to rank arguments by strength, helping an AI system settle on the most reliable line of reasoning. Ultimately, his work has made AI more explainable, so it is easier to trace why a system reached a particular decision, and more flexible in accommodating a wider range of reasoning styles.
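As a rough illustration of what rule priorities do in this kind of reasoning, the sketch below encodes the classic “penguins are birds, birds normally fly, penguins do not fly” example, where a higher-priority rule overrides a conflicting lower-priority one. It is a generic toy, not Heyninck’s formalism; the rules and priority values are invented.

```python
# A minimal sketch of prioritised defeasible rules ("Tweety"-style example).

# Each rule: (priority, premises, conclusion). Higher priority wins conflicts.
RULES = [
    (1, {"bird"}, "flies"),         # birds normally fly
    (2, {"penguin"}, "not flies"),  # penguins do not fly (more specific, stronger)
    (1, {"penguin"}, "bird"),       # penguins are birds
]

def negation(literal):
    """Return the complementary literal, e.g. 'flies' <-> 'not flies'."""
    return literal[4:] if literal.startswith("not ") else "not " + literal

def conclude(facts):
    """Forward-chain over the rules, letting higher-priority conclusions
    block or override conflicting lower-priority ones."""
    derived = {fact: float("inf") for fact in facts}  # facts beat any rule
    changed = True
    while changed:
        changed = False
        for priority, premises, conclusion in RULES:
            if premises <= set(derived):
                rival = negation(conclusion)
                if derived.get(rival, -1) >= priority:
                    continue  # a stronger (or equal) conflicting conclusion stands
                if derived.get(conclusion, -1) < priority:
                    derived.pop(rival, None)
                    derived[conclusion] = priority
                    changed = True
    return derived

if __name__ == "__main__":
    print(conclude({"penguin"}))
    # The higher-priority penguin rule blocks "flies":
    # {'penguin': inf, 'not flies': 2, 'bird': 1}
```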
His influence in the field is reflected in multiple citations of his work in key texts, including the 2018 and 2021 Handbooks of Formal Argumentation, and in the empirical validation of his work on defeasible conditionals by researchers in China. He has an h-index of 13 according to Google Scholar.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.