Yiwen Peng, Thomas Bonald and Fabian Suchanek received the best paper award at ISWC 2025 for their paper on FLORA: Unsupervised Knowledge Graph Alignment by Fuzzy Logic.



Simon Coumes
Contextual knowledge representation for neurosymbolic Artificial Intelligence reasoning
The field of Knowledge Representation and Reasoning is concerned with representing information about reality in a form that is both human-readable and machine-processable. It has been part of artificial intelligence since the field's inception and has produced many important formalisms and systems. One key aspect of knowledge is the context in which it is expressed. This was identified early on in the field and matches our common experience: understanding a statement or judging its validity often requires knowing in what context it was meant. Historically, there has been some work aiming to produce logics that implement a general notion of context. None of these saw wide adoption, in part because they lacked sufficient expressive power or were not sufficiently usable.
This dissertation presents Qiana, a logic of context powerful enough for almost all types of context representation. It is also compatible with various automated reasoning tools, as demonstrated by the accompanying code, which enables automated reasoning with Qiana. The code uses the pre-existing theorem prover Vampire, though any other compatible prover can be used instead. By providing a powerful logic for context representation together with concrete computations at little overhead, Qiana paves the way for a wider adoption of logics of context, making it possible to build further reasoning on top of such logics, as has been done, for example, for epistemic logics and description logics.
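To give a flavour of context representation, here is a small illustration in McCarthy-style ist notation; this is a generic stand-in, not Qiana's actual syntax. Two statements that would contradict each other can coexist once each is asserted only within its own context:

```latex
% "flat(earth)" holds in Alice's belief context, while its negation holds
% in the scientific context; a logic of context keeps both consistent.
\mathit{ist}\bigl(c_{\mathrm{Alice}},\ \mathit{flat}(\mathit{earth})\bigr)
\;\wedge\;
\mathit{ist}\bigl(c_{\mathrm{science}},\ \neg\,\mathit{flat}(\mathit{earth})\bigr)
```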
François Amat
Mining Expressive Cross-Table Dependencies in Relational Databases
This thesis addresses the gap between what relational database schemas declare and the richer set of cross-table rules that actually govern real-world data. It introduces MATILDA, the first deterministic system capable of mining expressive first-order tuple-generating dependencies (FO-TGDs) with multi-atom heads, existential witnesses, and recursion directly from arbitrary relational databases, using principled, database-native definitions of support and confidence. MATILDA uncovers hidden business rules, workflow constraints, and multi-relation regularities that schemas alone cannot capture, while ensuring reproducible results through canonicalized search and tractable pruning guided by a constraint graph. To understand when simpler formalisms suffice, the thesis also presents MAHILDA, a relational Horn-rule baseline equipped with disjoint semantics to prevent self-justifying recursion. Overall, the work shows that expressive rule mining on realistic databases is both feasible and insightful, enabling more systematic, explainable, and schema-grounded analyses of complex relational data.
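To make the rule language concrete, a tuple-generating dependency has the shape ∀x̄ (φ(x̄) → ∃ȳ ψ(x̄, ȳ)). The instance below is a hypothetical business rule (the relation names are illustrative, not mined in the thesis) showing a multi-atom head and an existential witness:

```latex
% Every placed order must be billed by some invoice sent to the customer:
\forall o, c\;\bigl(\mathit{Order}(o) \wedge \mathit{placedBy}(o, c)
  \;\rightarrow\; \exists i\;(\mathit{Invoice}(i) \wedge \mathit{bills}(i, o) \wedge \mathit{sentTo}(i, c))\bigr)
```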
Cristian Santini (University of Macerata)
Entity Linking and Relation Extraction for Historical Italian Texts: Challenges and Potential Solutions
Entity Linking and Relation Extraction enable the automatic identification of named entities mentioned in texts, along with their relationships, by connecting them to external knowledge graphs such as Wikidata. While these techniques work well on modern documents, applying them to historical texts presents significant challenges due to the diachronic evolution of language and the limited resources available for training computational models. This seminar presents recent work on developing methods and datasets for processing historical Italian texts. It will discuss the creation of a new benchmark dataset extracted from digital scholarly editions covering two centuries of Italian literary and political writing. The talk will then present approaches that enhance entity disambiguation by incorporating temporal and contextual information from Wikidata. Finally, it will detail a method for automatically constructing knowledge graphs from historical correspondence by combining multiple language models in sequence, demonstrating how these technologies can facilitate the exploration and understanding of historical archives without requiring extensive manual annotation or model training.
Yiwen Peng
FLORA: Unsupervised Knowledge Graph Alignment by Fuzzy Logic
Knowledge graph alignment is the task of matching equivalent entities (that is, instances and classes) and relations across two knowledge graphs. Most existing methods focus on pure entity-level alignment, computing the similarity of entities in some embedding space. They lack interpretable reasoning and need training data to work. To solve these issues, we introduce FLORA, a simple yet effective method that (1) is unsupervised, i.e., does not require training data, (2) provides a holistic alignment for entities and relations iteratively, (3) is based on fuzzy logic and thus delivers interpretable results, (4) provably converges, (5) allows dangling entities, i.e., entities without a counterpart in the other KG, and (6) achieves state-of-the-art results on major benchmarks.
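As a rough sketch of the fuzzy fixed-point idea, and assuming for simplicity that relation names already match across the two graphs (FLORA also aligns relations, and its actual update rules differ), similarity scores can be propagated through neighborhoods until they stabilize:

```python
# Illustrative sketch only, NOT FLORA's actual update rules: max acts as a
# fuzzy OR over candidate matches, and the monotone update (scores never
# decrease, bounded by 1) guarantees convergence.

def label_sim(e1, e2):
    """Placeholder initial similarity; a real system uses a richer signal."""
    return 1.0 if e1.lower() == e2.lower() else 0.0

def align(kg1, kg2, n_iters=10):
    """kg1, kg2: dicts mapping each entity to a set of (relation, neighbor) pairs."""
    sim = {(e1, e2): label_sim(e1, e2) for e1 in kg1 for e2 in kg2}
    for _ in range(n_iters):
        new_sim = {}
        for (e1, e2), score in sim.items():
            # Two entities match to the degree that each edge of e1 finds a
            # well-matching edge of e2 labeled with the same relation.
            supports = [
                max((sim.get((n1, n2), 0.0)
                     for (r2, n2) in kg2[e2] if r1 == r2), default=0.0)
                for (r1, n1) in kg1[e1]
            ]
            support = sum(supports) / len(supports) if supports else 0.0
            new_sim[(e1, e2)] = max(score, support)  # monotone update
        sim = new_sim
    return sim

# Toy usage: the two graphs should align entity by entity.
kg1 = {"Paris": {("capitalOf", "France")}, "France": {("hasCapital", "Paris")}}
kg2 = {"paris": {("capitalOf", "france")}, "france": {("hasCapital", "paris")}}
print(align(kg1, kg2)[("Paris", "paris")])  # -> 1.0
```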
Fabian Suchanek
Fabian spent the month of June 2025 at Sapienza University in Rome and will tell us about his experience.
Adrien Coulet
Data- and knowledge-driven approaches for step-by-step guidance to differential diagnosis
Diagnosis guidelines provide recommendations based on expert consensus that cover the majority of the population, but often overlook patients with uncommon conditions or multiple morbidities. We will present and compare two alternative approaches that provide step-by-step guidance to the differential diagnosis of anemia and lupus. The first approach relies on reinforcement learning and observational data; the second on large language models and domain knowledge.
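As a rough illustration of the reinforcement-learning formulation only (the states, actions, and rewards below are hypothetical placeholders, not the presented models), step-by-step diagnosis can be cast as an agent that repeatedly chooses between ordering another observation and committing to a diagnosis:

```python
# Illustrative placeholders only: step-by-step diagnosis as a sequential
# decision process. States are the findings observed so far; each action
# either orders a test (small cost) or commits to a diagnosis (rewarded if
# correct, penalized otherwise).
import random

def episode(policy, patient, diagnoses, test_cost=-0.1):
    state, total_reward = frozenset(), 0.0
    while True:
        action = policy(state)
        if action in diagnoses:      # commit to a diagnosis and stop
            return total_reward + (1.0 if action == patient["diagnosis"] else -1.0)
        total_reward += test_cost    # order one more observation
        state = state | {(action, patient["findings"][action])}

# Toy usage with a uniformly random policy over a hypothetical anemia work-up;
# a policy learned by reinforcement learning would replace it.
tests, diagnoses = ["hemoglobin", "ferritin"], ["iron_deficiency", "other"]
patient = {"findings": {"hemoglobin": "low", "ferritin": "low"},
           "diagnosis": "iron_deficiency"}
print(episode(lambda s: random.choice(tests + diagnoses), patient, diagnoses))
```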
Zacchary Sadeddine
Meaning Representations and Reasoning in the Age of Large Language Models
This thesis explores how to make large language models (LLMs) more reliable and transparent in their reasoning. It first examines around fifteen societal issues related to these models, such as disinformation and user over-reliance, and then investigates symbolic structures from linguistics and how they can be used to improve the performance and transparency of LLMs. It presents VANESSA, a neuro-symbolic reasoning system that combines the power of neural models with the rigor of symbolic reasoning, achieving performance comparable to LLMs while remaining transparent. Finally, it addresses the problem of verifying LLM outputs by introducing a step-by-step verification benchmark, paving the way for more interpretable, controllable, and trustworthy artificial intelligence systems.
Maximilian Egger
Robust Knowledge Graph Cleaning
High data quality is necessary to use the information represented in a dataset properly and reliably, yet the increasing volume of data makes data preparation and cleaning ever more difficult. Additionally, more diverse data structures, such as graphs, are being used for databases and need to be handled differently. This calls for robust methods that increase data integrity, scalable approaches for finding and fixing errors, and locally oriented algorithms that can focus attention where it is needed.
This talk provides an overview of my past, present, and future projects on knowledge graphs, exploring their potential for improving data cleanliness and robustness.
Nikola Simidjievski
Synthesis & Augmentation of Tabular Data in the Age of Foundation Models
Foundation models, i.e., large pre-trained high-performing models, have shown remarkable success in applications that predominantly focus on vision, language, and sound data. On the other hand, tabular data, one of the most prevalent data modalities in many critical domains of business, science, and healthcare, has seen limited benefit from these advances. Tabular data poses unique challenges related to heterogeneity, dimensionality, and scarcity, as well as a lack of explicit symmetries and implicit structure, and incomplete prior knowledge, all of which limit how we construct, train, and apply or transfer large models for tabular data.
Data synthesis is one remedy for some of these challenges: it can improve model performance in data-scarce but critical applications, and it can also serve as a data augmentation mechanism for training more robust models. Although previous research has sought to transfer the successes of generative modeling from homogeneous modalities to tabular data, defining an effective generator for tabular data remains an open problem. In this talk, I will present several novel data-centric approaches to data synthesis that focus on tabular data. Our key innovation is transforming recent pre-trained tabular classifiers into data generators and leveraging the information they have learned in the input and manifold spaces. These methods are fast, require no additional training, and can be applied to any downstream predictive model. They consistently improve performance, especially on small datasets where training well-performing models is hard. Along the way, we also uncover several properties and benefits that can inform how we design robust and performant general-purpose tabular foundation models.
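As a hedged sketch of how a pre-trained classifier can steer data synthesis (this interpolation-and-filter scheme is illustrative, not the authors' actual method), one can generate candidate rows and keep only those the classifier assigns confidently to the intended class:

```python
# Rough, illustrative sketch (not the authors' method): reuse a pre-trained
# tabular classifier `clf` (scikit-learn style, i.e. exposing predict_proba
# and classes_) to filter SMOTE-style interpolated candidate rows.
import numpy as np

def synthesize(X, y, clf, n_samples=100, threshold=0.9, seed=0):
    rng = np.random.default_rng(seed)
    classes = list(clf.classes_)
    out_X, out_y = [], []
    for _ in range(n_samples * 100):                  # bounded number of tries
        if len(out_X) >= n_samples:
            break
        c = rng.choice(np.unique(y))                  # pick a target class
        idx = np.flatnonzero(y == c)
        i, j = rng.choice(idx, size=2, replace=True)  # two same-class rows
        lam = rng.uniform()
        candidate = lam * X[i] + (1 - lam) * X[j]     # interpolate in input space
        proba = clf.predict_proba(candidate[None, :])[0]
        if proba[classes.index(c)] >= threshold:      # keep confident samples
            out_X.append(candidate)
            out_y.append(c)
    return np.array(out_X), np.array(out_y)
```

The generated rows can then be appended to the training set of any downstream predictor; no generator is trained, which is what makes such approaches fast.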
Simon Razniewski (TU Dresden)
GPTKB: Comprehensively Materializing Factual LLM Knowledge
LLMs have greatly advanced NLP and AI, and alongside their ability to perform a wide range of procedural tasks, a major success factor is their internalized factual knowledge. Since Petroni et al. (2019), analyzing this knowledge has gained attention. However, most approaches investigate one question at a time via modest-sized pre-defined samples, introducing an “availability bias” (Tversky and Kahneman, 1973) that prevents the discovery of knowledge (or beliefs) of LLMs beyond the experimenter’s predisposition. To address this challenge, we propose a novel methodology to comprehensively materialize an LLM’s factual knowledge through recursive querying and result consolidation. As a prototype, we employ GPT-4o-mini to construct GPTKB, a large-scale knowledge base (KB) comprising 101 million triples for over 2.9 million entities. This work marks a milestone in two areas: for LLM research, it provides, for the first time, constructive insights into the scope and structure of LLMs’ knowledge (or beliefs), and its strengths and weaknesses; for KB construction, it pioneers new pathways for the long-standing challenge of general-domain KB construction. GPTKB is accessible at https://gptkb.org.
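A minimal sketch of the recursive-querying loop (the llm_facts helper is hypothetical, and the real system adds consolidation steps such as entity canonicalization and deduplication):

```python
# Illustrative-only sketch of recursive materialization: elicit triples about
# a seed entity, then recurse on newly discovered object entities. `llm_facts`
# is a hypothetical helper that would prompt the LLM for
# (subject, predicate, object) triples about one entity.
from collections import deque

def materialize(seed, llm_facts, max_entities=1000):
    kb, seen, queue = set(), {seed}, deque([seed])
    while queue and len(seen) < max_entities:
        entity = queue.popleft()
        for (s, p, o) in llm_facts(entity):
            kb.add((s, p, o))
            if o not in seen:        # recurse on each newly found entity
                seen.add(o)
                queue.append(o)
    return kb

# Toy stand-in for the LLM call:
toy = {"Vannevar Bush": [("Vannevar Bush", "field", "engineering")]}
print(materialize("Vannevar Bush", lambda e: toy.get(e, [])))
```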