Call for Papers Now Open!
We are pleased to announce that this joint workshop brings together three communities: Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR), Reasoning and Ethics in Large Language Models (ReLLM), and Statistics and Knowledge Integration for Logic and Learning (SKILL).
Call for Contributions
This joint workshop brings together researchers and practitioners working at the intersections of language models, knowledge representation, reasoning, ethics, and statistics. We encourage submissions that present novel techniques, approaches, and innovative ideas related to these topics. The workshop aims to create a platform for researchers from different disciplines and AI perspectives to explore approaches and techniques that reconcile reasoning in transformer-based language models with reasoning over logic-based representations, while also addressing ethical considerations and the statistical foundations of knowledge integration.
Workshop Scope
This workshop is a merger of three previously separate workshops:
Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR): Focuses on analyzing reasoning abilities of language models, injecting KR-style reasoning into language models, and formalizing the kind of reasoning language models carry out.
Reasoning and Ethics in Large Language Models (ReLLM): Addresses ethical considerations, limitations, and responsible deployment of language models in various domains.
Statistics and Knowledge Integration for Logic and Learning (SKILL): Explores the integration of statistical methods with knowledge representation and logical reasoning in learning systems.
Topics of Interest
Topics of interest include, but are not limited to, the following:
Analysis of language models' reasoning abilities and knowledge representation
Argumentation, negotiation, and agent-based reasoning in language models
Infusing KR-style reasoning into language models
Knowledge injection and extraction mechanisms in language models
Qualitative assessment of reasoning accuracy in language models
Techniques for enhancing language model reasoning predictability
Formalizing language models' reasoning types
Applications of reasoning in medicine, law, and science
Ethics and limitations of reasoning in language models
Categories of reasoning in language models: deductive, inductive, and abductive
Comparison of formal and informal 'common sense' reasoning
Investigation of chain-of-thought prompting
Examination of prompting and in-context learning
Exploration of problem decomposition strategies
Studies of rationale engineering
Evaluation of bootstrapping and self-improvement methods
Integration of language models with knowledge graphs
Conversion of unstructured data into knowledge graphs
Development of domain-specific language models
Research on neurosymbolic knowledge representation models
Ethical considerations in LLM deployment and use
Bias detection and mitigation in language models
Fairness and transparency in LLM-based systems
Statistical foundations of knowledge integration
Probabilistic reasoning and uncertainty quantification in LLMs
Integration of symbolic and statistical approaches
Evaluation metrics for reasoning and ethical considerations
Submission Details
We invite submissions in three categories: long papers (10-12 pages), short/position papers (up to 6 pages), and extended abstracts (up to 2 pages). Submissions must follow the CEUR-WS formatting guidelines. Authors are encouraged to use the official CEUR Overleaf template. Papers must be submitted through the submission page. Each submission will be reviewed by at least two program committee members.
Important Dates
Paper Submission Deadline: TBD
Paper Notification: TBD
Camera Ready: TBD
Workshop Registration Deadline: TBD
Workshop Date: July 18-19, 2026
Workshop Description
Reasoning is an essential component of human intelligence, as it plays a fundamental role in our ability to think critically, support responsible decisions, and solve challenging problems. Traditionally, AI has addressed reasoning in the context of logic-based representations of knowledge. However, the recent leap forward in natural language processing, with the emergence of transformer-based language models, hints at the possibility that these models exhibit reasoning abilities, particularly as they grow in size and are trained on more data. Despite ongoing discussion about what reasoning means in language models, it remains difficult to pin down the extent to which these models are actually capable of reasoning.
The goal of this workshop is to create a platform for researchers from different disciplines and/or AI perspectives to explore approaches and techniques that reconcile reasoning in transformer-based language models with reasoning over logic-based representations. The specific objectives include analyzing the reasoning abilities of language models measured alongside KR methods, injecting KR-style reasoning abilities into language models (including by neuro-symbolic means), and formalizing the kind of reasoning language models carry out. This exploration aims to uncover how language models can effectively integrate knowledge and reason with it, thus improving their application and utility in areas where precision and reliability are key requirements.
Furthermore, as language models become increasingly prevalent in society, it is crucial to address the ethical implications of their deployment and use. This workshop brings together perspectives on ethical considerations in LLM development and deployment, including issues of bias, fairness, transparency, and accountability. Additionally, the workshop explores the statistical foundations underlying knowledge integration and the interplay between symbolic and statistical approaches in learning systems.
By merging these three communities—NeLaMKRR, ReLLM, and SKILL—we aim to foster interdisciplinary collaboration and create a comprehensive forum for discussing the latest advances in language models, their reasoning capabilities, ethical implications, and the statistical and logical foundations that underpin effective knowledge integration in AI systems.
Organisation and PC Members
Organisers
Ha Thanh Nguyen, Research and Development Center for Large Language Models, National Institute of Informatics (NII), Tokyo, Japan
Francesca Toni, Department of Computing, Imperial College London, United Kingdom
Kostas Stathis, Department of Computer Science, Royal Holloway University of London, United Kingdom
Ken Satoh, Center for Juris-Informatics, ROIS-DS, Tokyo, Japan
Randy Goebel, Alberta Machine Intelligence Institute, University of Alberta, Canada
Nourhan Ehab, German University in Cairo, Egypt
Mervat Abuelkheir, German University in Cairo, Egypt
Elena Umili, Department of Computer, Control and Management Engineering, Sapienza University of Rome, Italy
Francesco Chiariello, RWTH Aachen University, Germany
Yves Lespérance, Department of Electrical Engineering and Computer Science, York University, Canada
Matteo Magnini, Department of Computer Science and Engineering, University of Bologna, Italy
Federico Sabbatini, Department of Pure and Applied Sciences, University of Urbino Carlo Bo, Italy
Program Committee Members (Tentative)
Agnieszka Mensfelt, Royal Holloway, University of London, United Kingdom
Daniel Sonntag, German Research Center for Artificial Intelligence (DFKI), Saarbrücken & Oldenburg, Germany
Gabriel Freedman, Imperial College London, United Kingdom
John D. Martin, Openmind Research Institute, Edmonton, Alberta, Canada
Lihu Chen, Imperial College London, United Kingdom
María Navas-Loro, Universidad Politécnica de Madrid, Spain
May Myo Zin, Center for Juris-Informatics, ROIS-DS, Tokyo, Japan
Minh-Phuong Nguyen, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
Sabine Wehnert, Otto-von-Guericke University Magdeburg, Germany
Thi-Hai-Yen Vuong, VNU University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam
Vince Trencsenyi, Royal Holloway University of London, United Kingdom
Vu Tran, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
Wachara Funwacharakorn, Center for Juris-Informatics, ROIS-DS, Tokyo, Japan