Joint Workshop on Statistics and Knowledge Integration for Logic, Learning, Ethical Decisions, and LLMs (SKILLED-LLMs 2026)

associated with Federated Logic Conference 2026

Call for Contributions

This joint workshop brings together researchers and practitioners working at the intersections of language models, knowledge representation, reasoning, ethics, and statistics. We encourage submissions that present novel techniques, approaches, and ideas related to these topics. The workshop aims to provide a platform for researchers from different disciplines and AI perspectives to explore approaches and techniques that reconcile reasoning in transformer-based language models with reasoning over logic-based representations, while also addressing ethical considerations and the statistical foundations of knowledge integration.

Workshop Scope

This workshop is a merger of three previously separate workshops: NeLaMKRR, ReLLM, and SKILL.

Topics of Interest

Topics of interest include, but are not limited to, the following:

Submission Details

We invite submissions in three categories: long papers (10-12 pages), short/position papers (up to 6 pages), and extended abstracts (up to 2 pages). Submissions must follow the CEUR-WS formatting guidelines. Authors are encouraged to use the official CEUR Overleaf template. Papers must be submitted through the submission page. Each submission will be reviewed by at least two program committee members.

Important Dates

Workshop Description

Reasoning is an essential component of human intelligence, as it plays a fundamental role in our ability to think critically, support responsible decisions, and solve challenging problems. Traditionally, AI has addressed reasoning in the context of logic-based representations of knowledge. However, the recent leap forward in natural language processing, driven by the emergence of transformer-based language models, hints that these models may exhibit reasoning abilities, particularly as they grow in size and are trained on more data. Despite ongoing discussion of what constitutes reasoning in language models, it remains difficult to pin down to what extent these models are actually capable of reasoning.

The goal of this workshop is to create a platform for researchers from different disciplines and AI perspectives to explore approaches and techniques that reconcile reasoning in transformer-based language models with reasoning over logic-based representations. The specific objectives include analyzing the reasoning abilities of language models as measured alongside KR methods, injecting KR-style reasoning abilities into language models (including by neuro-symbolic means), and formalizing the kind of reasoning that language models carry out. This exploration aims to uncover how language models can effectively integrate knowledge and reason with it, thereby improving their application and utility in areas where precision and reliability are key requirements.

Furthermore, as language models become increasingly prevalent in society, it is crucial to address the ethical implications of their deployment and use. This workshop brings together perspectives on ethical considerations in LLM development and deployment, including issues of bias, fairness, transparency, and accountability. Additionally, the workshop explores the statistical foundations underlying knowledge integration and the interplay between symbolic and statistical approaches in learning systems.

By merging these three communities—NeLaMKRR, ReLLM, and SKILL—we aim to foster interdisciplinary collaboration and create a comprehensive forum for discussing the latest advances in language models, their reasoning capabilities, ethical implications, and the statistical and logical foundations that underpin effective knowledge integration in AI systems.

Organisation and PC Members

Organisers

Program Committee Members (Tentative)