🔍 Workshop Description
Reasoning is an essential component of human intelligence: it plays a fundamental role in our ability to think critically, make responsible decisions, and solve challenging problems. Traditionally, AI has addressed reasoning through logic-based representations of knowledge. However, the recent leap forward in natural language processing, driven by the emergence of transformer-based language models, hints that these models may exhibit reasoning abilities, particularly as they grow in size and are trained on more data.
The goal of this workshop is to create a platform for researchers from different disciplines and AI perspectives to explore approaches and techniques for reconciling deep learning with logic-based representations of reasoning. This includes integrating KR-style reasoning with deep learning models to build neuro-symbolic systems, as well as analyzing the reasoning abilities of large language models and formalizing the kinds of reasoning they carry out.
Furthermore, as language models become increasingly prevalent in society, it is crucial to address the ethical implications of their deployment and use, including bias, fairness, transparency, and accountability.
Finally, the workshop welcomes any approach that combines data-driven techniques with knowledge representation and reasoning, whether based on deep learning, statistics, or other paradigms.
By merging these three communities—NeLaMKRR, ReLLM, and SKILL—we aim to foster interdisciplinary collaboration and create a comprehensive forum for discussing reasoning, ethics, and knowledge integration in AI systems.