Home

Current developments in data analytics result in better prediction models. However, those models are often black boxes: although their prediction quality is high, their practical use is hampered by the lack of answers to important questions about how the algorithms reach their decisions. Explainability, interpretability and transparency of machine learning models are currently hot topics in data analysis. These topics are relevant to critical issues of reliability and validity for complex models, with practical implications for improving model performance, for instance by controlling bias, as well as legal and ethical consequences. Acknowledging this trend, the Explainable Machine Learning task force was established.

Goal:

The main goal of this task force is to promote the research, development, valorization, education, and understanding of explainable, transparent or interpretable machine learning models. We also want to create a forum for discussing the challenges ahead and advancing research on this topic. This will share best practices and facilitate the development of common platforms and benchmarks.

News

Special Session on “Interpretable Machine Learning” for the European Conference on Data Analysis (ECDA)

Special Session on “Interpretable Machine Learning” for the European Conference on Data Analysis (ECDA), which will take place in Bayreuth from March 18 to 20, 2019. This is a follow-up to a session on the same topic held at ECDA 2018. For the session “Interpretable Machine Learning”, the organizers solicit contributions that (i) …

Special Session on Explainable Machine Learning at IJCNN 2019

We would like to draw your attention to the Special Session on Explainable Machine Learning at the 2019 International Joint Conference on Neural Networks (IJCNN 2019) in Budapest, Hungary, July 14-19, 2019, organized by Paulo J.G. Lisboa, José D. Martín-Guerrero, Davide Bacciu, and Alfredo Vellido. This special session will report methodologies and applications to explain the operation of machine learning models. It will focus …

Special Session Advances on eXplainable Artificial Intelligence (at FUZZ-IEEE 2019)

We would like to draw your attention to the Special Session Advances on eXplainable Artificial Intelligence at the International Conference on Fuzzy Systems (FUZZ-IEEE 2019) in New Orleans, USA, June 23-26, 2019, organized by Jose M. Alonso, Ciro Castiello, and Corrado Mencar. The aim of this session is to provide a forum to disseminate and discuss XAI, with special attention …

Members

Anna Wilbik – chair
(Eindhoven University of Technology, The Netherlands)

Paulo Lisboa – co-chair
(Liverpool John Moores University, United Kingdom)

Qi Chen – co-chair
(Victoria University of Wellington, New Zealand)

Members:

Jose M. Alonso (University of Santiago de Compostela, Spain)

José D. Martín Guerrero (Universitat de València, Spain)

Alfredo Vellido (Universitat Politècnica de Catalunya, Spain)