Current developments in data analytics are producing better prediction models. However, these models are often black boxes: although their prediction quality is high, their practical use is hampered by the lack of answers to important questions about how the algorithms reach their decisions. Explainability, interpretability, and transparency of machine learning models are currently hot topics in data analysis. These topics bear on critical issues of reliability and validity for complex models, with practical implications for improving model performance, for instance by controlling bias, as well as legal and ethical consequences. Acknowledging this trend, the Explainable Machine Learning task force was established.
The main goal of this task force is to promote the research, development, valorization, education, and understanding of explainable, transparent, or interpretable machine learning models. We also aim to create a forum for discussing the challenges ahead and advancing research on this topic. This forum will share best practices and facilitate the development of common platforms and benchmarks.