Master's thesis defense invitation: Maximilian Muschalik

by Johannes Rabold

Dear all,

you are cordially invited to join us in person for the master's thesis defense of Maximilian Muschalik. We will meet on Thursday, 23 September, at 16:00 s.t. in our lab, room 05.013. Please make sure you meet one of the 3G criteria (vaccinated, recovered, or tested) when entering the university buildings. We look forward to seeing you.

Best regards,
Johannes


Topic: Beyond Contrastive Explanations for Deep Learning Models in Physical Relational Domains

Abstract:

Increasingly, tasks traditionally performed by humans are being automated by deep neural networks. However, these networks are considered black-box machine learning models, as their decisions are opaque and not readily explainable. As a result, a variety of explanation methods have recently been introduced. The Contrastive Explanation Method (CEM) is a local approach that mimics a human style of explanation by producing pertinent negatives: features whose absence is necessary for a classification. Multimodal approaches can combine different explanation media, such as verbalization and visualization, into high-fidelity explanations. The visual explanations of CEM alone, however, cannot sufficiently describe how a classification depends on the absence of specific spatial relationships. I propose a novel explanation method that infuses contrastive visual explanations with general relation-based rules. The presented Relational Contrastive Explanation Method extracts high-quality spatial explanation rules from contrastive pertinent negatives with the Inductive Logic Programming (ILP) rule-learning engine ALEPH. The proposed explanation method's efficacy is evaluated with a proof-of-concept implementation on stable and unstable blocks-world structures. Multiple experiments validate the results and demonstrate the system's ability to extract explicit rules for the concept of structural stability. Moreover, these experiments yield crucial insights for applying the proposed explanation method to other application domains. The resulting explanation architecture showcases the general applicability of enriching visualizations with relational rules as pertinent negative explanations. Thus, images can be explicitly explained by stating which relationships matter and must not be present for a classifier's decision.
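
To make the relational step more concrete, here is a minimal, self-contained Python sketch of my own (not code from the thesis): it lifts block bounding boxes from a toy blocks-world scene into Prolog-style facts such as on/2 and left_of/2, the kind of background knowledge an ILP engine like ALEPH consumes. All block names, coordinates, and the tolerance are illustrative assumptions.

    # Illustrative sketch (not from the thesis): derive relational facts
    # from hypothetical block bounding boxes in a blocks-world scene.
    blocks = {
        "b1": {"x": 0.0, "y": 0.0, "w": 2.0, "h": 1.0},
        "b2": {"x": 0.5, "y": 1.0, "w": 1.0, "h": 1.0},
        "b3": {"x": 4.0, "y": 0.0, "w": 1.0, "h": 1.0},
    }

    def on(top, bottom, eps=1e-6):
        # `top` rests directly on `bottom`: faces touch, footprints overlap.
        touching = abs(top["y"] - (bottom["y"] + bottom["h"])) < eps
        overlap = (min(top["x"] + top["w"], bottom["x"] + bottom["w"])
                   - max(top["x"], bottom["x"]))
        return touching and overlap > 0

    def left_of(a, b):
        # `a` lies entirely to the left of `b`.
        return a["x"] + a["w"] <= b["x"]

    facts = []
    for n1, s1 in blocks.items():
        for n2, s2 in blocks.items():
            if n1 != n2 and on(s1, s2):
                facts.append(f"on({n1},{n2}).")
            if n1 != n2 and left_of(s1, s2):
                facts.append(f"left_of({n1},{n2}).")

    # Prints: left_of(b1,b3). left_of(b2,b3). on(b2,b1).
    print("\n".join(sorted(facts)))

From facts like these, together with positive and negative examples of stable structures, ALEPH can induce explicit rules over the spatial predicates; in the proposed method, such facts would be derived from the contrastive pertinent negatives rather than hand-coded coordinates.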