MA Defense Katharina Weitz

by Ute Schmid -

Dear all,

fortunately, some of you heard about it even without an announcement:

Thu, 19.9.2018, 10-11 a.m., Room 5.013

Katharina Weitz (MA CitH): Applying Explainable Artificial Intelligence for Deep Learning Networks to Decode Facial Expressions of Pain and Emotions

Deep learning networks are successfully used for object and face recognition in images and videos. However, for practical applications, for example as a pain recognition tool in hospitals, current procedures are only suitable to a limited extent. The advantage of deep learning methods is that they can learn complex non-linear relationships between raw data and target classes without being restricted to a set of hand-crafted features provided by humans. The disadvantage, however, is that due to the complexity of these networks it is not possible to interpret the knowledge stored inside the network; it is a black-box learning procedure. Explainable Artificial Intelligence (XAI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this master's thesis is to investigate different XAI methods and apply them to explain how a deep learning network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.

The results show that the CNN has difficulty distinguishing between pain and happiness. Using XAI, it can be shown that the CNN discovers features for happiness in images of pain when the person shows no typical pain-related facial expressions. Furthermore, the results show that the learned features of the network are dataset-independent. It can be concluded that model-specific XAI approaches seem to be a promising basis for making the learned features visible to humans. This is, on the one hand, a first step toward improving CNNs and, on the other hand, toward increasing the comprehensibility of such black-box systems.
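
To give a concrete idea of what a model-specific XAI method can look like in practice, here is a minimal sketch of Grad-CAM, one common technique for visualizing which image regions drive a CNN's prediction. The abstract does not name the specific methods investigated in the thesis, so this is purely an illustration, not the thesis code; the model, layer name, and class index are assumptions.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    # Model mapping the input image to (last conv feature maps, class scores).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, predictions = grad_model(image[np.newaxis, ...])
        class_score = predictions[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Average the gradients over the spatial dimensions: one weight per feature map.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps; keep only positive evidence.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    # Normalize to [0, 1] so the heatmap can be overlaid on the face image.
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

# Usage (assumed names): a CNN trained on facial expression classes and an
# image preprocessed to the model's input shape.
# heatmap = grad_cam(model, image, "conv5_block3_out", class_index=PAIN)

The resulting heatmap highlights the facial regions the network relied on for a given class, which is the kind of human-interpretable evidence the abstract refers to when comparing predictions for pain and happiness.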