All colloquium dates, with speakers and topics, are posted here.
Information on participating via Zoom:
### ZOOM LINK UPDATED ###
https://uni-bamberg.zoom-x.de/j/61445336357
Meeting ID: 614 4533 6357
Passcode: 8*hiTH
Speaker: Oraz Serdarov
Topic: Explainable Unsupervised Learning for Fraud Detection (in cooperation with HUK Coburg)
Where: WE5/05.013, not hybrid
Master's thesis defense
Adrian Völker
OS4ITS: A Distributed Shell-based Platform for Integrated Knowledge- and Learner-Modeling and Curriculum Management Across Different Tutoring Systems
Date: 21.12.2023
Location: WE5/05.013 and Zoom
Hybrid colloquium: placeholder for next semester
CogSys Lab: WE5/05.013
### ZOOM LINK UPDATED ###
Information on participating via Zoom:
Join Zoom Meeting
https://uni-bamberg.zoom-x.de/j/61445336357
Meeting ID: 614 4533 6357
Passcode: 8*hiTH
Abstract:
Clustering is an unsupervised machine learning approach that aims to find groups of similar instances. Mixed data clustering is of special interest since real-life data consists of diverse data types. Because clustering is unsupervised, questions arise about what defines the resulting clusters and what distinguishes them from one another. Existing eXplainable Artificial Intelligence (XAI) methods for clustering comprise intrinsically explainable clustering methodologies, explanations generated from surrogate models using established XAI frameworks, and explanations generated from inter-instance distances. To address gaps in current research, we propose a model-agnostic cluster explainer that relies neither on surrogate models nor on knowledge about the underlying clustering algorithm. Our approach utilizes entropy-based Feature Importance Scores (FIS) that apply to both continuous and discrete features. Global explanations leverage global FIS (GFIS) together with visualizations of vital features, while local explanations use FIS, visualizations of important features, FIS-based prototypes, and FIS-based rules for individual clusters. We outline the critical algorithms and data structures and discuss the implementation of the explainer as an R package. The implemented explainer is evaluated using the XAI benchmarking framework proposed by Belaid et al. and compared with existing XAI frameworks (SHAP and ClAMP). To this end, we test the performance of our explainer on synthetic benchmarking datasets as well as real-life datasets. The resulting feature rankings (FIS and GFIS) align closely with the feature rankings from SHAP. Moreover, the novel FIS-based rules need fewer features and fewer rules per cluster than ClAMP; however, this comes at the cost of lower accuracy in certain scenarios. The findings show that mixed data clusterings can be explained solely from the distributions of the features and the assigned clusters.
Date: 20.12.2023
Time: 12:00-13:00
Location: WE5/05.013
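The abstract does not give the exact definition of the entropy-based FIS, and the actual explainer is an R package; purely as an illustration, here is a minimal Python sketch of one plausible variant, assuming a feature's score in a cluster is its relative entropy reduction within that cluster compared to the whole dataset (continuous features are binned, discrete features counted; all names are hypothetical):

```python
import numpy as np
import pandas as pd

def entropy(values, bins=10):
    """Shannon entropy of one feature; continuous features are binned first."""
    if pd.api.types.is_numeric_dtype(values):
        counts, _ = np.histogram(values.dropna(), bins=bins)
    else:
        counts = values.value_counts().to_numpy()
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def feature_importance_scores(df, labels, bins=10):
    """Hypothetical FIS: relative entropy reduction of each feature inside a
    cluster versus the whole dataset (1 = feature is constant in the cluster,
    0 = the cluster does not narrow the feature's distribution)."""
    labels = np.asarray(labels)
    fis = {}
    for c in np.unique(labels):
        sub = df[labels == c]
        fis[c] = {col: 1.0 - entropy(sub[col], bins) / max(entropy(df[col], bins), 1e-12)
                  for col in df.columns}
    return pd.DataFrame(fis)  # rows: features, columns: cluster labels
```

Global FIS (GFIS) could then be obtained by aggregating these per-cluster scores, e.g. by a cluster-size-weighted average, mirroring the abstract's split into local and global explanations.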
Abstract:
In recent years, significant improvements have been made in Machine Learning (ML) based models, primarily aimed at increasing their learning performance. However, the inner workings of ML models, especially those with a Convolutional Neural Network (CNN) architecture, remain difficult for humans to understand. A prominent method for exploring the intricacies of a CNN is to analyze the model with Layer-wise Relevance Propagation (LRP). This research examines a model trained on sequential data extracted from a hand gesture recognition dataset. To understand the model's performance and possible biases, gender-based comparisons were made on analysis results obtained with different LRP rules, and the resulting relevance maps were then clustered. Examining the LRP methods and the clusterings of the relevance maps revealed various biases. Detected biases are classified as sampling bias (for example, recognizing a face, accessory, or sleeve when detecting hand gestures), omitted variable bias (inability to see the entire hand), representation bias (finger movements and incorrect hand angles that cause fuzzy hand gesture recognition), and algorithmic bias (overlapping clusters that cause misinterpreted results). Interpreting these biases in the dataset emphasizes the importance of choosing suitable LRP rules and appropriate clustering methods. The obtained results can be used to reduce obvious discriminatory tendencies in the CNN model. It is essential to acknowledge the inherent limitations of the dataset, particularly its inadequacy in covering all potential hand gestures.
Time: 16:00-17:30
Location: WE5/05.013 and Zoom
Information on participating via Zoom:
### ZOOM LINK UPDATED ###
https://uni-bamberg.zoom-x.de/j/61445336357
Meeting ID: 614 4533 6357
Passcode: 8*hiTH
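The abstract compares different LRP rules without showing the computation. As background, here is a minimal NumPy sketch of the standard LRP-ε rule for a single fully connected layer (the thesis applies LRP to a full CNN, which in practice is done with a dedicated LRP library; this only illustrates the core redistribution step, and all names are placeholders):

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """LRP-epsilon rule for one fully connected layer.

    a:     input activations, shape (J,)
    W:     weight matrix, shape (J, K)
    R_out: relevance assigned to the K outputs, shape (K,)

    Redistributes R_out to the inputs in proportion to each input's
    contribution z_jk = a_j * W_jk to the pre-activation z_k.
    """
    z = a @ W                                   # pre-activations z_k, shape (K,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser against z_k ~ 0
    s = R_out / z                               # relevance per unit of pre-activation
    return a * (W @ s)                          # input relevances, shape (J,)
```

Relevance is approximately conserved from layer to layer (the returned values sum to roughly `R_out.sum()`), which is what makes the resulting per-input relevance maps comparable across samples and hence clusterable, as done in the thesis.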