Books & Chapters
2022
Becking D, Dreyer M, Samek W, Müller K-R and Lapuschkin S (2022).
“ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs”.
In: xxAI - Beyond Explainable AI 271-296.
Springer, Cham
2019
Montavon G, Binder A, Lapuschkin S, Samek W and Müller K-R (2019).
“Layer-wise Relevance Propagation: An Overview”.
In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning 193-209.
Springer, Cham
2016
Binder A, Bach S, Montavon G, Müller K-R and Samek W (2016).
“Layer-wise Relevance Propagation for Deep Neural Network Architectures”.
In: Information Science and Application (ICISA) 2016. Lecture Notes in Electrical Engineering 276:913-922.
Springer, Singapore
Binder A, Montavon G, Lapuschkin S, Müller K-R and Samek W (2016).
“Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers”.
In: Lecture Notes in Computer Science 9887:63-71.
Springer, Berlin/Heidelberg
Publications in Journals
2023
Achtibat R, Dreyer M, Eisenbraun I, Bosse S, Wiegand T, Samek W and Lapuschkin S (2023).
“From attribution maps to human-understandable explanations through Concept Relevance Propagation”.
In: Nature Machine Intelligence 5(9):1006–1019.
Hedström A, Bommer P, Wickstrøm K K, Samek W, Lapuschkin S and Höhne M-C M (2023).
“The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus”.
In: Transactions on Machine Learning Research.
Weber L, Lapuschkin S, Binder A and Samek W (2023).
“Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement”.
In: Information Fusion 92:154-176.
Hedström A, Weber L, Krakowczyk D G, Bareeva D, Motzkus F, Samek W, Lapuschkin S and Höhne M-C M (2023).
“Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond”.
In: Journal of Machine Learning Research 24:1-11.
2022
Hofmann S M, Beyer F, Lapuschkin S, Goltermann O, Loeffler M, Müller K-R, Villringer A, Samek W and Witte A V (2022).
“Towards the Interpretability of Deep Learning Models for Multi-modal Neuroimaging: Finding Structural Changes in the Ageing Brain”.
In: NeuroImage 261:119504.
Ma J, Schneider L, Lapuschkin S, Achtibat R, Duchrau M, Krois J, Schwendicke F and Samek W (2022).
“Towards Trustworthy AI in Dentistry”.
In: Journal of Dental Research 00220345221106086.
Rieckmann A, Dworzynski P, Arras L, Lapuschkin S, Samek W, Arah O A, Rod N H and Ekstrøm C T (2022).
“Causes of Outcome Learning: A Causal Inference-inspired Machine Learning Approach to Disentangling Common Combinations of Potential Causes of a Health Outcome”.
In: International Journal of Epidemiology dyac078.
Slijepcevic D, Horst F, Lapuschkin S, Horsak B, Raberger A-M, Kranzl A, Samek W, Breiteneder C, Schöllhorn W I and Zeppelzauer M (2022).
“Explaining Machine Learning Models for Clinical Gait Analysis”.
In: ACM Transactions on Computing in Healthcare 3(2):14:1-27.
Anders C J, Weber L, Neumann D, Samek W, Müller K-R and Lapuschkin S (2022).
“Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models”.
In: Information Fusion 77:261-295.
Sun J, Lapuschkin S, Samek W and Binder A (2022).
“Explain and Improve: LRP-inference Fine-tuning for Image Captioning Models”.
In: Information Fusion 77:233-246.
2021
Samek W, Montavon G, Lapuschkin S, Anders C J and Müller K-R (2021).
“Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications”.
In: Proceedings of the IEEE 109(3):247-278
Yeom S-K, Seegerer P, Lapuschkin S, Binder A, Wiedemann S, Müller K-R and Samek W (2021).
“Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning”.
In: Pattern Recognition 115:107899
Aeles J, Horst F, Lapuschkin S, Lacourpaille L and Hug F (2021).
“Revealing the Unique Features of Each Individual’s Muscle Activation Signatures”.
In: Journal of the Royal Society Interface 18(174):20200770
2020
Horst F, Slijepcevic D, Zeppelzauer M, Raberger A-M, Lapuschkin S, Samek W, Schöllhorn W I, Breiteneder C and Horsak B (2020).
“Explaining Automated Gender Classification of Human Gait”.
In: Gait & Posture 81(S1):159-160
Hägele M, Seegerer P, Lapuschkin S, Bockmayr M, Samek W, Müller K-R and Binder A (2020).
“Resolving Challenges in Deep Learning-based Analyses of Histopathological Images using Explanation Methods”.
In: Scientific Reports 10:6423
2019
Alber M, Lapuschkin S, Seegerer P, Hägele M, Schütt K T, Montavon G, Samek W, Müller K-R, Dähne S and Kindermans P-J (2019).
“iNNvestigate Neural Networks!”.
In: Journal of Machine Learning Research 20(93):1-8
Lapuschkin S, Wäldchen S, Binder A, Montavon G, Samek W and Müller K-R (2019).
“Unmasking Clever Hans Predictors and Assessing what Machines Really Learn”.
In: Nature Communications 10:1069
Horst F, Lapuschkin S, Samek W, Müller K-R and Schöllhorn W I (2019).
“Explaining the Unique Nature of Individual Gait Patterns with Deep Learning”.
In: Scientific Reports 9:2391
2017
Montavon G, Lapuschkin S, Binder A, Samek W and Müller K-R (2017).
“Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition”.
In: Pattern Recognition 65:211-222. Pattern Recognition Best Paper Award and Pattern Recognition Medal winner
Samek W, Binder A, Montavon G, Lapuschkin S and Müller K-R (2017).
“Evaluating the Visualization of what a Deep Neural Network has Learned”.
In: IEEE Transactions on Neural Networks and Learning Systems
2016
Sturm I, Lapuschkin S, Samek W and Müller K-R (2016).
“Interpretable Deep Neural Networks for Single-Trial EEG Classification”.
In: Journal of Neuroscience Methods 274:141-145
Lapuschkin S, Binder A, Montavon G, Müller K-R and Samek W (2016).
“The Layer-wise Relevance Propagation Toolbox for Artificial Neural Networks”.
In: Journal of Machine Learning Research 17(114):1-5
2015
Bach S, Binder A, Montavon G, Klauschen F, Müller K-R and Samek W (2015).
“On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation”.
In: PLoS ONE 10(7):e0130140
Publications in Conference Proceedings and Workshops
2023
Pahde F, Dreyer M, Samek W and Lapuschkin S (2023).
“Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models”.
In: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 596–606. (Green Open Access).
Binder A, Weber L, Lapuschkin S, Montavon G, Müller K-R and Samek W (2023).
“Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations”.
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 16143–16152.
Dreyer M, Achtibat R, Wiegand T, Samek W and Lapuschkin S (2023).
“Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations”.
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 3828–3838.
Pahde F, Yolcu G Ü, Binder A, Samek W and Lapuschkin S (2023).
“Optimizing Explanations by Network Canonization and Hyperparameter Search”.
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 3818–3827.
Krakowczyk D G, Prasse P, Reich D R, Lapuschkin S, Scheffer T and Jäger L A (2023).
“Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models”.
In: Proceedings of the Symposium on Eye Tracking Research and Applications 1-8. Best Short Paper Award Winner
Krakowczyk D G, Reich D R, Prasse P, Lapuschkin S, Jäger L A and Scheffer T (2023).
“Selection of XAI Method Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification”.
In: NeurIPS 2022 Workshop on Gaze Meets ML.
2022
Motzkus F, Weber L and Lapuschkin S (2022).
“Measurably Stronger Explanation Reliability via Model Canonization”.
In: Proceedings of the International Conference on Image Processing (ICIP) 516-520.
Ede S, Baghdadlian S, Weber L, Nguyen A, Zanca D, Samek W and Lapuschkin S (2022).
“Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI”.
In: Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE) 1-18. (Green Open Access).
2021
Sun J, Lapuschkin S, Samek W, Zhao Y, Cheung N-M and Binder A (2021).
“Explanation-Guided Training for Cross-Domain Few-Shot Classification”.
In: Proceedings of the 25th International Conference on Pattern Recognition
Goh G S W, Lapuschkin S, Weber L, Samek W and Binder A (2021).
“Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution”.
In: Proceedings of the 25th International Conference on Pattern Recognition
2020
Kohlbrenner M, Bauer A, Nakajima S, Binder A, Samek W and Lapuschkin S (2020).
“Towards Best Practice in Explaining Neural Network Decisions with LRP”.
In: Proceedings of the IEEE International Joint Conference on Neural Networks
Sun J, Lapuschkin S, Samek W and Binder A (2020).
“Understanding Image Captioning Models beyond Visualizing Attention”.
In: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers. ICML Workshop
Anders C J, Neumann D, Marinč T, Samek W, Müller K-R and Lapuschkin S (2020).
“XAI for Analyzing and Unlearning Spurious Correlations in ImageNet”.
In: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers. ICML Workshop
Sun J, Lapuschkin S, Samek W, Zhao Y, Cheung N-M and Binder A (2020).
“Explain and Improve: Cross-Domain Few-Shot Learning Using Explanations”.
In: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers. ICML Workshop
2018
Alber M, Lapuschkin S, Seegerer P, Hägele M, Schütt K T, Montavon G, Samek W, Müller K-R, Dähne S and Kindermans P-J (2018).
“How to iNNvestigate Neural Networks’ Predictions!”.
In: Machine Learning Open Source Software: Sustainable Communities. NIPS Workshop
2017
Lapuschkin S, Binder A, Müller K-R and Samek W (2017).
“Understanding and Comparing Deep Neural Networks for Age and Gender Classification”.
In: Proceedings of the ICCV’17 Workshop on Analysis and Modeling of Faces and Gestures (AMFG) 2017:1629-1638
Srinivasan V, Lapuschkin S, Hellge C, Müller K-R and Samek W (2017).
“Interpretable Action Recognition in Compressed Domain”.
In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017:1692-1696
2016
Bach S, Binder A, Müller K-R and Samek W (2016).
“Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth”.
In: Proceedings of the IEEE International Conference on Image Processing (ICIP) 2016:2271-2275
Binder A, Samek W, Montavon G, Bach S, and Müller K-R (2016).
“Analyzing and Validating Neural Network Predictions”.
In: Proceedings of the ICML’16 Workshop on Visualization for Deep Learning. Best Paper Award Winner
Lapuschkin S, Binder A, Montavon G, Müller K-R and Samek W (2016).
“Analyzing Classifiers: Fisher Vectors and Deep Neural Networks”.
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016:2912-2920
Montavon G, Bach S, Binder A, Samek W and Müller K-R (2016).
“Deep Taylor Decomposition of Neural Networks”.
In: Proceedings of the ICML’16 Workshop on Visualization for Deep Learning
Samek W, Montavon G, Binder A, Lapuschkin S and Müller K-R (2016).
“Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation”.
In: Proceedings of the Interpretable ML for Complex Systems NIPS’16 Workshop
Preprints
Dawoud K, Samek W, Lapuschkin S and Bosse S (2023).
“Human-Centered Evaluation of XAI Methods”.
In: CoRR abs/2310.07534. (accepted for publication at the CXAI Workshop at ICDMW 2023).
Weber L, Berend J, Binder A, Wiegand T, Samek W and Lapuschkin S (2023).
“Layer-wise Feedback Propagation”.
In: CoRR abs/2308.12053.
Dreyer M, Pahde F, Anders C J, Samek W and Lapuschkin S (2023).
“From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space”.
In: CoRR abs/2308.09437.
Frommholz A, Seipel F, Lapuschkin S, Samek W and Vielhaben J (2023).
“XAI-based Comparison of Input Representations for Audio Event Classification”.
In: CoRR abs/2304.14019. (accepted for publication at CBMI 2023).
Vielhaben J, Lapuschkin S, Montavon G and Samek W (2023).
“Explainable AI for Time Series via Virtual Inspection Layers”.
In: CoRR abs/2303.06365. (accepted for publication in Pattern Recognition).
Gerstenberger M, Lapuschkin S, Eisert P and Bosse S (2022).
“But That’s Not Why: Inference Adjustment by Interactive Prototype Deselection”.
In: CoRR abs/2203.10087.
Pahde F, Weber L, Anders C J, Samek W and Lapuschkin S (2022).
“PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging”.
In: CoRR abs/2202.03482.
Anders C J, Neumann D, Samek W, Müller K-R and Lapuschkin S (2021).
“Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy”.
In: CoRR abs/2106.13200.
Becker S, Ackermann M, Lapuschkin S, Müller K-R and Samek W (2018).
“Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals”.
In: CoRR abs/1807.03418
Schwenk G and Bach S (2014).
“Detecting Behavioural and Structural Anomalies in Media-Cloud Applications”.
In: CoRR abs/1409.8035