Dr. Sebastian Lapuschkin
Head of Explainable AI Group
Artificial Intelligence Department
Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI)
Einsteinufer 37
10587 Berlin
Germany

Phone: +49 30 31002-371
sebastian.lapuschkin@hhi.fraunhofer.de

Short Bio
Sebastian Lapuschkin received the Ph.D. degree with distinction from the Berlin Institute of Technology in 2018 for his pioneering contributions to the field of Explainable Artificial Intelligence (XAI) and interpretable machine learning. From 2007 to 2013 he studied computer science (B.Sc. and M.Sc.) at the Berlin Institute of Technology, with a focus on software engineering and machine learning. Currently, he is the Head of the Explainable Artificial Intelligence Group at Fraunhofer Heinrich Hertz Institute (HHI) in Berlin. He is the recipient of multiple awards, including the Hugo-Geiger-Prize for outstanding doctoral achievement and the 2020 Pattern Recognition Best Paper Award. His work focuses on pushing the boundaries of XAI, e.g., towards human-understandable explanations and the utilization of interpretable feedback for the improvement of machine learning systems and data. Further research interests include efficient machine learning and data analysis, as well as data and algorithm visualization.
         
      Publications: [ Books & Chapters | Journals | Conferences | Preprints ]  
Publications in Journals
    2022
  • Hofmann S M, Beyer F, Lapuschkin S, Goltermann O, Loeffler M, Müller K-R, Villringer A, Samek W and Witte A V (2022).
    “Towards the Interpretability of Deep Learning Models for Multi-modal Neuroimaging: Finding Structural Changes in the Ageing Brain”.
    In: NeuroImage 261:119504.

  • Ma J, Schneider L, Lapuschkin S, Achtibat R, Duchrau M, Krois J, Schwendicke F and Samek W (2022).
    “Towards Trustworthy AI in Dentistry”.
    In: Journal of Dental Research 00220345221106086.

  • Rieckmann A, Dworzynski P, Arras L, Lapuschkin S, Samek W, Arah O A, Rod N H and Ekstrøm C T (2022).
    “Causes of Outcome Learning: A Causal Inference-inspired Machine Learning Approach to Disentangling Common Combinations of Potential Causes of a Health Outcome”.
    In: International Journal of Epidemiology dyac078.

  • Slijepcevic D, Horst F, Lapuschkin S, Horsak B, Raberger A-M, Kranzl A, Samek W, Breiteneder C, Schöllhorn W I and Zeppelzauer M (2022).
    “Explaining Machine Learning Models for Clinical Gait Analysis”.
    In: ACM Transactions on Computing in Healthcare 3(2):14:1-27.

  • Anders C J, Weber L, Neumann D, Samek W, Müller K-R and Lapuschkin S (2022).
    “Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models”.
    In: Information Fusion 77:261-295.

  • Sun J, Lapuschkin S, Samek W and Binder A (2022).
    “Explain and Improve: LRP-inference Fine-tuning for Image Captioning Models”.
    In: Information Fusion 77:233-246.

    2021
  • Samek W, Montavon G, Lapuschkin S, Anders C J and Müller K-R (2021).
    “Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications”.
    In: Proceedings of the IEEE 109(3):247-278

  • Yeom S-K, Seegerer P, Lapuschkin S, Binder A, Wiedemann S, Müller K-R and Samek W (2021).
    “Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning”.
    In: Pattern Recognition 115:107899

  • Aeles J, Horst F, Lapuschkin S, Lacourpaille L and Hug F (2021).
    “Revealing the Unique Features of Each Individual’s Muscle Activation Signatures”.
    In: Journal of the Royal Society Interface 18(174):20200770

    2020
  • Horst F, Slijepcevic D, Zeppelzauer M, Raberger A-M, Lapuschkin S, Samek W, Schöllhorn W I, Breiteneder C and Horsak B (2020).
    “Explaining Automated Gender Classification of Human Gait”.
    In: Gait & Posture 81(S1):159-160

  • Hägele M, Seegerer P, Lapuschkin S, Bockmayr M, Samek W, Müller K-R and Binder A (2020).
    “Resolving Challenges in Deep Learning-based Analyses of Histopathological Images using Explanation Methods”.
    In: Scientific Reports 10:6423

    2019
  • Alber M, Lapuschkin S, Seegerer P, Hägele M, Schütt K T, Montavon G, Samek W, Müller K-R, Dähne S and Kindermans P-J (2019).
    “iNNvestigate Neural Networks!”.
    In: Journal of Machine Learning Research 20(93):1-8

  • Lapuschkin S, Wäldchen S, Binder A, Montavon G, Samek W and Müller K-R (2019).
    “Unmasking Clever Hans Predictors and Assessing what Machines Really Learn”.
    In: Nature Communications 10:1069

  • Horst F, Lapuschkin S, Samek W, Müller K-R and Schöllhorn W I (2019).
    “Explaining the Unique Nature of Individual Gait Patterns with Deep Learning”.
    In: Scientific Reports 9:2391

    2017
  • Montavon G, Lapuschkin S, Binder A, Samek W and Müller K-R (2017).
    “Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition”.
    In: Pattern Recognition 65:211-222. Winner of the Pattern Recognition Best Paper Award and the Pattern Recognition Medal.

  • Samek W, Binder A, Montavon G, Lapuschkin S and Müller K-R (2017).
    “Evaluating the Visualization of what a Deep Neural Network has Learned”.
    In: IEEE Transactions on Neural Networks and Learning Systems

    2016
  • Sturm I, Lapuschkin S, Samek W and Müller K-R (2016).
    “Interpretable Deep Neural Networks for Single-Trial EEG Classification”.
    In: Journal of Neuroscience Methods 274:141-145

  • Lapuschkin S, Binder A, Montavon G, Müller K-R and Samek W (2016).
    “The Layer-wise Relevance Propagation Toolbox for Artificial Neural Networks”.
    In: Journal of Machine Learning Research 17(114):1-5

    2015
  • Bach S, Binder A, Montavon G, Klauschen F, Müller K-R and Samek W (2015).
    “On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation”.
    In: PLoS ONE 10(7):e0130140

Preprints
  • Achtibat R, Dreyer M, Eisenbraun I, Bosse S, Wiegand T, Samek W and Lapuschkin S (2022).
    “From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation”.
    In: CoRR abs/2206.03208 .

  • Gerstenberger M, Lapuschkin S, Eisert P and Bosse S (2022).
    “But That's Not Why: Inference Adjustment by Interactive Prototype Deselection”.
    In: CoRR abs/2203.10087.

  • Weber L, Lapuschkin S, Binder A and Samek W (2022).
    “Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement”.
    In: CoRR abs/2203.08008.

  • Hedström A, Weber L, Bareeva D, Motzkus F, Samek W, Lapuschkin S and Höhne M-C M (2022).
    “Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations”.
    In: CoRR abs/2202.06861.

  • Motzkus F, Weber L and Lapuschkin S (2022).
    “Measurably Stronger Explanation Reliability via Model Canonization”.
    In: CoRR abs/2202.06621.

  • Pahde F, Weber L, Anders C J, Samek W and Lapuschkin S (2022).
    “PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging”.
    In: CoRR abs/2202.03482.

  • Anders C J, Neumann D, Samek W, Müller K-R and Lapuschkin S (2021).
    “Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy”.
    In: CoRR abs/2106.13200.

  • Becker S, Ackermann M, Lapuschkin S, Müller K-R and Samek W (2018).
    “Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals”.
    In: CoRR abs/1807.03418

  • Schwenk G and Bach S (2014).
    “Detecting Behavioural and Structural Anomalies in Media-Cloud Applications”.
    In: CoRR abs/1409.8035

   