Dr. Sebastian Lapuschkin
Machine Learning Group
Video Coding & Analytics Department
Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute
Einsteinufer 37
10587 Berlin
Phone: +49 30 31002-371
      Short Bio  
      Sebastian received the Dr. rer. nat. (PhD) degree with distinction ("summa cum laude") from the Berlin Institute of Technology in 2018. From 2007 to 2013 he studied computer science (B.Sc. and M.Sc.) at the Berlin Institute of Technology, focusing on software engineering and machine learning. He is currently a tenured researcher in the Machine Learning Group at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin. His research interests include computer vision, (efficient) machine learning and data analysis, data and algorithm visualization, and the interpretation, (meta-)analysis, and rectification of machine learning system behavior.
      Publications: [ Books | Journals | Conferences | Preprints ]  
      Publications in Journals
  • 2020
  • Hägele M, Seegerer P, Lapuschkin S, Bockmayr M, Samek W, Müller K-R and Binder A (2020).
    “Resolving Challenges in Deep Learning-based Analyses of Histopathological Images using Explanation Methods”.
    In: Scientific Reports 10:6423

  • 2019
  • Alber M, Lapuschkin S, Seegerer P, Hägele M, Schütt K T, Montavon G, Samek W, Müller K-R, Dähne S and Kindermans P-J (2019).
    “iNNvestigate Neural Networks!”.
    In: Journal of Machine Learning Research 20(93):1-8

  • Lapuschkin S, Wäldchen S, Binder A, Montavon G, Samek W and Müller K-R (2019).
    “Unmasking Clever Hans Predictors and Assessing what Machines Really Learn”.
    In: Nature Communications 10:1069

  • Horst F, Lapuschkin S, Samek W, Müller K-R and Schöllhorn W I (2019).
    “Explaining the Unique Nature of Individual Gait Patterns with Deep Learning”.
    In: Scientific Reports 9:2391

  • 2017
  • Montavon G, Lapuschkin S, Binder A, Samek W and Müller K-R (2017).
    “Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition”.
    In: Pattern Recognition 65:211-222

  • Samek W, Binder A, Montavon G, Lapuschkin S, and Müller K-R (2017).
    “Evaluating the Visualization of what a Deep Neural Network has Learned”.
    In: IEEE Transactions on Neural Networks and Learning Systems

  • 2016
  • Sturm I, Lapuschkin S, Samek W and Müller K-R (2016).
    “Interpretable Deep Neural Networks for Single-Trial EEG Classification”.
    In: Journal of Neuroscience Methods 274:141-145

  • Lapuschkin S, Binder A, Montavon G, Müller K-R and Samek W (2016).
    “The Layer-wise Relevance Propagation Toolbox for Artificial Neural Networks”.
    In: Journal of Machine Learning Research 17(114):1-5

  • 2015
  • Bach S, Binder A, Montavon G, Klauschen F, Müller K-R and Samek W (2015).
    “On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation”.
    In: PLoS ONE 10(7):e0130140

Preprints
  • Goh G S W, Lapuschkin S, Weber L, Samek W and Binder A (2020).
    “Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution”.
    In: CoRR abs/2004.10484. Submitted.

  • Samek W, Montavon G, Lapuschkin S, Anders C J and Müller K-R (2020).
    “Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond”.
    In: CoRR abs/2003.07631. Submitted.

  • Sun J, Lapuschkin S, Samek W and Binder A (2020).
    “Understanding Image Captioning Models beyond Visualizing Attention”.
    In: CoRR abs/2001.01037. Submitted.

  • Anders C J, Marinč T, Neumann D, Samek W, Müller K-R and Lapuschkin S (2019).
    “Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed”.
    In: CoRR abs/1912.11425. Submitted.

  • Yeom S-K, Seegerer P, Lapuschkin S, Wiedemann S, Müller K-R and Samek W (2019).
    “Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning”.
    In: CoRR abs/1912.08881. Submitted.

  • Horst F, Slijepcevic D, Lapuschkin S, Raberger A-M, Zeppelzauer M, Samek W, Breiteneder C, Schöllhorn W I and Horsak B (2019).
    “On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence”.
    In: CoRR abs/1912.07737. Submitted.

  • Kohlbrenner M, Bauer A, Nakajima S, Binder A, Samek W and Lapuschkin S (2019).
    “Towards Best Practice in Explaining Neural Network Decisions with LRP”.
    In: CoRR abs/1910.09840. Accepted for publication.

  • Becker S, Ackermann M, Lapuschkin S, Müller K-R and Samek W (2018).
    “Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals”.
    In: CoRR abs/1807.03418

  • Schwenk G and Bach S (2014).
    “Detecting Behavioural and Structural Anomalies in Media-Cloud Applications”.
    In: CoRR abs/1409.8035
