
Address

Stuhlsatzenhaus 5
66123 Saarbrücken (Germany)

Awards (selection)

2023: Most Influential Paper Award for "The impact of tangled code changes"

2023: ERC Advanced Grant "Semantics of Software Systems"

2021: Most Influential Paper Award for "Assessing Oracle Quality with Checked Coverage"

2020: ISSTA 2020 Impact Paper Award for "Mutation-driven generation of unit tests and oracles" (with Gordon Fraser)

2017: MSR 10-Year Most Influential Paper Award for "How Long Will It Take to Fix This Bug?" (with Cathrin Weiß, Rahul Premraj, and Thomas Zimmermann)

2015: MSR 10-Year Most Influential Paper Award for "When do Changes induce Fixes?" (with Jacek Sliwerski and Thomas Zimmermann)

2015: ICSE 10-Year Most Influential Paper Award, Official Runner-Up, for "Locating Causes of Program Failures", isolating cause-effect chains in failing programs (with Holger Cleve)

2014: ICSE 10-Year Most Influential Paper Award for "Mining Software Histories to Guide Software Changes", introducing mining software repositories (with Thomas Zimmermann, Peter Weißgerber, and Stephan Diehl)

2009: ACM SIGSOFT 10-Year Impact Award for "Yesterday, my program worked. Today, it does not. Why?", introducing Delta Debugging

Short Biography

Prof. Dr. Andreas Zeller is tenured faculty at CISPA and Professor of Software Engineering at Universität des Saarlandes. His research on automated debugging, mining software repositories, specification inference, and security testing is highly influential. Zeller is an ACM Fellow and a recipient of the ACM SIGSOFT Outstanding Research Award.

CV: Recent Positions

Since 2019
Faculty at CISPA Helmholtz-Zentrum für Informationssicherheit
Since 2003
Full Professor, Computer Science, Universität des Saarlandes
2001 – 2003
Associate Professor (C3), Computer Science, Universität des Saarlandes
1999 – 2001
Post-Doc Researcher (C1), Computer Science, Universität Passau

Publications by Andreas Zeller

Year 2022

Conference / Medium

  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • International Conference on Software Engineering (ICSE)
  • ACM-SIGPLAN Conference on Programming Language Design and Implementation (PLDI)

Year 2021

Conference / Medium

  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)

Teaching by Andreas Zeller

Winter 2021/22

Automated Testing and Debugging

It is estimated that programmers spend half of their time testing and debugging their code. Wouldn't it be great if computers could help automate these tedious tasks?

In this proseminar, we explore and evaluate automated techniques for testing and debugging software. We look at a number of classic approaches for generating software tests automatically, for finding errors and locating faults, and for having these two interact with each other. You will be

  • reading the associated papers,
  • presenting the associated work, and
  • possibly even demonstrating the technique.

For every technique, we will provide you with the papers (and a reference or textbook implementation, if available). For the presentation, we use standard presentation software; for demonstrations, Python and Jupyter Notebooks have proven practical. During the seminar, you will refine your presentation and demonstration skills with us, up to the final (and decisive) presentation.

The first part of the seminar consists of ~10 virtual (Zoom) sessions of one hour, with at most one session per week. In each of these sessions, we will have two short presentations and a feedback and discussion round.

In the second part of the seminar, participants will give their final, graded presentations. This part consists of several (at most three) block sessions, which might be held virtually or on-site.

Attendance in all (virtual and on-site) proseminar meetings is mandatory.

Requirements: Programming skills are required for demonstrations. Knowledge of Python and experience with Jupyter Notebooks are helpful, but can be acquired during the proseminar.

Registration: To register, use the central system of the CS department.

Winter 2021/22

Security Testing

Software has bugs, and catching bugs can involve lots of effort. This course addresses this problem by automating software testing, specifically by generating tests automatically. Recent years have seen the development of novel techniques that lead to dramatic improvements in test generation and software testing.  In this course, we explore these techniques – in theory and in code.

Course Organization

Every week, you will be provided with Jupyter Notebooks that teach a particular topic and illustrate it using plenty of runnable Python code.  These notebooks come from The Fuzzing Book, a textbook on how to generate software tests written by yours truly.

In the notebooks, you can edit the code, run your own experiments, and re-use and extend the code as you like. Your task will be to use these techniques (and their code) to build a series of fuzzers (i.e., test generators) that find bugs in a number of challenging settings.
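To give a flavor of what such a fuzzer looks like, here is a minimal sketch of a random test generator in the spirit of The Fuzzing Book. The function names, parameters, and the buggy `parse()` target below are illustrative choices for this sketch, not the book's exact API:

```python
import random
import string

def fuzzer(max_length=100, chars=string.printable):
    """Return a random string of up to max_length characters.

    A minimal random test generator; names and parameters here
    are illustrative, not the exact API used in the course.
    """
    length = random.randrange(max_length + 1)
    return "".join(random.choice(chars) for _ in range(length))

def run_fuzzer(target, trials=1000):
    """Feed random inputs to `target`; collect inputs that raise."""
    failures = []
    for _ in range(trials):
        data = fuzzer()
        try:
            target(data)
        except Exception as e:
            failures.append((data, e))
    return failures

# Hypothetical target with a planted bug: rejects inputs containing "!"
def parse(s):
    if "!" in s:
        raise ValueError("unexpected '!' in input")
    return s

failures = run_fuzzer(parse)
```

Even this naive generator reliably triggers the planted bug; the course then refines the idea with mutation-based, grammar-based, and coverage-guided generation.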

Meetings

This course uses the "inverted classroom" principle – you learn at home and discuss issues with your instructor. In our weekly meeting in the lecture hall, we

  • discuss last week's assignment,
  • discuss next week's assignment, and
  • discuss ongoing projects as well as general questions.

These meetings come with live coding, so we can explore ideas on the fly.

Projects

During this course, you apply the techniques learned in weekly exercises and two projects, which form your coursework. Projects are graded for effectiveness, efficiency, elegance, and creativity; they also offer special challenges that allow you to gain bonus points.

Exercises

Every week, you get a simple exercise assignment covering the material of the last lecture. Performance in these exercises makes up 33% of the final grade. Note that there is no final exam.

Requirements

Advanced programming skills (such as those obtained after two years of successfully studying CS) are required. Knowledge of Python is useful, but can easily be acquired during the course.

Passing Criteria and Grading

To pass this course, you need to achieve

  • at least 50% of the points in each of the projects, and
  • at least 50% of the total exercise points.

Your final grade is determined by 66% projects and 33% exercises (see above).

Lecture Plan

The course is organized as an "inverted classroom": every week, we discuss a chapter of the book, accompanied by an introductory video; we meet once a week to discuss the material, the associated exercises, and the ongoing projects.

The lecture plan is subject to change; changes will be announced in time.

Date, Time, Location

  • 14 lectures
  • 6 Credit Points
  • In-presence tutorial and discussion every Tuesday 16:15–17:45, CISPA, Lecture Hall 0.05
  • Optional attendance via Zoom: https://cispa-de.zoom.us/j/96427716796?pwd=RnNTQTlQMGtvMDJ5MWFKYTBpM1RrUT09
  • Start: Tuesday October 19
Summer 2020

Proseminar: Automated Testing and Debugging

In this proseminar, we explore and evaluate automated techniques for testing and debugging software. We look at a number of classic approaches for generating software tests automatically, for finding errors and locating faults, and for having these two interact with each other.


Summer 2020

Seminar: Advanced Fuzzing Techniques

In this seminar, we explore and evaluate automated test generation techniques (fuzzers) and related techniques for their effectiveness and efficiency. We discuss and design evaluation criteria and apply them to a number of techniques from the "Fuzzing Book" (https://www.fuzzingbook.org). Apart from reporting and presenting your results in the seminar, your evaluation results will be included in the book. If the results or techniques are novel (many of them are), we will also strive to publish them as a scientific paper, with you as co-author.


Winter 2019/20

Advanced Lecture: Generating Software Tests

Software has bugs, and catching bugs can involve lots of effort. This course addresses this problem by automating software testing, specifically by generating tests automatically.