About me

I work as a research associate in the Software Verification and Validation Laboratory at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg. My research lies in the area of software quality assurance, notably

  • Software testing: How do we assess and improve the quality of a test suite?
  • Dynamic program analysis: How can we characterize program runs?

Projects

Efficient and Effective Mutation Testing

My main work during my PhD has been in the area of mutation testing -- that is, seeding artificial bugs into code to assess whether your test suite finds them. If it does not, this means your test suite is not yet good enough.
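
To make this concrete, here is a hypothetical mutant (the method is made up for illustration). A typical mutation operator replaces the "+=" in a summation by "-=":

    // Original: sums the elements of an array.
    public static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;   // mutant: "total -= v;"
        }
        return total;
    }

A test asserting that sum(new int[] {1, 2}) returns 3 detects ("kills") this mutant; a test that merely calls sum() without checking the result does not, and should be strengthened.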

Despite its effectiveness, mutation testing has two issues. First, it requires large computing resources to re-run the test suite again and again. Second, and this is worse, a mutation can leave the program's semantics unchanged -- and thus cannot be detected by any test. Such equivalent mutants act as false positives; they have to be assessed and isolated manually, which is an extremely tedious task.
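
For example, consider this classic equivalent mutant (again a made-up method):

    // Original: returns the larger of two values.
    public static int max(int a, int b) {
        return a > b ? a : b;   // mutant: "a >= b ? a : b"
    }

When a == b, both versions return the same value, so no test input can ever expose a difference: the mutant is equivalent, and any effort spent trying to kill it is wasted.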

For that purpose, I have developed Javalanche, a framework for mutation testing of Java programs, which addresses both the efficiency problem and the problem of equivalent mutants:

  • First, Javalanche is built for efficiency from the ground up: it manipulates bytecode directly and allows mutation testing of programs that are several orders of magnitude larger than earlier research subjects.
  • Second, Javalanche addresses the problem of equivalent mutants by assessing the impact of mutations on a program run. Impact metrics compare properties of test suite runs on the original program with runs on mutated versions, and are based on abstractions over program runs such as dynamic invariants, covered statements, and return values. The intuition behind these metrics is that a mutation with a graver influence on the program run is more likely to be non-equivalent, and thus more useful for improving the test suite (see the sketch below).
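
The following sketch illustrates the idea behind impact metrics. The class and method names are my own, chosen for illustration; this is not Javalanche's actual API:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch: estimate a mutation's impact by comparing abstractions
    // of the test suite run on the original and the mutated program.
    public class ImpactSketch {

        // Impact = number of statements covered by exactly one of the
        // two runs + number of return values observed in exactly one.
        static int impact(Set<String> coveredOrig, Set<String> coveredMut,
                          Set<String> returnsOrig, Set<String> returnsMut) {
            return symmetricDifference(coveredOrig, coveredMut)
                 + symmetricDifference(returnsOrig, returnsMut);
        }

        static int symmetricDifference(Set<String> a, Set<String> b) {
            Set<String> onlyA = new HashSet<>(a);
            onlyA.removeAll(b);
            Set<String> onlyB = new HashSet<>(b);
            onlyB.removeAll(a);
            return onlyA.size() + onlyB.size();
        }
    }

A mutation with impact zero changes neither coverage nor return values and is a candidate equivalent mutant; high-impact mutations are ranked first, as they are most likely to point out weaknesses in the test suite.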

If you are interested in Javalanche, you can find the project page on GitHub. For more details on the different impact metrics, see our publications on Javalanche.

Checked Coverage

Coverage criteria are the most widespread metrics to assess test quality. Test coverage criteria measure the percentage of code features that are executed during a test. The rationale is that the higher the coverage, the higher the chances of executing a code feature that causes a failure. This rationale is easy to explain, but it relies on an important assumption: that we are actually able to detect the failure. It does not suffice to cover the error; we also need a means to detect it.

Mutation testing is one approach to assess the quality of oracles in terms of defect detection. We proposed checked coverage as an alternative, cost-efficient way to assess oracle quality. Using dynamic slicing, we determine the statements that were not only executed, but that actually contribute to the results checked by the oracles.
More details on this technique can be found in our paper "Assessing Oracle Quality with Checked Coverage".
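
A hypothetical test illustrates the difference between ordinary statement coverage and checked coverage; the class and method names below are invented for this example:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Invented helper class: computes two values, but the test below
    // only checks one of them.
    class Stats {
        private final int a, b;
        Stats(int a, int b) { this.a = a; this.b = b; }
        int getSum() { return a + b; }
        int getMax() { return Math.max(a, b); }
    }

    public class StatsTest {
        @Test
        public void testStats() {
            Stats s = new Stats(2, 3);
            // Both getSum() and getMax() are executed, so statement
            // coverage is perfect -- but only the sum is ever checked.
            assertEquals(5, s.getSum());   // the only oracle
            s.getMax();                    // result ignored: no oracle
        }
    }

Only the statement computing the sum lies on the dynamic slice of the assertion; the statement computing the maximum contributes to no checked result. Checked coverage counts only the former, exposing the missing oracle.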

Detecting Code Theft with API Birthmark

As part of my master's thesis, I developed the API birthmark, a tool that extracts and compares birthmarks based on API call sequence sets. With the API birthmark, one can effectively detect code theft.
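
At its core, the idea can be sketched as follows: a birthmark is the set of short API call sequences observed at run time, and two programs are compared by how much of one birthmark is contained in the other. The containment measure shown here is one plausible instantiation under my own simplifications; the tool's exact measure and parameters may differ:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch: compare two API birthmarks, each a set of observed
    // API call sequences (e.g., windows of k consecutive calls).
    public class BirthmarkSketch {

        // Fraction of the suspect's call sequences that also occur
        // in the original program's birthmark (containment).
        static double similarity(Set<List<String>> original,
                                 Set<List<String>> suspect) {
            if (suspect.isEmpty()) return 0.0;
            Set<List<String>> common = new HashSet<>(suspect);
            common.retainAll(original);
            return (double) common.size() / suspect.size();
        }
    }

A similarity close to 1 suggests copied code: API usage patterns survive renaming and many obfuscations, which makes them a robust indicator.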

The corresponding paper was presented at ASE 2007, and got some nice media coverage (Slashdot · Saarbruecker Zeitung · Heise · Linuxworld · Computable · Bild · Entwickler · ACM Technews · Computerzeitung · IDW Online).

Publications

A list of my publications can be found here.

Teaching

I worked as a graduate teaching assistant for Programming 2 (Programmierung 2), an introduction to object-oriented programming, in the summer terms of 2007, 2008, and 2009. Among other things, I was responsible for setting up and running automated tests for more than 200 participants. In the summer term of 2010, I was the lead teaching assistant for the graduate course Testing and Debugging. For this course, I developed programming projects in which the participants implemented research techniques from the areas of bug detection, mutation testing, and automated test generation.

Contact

My office is located on the Kirchberg Campus, on the first floor of the G Building, Room G 102. You can send me an e-mail at david.schuler@uni.lu, or reach me by phone at +352 46 66 44 5896.
My postal address is:
David Schuler
Université du Luxembourg
Campus Kirchberg
6, rue Coudenhove-Kalergi
L-1359 Luxembourg