Eligible participants

The competition is open to all interested bachelor's, master's, and PhD students enrolled at the University of Trento or the University of Innsbruck.

Participation in the challenge counts toward study credits for the Multimedia Security course (University of Trento) and the Information Security I Proseminar (University of Innsbruck). For further information, please contact Prof. Giulia Boato and Prof. Rainer Böhme, respectively.

Teams may have at most 4 members; individual participants are also welcome.


Development dataset

The development dataset is composed of forged images and their corresponding ground-truth tampering maps: binary images in which nonzero pixels mark the forged area.
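As a quick sanity check when exploring the dataset, a ground-truth map can be loaded and binarized in a few lines. This is a hedged sketch, not official challenge code: the file path and the use of Pillow/NumPy are assumptions, and any map format readable as a grayscale image would work the same way.

```python
import numpy as np
from PIL import Image

def load_tampering_map(path):
    """Load a ground-truth tampering map as a boolean mask.

    Per the challenge definition, nonzero pixels indicate the forged
    area, so we binarize with a simple "> 0" threshold.
    """
    img = np.array(Image.open(path).convert("L"))  # force grayscale
    return img > 0
```

The resulting boolean mask can be compared directly against an estimated map, e.g. with logical operations when computing precision and recall.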

All the images contain a forgery, created from a background image either by applying a localized processing operation or by superimposing an external object. Except for a few cases, the background images (i.e., the original images before the forgery was created) were taken with 4 cameras. We also provide original flat-field images taken with those cameras, which can be used to accurately extract the PRNU profiles to be exploited in the forensic analysis.
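The basic idea behind using the flat-field images can be sketched as follows: a camera's PRNU fingerprint is commonly estimated by averaging the noise residuals (image minus a denoised version of itself) of many images from that camera. The snippet below is a simplified illustration only, with a Gaussian denoiser standing in for the wavelet-based filters typically used in the PRNU literature; function names and parameters are our own assumptions, not part of the challenge.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_prnu(flat_images, sigma=2.0):
    """Rough PRNU fingerprint estimate from flat-field images.

    Each residual is the image minus a low-pass (here: Gaussian)
    denoised version; averaging the residuals suppresses random
    noise while the sensor's multiplicative pattern accumulates.
    Real pipelines use wavelet denoising and maximum-likelihood
    weighting -- this is only a didactic sketch.
    """
    residuals = [img.astype(np.float64) - gaussian_filter(img.astype(np.float64), sigma=sigma)
                 for img in flat_images]
    return np.mean(residuals, axis=0)
```

Flat (uniformly lit) images are preferred for this step because they contain little scene content, so the residual is dominated by the sensor pattern rather than by image edges.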

Teams can use this corpus of images as a starting point for developing their forensic algorithm, whose performance will later be measured on the testing dataset.


Forensic algorithm development and submission

The deadline for submitting the code has been postponed to February 28 at 23:59!

By this date, each team must send the forensic algorithm they developed, which will be used to validate their results on the testing dataset. The algorithm should take as input a test-image filename and produce as output a tampering map of the same size as the input image.
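The required input/output contract could look like the following skeleton (shown in Python, though any of the accepted environments is fine). The function name `get_maps` matches the `get_maps.*` naming mentioned in the validation phase below; the all-zeros map is only a placeholder for the team's actual detector.

```python
import numpy as np
from PIL import Image

def get_maps(image_filename):
    """Challenge I/O contract: filename in, same-size tampering map out.

    The detection logic itself is up to each team; here a zero map
    (i.e., "nothing tampered") is returned purely as a placeholder.
    """
    img = np.array(Image.open(image_filename))
    h, w = img.shape[:2]
    tampering_map = np.zeros((h, w), dtype=np.uint8)
    # ... replace with the team's forensic analysis (e.g., PRNU
    # correlation tests on sliding blocks) ...
    return tampering_map
```

Whatever the internal method, returning a map with exactly the input image's height and width is what makes the submitted code verifiable during the validation phase.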

There are no restrictions in the design of the algorithm.

You are free to use any programming language and environment. Matlab is a very good choice because of its wide variety of easy-to-use image processing tools. Python, R, Octave, and Scilab, also combined with OpenCV, are good options too.


Result submission and validation

After the release of the testing dataset (composed only of forged images), each team will run their submitted code on the testing images to obtain the estimated tampering maps. Teams must then submit these tampering maps by March 9 at 23:59; the maps will be used to evaluate the performance of their methods.

To ensure that no fine-tuning is performed on the testing set, in this phase we will check that the maps submitted by each team between March 1 and March 9 correspond to the ones we obtain by running the get_maps.* code that team submitted by February 28. Significant discrepancies will lead to penalties for the team.

Performance will be measured by comparing each estimated tampering map with the corresponding ground-truth map in terms of F-measure. The final ranking will be established by computing, for each team, the F-measure on every test image, discarding the 5% of images with the lowest F-measure, and averaging the remaining values.
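The scoring rule above can be sketched as follows. This is our reading of the rule, not official scoring code: in particular, how a non-integer 5% cutoff is rounded is an assumption (we keep the top 95%, rounded up).

```python
import numpy as np

def f_measure(est, gt):
    """F-measure (F1) between a binary estimated map and the ground truth."""
    est, gt = est.astype(bool), gt.astype(bool)
    tp = np.logical_and(est, gt).sum()
    fp = np.logical_and(est, ~gt).sum()
    fn = np.logical_and(~est, gt).sum()
    if tp == 0:
        return 0.0  # no true positives -> precision or recall is 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def ranking_score(f_values):
    """Discard the lowest 5% of per-image F-measures, average the rest.

    Rounding of the 5% cutoff is an assumption of this sketch.
    """
    f_sorted = np.sort(np.asarray(f_values, dtype=float))
    keep = int(np.ceil(0.95 * len(f_sorted)))
    return float(np.mean(f_sorted[len(f_sorted) - keep:]))
```

Discarding the worst 5% of images makes the ranking robust to a handful of pathological test cases where every method fails.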


Prize and evaluation


The team that obtains the best performance on the testing dataset will be awarded a GoPro Hero 5! Good luck 🙂


Moreover, the competing teams will have the chance to present their methodology during the final event of the project, which will take place in Innsbruck on March 22-23!

For the purposes of study credits and the corresponding marks, the evaluation will be based not only on the accuracy of each team's algorithm on the testing set, but also on the quality of their presentation and the originality of the proposed methodology.