The competition is open to all interested bachelor's, master's and PhD students enrolled at the Department of Information Engineering and Computer Science (DISI) of the University of Trento and at the Institut für Informatik of the University of Innsbruck.
Participation in the challenge counts as study credits for the Multimedia Security course (University of Trento) and the Information Security I Proseminar (University of Innsbruck). For further information, please contact Prof. Giulia Boato and Prof. Rainer Böhme, respectively.
Teams may have at most 4 members; single participants are also allowed.
The development dataset is composed of forged images and the corresponding ground-truth tampering maps: binary images in which nonzero pixels indicate the forged area.
All the images contain a forgery, created from a background image either by applying a localized processing operation or by superimposing an external object. Except for a few cases, the background images (i.e., the original images before the creation of the forgery) were taken with 4 cameras. We also provide original flat images taken with these cameras, which can be used to accurately extract the PRNU profiles to be exploited in the forensic analysis.
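A common way to estimate a camera's PRNU profile is to average the denoising residuals of many flat images from that camera. The following is only a minimal sketch of that idea: the function name is our own, and the Gaussian filter is a simple stand-in for the wavelet-based denoisers typically used in the PRNU literature.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # simple stand-in for a wavelet denoiser

def prnu_fingerprint(flat_images):
    """Estimate a PRNU fingerprint from a list of grayscale flat-field
    images (2-D float arrays, all from the same camera)."""
    num = np.zeros_like(flat_images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img in flat_images:
        img = img.astype(np.float64)
        # Noise residual W = I - denoise(I)
        residual = img - gaussian_filter(img, sigma=2)
        # Maximum-likelihood-style estimator: sum(W * I) / sum(I^2)
        num += residual * img
        den += img * img
    return num / np.maximum(den, 1e-8)
```

Flat (uniformly lit) images are preferred here because image content leaks into the residual; averaging many content-free frames suppresses everything except the sensor's multiplicative noise pattern.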
The teams can use this corpus of images as a starting point for developing their forensic algorithm, whose performance will later be measured on the testing dataset. Go to the Download page for detailed information!
Forensic algorithm development and submission
By the 26th of February, each team must submit the forensic algorithm it developed, which will be used to validate the team's results on the testing dataset. The algorithm should take the filename of a test image as input and produce as output a tampering map of the same size as the input image.
There are no restrictions on the design of the algorithm.
You are free to use any programming language and environment. Matlab is a very good choice thanks to its wide variety of easy-to-use image processing tools; Python, R, Octave and Scilab, possibly combined with OpenCV, are good options too.
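In Python, for example, the required interface could look like the sketch below. The function name is our own, and the thresholding rule inside is a deliberately naive placeholder for illustration, not a real forgery detector; what matters is the contract: filename in, same-size binary map out.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def detect_forgery(image_filename):
    """Take a test image filename and return a binary tampering map
    with the same height and width as the input image."""
    img = np.asarray(Image.open(image_filename).convert("L"), dtype=np.float64)
    # Placeholder decision rule (NOT a real detector): flag pixels that
    # deviate strongly from their local mean.
    local_mean = uniform_filter(img, size=15)
    tampering_map = (np.abs(img - local_mean) > 25).astype(np.uint8)
    return tampering_map  # nonzero pixels mark the estimated forged area
```

A real submission would replace the body with an actual forensic analysis (e.g. PRNU correlation against the camera profiles extracted from the flat images), but should keep this input/output shape.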
Check the Hints & Tips page and get inspired! 🙂
Result submission, evaluation and validation
After the release of the testing dataset (composed only of forged images), the teams will run their submitted code on the testing images to obtain the estimated tampering maps. By the 7th of March, the obtained maps must be submitted; they will be used to evaluate the performance of each method.
In this phase, the organizers will validate the results by verifying that the submitted code was actually used to produce the maps. Significant discrepancies will lead to penalties for the team.
The performance will be measured by comparing each estimated tampering map with the corresponding ground-truth map in terms of F-measure. The final ranking will be established by computing, for each team, the F-measure on every test image, discarding the 5% of images with the lowest F-measure, and averaging the remaining values. Go to the Download page for detailed information!
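For reference, the per-image F-measure and the ranking score described above can be computed as in this sketch, assuming a standard pixel-wise precision/recall definition over binarized maps (the function names are our own):

```python
import numpy as np

def f_measure(estimated, ground_truth):
    """Pixel-wise F-measure between two binary tampering maps."""
    est = estimated.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.logical_and(est, gt).sum()   # true positives
    fp = np.logical_and(est, ~gt).sum()  # false positives
    fn = np.logical_and(~est, gt).sum()  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def team_score(f_values):
    """Average the per-image F-measures after discarding the lowest 5%."""
    f_sorted = np.sort(np.asarray(f_values, dtype=np.float64))
    k = int(np.floor(0.05 * len(f_sorted)))  # number of worst images dropped
    return float(f_sorted[k:].mean())
```

Discarding the worst 5% makes the ranking robust to a handful of pathological test images on which every method fails.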
The best performing team will be awarded a GoPro Hero 5! Good luck 🙂