Download and submission

Development dataset

The development dataset can be downloaded here.

The forged images are contained in the subfolder dev-dataset-forged and are numbered from 1 to 800. All of them are 1500×2000 color images, in either .tif or .jpg format. The corresponding tampering maps are stored in the subfolder dev-dataset-maps, with the same base filename (in .bmp format).
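
For instance, a minimal MATLAB sketch for loading an image/map pair; the dev_XXXX naming is inferred from the update note below:

    % Load one forged image and its ground-truth tampering map.
    % Assumes the dev_XXXX naming convention (e.g., dev_0105) and .bmp maps.
    idx  = 105;
    base = sprintf('dev_%04d', idx);
    tif  = fullfile('dev-dataset-forged', [base '.tif']);
    jpg  = fullfile('dev-dataset-forged', [base '.jpg']);
    if exist(tif, 'file')
        img = imread(tif);
    else
        img = imread(jpg);               % each image is either .tif or .jpg
    end
    map = imread(fullfile('dev-dataset-maps', [base '.bmp'])) > 0;  % logical map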

All the images contain a forgery, created starting from a background image either by applying a localized processing or by superimposing an external object. Except for a few cases, the background images (i.e., the original images before the creation of the forgery) have been taken with 4 cameras. We also provide original flat images taken with these cameras (in the subfolders flat-camera-1, flat-camera-2, flat-camera-3 and flat-camera-4), which can be used to accurately extract the PRNU (photo-response non-uniformity) profiles to be exploited in the forensic analysis.
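
As an illustration only (not the official procedure), here is a minimal MATLAB sketch of the standard maximum-likelihood PRNU estimate from flat images; it uses wiener2 from the Image Processing Toolbox as a simple stand-in for the wavelet-based denoiser commonly used in practice:

    % Sketch: maximum-likelihood PRNU estimate from the flat images of one camera.
    % The Wiener denoiser is a stand-in; the file extension may need adjusting.
    files = dir(fullfile('flat-camera-1', '*.jpg'));
    num = 0; den = 0;
    for i = 1:numel(files)
        I = double(rgb2gray(imread(fullfile(files(i).folder, files(i).name))));
        W = I - wiener2(I, [3 3]);       % noise residual
        num = num + W .* I;              % accumulate ML numerator
        den = den + I .^ 2;              % accumulate ML denominator
    end
    K = num ./ den;                      % estimated PRNU profile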

NOTE: the development dataset was updated on 23 January 2018. Those who downloaded the dataset prior to this date can simply replace the following images in the dev-dataset-maps folder: ‘dev_0105.bmp’, ‘dev_0129.bmp’, ‘dev_0199.bmp’, ‘dev_0389.bmp’, ‘dev_0401.bmp’, ‘dev_0604.bmp’, ‘dev_0649.bmp’, ‘dev_0790.bmp’.


F-measure computation

The performance of the algorithms will be measured in terms of the F-measure (or F1-score) between the ground-truth and the estimated tampering map. This value combines the counts of true positives TP (forged pixels correctly identified), false negatives FN (forged pixels erroneously identified as non-forged) and false positives FP (non-forged pixels erroneously identified as forged) as F = 2·TP / (2·TP + FP + FN).
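
For reference, a minimal MATLAB sketch of this computation (the official scoring code mentioned below is authoritative; the function name here is illustrative):

    function F = f_measure(gt, est)
    % F-measure between a ground-truth and an estimated binary tampering map.
    % gt, est: logical matrices of the same size (true = forged pixel).
    TP = nnz(gt & est);
    FP = nnz(~gt & est);
    FN = nnz(gt & ~est);
    F  = 2*TP / (2*TP + FP + FN);
    end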

When the teams submit the estimated tampering maps on the test dataset, we will compute the F-measure for each test image, discard the 5% of images that yield the lowest F-measure values, and average the remaining ones. This average F-measure will be used to determine the final ranking. In order for the teams to self-evaluate their algorithms, we provide MATLAB code for computing the F-measure starting from a ground truth and an estimated tampering map.
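
Under that protocol, the final score would be computed along these lines (a sketch, assuming F is the vector of per-image F-measures):

    % Drop the 5% of images with the lowest F-measure, average the rest.
    F     = sort(F, 'descend');
    kept  = F(1:ceil(0.95 * numel(F)));
    score = mean(kept);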

Note: do not be discouraged by low values of the F-measure! Even when an algorithm produces visually interpretable tampering maps, the F-measure can drop substantially (well below 0.5) due to inaccuracies at the boundaries or to isolated false alarms/missed detections. Moreover, in the evaluation phase we will compute the F-measure using both the submitted tampering maps and their inverted versions (swapping black and white pixels), and consider the maximum value obtained.
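
In terms of the f_measure sketch above, this amounts to evaluating both polarities of the map and keeping the better one:

    % Score the submitted map and its black/white-inverted version.
    F = max(f_measure(gt, est), f_measure(gt, ~est));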


Code submission

The deadline for submitting the code has been postponed to February 28 at 23:59!

To submit the final code, please send an email to with the subject [CODE SUBMISSION] <team-name>.

In the message, you should specify the team name, the team members, and their email contacts.

Moreover, you must include a download link pointing to the submitted material (for instance via Dropbox, Google Drive, file transfer services, etc.). It is fine for the link to be accessible only via a private URL, but it must not require username/password access.

The material shared through the link should consist of:

  • a function/script called get_map.* that takes a generic image as input and creates the corresponding tampering map (a skeleton sketch follows this list);
  • a subfolder named DEMO_RESULTS containing the binary tampering maps in .bmp format obtained by applying get_map.* to these images;
  • a text file named README with instructions on how to use get_map.* and information on the environment and requirements needed to run the code. Please remember that get_map.* should be as self-contained as possible;
  • (optional) a subfolder named SUPPORT containing any data required by get_map.* (such as pre-trained models, auxiliary functions, etc.).
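
As a rough MATLAB sketch of the expected entry point (the body is a placeholder, not an actual detection method):

    function map = get_map(image_path)
    % Reads a generic image and produces the corresponding binary tampering
    % map, saved in .bmp format. Replace the placeholder with your pipeline.
    I   = imread(image_path);
    map = false(size(I, 1), size(I, 2));   % placeholder: all pixels authentic
    % ... your detection pipeline here (e.g., PRNU-based analysis) ...
    [~, name] = fileparts(image_path);
    imwrite(map, [name '.bmp']);
    end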

The final submission should look like this:
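
For example, a MATLAB submission might be organized as:

    get_map.m
    README
    DEMO_RESULTS/
        *.bmp        (maps produced by get_map.m)
    SUPPORT/         (optional)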

Note: unless strictly required by your methodology, the submitted material should not include the whole training set, but only the data necessary to create the map (e.g., pre-trained classifiers, estimated parameters, etc.).

For specific needs or inquiries, please contact