SAFE-Video-2025

SAFE Synthetic Video Detection Challenge 2025

👉 All participants are required to register for the competition by filling out this Google Form

📊 Overview · 🥇 Detailed Leaderboard · 🏆 Prize · 📢 Results Sharing · 📝 Tasks · 🤖 Model Submission · 📂 Create Model Repo · 🔘 Submit · 🆘 Helpful Stuff · 🗂 Training Data · 🔍 Evaluation · ⚖️ Rules

📣 Updates

- 2025-08-04
- 2025-07-30
- 2025-07-18
- 2025-07-01

📊 Overview

ULRI’s Digital Safety Research Institute is excited to announce the SAFE: Synthetic Video Detection Challenge at the Authenticity and Provenance in the Age of Generative AI (APAI) workshop at ICCV 2025.

The SAFE: Synthetic Video Detection Challenge will drive innovation in detecting and attributing synthetic and manipulated video content. It will focus on several critical dimensions of synthetic video detection performance, including generalizability across diverse visual domains, robustness against evolving generative video techniques, and scalability for real-world deployment. As generative video technologies advance rapidly—with increasing accessibility and sophistication of image-to-video, text-to-video, and adversarially optimized pipelines—the need for effective and reliable solutions to authenticate video content has become urgent. We aim to mobilize the research community to address this need and strengthen global efforts in media integrity and trust.

❗Important: To ensure the challenge emphasizes generalizable detection methods, approaches that rely on analyzing metadata, file formats, or similar non-content signals are not eligible.

📅 Schedule

🥇 Detailed Leaderboard

https://safe-challenge-video-challenge-leaderboard.hf.space

🏆 Prize

The most promising solutions may be eligible for research grants to further advance their development. A travel stipend will be available to the highest-performing teams to support attendance at the APAI workshop at ICCV 2025, where teams can showcase their technical approach and results. Remote participation options will also be available.

📢 Results Sharing

In addition to leaderboard rankings and technical evaluations, participants will have the opportunity to share insights, methodologies, and lessons learned through an optional session at the APAI Workshop at ICCV 2025. Participants will be invited to present at the workshop, showcasing their approach and findings to fellow researchers, practitioners, and attendees. To facilitate this engagement, we will collect 250-word abstracts in advance. These abstracts should briefly describe your method, key innovations, and any noteworthy performance observations. Submission details and deadlines will be announced on the challenge website. This is a valuable opportunity to contribute to community knowledge, exchange ideas, and build collaborations around advancing synthetic video detection.

🧠 Challenge Tasks

The SAFE: Synthetic Video Challenge at APAI @ ICCV 2025 will consist of several tasks. The competition will be fully blind: no data will be released, and participants will need to submit their models on our Hugging Face Space. Only a small sample dataset will be available for debugging purposes. You are free to use anything you want to train your models; we provide some pointers to publicly available datasets below. Each team will have a limited number of submissions per day. If your submission fails due to an error, reach out to us (Discord server, SafeChallenge2025@gmail.com) and we can help debug and reset the limit.

All tasks will be hosted in our SAFE Video Challenge Collection on the Hugging Face Hub 🤗.

❗Important: submissions that rely on analyzing metadata, file formats, etc. are not eligible.

🚀 Pilot Task (✅ Open): Detection of Synthetic Video Content

🎯 Task 1 (✅ Open): Detection of Synthetic Video Content

🔮 Additional tasks will be announced leading up to ICCV 2025. These may explore areas such as manipulation detection, attribution of generative models, laundering detection, or characterization of generative content. Stay tuned for updates on new challenge tracks and associated datasets.

🤖 Model Submission

This is a script-based competition. No data will be released before the competition; a subset of the data may be released after it ends. The competition will be hosted on the Hugging Face Hub. There will be a limit on the number of submissions per day.

📂 Create Model Repo

Participants will be required to submit their model for evaluation on the dataset by creating a Hugging Face model repository. Please use the example model repo as a template.
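
As a rough illustration (not the official template), a repo can be created and populated with the `huggingface_hub` Python client. The repo name and local folder below are placeholders; the required file layout comes from the example model repo:

```python
# Hypothetical sketch: create a model repo and upload submission files with the
# huggingface_hub client. The repo name and folder path are placeholders; follow
# the example model repo template for the actual required file layout.
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login`
repo_id = "your-username/safe-video-submission"  # placeholder repo name

# Create the repo if it does not exist yet, then upload the submission folder.
api.create_repo(repo_id=repo_id, repo_type="model", private=True, exist_ok=True)
api.upload_folder(folder_path="./my_submission", repo_id=repo_id, repo_type="model")
```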

🔘 Submit

Once your model is ready, it’s time to submit:

🆘 How to get help

We provide an example model submission repo and a local debug example:
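
Until you have the official debug example in hand, a local sanity check might look like the sketch below. It assumes (hypothetically) that your model exposes a `predict(video_path)` function returning the probability that a clip is synthetic, and that sample clips sit in a local `sample_videos/` folder; the actual interface is defined by the example submission repo.

```python
# Hypothetical local debug loop; the official harness and model interface may differ.
from pathlib import Path

from my_model import predict  # hypothetical entry point: path -> synthetic probability

SAMPLE_DIR = Path("sample_videos")  # placeholder folder of debug clips

for video_path in sorted(SAMPLE_DIR.glob("*.mp4")):
    score = predict(str(video_path))
    label = "synthetic" if score >= 0.5 else "real"
    print(f"{video_path.name}: score={score:.3f} -> {label}")
```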

🗂️ Training data

We are not providing any training data, so feel free to use anything you want to train your models. Here are a few pointers to existing datasets:

🔍 Evaluation

All submissions will be ranked by balanced accuracy, defined as the average of the true positive rate and the true negative rate.
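
For concreteness, here is a minimal sketch of the metric, assuming binary labels with 1 = synthetic and 0 = real (the official scoring code may differ):

```python
# Minimal sketch of balanced accuracy for binary labels (1 = synthetic, 0 = real).
# The official scoring implementation may differ in details.
def balanced_accuracy(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn)  # true positive rate: recall on synthetic videos
    tnr = tn / (tn + fp)  # true negative rate: recall on real videos
    return 0.5 * (tpr + tnr)

# Example: catching half the synthetic clips while keeping all real clips correct
# gives TPR = 0.5, TNR = 1.0, balanced accuracy = 0.75.
print(balanced_accuracy([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.75
```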

⚖️ Rules

To ensure a fair and rigorous evaluation process for the Synthetic and AI Forensic Evaluations (SAFE) Synthetic Video Challenge, all participants must adhere to the following rules:

  1. Leaderboard:
    • The competition will maintain both a public and a private leaderboard.
    • The public leaderboard will show error rates for each anonymized source.
    • The private leaderboard will be used for the final evaluation and will use data that does not overlap with the public leaderboard.
  2. Submission Limits:
    • Participants will be limited to a fixed number of submissions per day.
  3. Confidentiality:
    • Participants agree not to publicly compare their results with those of other participants until those participants’ results have been published outside the conference venue.
    • Participants are free to use and publish their own results independently.
  4. Compliance:
    • Participants must comply with all rules and guidelines provided by the organizers.
    • Failure to comply with the rules may result in disqualification from the competition and exclusion from future evaluations.

By participating in the SAFE challenge, you agree to adhere to these evaluation rules and contribute to the collaborative effort to advance the field of video forensics.