SAFE-Video-2025

SAFE Synthetic Video Detection Challenge 2025


👉 All participants are required to register for the competition by filling out this Google Form

📊 Overview · 🥇 Detailed Leaderboard · 🏆 Prize · 📢 Results Sharing and Poster Session · 📝 Tasks · 📈 Data · 🤖 Model Submission · 📂 Create Model Repo · 🔘 Submit · 🆘 Helpful Stuff · 🔍 Evaluation · ⚖️ Rules

📣 Updates

📊 Overview

To advance the state of the art in video forensics, we are launching a funded evaluation challenge at the Authenticity and Provenance in the Age of Generative AI (APAI) workshop at ICCV 2025. This challenge will drive innovation in detecting and attributing fully synthetic and manipulated video content. It will focus on several critical dimensions, including generalizability across diverse visual domains, robustness against evolving generative video techniques, and scalability for real-world deployment. As generative video technologies rapidly advance—with increasing accessibility and sophistication of image-to-video, text-to-video, and adversarially optimized pipelines—the need for effective and reliable solutions to authenticate visual content has become urgent. Sponsored by the ULRI Digital Safety Research Institute, this initiative aims to mobilize the research community to confront these challenges and strengthen global efforts in media integrity and trust.


🥇 Detailed Leaderboard

coming soon …

🏆 Prize

The most promising solutions may be eligible for research grants to further advance their development. A travel stipend will be available to the highest-performing teams to support attendance at the APAI workshop at ICCV 2025, where teams can showcase their technical approach and results.

📢 Results Sharing and Poster Session

In addition to leaderboard rankings and technical evaluations, participants will have the opportunity to share insights, methodologies, and lessons learned through an optional poster session at the APAI Workshop at ICCV 2025. Participants will be invited to present a poster at the workshop, showcasing their approach and findings to fellow researchers, practitioners, and attendees. To facilitate this engagement, we will collect 250-word abstracts in advance. These abstracts should briefly describe your method, key innovations, and any noteworthy performance observations. Submission details and deadlines will be announced on the challenge website. This is a valuable opportunity to contribute to community knowledge, exchange ideas, and build collaborations around advancing synthetic video detection.

🧠 Challenge Tasks

The SAFE: Synthetic Video Challenge at APAI @ ICCV 2025 will consist of several tasks. This competition will be fully blind: no training or evaluation data will be released, only a small sample dataset for debugging purposes. Each team will have a limited number of submissions per day. If your submission fails due to an error, reach out to us (Discord server, SafeChallenge2025@gmail.com) and we can help debug and reset this limit.

All tasks will be hosted in our SAFE Video Challenge Collection on the Hugging Face Hub 🤗.

🚀 Pilot Task (✅ Open): Detection of Synthetic Video Content

🎯 Task 1 (🚧 Under Construction): Detection of Synthetic Video Content

🔮 Additional tasks will be announced leading up to ICCV 2025. These may explore areas such as manipulation detection, attribution of generative models, laundering detection, or characterization of generative content. Stay tuned for updates on new challenge tracks and associated datasets.

🤖 Model Submission

This is a script-based competition: no data will be released before the competition, though a subset of the data may be released afterwards. The competition will be hosted on the Hugging Face Hub, and there will be a limit on the number of submissions per day.

📂 Create Model Repo

Participants will be required to submit their model for evaluation on the dataset by creating a Hugging Face model repository. Please use the example model repo as a template.
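
The exact files a submission needs are defined by that example repo, but as a rough sketch, creating and populating a model repository with the `huggingface_hub` Python library might look like this (the repo name and local folder below are illustrative assumptions):

```python
# Minimal sketch: create a private model repo and upload a local submission
# folder with huggingface_hub. Follow the official example model repo for the
# exact files and entry point the evaluator expects.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login`

repo_id = "your-username/safe-video-submission"  # hypothetical repo name
api.create_repo(repo_id=repo_id, repo_type="model", private=True, exist_ok=True)

# Upload everything your evaluation script needs: code, weights, requirements.
api.upload_folder(
    repo_id=repo_id,
    repo_type="model",
    folder_path="./my_submission",  # hypothetical local folder
)
```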

🔘 Submit

Once your model is ready, it’s time to submit:

🆘 How to get help

We provide an example model submission repo and a local debug example:
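
Those examples are the authoritative reference. As a rough illustration of what a local debug run over the sample dataset might look like, here is a sketch in which `MyDetector`, its `predict` method, and the `sample_data/` layout are all hypothetical placeholders:

```python
# Hypothetical local debug loop: score the released sample clips before
# submitting. MyDetector and its predict() interface are placeholders --
# use the interface defined in the official example submission repo.
from pathlib import Path

from my_submission.model import MyDetector  # hypothetical entry point

detector = MyDetector()
for clip in sorted(Path("sample_data").glob("*.mp4")):  # hypothetical layout
    score = detector.predict(str(clip))  # e.g. probability the clip is synthetic
    print(f"{clip.name}: {score:.3f}")
```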

🔍 Evaluation

All submissions will be ranked by balanced accuracy, defined as the average of the true positive rate (TPR) and the true negative rate (TNR).
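
For binary real-vs-synthetic labels, this is the same as macro-averaged recall over the two classes. A minimal sketch of the metric in Python (the label convention, 1 = synthetic and 0 = real, is an assumption):

```python
# Balanced accuracy = (TPR + TNR) / 2.
# Label convention assumed here: 1 = synthetic, 0 = real.
from sklearn.metrics import balanced_accuracy_score

y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
tpr = tp / (tp + fn)  # true positive rate: recall on synthetic videos
tnr = tn / (tn + fp)  # true negative rate: recall on real videos

# The manual average matches scikit-learn's implementation.
assert abs((tpr + tnr) / 2 - balanced_accuracy_score(y_true, y_pred)) < 1e-9
print(f"balanced accuracy = {(tpr + tnr) / 2:.3f}")  # 0.667
```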

⚖️ Rules

To ensure a fair and rigorous evaluation process for the Synthetic and AI Forensic Evaluations (SAFE) Synthetic Video Challenge, the following rules must be adhered to by all participants:

  1. Leaderboard:

    • The competition will maintain both a public and a private leaderboard.
    • The public leaderboard will show error rates for each anonymized source.
    • The private leaderboard will be used for the final evaluation and will use data that does not overlap with the public leaderboard.
  2. Submission Limits:

    • Participants will be limited to a fixed number of submissions per day.
  3. Confidentiality:

    • Participants agree not to publicly compare their results with those of other participants until those participants' results have been published outside of the conference venue.
    • Participants are free to use and publish their own results independently.
  4. Compliance:

    • Participants must comply with all rules and guidelines provided by the organizers.
    • Failure to comply with the rules may result in disqualification from the competition and exclusion from future evaluations.

By participating in the SAFE challenge, you agree to adhere to these evaluation rules and contribute to the collaborative effort to advance the field of video forensics.