DGM-Image Statistics challenge

Organized by challenge-organizer


Overview:

Submission deadline extended to Monday, May 29th at 11:59 p.m. UTC.

An AAPM Grand Challenge

The American Association of Physicists in Medicine (AAPM) is sponsoring the DGM-Image challenge – a Grand Challenge on deep generative modeling for learning medical image statistics, leading up to the 2023 AAPM Annual Meeting. The challenge invites participants to develop or refine generative models that accurately reproduce image statistics relevant to medical imaging applications. One individual from each of the two top-performing teams will be invited to present their results during the AAPM 2023 Annual Meeting Grand Challenge Symposium and will receive complimentary registration for the meeting. The findings and insights from the challenge will be published in a challenge report.

[Figure: challenge overview image, DGM_overview_photo_1.png]

Background:

Over the past few years, deep generative models, such as generative adversarial networks (GANs) and diffusion models, have gained immense popularity for their ability to generate perceptually realistic images [1,2,3]. Deep generative models (DGMs) are being actively explored for medical imaging applications such as data sharing, image restoration, reconstruction, translation, and objective image quality assessment [4,5]. State-of-the-art DGMs trained on medical image datasets have been shown to produce images that look highly realistic but may nevertheless contain medically impactful errors [4,6]. Unlike in medical image classification and, more recently, medical image reconstruction, there is a glaring lack of standardized evaluation pipelines and benchmarks for developing and assessing DGMs for medical image synthesis. This challenge aims to establish DGMs that can faithfully reproduce image statistics relevant to medical imaging. Such models, once validated, can play a key role in the evaluation of imaging systems as well as in the training and/or testing of AI/ML algorithms, especially for applications where clinical data are limited.

Objective:

The goal of this challenge is to facilitate the development and refinement of generative models that can reproduce several key image statistics known to be useful for a wide variety of medical image assessments. Through this challenge, a custom image dataset of coronal slices from anthropomorphic breast phantoms based on the VICTRE toolchain [7] will be provided to participants for training their generative models. The challenge will identify the generative model, learned from the provided data, that most accurately reproduces the training-data distribution of morphological and intensity-derived statistical measures as well as breast-density-relevant features, while still producing perceptually realistic images and avoiding overfitting/memorization of the training data. An eventual goal of this challenge is to provide a dataset, a standardized evaluation procedure, and a benchmark for evaluating generative models for medical image synthesis.

This challenge will be divided into two phases. Phase 1 will require participants to submit a dataset of 10,000 images from their generative model along with a 1-page description of their approach; these will be used to compute performance measures and populate the leaderboard. Phase 2 will require participants to submit their code for generating an image dataset. In Phase 2, code submissions from at least the top 10 teams will be manually checked, and images will be re-generated using the participants' code to verify the results.

Get Started

Register for access by following the instructions on the 'Participate' tab, where further information is also available.

Important Dates

  • Jan 13th, 2023: Challenge announced, participant registration opens

  • Jan 16th, 2023: Training set release

  • May 19th, 2023: Deadline for Phase 1 submission (was May 1st)

  • May 26th, 2023: Deadline for Phase 2 submission (was May 8th)

  • June 16th, 2023: Participants and top-ranked finishers contacted with results

 

Organizers (arranged alphabetically)

Mark A. Anastasio¹, Frank J. Brooks¹, Rucha M. Deshpande², Dimitrios S. Gotsis¹, Varun A. Kelkar¹, Prabhat K.C.³, Kyle J. Myers⁴, Rongping Zeng³

¹ University of Illinois at Urbana-Champaign

² Washington University in St. Louis

³ US Food and Drug Administration

⁴ Puente Solutions, LLC

 

Contact: dgm.image@gmail.com

Evaluation metrics:

Our evaluation pipeline will be divided into two stages:

  1. Stage I: In stage I, the submitted models will be evaluated using well-known perceptual measures such as the Fréchet Inception Distance (FID) score. A custom memorization metric will also be used. Models (such as memory GANs) that memorize the training dataset, or that fail to produce perceptually realistic images in terms of the FID score, will be disqualified based on a suitable threshold (a minimal sketch of such checks appears after this list). The models that qualify in this stage will be further evaluated in stage II.
  2. Stage II: In this stage, the evaluation will be based on the following statistical image features that are known to be useful for a wide variety of medical image assessments:
    1. Intensity-derived statistics such as gray-level texture features,
    2. Morphological statistics, such as region properties, skeleton statistics, and fractal dimension,
    3. Breast-density relevant features.
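
For illustration, a minimal sketch of such stage I checks is given below (Python, NumPy/SciPy), assuming the generated and training images have already been mapped to Inception-style feature vectors. All names are illustrative; in particular, the challenge's memorization metric is custom and is not reproduced here.

import numpy as np
from scipy import linalg
from scipy.spatial.distance import cdist

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors:
    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2))."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c_r @ c_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop negligible imaginary parts from sqrtm
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(c_r + c_f - 2.0 * covmean))

def nearest_training_distance(feats_fake, feats_train):
    """Distance from each generated sample to its closest training sample;
    unusually small values are one simple red flag for memorization."""
    return cdist(feats_fake, feats_train).min(axis=1)

A model would then be screened against suitable thresholds on both quantities before advancing to stage II.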

 

A summary metric derived from the above statistics will be used to score and rank participants and declare a winner and a runner-up. Appropriate tie-breaking strategies will be developed if needed. To assist participants in developing their best models, we provide a basic implementation of stage II of our evaluation (the final evaluation may differ slightly). We also provide the scores for a baseline model (StyleGAN2) along with its trained weights, which participants are allowed to use for initialization/transfer learning.
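
To make the stage II feature categories concrete, the sketch below computes an example of each per image with scikit-image. The thresholding and the particular features chosen are illustrative assumptions (fractal dimension is omitted for brevity); the basic implementation provided by the organizers remains the reference.

from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def stage2_features(img):
    """Illustrative statistics for one 8-bit grayscale image (uint8, 0-255)."""
    # 1. Intensity-derived: gray-level co-occurrence (GLCM) texture features.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "homogeneity", "correlation")}

    # 2. Morphological: region properties and skeleton statistics of a
    #    crudely thresholded mask (placeholder segmentation, assumed non-empty).
    mask = img > img.mean()
    largest = max(regionprops(label(mask)), key=lambda r: r.area)
    feats["area"] = float(largest.area)
    feats["eccentricity"] = float(largest.eccentricity)
    feats["skeleton_px"] = int(skeletonize(mask).sum())

    # 3. Breast-density-style feature: fraction of bright ("dense") pixels.
    feats["dense_fraction"] = float(mask.mean())
    return feats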

 

Required Items for Phase 1 Submissions

  1. 10,000 images generated from your model.

  2. A 1-page description of your approach.

Required Items for Phase 2 Submissions

  1. 10,000 images generated from your model.

  2. Code, trained model weights, and a bash script that runs the model to generate 10,000 images, along with a well-tested Dockerfile that containerizes your code (a minimal sketch of a generation entry point follows this list).

  3. A 1-page description of your approach. This must contain a time estimate for generating 10,000 images on a single Nvidia V100 GPU.
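
For reference, below is a minimal sketch of the kind of generation entry point such a bash script might invoke, assuming a PyTorch generator and a recent torchvision. The module name, Generator class, checkpoint file, and latent dimension are hypothetical placeholders, not part of the challenge materials.

import os
import torch
from torchvision.utils import save_image

from model import Generator  # hypothetical: your own generator definition

OUT_DIR, N_IMAGES, BATCH, LATENT_DIM = "generated", 10_000, 100, 512

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    gen = Generator().to(device)
    gen.load_state_dict(torch.load("weights.pt", map_location=device))
    gen.eval()
    os.makedirs(OUT_DIR, exist_ok=True)
    with torch.no_grad():
        for start in range(0, N_IMAGES, BATCH):
            z = torch.randn(BATCH, LATENT_DIM, device=device)
            imgs = gen(z)  # assumed shape (BATCH, 1, H, W), values in [-1, 1]
            for i, img in enumerate(imgs):
                save_image(img, os.path.join(OUT_DIR, f"{start + i:05d}.png"),
                           normalize=True, value_range=(-1, 1))

if __name__ == "__main__":
    main()

Timing this loop end to end on a V100 yields the runtime estimate requested in item 3 above and helps verify compliance with the 12-hour limit in the terms below.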

Terms and conditions

  1. Participants must ensure that their model can generate 10,000 images within 12 hours on a single Nvidia V100 GPU. If it takes >12 hours to generate 10,000 images, the model may be disqualified depending upon our computational budget.

  2. At the end of the Challenge, code submissions from at least the top 10 teams will be manually checked, and images will be re-generated using the participants' code to verify the results.

  3. Participants will share the names of their team members and contact information during registration; however, this information will not be made public on the leaderboard.

  4. Participants are only allowed to use the provided training data and/or the provided weights to develop their models.

  5. Participants are not allowed to submit images from the training data or other forms of memory models. Submissions are restricted to machine learning models that are estimated from the provided training data. This will be checked during the code review at the end of the competition.

  6. Descriptions of participants’ methods and results may become part of presentations, publications, and subsequent analyses derived from the Challenge (with proper attribution to the participants) at the discretion of the Organizers. While methods, results and generated images may become part of Challenge reports and publications, participants may choose not to disclose their identity and remain anonymous for the purpose of these reports.

  7. Please note: one member from each top-ranked team will be awarded complimentary meeting registration to attend AAPM's 2023 Annual Meeting in Houston, TX (July 23-27, 2023); in-person attendance is mandatory. They will present their results and methodology during the Grand Challenge Symposium and participate in the Awards & Honors Ceremony.

Phase 1

Start: Jan. 12, 2023, midnight UTC

Phase 2

Start: May 19, 2023, 11:59 p.m. UTC

Competition Ends

May 29, 2023, 11:59 p.m. UTC
