ATLAS R2.0 - Anatomical Tracings of Lesions After Stroke


Introduction

View the paper on Scientific Data: A large, curated, open-source stroke neuroimaging dataset to improve lesion segmentation algorithms, Liew et al. 2022.

Accurate lesion segmentation is critical in stroke rehabilitation research for the quantification of lesion burden and accurate image processing. Current automated lesion segmentation methods for T1-weighted (T1w) MRIs, commonly used in rehabilitation research, lack accuracy and reliability. Manual segmentation remains the gold standard, but it is time-consuming, subjective, and requires significant neuroanatomical expertise. We previously released ATLAS v1.2 (N=304), an open-source dataset of stroke T1w MRIs and manually segmented lesion masks, to encourage the development of better algorithms. However, many methods developed with ATLAS v1.2 report low accuracy, are not publicly accessible, or are improperly validated, limiting their utility to the field. Here we present ATLAS v2.0 (N=1271), a larger dataset of T1w stroke MRIs and manually segmented lesion masks that includes training (public, n=655), test (masks hidden, n=300), and generalizability (completely hidden, n=316) data. Algorithm development using this larger sample should lead to more robust solutions, and the hidden test and generalizability datasets allow for unbiased performance evaluation via segmentation challenges. We anticipate that ATLAS v2.0 will lead to improved algorithms, facilitating large-scale stroke rehabilitation research.


Task

The goal of this challenge is to empirically evaluate automated methods of lesion segmentation in MR images. Participants are tasked with automatically generating lesion segmentation masks for T1w MR images. The task is divided into two phases:

  1. prediction evaluation (May 1 - Aug 17)
  2. algorithm evaluation (Aug 4 - Aug 17)

In the first phase, participants submit their predictions for the distributed test set. In the second phase, participants submit their segmentation model ("algorithm") as a Docker container, which is then used to generate predictions on a hidden test set.
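
In both phases, submissions are scored by comparing predicted lesion masks against ground-truth masks. As an illustration, the sketch below computes the Dice similarity coefficient, a standard overlap metric for segmentation; the challenge's official metric suite is defined by the organizers' evaluator, so treat this only as an approximation for local sanity checks.

    import numpy as np

    def dice_coefficient(pred, truth):
        """Dice similarity coefficient between two binary lesion masks."""
        pred = np.asarray(pred).astype(bool)
        truth = np.asarray(truth).astype(bool)
        total = pred.sum() + truth.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / total

For NIfTI masks, the inputs would be the voxel arrays of your prediction and the reference mask, resampled to the same space.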

Timeline - Updated as of August 25

  • Training data released: December 2021 (Done!)
  • Phase 1 sample notebook release: April 1 (Done!)
  • Phase 1 automated evaluation release: April 1 (Done!)
  • Phase 1 participant submission open: April 1 (Done!)
  • Phase 2 sample notebook release: May 1 (Done!)
  • Phase 2 automated evaluation release: May 1 (Done!)
  • Phase 2 participant submission open: June 15 (Done!)
  • Phase 2 algorithm test (docker submission) - sanity check phase open: Aug 4 (Done!)
  • Phase 2 algorithm test (docker submission) open: Aug 4 (Done!) 
  • Final deadline for all submissions (automated evaluation and algorithm test via docker submission) for ISLES 2022 (MICCAI): August 17 at midnight Pacific Time (Done!)
  • MICCAI ISLES/ATLAS Workshop at BrainLes: September 18 - results and rankings announced!
  • All leaderboards and evaluations open for ongoing submission: September 19 onward (you may submit to the public test evaluation once per day, and to the hidden test docker evaluation once per month)

*Note: All deadlines are in the Pacific Time Zone.

Getting Started

As part of the MICCAI ISLES 2022 challenge, we provide a sample solution on GitHub to help you get started. The notebook walks you through obtaining the data, loading it, and saving your predictions in the format expected by our automatic evaluator.
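
For orientation, here is a minimal sketch of that load/predict/save loop using nibabel. The file names and the predict_mask function are hypothetical placeholders; consult the sample notebook for the actual data paths and the file-naming convention the evaluator expects.

    import numpy as np
    import nibabel as nib

    def predict_mask(t1w_data):
        # Hypothetical stand-in for a trained segmentation model:
        # returns an all-background mask of the same shape as the input.
        return np.zeros(t1w_data.shape, dtype=np.uint8)

    # Hypothetical file name; the sample notebook documents the actual
    # BIDS-style paths and the naming the evaluator expects.
    t1w_img = nib.load("sub-0001_T1w.nii.gz")
    mask = predict_mask(t1w_img.get_fdata())

    # Save the prediction in the same voxel space as the input by reusing
    # its affine, so the evaluator can align mask and ground truth.
    nib.save(nib.Nifti1Image(mask, affine=t1w_img.affine),
             "sub-0001_pred.nii.gz")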


This challenge is sponsored by AWS and is held as one of the MICCAI 2022 challenges, in conjunction with the ISLES'22 challenge.