Something unclear?

See the FAQ below or get in contact with us:

  1. Join our Discord server: https://discord.gg/XCqqMVV
  2. Or interact with us through the chat widget that connects directly to our Discord (no registration required).

FAQ

Hi, is the competition already over? I did not receive any instructions by email!

The competition is not over yet; it will start on 17.08. Even then, we will not start with the real challenge right away. Before teams can start the main challenge, we require them to work through a quiz-like coding “micro-challenge” with three small tasks. This micro-challenge is intended to let teams familiarize themselves with the Eisen toolkit and the computational ecosystem of our challenge.

So for now we just need to be ready with a local segmentation model on synthetic data, right?

For now, it is recommended to download the sample toy/synthetic dataset and get acquainted with the data. Download the volumes, open them in a medical image viewer such as Slicer or ITK-SNAP, and get familiar with the nature of the data and its implications for the segmentation task. Feel free to already train your favorite models using your favorite frameworks, and perhaps create first benchmark results using pre-trained models from your own research. However, please note that for this challenge you will eventually have to adapt your code and embed your pre-trained models into the Eisen-based software infrastructure, which is built on PyTorch. This is necessary so that you can submit training jobs through our platform without direct access to the data itself. More information on the job submission system will follow at the kickoff event and during the first 1-2 weeks after the kickoff. For now, please note that familiarity with Eisen is required to take part in the main challenge.
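
If you want a concrete local starting point in the meantime, below is a minimal plain-PyTorch sketch of the kind of training loop meant above. It runs on random stand-in tensors; the tiny network, the dataset wrapper, and the 8-class label convention (background + 5 lobes + 2 lesion types) are illustrative assumptions for this sketch, not the challenge specification, and the official Eisen-based setup will be documented separately:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, Dataset

    class ToyCTDataset(Dataset):
        """Hypothetical wrapper around the downloaded synthetic volumes."""
        def __init__(self, volumes, labels):
            self.volumes = volumes  # list of tensors shaped [1, D, H, W]
            self.labels = labels    # list of integer label maps shaped [D, H, W]

        def __len__(self):
            return len(self.volumes)

        def __getitem__(self, idx):
            return self.volumes[idx], self.labels[idx]

    # Random stand-in data so the sketch is runnable end to end.
    volumes = [torch.randn(1, 32, 64, 64) for _ in range(4)]
    labels = [torch.randint(0, 8, (32, 64, 64)) for _ in range(4)]

    # Stand-in for your real segmentation network (e.g. a 3D U-Net).
    model = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv3d(16, 8, kernel_size=1),  # 8 classes assumed: background + 5 lobes + 2 lesion types
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    loader = DataLoader(ToyCTDataset(volumes, labels), batch_size=2, shuffle=True)
    for epoch in range(3):
        for volume, label in loader:
            optimizer.zero_grad()
            loss = criterion(model(volume), label)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")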

Is there any specific choice of train/val/test splits for training the segmentation model? And let's say I have a frozen graph: how do I share it, via a GitHub project, Google Drive, or email?

For the micro-challenge, you do not need to submit a trained model; a Colab notebook with executable code is sufficient, which our team will review before greenlighting your team for the main challenge. For the main challenge, there are two essential and separate mechanisms:

  1. You can download the synthetic toy dataset and train your models locally at your lab (as many models and as often as you want, with train/val/test splits as you wish) to create an Eisen-compatible workflow that achieves the best results you can. You do NOT need to send us any models pre-trained on the synthetic data.
  2. Once you feel ready for the main challenge, you can “convert” your model and training setup by creating an Eisen job configuration JSON file (more instructions on how to create such a configuration file will follow; see the purely hypothetical sketch below). This step is necessary to make your model trainable on the non-disclosed data stored in our computational platform on AWS. Your model will be re-trained on AWS GPU instances (at no cost to you) and automatically validated on the validation set, and you will be able to monitor the training progress in real time through a TensorBoard provided by our challenge platform; multiple artifacts, including losses, metrics, and intermediate visual results (single slices through the volumes), will be available for monitoring. Your models will also be evaluated automatically on the test set. As such, you will not need to send us your model, because it is trained on and accessible through the AWS platform. You will also be able to download the trained model through the platform.
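
To make the idea of a job configuration file concrete, here is a purely illustrative sketch of producing one from Python. Every key and value below is a hypothetical placeholder; the real Eisen configuration schema will be specified in the upcoming official instructions:

    import json

    # All keys below are hypothetical placeholders, not the actual Eisen schema.
    job_config = {
        "model": "my_3d_unet",             # hypothetical identifier for your Eisen-compatible model
        "hyperparameters": {               # hypothetical hyperparameter section
            "learning_rate": 1e-4,
            "batch_size": 2,
            "epochs": 100,
        },
        "loss": "dice",                    # hypothetical loss selector
        "metrics": ["dice", "hausdorff"],  # hypothetical metric selectors
    }

    # Write the configuration to disk for submission through the platform.
    with open("job_config.json", "w") as f:
        json.dump(job_config, f, indent=2)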

The website says "This micro-challenge with the task of segmentation based on our synthetic toy dataset has to be submitted until 14th of August"

We apologize for this. The date of 14.08. was our initial aim, but we had to adjust our schedule. Further (hopefully minor) adjustments to the schedule may happen throughout the challenge. Organizing such a challenge is a daunting task, and we hope for your patience and understanding as we try to further optimize our efforts. We will do our best to keep the website and other communication channels up to date. For the latest information, make sure to closely follow our discussions here on Discord.

First question about segmentation: do we need to predict the full labels, i.e. lung lobes + GGO/consolidation, or just GGO/consolidation, or is it our choice?

For COVID-19, the extent of lesions (GGO/consolidation) is indeed very important. However, their distribution, i.e. their location, is also important; therefore, the segmentation of the lung lobes matters as well. The segmentation models need to accurately segment all five lung lobes and both lesion types. Further details on the evaluation metrics (Dice overlap, Hausdorff distance, etc.) will be explained in a challenge design document which we are going to release on the day of the kickoff event (17.08.2020). For local development (on synthetic data) we will soon detail how all metrics required for the challenge are computed; a reference sketch of the Dice overlap is shown below. An identical evaluation will be implemented inside the main challenge platform.
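
For reference, the standard Dice overlap for one label class is Dice = 2|A ∩ B| / (|A| + |B|), computed per class on the integer label maps. The sketch below uses this standard definition; the class indices and the 8-class convention are illustrative assumptions, and the official evaluation code will be released with the design document:

    import numpy as np

    def dice_score(prediction, reference, class_id):
        """Dice overlap between prediction and reference for a single class."""
        pred_mask = (prediction == class_id)
        ref_mask = (reference == class_id)
        intersection = np.logical_and(pred_mask, ref_mask).sum()
        denominator = pred_mask.sum() + ref_mask.sum()
        if denominator == 0:
            return 1.0  # both masks empty: conventionally a perfect score
        return 2.0 * intersection / denominator

    # Example on random stand-in label maps with 8 assumed classes:
    pred = np.random.randint(0, 8, size=(32, 64, 64))
    ref = np.random.randint(0, 8, size=(32, 64, 64))
    print([dice_score(pred, ref, c) for c in range(1, 8)])  # per-class Dice, background excluded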