  • How is the mAP calculated for leaderboard ranking?
    The evaluation follows the MS COCO metric, using "pycocotools" for assessment (see the reference sketch after this FAQ list). FLOPs and inference time are calculated from your submission using the internal formula. The final ranking will be determined after the competition management team verifies your submitted code. (Check the detailed formula here)
  • How will the rank be determined in case of equal score with others?
    Scores are evaluated up to the 5th decimal place. If the scores are still equal, the model that was submitted to the leaderboard first during the first evaluation period (Test_open) takes precedence.
  • My uploaded results are not showing on the leaderboard.
    The processing time for uploaded results may take up to 30 minutes. Delays can occur due to server status or evaluation errors. You can check the uploading status and processing information on the Upload Status page in the Leaderboard section. Please make sure that the result file was correctly formatted before uploading. If the issue persists, feel free to contact us through the inquiry feature with your upload details and the approximate submission date, and we’ll respond as quickly as possible.
  • What is evaluated during the second round of evaluation?
    In the second round, submissions will be evaluated on a private internal dataset to determine the final rankings. A reproducibility test will also be conducted based on the submitted code (as a Docker image) and the solution documents (PPT or DOC). If the results cannot be reproduced or the code fails to run, the team will be excluded from the prizes (checking the similarity of the reported performance metrics is part of this evaluation).
  • Where can I find detailed information about the second round submission?
    Only the top 10 teams will be notified with detailed instructions for the second-round evaluation. The instructions will be sent to the email address provided during leaderboard submission.
  • What are the criteria for measuring FLOPs and Inference Time?
    Since FLOPs depend on the input image size, the calculated FLOPs for each image must be recorded in the prediction file. Inference time must also be recorded per image in the prediction file and should include both prediction and post-processing time; post-processing includes late fusion methods such as Weighted Box Fusion. If an ensemble model is used, the time taken by all models to process a single image must be summed. The average FLOPs and inference time across all images will be used in the final score calculation (see the averaging sketch after this FAQ list).
  • Can I participate as a team?
    Yes, team participation is allowed. However, you can join only one team, and a team cannot exceed three members, including yourself.
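
  Reference sketch (unofficial): checking mAP locally with pycocotools. The snippet below is a minimal Python illustration of the MS COCO bbox evaluation mentioned above, not the official scoring pipeline; the file names "annotations.json" and "predictions.json" are placeholders, and the final leaderboard score additionally combines FLOPs and inference time through the organizers' formula.

      # Minimal sketch of COCO-style mAP evaluation with pycocotools.
      # File names are placeholders; the official formula that combines
      # mAP, FLOPs and inference time is defined by the organizers.
      from pycocotools.coco import COCO
      from pycocotools.cocoeval import COCOeval

      coco_gt = COCO("annotations.json")             # ground-truth annotations
      coco_dt = coco_gt.loadRes("predictions.json")  # detections in COCO result format

      coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
      coco_eval.evaluate()
      coco_eval.accumulate()
      coco_eval.summarize()                          # prints the AP/AR table

      map_50_95 = coco_eval.stats[0]                 # AP @ IoU=0.50:0.95
      print(f"mAP: {map_50_95:.5f}")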
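
  Averaging sketch (unofficial): how the per-image FLOPs and inference-time entries in a prediction file could be averaged for the score. The JSON layout and the field names "flops" and "inference_time" are assumptions for illustration only; follow the official prediction-file format.

      # Sketch under an assumed prediction-file format: one JSON record per
      # image with "flops" and "inference_time" fields. For an ensemble,
      # "inference_time" is assumed to already be the summed time of all
      # member models for that image (prediction + post-processing, e.g.
      # Weighted Box Fusion).
      import json

      with open("predictions.json") as f:   # placeholder file name
          records = json.load(f)

      avg_flops = sum(r["flops"] for r in records) / len(records)
      avg_time = sum(r["inference_time"] for r in records) / len(records)

      print(f"Average FLOPs: {avg_flops:,.0f}")
      print(f"Average inference time: {avg_time:.5f} s")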

