
IR Instance Segmentation AI Challenge 2025

Building on the remarkable success of the 2024 AI Challenge, the 2025 edition aims to advance the field by focusing on 2D instance segmentation of images captured by infrared sensors. Participating teams will be evaluated on both the accuracy of their segmentation models and their inference speed. The primary evaluation metrics are Mask AP, inference speed (including post-processing), and FLOPs (floating point operations), ensuring a thorough assessment of model performance in terms of both effectiveness and efficiency.

The competition process is structured as follows:

  1. First Evaluation: Teams will submit their predictions on a public evaluation dataset. The results will be displayed on a leaderboard, and the top 10 teams will advance to the final evaluation.

  2. Final Evaluation: Once the leaderboard is closed, the top 10 teams will be evaluated using a private dataset (hidden test dataset). Based on model performance and adherence to the competition rules, the 1st, 2nd, and 3rd place winners will be determined.

Winners' Prizes:

  • 1st Place Team: 1,000 USD

  • 2nd Place Team: 500 USD

  • 3rd Place Team: 250 USD

All winners will be invited to present their solutions at the Thermal Infrared in Robotics Workshop during the 2025 IEEE International Conference on Robotics & Automation (ICRA), scheduled for May 19-23 in Atlanta, USA.

View Dataset Page

View Result Page

2025 AI Challenge Schedule

25.03.31

  • Release of Dataset

25.04.07 ~ 25.04.25

  • Opening of Leaderboard

25.04.30

  • Deadline for Top 10 Teams' Code Submission

25.05.07

  • Announcement of Final Rankings

25.05.19

  • Award Ceremony and Top 3 Teams Presentation Session (ICRA 2025)

Challenge Rules in Detail

1. Training and Evaluation Regulations

  • Pretrained Weights: Pretrained weights trained on public datasets such as ImageNet, MSCOCO, etc., are allowed.

    • Only datasets without licensing issues may be used to create pretrained weights.

    • Use datasets with clearly defined licenses, such as the MIT License, BSD License, or a Creative Commons license. (Participants will be required to submit links to the license policies of the datasets used at a later stage.)

  • Fine-tuning: Only the training dataset provided in this competition may be used for training (the provided validation data may also be included).

    • Data processing is allowed (e.g., label correction, augmentation, etc.).

    • The test dataset must not be used under any circumstances.

2. Model

  • Use code composed of publicly available open-source projects that permit commercial use (e.g., MIT or BSD license).

  • The training and inference code for the model must not be under a license that requires mandatory source disclosure (e.g., GPL or AGPL).

3. Evaluation

  • Evaluation Metrics

  • Mask AP, mAP(@0.5:0.95): a commonly used evaluation metric in object detection and instance segmentation that calculates the mean Average Precision across multiple IoU thresholds ranging from 0.5 to 0.95 in steps of 0.05, computed here on predicted segmentation masks.

  • Inference Time: The processing time per image (in milliseconds), measured during inference performed with FP32 precision, including model inference time and post-processing time. When submitting prediction results, you must also submit the GPU index used and the processing time per image. Differences in processing time across GPUs will be adjusted based on a GPU-specific correction formula (check the GPU index and correction criteria).

    • For time measurement, use Python's time module and measure the duration with time.perf_counter().

    • An example of the time measurement standard is sketched below.

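As an illustration of the measurement standard above, here is a minimal sketch assuming a PyTorch model running on a CUDA GPU; build_model, load_test_images, and post_process are hypothetical placeholders for your own code and are not part of the official evaluation script.

import time
import torch

# Hypothetical placeholders: build_model, load_test_images, and post_process stand in
# for your own model construction, data loading, and post-processing code.
model = build_model().eval().cuda()      # FP32 inference (do not cast to half precision)
images = load_test_images()              # iterable of preprocessed input tensors

per_image_ms = []
with torch.no_grad():
    for img in images:
        img = img.cuda(non_blocking=True)
        torch.cuda.synchronize()         # finish pending GPU work before starting the clock
        start = time.perf_counter()

        outputs = model(img)             # model inference
        results = post_process(outputs)  # post-processing counts toward the measured time

        torch.cuda.synchronize()         # wait for GPU kernels to finish before stopping the clock
        per_image_ms.append((time.perf_counter() - start) * 1000.0)

print(f"Average processing time per image: {sum(per_image_ms) / len(per_image_ms):.2f} ms")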
  • FLOPs: FLOPs (floating point operations, measured in GigaFLOPs) is a metric used to estimate the total number of arithmetic operations an AI model performs during inference or training, reflecting its computational complexity (a counting sketch is provided after this list).

  • The leaderboard is based on the "Test_open" dataset, which is publicly available. After the leaderboard is closed, the final rankings will be determined using the "Test_private" dataset, which is not publicly available.

  • For more details on result submission, please refer to the submission page (now closed).
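The rules above do not prescribe a specific FLOPs-counting tool. As one possible approach (an assumption, not the official counting method), the GFLOPs of a PyTorch model can be estimated with fvcore; build_model and the input resolution below are hypothetical placeholders.

import torch
from fvcore.nn import FlopCountAnalysis

# Hypothetical model and input shape; replace with your own network and the
# resolution of the challenge's IR images.
model = build_model().eval()
dummy_input = torch.randn(1, 1, 512, 640)    # batch of one single-channel IR image

flops = FlopCountAnalysis(model, dummy_input)
print(f"Estimated complexity: {flops.total() / 1e9:.2f} GFLOPs")

Note that fvcore counts a fused multiply-add as a single operation, so its figure may differ from other counters; confirm the counting convention with the organizers before reporting.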

4. Participation Guidelines

  • Both individuals and teams composed of members aged 14 or older are eligible to participate. Each team may have up to 3 members.

  • The top 10 teams must submit a Docker image file that is properly configured for training and evaluation, along with a project description document (in PPT or DOC format).

Hanwha Systems AI Challenge 2025
Partners

CMU
QuantumRed

Copyright © 2025 Hanwha Systems Co., Ltd. All rights reserved.
