Participants must devise methods capable of compressing input prompts while preserving or even improving the quality of model-generated outputs.
Specifically, the challenge involves reducing the size of provided prompts while ensuring accurate generation of high-quality responses.
Submission
Please refer to our GitHub for a guide to the submission process.
Evaluation
The target metrics:
- Compression Ratio: the length of the compressed prompt divided by the length of the original prompt, measured in characters.
- Score: the proportion of correct answers produced when the model is run on your compressed prompts.
- Overall Score: Compression Ratio × Score.
You should submit the compressed versions of the given prompts. The model (which is not disclosed to participants during the competition) will be run on the submitted data, and its answers will be evaluated against the ground truths.
If the overall length of the submission (in characters) is not less than the original length, the answers will not be evaluated and the Score will default to 0.0.
The team achieving the highest Overall Score wins.
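The scoring rules above can be sketched in a few lines of Python. This is a minimal illustration, not the organizers' evaluation code: the function name, the exact-match comparison of answers, and the aggregation over whole submissions are assumptions for the sake of the example.

```python
def overall_score(original_prompts, compressed_prompts, answers, ground_truths):
    """Illustrative sketch of the competition metric (not the official scorer)."""
    orig_len = sum(len(p) for p in original_prompts)
    comp_len = sum(len(p) for p in compressed_prompts)

    # If the submission is not strictly shorter than the original prompts,
    # the answers are not evaluated and the Score defaults to 0.0.
    if comp_len >= orig_len:
        return 0.0

    # Compression Ratio: compressed length / original length (in characters).
    compression_ratio = comp_len / orig_len

    # Score: proportion of model answers matching the ground truths
    # (exact match is assumed here purely for illustration).
    score = sum(a == g for a, g in zip(answers, ground_truths)) / len(ground_truths)

    # Overall Score: Compression Ratio × Score.
    return compression_ratio * score
```

For instance, compressing an 11-character prompt to 2 characters while keeping the answer correct would yield an Overall Score of 2/11 × 1.0 ≈ 0.18.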
Participation Rules
Final results will be computed on the private test set and announced at the AINL 2026 conference.
Each team will have 3 attempts to submit final results.
After the competition ends, we will publish the ground truths for the test set so that participants can perform ablation studies.