## Context
Selected from 7,000+ global applicants for the Mistral AI Worldwide Hackathon in London (Feb 28 – Mar 1, 2026). Organized by Mistral AI and Iterate, and sponsored by Weights & Biases, NVIDIA, Amazon Web Services (AWS), ElevenLabs, and Hugging Face.
Fine-Tuning Track (sponsored by W&B) — 48 hours to fine-tune open-source Mistral models and build a working application.
## Project
Ecotopia is an interactive political simulation where the player is mayor of a city facing ecological collapse. Free-text speeches are analyzed by specialized fine-tuned models:
- Structured information extraction — named-entity recognition (NER) of political promises, type categorization, and contradiction detection
- Conditional text generation — contextualized citizen reactions conditioned on game state, citizen profiles, and trust history
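To make the extraction task concrete, here is a sketch of the kind of structured output such a model is trained to emit. The field names and categories are illustrative, not the project's actual schema:

```python
import json

# Hypothetical speech and the JSON the promise-extraction model might
# return for it (schema is illustrative, not the project's real one).
speech = "I will plant 10,000 trees by 2027 and cut bus fares in half."

model_output = """
{
  "promises": [
    {"text": "plant 10,000 trees by 2027", "type": "environment", "contradicts": null},
    {"text": "cut bus fares in half", "type": "transport", "contradicts": null}
  ]
}
"""

extracted = json.loads(model_output)
for promise in extracted["promises"]:
    print(promise["type"], "->", promise["text"])
```

A fixed schema like this is what makes downstream game logic (contradiction tracking, trust updates) reliable, provided the model's output actually parses.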
## Fine-Tuning
Four Mistral models fine-tuned via QLoRA (NF4 4-bit quantization, LoRA rank r=16, alpha=32) on 690 synthetic examples generated via Amazon Bedrock, each in under 10 minutes:
| Task | Models | Training Examples |
|---|---|---|
| Promise Extraction | Ministral 8B, Nemo 12B | 300 (3 difficulty tiers) |
| Citizen Reactions | Ministral 8B, Small 24B | 390 |
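The QLoRA setup above can be sketched with the Hugging Face `transformers` + `peft` + `bitsandbytes` stack. This is a minimal configuration sketch, not the project's training script: the model ID and target modules are illustrative, and actually running it requires a GPU.

```python
# Sketch of the QLoRA configuration described above (NF4 4-bit, r=16, alpha=32).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# NF4 4-bit quantization of the base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-8B-Instruct-2410",  # illustrative model ID
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters trained on top of the frozen 4-bit base
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With only the adapter weights trainable, a run over a few hundred examples finishing in minutes on a single GPU is plausible, consistent with the sub-10-minute figure above.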
## Results
Our fine-tuned 8B SLMs outperform the base Mistral Large across the entire structured-output pipeline at 10x lower latency; without fine-tuning, Mistral Large produces 0% valid JSON on the citizen-reaction task.
Specializing small models for narrow, well-defined tasks enables real-time applications where latency and output-format reliability are hard constraints.
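The format-reliability metric behind the "0% valid JSON" figure can be sketched with the standard library alone. This is an illustrative metric, with hypothetical sample outputs, not the project's actual evaluation harness:

```python
import json

def valid_json_rate(outputs: list[str]) -> float:
    """Fraction of raw model outputs that parse as JSON."""
    ok = 0
    for text in outputs:
        try:
            json.loads(text)
            ok += 1
        except json.JSONDecodeError:
            pass
    return ok / len(outputs) if outputs else 0.0

# Illustrative outputs: two well-formed, one wrapped in chatty prose,
# which is the typical failure mode of a non-fine-tuned model.
samples = [
    '{"reaction": "angry", "trust_delta": -2}',
    '{"reaction": "hopeful", "trust_delta": 1}',
    'Sure! Here is the JSON: {"reaction": "angry"}',
]
print(valid_json_rate(samples))  # 2 of 3 parse cleanly -> ~0.667
```

Strict parsing (no prefix stripping, no retries) is the harshest but most honest version of the metric, since a game loop cannot afford repair passes at inference time.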
## Architecture
- Inference: Hugging Face Inference Endpoints with a custom handler (4-bit BitsAndBytes quantization)
- Backend: Spring Boot 3.5 + Spring AI
- Frontend: Phaser 3 (TypeScript, pixel art)
- Tracking: Weights & Biases (experiment tracking, evaluation, automated report)
- Data: PostgreSQL + synthetic training data via Amazon Bedrock
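The inference piece above can be sketched as a Hugging Face Inference Endpoints custom handler (a `handler.py` exposing an `EndpointHandler` class) that loads the fine-tuned model in 4-bit. Model path and generation parameters are illustrative, and the heavy imports are deferred to `__init__` so the module stays importable without a GPU environment:

```python
# Minimal sketch of an Inference Endpoints custom handler loading the
# model in 4-bit via BitsAndBytes (parameters are illustrative).
class EndpointHandler:
    def __init__(self, path: str = ""):
        # Deferred imports: only needed once the endpoint actually starts.
        import torch
        from transformers import (
            AutoModelForCausalLM,
            AutoTokenizer,
            BitsAndBytesConfig,
        )

        bnb = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(
            path, quantization_config=bnb, device_map="auto"
        )

    def __call__(self, data: dict) -> dict:
        # Endpoints pass the request payload as a dict with an "inputs" key.
        prompt = data["inputs"]
        tokens = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        out = self.model.generate(**tokens, max_new_tokens=256)
        text = self.tokenizer.decode(out[0], skip_special_tokens=True)
        return {"generated_text": text}
```

The Spring Boot backend would then call this endpoint over HTTP via Spring AI, keeping the game server itself free of GPU dependencies.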
