Challenging the Abilities of Large Language Models in Italian: a Community Initiative

Book chapter
Publication date:
2025
Abstract:
The rapid progress of Large Language Models (LLMs) has transformed natural language processing and broadened its impact across research and society. Yet, systematic evaluation of these models, especially for languages beyond English, remains limited. "Challenging the Abilities of LAnguage Models in ITAlian" (CALAMITA) is a large-scale collaborative benchmarking initiative for Italian, coordinated under the Italian Association for Computational Linguistics. Unlike existing efforts that focus on leaderboards, CALAMITA foregrounds methodology: it federates more than 80 contributors from academia, industry, and the public sector to design, document, and evaluate a diverse collection of tasks, covering linguistic competence, commonsense reasoning, factual consistency, fairness, summarization, translation, and code generation. Through this process, we not only assembled a benchmark of over 20 tasks and almost 100 subtasks, but also established a centralized evaluation pipeline that supports heterogeneous datasets and metrics. We report results for four open-weight LLMs, highlighting systematic strengths and weaknesses across abilities, as well as challenges in task-specific evaluation. Beyond quantitative results, CALAMITA exposes methodological lessons: the necessity of fine-grained, task-representative metrics, the importance of harmonized pipelines, and the benefits and limitations of broad community engagement. CALAMITA is conceived as a rolling benchmark, enabling continuous integration of new tasks and models. This makes it both a resource -- the most comprehensive and diverse benchmark for Italian to date -- and a framework for sustainable, community-driven evaluation. We argue that this combination offers a blueprint for other languages and communities seeking inclusive and rigorous LLM evaluation practices.
CRIS type:
2.1 Contribution in a volume (Chapter or Essay)
List of authors:
Nissim, Malvina; Croce, Danilo; Patti, Viviana; Basile, Pierpaolo; Attanasio, Giuseppe; Musacchio, Elio; Rinaldi, Matteo; Borazio, Federico; Francis, Maria; Gili, Jacopo; Scalena, Daniel; Altuna, Begoña; Azurmendi, Ekhi; Basile, Valerio; Bentivogli, Luisa; Bisazza, Arianna; Bolognesi, Marianna; Brunato, Dominique; Caselli, Tommaso; Casola, Silvia; Cassese, Maria; Cettolo, Mauro; Collacciani, Claudia; De Cosmo, Leonardo; Di Buono, Maria Pia; Esuli, Andrea; Etxaniz, Julen; Ferrando, Chiara; Fidelangeli, Alessia; Frenda, Simona; Fusco, Achille; Gaido, Marco; Galassi, Andrea; Galli, Federico; Giordano, Luca; Goffetti, Mattia; Gonzalez-Dios, Itziar; Gregori, Lorenzo; Grundler, Giulia; Iannaccone, Sandro; Jiang, Chunyang; La Quatra, Moreno; Lagioia, Francesca; Marem Lo, Soda; Madeddu, Marco; Magnini, Bernardo; Manna, Raffaele; Mercorio, Fabio; Merlo, Paola; Muti, Arianna; Nastase, Vivi; Negri, Matteo; Onorati, Dario; Palmieri, Elena; Papi, Sara; Passaro, Lucia; Pensa, Giulia; Piergentili, Andrea; Potertì, Daniele; Puccetti, Giovanni; Ranaldi, Federico; Ranaldi, Leonardo; Amelio Ravelli, Andrea; Rosola, Martina; Sofia Ruzzetti, Elena; Samo, Giuseppe; Santilli, Andrea; Santin, Piera; Sarti, Gabriele; Sartor, Giovanni; Savoldi, Beatrice; Serino, Antonio; Seveso, Andrea; Siciliani, Lucia; Torroni, Paolo; Varvara, Rossella; Zaninello, Andrea; Zanollo, Asya; Massimo Zanzotto, Fabio; Zeinalipour, Kamyar; Zugarini, Andrea
University authors:
Manna, Raffaele; Di Buono, Maria Pia
Link to the full record:
https://unora.unior.it/handle/11574/251380
Link to the full text:
https://unora.unior.it//retrieve/handle/11574/251380/254352/2512.04759v1.pdf
Book title:
arXiv preprint arXiv:2512.04759
General information

URL

https://arxiv.org/abs/2512.04759