Roberto Amadini, Maurizio Gabbrielli, Tong Liu, Jacopo Mauro (2023). On the Evaluation of (Meta-)solver Approaches. Journal of Artificial Intelligence Research, 76, 705-719. DOI: 10.1613/jair.1.14102.
On the Evaluation of (Meta-)solver Approaches
Roberto Amadini; Maurizio Gabbrielli; Tong Liu; Jacopo Mauro
2023
Abstract
Meta-solver approaches exploit many individual solvers to potentially build a better solver. To assess the performance of meta-solvers, one can adopt the metrics typically used for individual solvers (e.g., runtime or solution quality) or employ more specific evaluation metrics (e.g., by measuring how close the meta-solver gets to its virtual best performance). In this paper, based on some recently published works, we provide an overview of different performance metrics for evaluating (meta-)solvers by exposing their strengths and weaknesses.
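The abstract mentions measuring how close a meta-solver gets to its virtual best performance. The snippet below is a minimal sketch of one such metric, the fraction of the gap between the single best solver (SBS) and the virtual best solver (VBS) that a meta-solver closes; the solver names, runtimes, and the particular normalization are illustrative assumptions, not taken from the paper.

```python
from typing import Dict

# Hypothetical per-instance runtimes (seconds) for two individual solvers
# and a meta-solver; names and numbers are illustrative only.
runtimes: Dict[str, Dict[str, float]] = {
    "solver_a": {"inst1": 10.0, "inst2": 120.0, "inst3": 5.0},
    "solver_b": {"inst1": 80.0, "inst2": 15.0, "inst3": 40.0},
}
meta = {"inst1": 12.0, "inst2": 20.0, "inst3": 6.0}

instances = list(meta)

# Virtual Best Solver (VBS): per instance, the best individual runtime.
vbs = sum(min(rt[i] for rt in runtimes.values()) for i in instances)

# Single Best Solver (SBS): the individual solver with the best total runtime.
sbs = min(sum(rt[i] for i in instances) for rt in runtimes.values())

meta_total = sum(meta[i] for i in instances)

# Fraction of the SBS-VBS gap closed by the meta-solver (1.0 = matches the
# VBS, 0.0 = no better than the SBS); one of several possible normalizations.
closed_gap = (sbs - meta_total) / (sbs - vbs) if sbs > vbs else float("nan")
print(f"VBS={vbs:.1f}s  SBS={sbs:.1f}s  meta={meta_total:.1f}s  "
      f"closed gap={closed_gap:.2f}")
```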
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 14102wPg#s.pdf | Open access | Publisher's version (PDF) | Free open-access license | 859.53 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.