Silvia Mirri, Paola Salomoni, Ludovico Antonio Muratori, Matteo Battistelli (2012). Getting one voice: tuning up experts' assessment in measuring accessibility. New York: ACM. doi:10.1145/2207016.2207023
Getting one voice: tuning up experts' assessment in measuring accessibility
Mirri, Silvia; Salomoni, Paola; Muratori, Ludovico Antonio; Battistelli, Matteo
2012
Abstract
Web accessibility evaluations are typically carried out by means of automatic tools and by human assessment. Accessibility metrics aim to quantify the accessibility level or the accessibility barriers of a page, providing a numerical synthesis of such evaluations. It is worth noting that, while automatic tools usually return binary values (meaning the presence or absence of an error), human assessments in manual evaluations are subjective and can take values from a continuous range. In this paper we present a model which takes multiple manual evaluations into account and produces final single values. In particular, an extension of our previous metric BIF, called cBIF, has been designed and implemented to evaluate the consistency and effectiveness of such a model. Suitable tools and the collaboration of a group of evaluators are helping us provide first results on our metric and are suggesting interesting directions for future research.
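The core idea of the abstract — reconciling binary results from automatic tools with continuous, subjective scores from multiple human evaluators into one final value — can be sketched in a few lines. This is only a hypothetical illustration: the function name `combine_scores` and the simple averaging-with-veto aggregation are assumptions for illustration, not the actual cBIF model defined in the paper.

```python
from statistics import mean

def combine_scores(automatic_pass: bool, manual_scores: list[float]) -> float:
    """Hypothetical aggregation of accessibility evaluations.

    - automatic_pass: binary outcome of an automatic checker
      (True = no error detected).
    - manual_scores: subjective expert scores, each in [0.0, 1.0].

    If the automatic tool detects an error, the checkpoint fails
    outright (0.0); otherwise the expert scores are averaged into
    a single continuous value. The paper's cBIF metric is more
    elaborate; this merely illustrates the binary-vs-continuous gap.
    """
    if not automatic_pass:
        return 0.0
    return mean(manual_scores)

# Three experts rate the same checkpoint on a continuous scale.
final_value = combine_scores(True, [0.8, 0.6, 0.7])
```

Averaging is just one of many possible aggregation strategies; the paper's contribution lies precisely in how cBIF tunes disparate expert assessments into "one voice".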