Measures

In 2009, all computations were done with SOIRE, an IR evaluation software built on a service-oriented architecture, and double-checked against trec_eval, the standard evaluation program used in the TREC evaluation campaigns.

In 2010, we used trec_eval version 9.0 for both the Prior Art and Classification tasks. To compute the PRES score, we used the script provided by the authors of this measure.

For each experiment submitted to the Prior Art tasks we computed the following standard IR measures (a sketch of how they are computed follows the list):

  • Precision, Precision@5, Precision@10, Precision@100
  • Recall, Recall@5, Recall@10, Recall@100
  • MAP
  • nDCG (with a reduction factor given by a base-10 logarithm)
  • PRES (from 2010 onward)
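
The official scores were produced by trec_eval and by the PRES script mentioned above; the Python sketch below is only meant to illustrate how the measures are defined. It assumes binary relevance judgments; the function names are hypothetical, and the base-10 nDCG discount follows the Järvelin–Kekäläinen formulation (ranks below the log base are not discounted), which may differ in detail from the trec_eval implementation.

    from math import log10

    def precision_at(ranked, relevant, k):
        """Fraction of the top-k retrieved documents that are relevant."""
        return sum(1 for d in ranked[:k] if d in relevant) / k

    def recall_at(ranked, relevant, k):
        """Fraction of all relevant documents retrieved within the top k."""
        return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

    def average_precision(ranked, relevant):
        """Mean of the precision values at the ranks of the relevant
        documents; averaged over all topics, this yields MAP."""
        hits, total = 0, 0.0
        for rank, d in enumerate(ranked, start=1):
            if d in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant)

    def ndcg(ranked, relevant, k):
        """nDCG with binary gains and a base-10 logarithmic discount."""
        def discount(rank):
            # ranks 1..9 are not discounted; log10(10) = 1, so the
            # discount is continuous at the base
            return 1.0 if rank < 10 else log10(rank)
        dcg = sum(1.0 / discount(r)
                  for r, d in enumerate(ranked[:k], start=1)
                  if d in relevant)
        ideal = sum(1.0 / discount(r)
                    for r in range(1, min(len(relevant), k) + 1))
        return dcg / ideal

    def pres(ranked, relevant, n_max):
        """PRES (Magdy & Jones, 2010): relevant documents missing from
        the top n_max results are assumed to be retrieved right after
        the cutoff, at ranks n_max+1, n_max+2, ..."""
        n = len(relevant)
        ranks = [r for r, d in enumerate(ranked[:n_max], start=1)
                 if d in relevant]
        ranks += [n_max + j for j in range(1, n - len(ranks) + 1)]
        return 1.0 - (sum(ranks) / n - (n + 1) / 2.0) / n_max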

For the Classification task we computed the following measures (the F1 computation is sketched after the list):

  • Precision@1, Precision@5, Precision@10, Precision@25, Precision@50
  • Recall@5, Recall@25, Recall@50
  • MAP
  • F1@5, F1@25, F1@50
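
F1 at a cutoff k is the harmonic mean of Precision@k and Recall@k. Below is a minimal, self-contained sketch under the same binary-relevance assumption as above, again with hypothetical names:

    def f1_at(ranked, relevant, k):
        """F1@k: harmonic mean of Precision@k and Recall@k (0 if no hits)."""
        hits = sum(1 for d in ranked[:k] if d in relevant)
        if hits == 0:
            return 0.0
        p, r = hits / k, hits / len(relevant)
        return 2 * p * r / (p + r)

    # Toy example: 2 of the top-5 predicted classes are correct,
    # out of 3 true classes: P@5 = 0.4, R@5 = 2/3, so F1@5 = 0.5.
    print(f1_at(["A61K", "G06F", "H04L", "C07D", "B60R"],
                {"A61K", "C07D", "E04B"}, 5))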
     

Evaluation Results

All evaluation results are published as IRF technical reports.

2009

  • IRF-TR-2009-00001: Florina Piroi, Giovanna Roda, Veronika Zenz, CLEF-IP 2009 Evaluation Summary, July 2009
  • IRF-TR-2010-00002: Florina Piroi, Giovanna Roda, Veronika Zenz, CLEF-IP 2009: Evaluation Summary - Part Two, July 2010

2010

  • IRF-TR-2010-00003: Florina Piroi, CLEF-IP 2010: Prior Art Candidates Search Evaluation Summary, July 2010
  • IRF-TR-2010-00004: Florina Piroi, CLEF-IP 2010: Classification Task Evaluation Summary, August 2010

In Preparation

These pages are currently being updated. Please check back later for more information, or register for our CLEF-IP Newsletter: clef-ip-news@ir-facility.org
