A central and difficult problem in information retrieval (IR) is obtaining test collections. Existing data sets, such as those provided by TREC, have played a key role in promoting progress in the IR domain. However, given the significant growth and diversity of online content over the past few years, and the increasing rate of search queries, current testbeds are either too small or not representative of the real applications of IR systems. A testbed for evaluating information retrieval systems requires three parts: a document collection, a list of query topics, and a set of relevance judgements. Evaluating the performance of information retrieval algorithms in different contexts (distributed IR, P2P networks, language-oriented IR, IR for specific domains) is already a challenging task because of the lack of realistic testbeds. In this sub-topic, we focus our research on testbeds for distributed systems (including P2P networks) and the Arabic context.
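To make the three testbed components concrete, the following is a minimal sketch in Python: a toy document collection, one query topic, and TREC-style relevance judgements (qrels), together with a simple precision-at-k computation over a system's ranking. All identifiers and data here are hypothetical, for illustration only; real collections and qrels are orders of magnitude larger.

```python
# Minimal sketch of the three components of an IR testbed.
# All data below is hypothetical and purely illustrative.

# 1. Document collection: doc id -> text
docs = {
    "d1": "peer to peer network search",
    "d2": "arabic information retrieval",
    "d3": "distributed query routing",
}

# 2. Query topics: topic id -> query text
topics = {"t1": "peer to peer search"}

# 3. Relevance judgements (qrels): (topic id, doc id) -> relevant (1) or not (0)
qrels = {("t1", "d1"): 1, ("t1", "d2"): 0, ("t1", "d3"): 1}

def precision_at_k(ranking, topic_id, k):
    """Fraction of the top-k retrieved documents judged relevant."""
    top = ranking[:k]
    relevant = sum(qrels.get((topic_id, doc), 0) for doc in top)
    return relevant / k

# A (hypothetical) system's ranked result list for topic t1:
ranking = ["d1", "d2", "d3"]
print(precision_at_k(ranking, "t1", 2))  # top-2 contains one relevant doc -> 0.5
```

The point of the sketch is that all three parts are needed before any evaluation can run: without qrels, `precision_at_k` has nothing to score against, which is exactly the resource gap described above.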