HOBBIT organizes, joins, and promotes open challenges that measure the performance of technologies for the different steps of the Big Linked Data (BLD) lifecycle. In contrast to existing benchmarks, we will provide modular, easily extensible benchmarks for all industry-relevant BLD processing steps, allowing users to assess entire suites of software that cover more than one step. The infrastructure needed to run the evaluation campaigns will be made available. Our architecture will rely on web interfaces and cloud infrastructures to ensure scalability.
The open challenges are:
HOBBIT invites you (via an open call) to add your benchmarks to the HOBBIT platform, in order to drive the Semantic Web and Big Linked Data community towards transparent performance evaluation on standardized hardware: