Solutions Providers

Product Placement

HOBBIT will generate different benchmarks to evaluate software from the different stages of the Linked Data lifecycle: generation and acquisition, analysis and processing, storage and curation, and visualisation and services. These benchmarks will scale to real industrial usage with respect to volume, velocity and variability.

Get Feedback

The HOBBIT platform will publish regular reports containing the results of different systems on our benchmarks, together with an analysis of these results. Software vendors will receive valuable feedback on the performance of their solutions based on real data, helping them to improve their products.

Accessible Benchmarks

Evaluating Big Linked Data requires a significant amount of hardware. In addition to providing a benchmarking framework that will emulate the generation of real data streams gathered from the industrial partners, HOBBIT will build a local cluster for testing implementations, and will provide access to large-scale evaluations on cloud services (e.g., Amazon, Sysfera, IBS).

Comparable Results

HOBBIT benchmarks are created with input from industry, not from academics or vendors. This means that the benchmarks are designed according to industrial needs and use real data. The data generation benchmarks will compile the scores of the benchmarked tools according to a list of pre-defined key performance indicators and return these as the result of the evaluation. By these means, HOBBIT ensures that the evaluation results are fair and comparable.

How to participate

HOBBIT will invite vendors to participate in the benchmarking challenges of its yearly evaluation campaigns.

Get more involved 

HOBBIT leads a subgroup on Big Data Benchmarking under BDVA Task Force 6 (TF6-SG7). Currently, TF6 has 176 registered members. You can join the group by becoming a BDVA member.