A platform for benchmarking Big Linked Data solutions

One of the major goals of the HOBBIT project is to provide a benchmarking platform supporting the benchmarking of Big Linked Data solutions. This platform needs to a) enable the benchmarking of systems, b) scale from single machines to distributed deployments, c) run on different platforms, and d) support the execution of benchmarking challenges. Additionally, the platform should be made available as an open-source project as well as an online platform.

In February, we released the first version of the HOBBIT benchmarking platform, which fulfills all these requirements. Firstly, the platform enables the benchmarking of systems. The benchmark and system components have to be provided as Docker images, which can be uploaded to the Docker repository integrated into the platform. A user management component ensures that uploaded systems are not made publicly available but can still be benchmarked by the uploading user.

Secondly, the platform is scalable. It can be executed on a single machine with limited resources, but it can also be deployed in a distributed way. It relies on RabbitMQ, a messaging system known to scale well. Deployed together with Docker Swarm, the platform can create multiple instances of benchmark or system components on different machines and thus scale both the benchmark and the system horizontally by deploying more instances.
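To give a feeling for this messaging backbone, the sketch below shows how a component could push a chunk of generated data onto a RabbitMQ queue using the standard Java client. The host and queue names are only placeholders for illustration and do not reflect the queue layout the platform uses internally.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

// Minimal sketch: a component publishing data to a RabbitMQ queue.
// Host and queue names are illustrative placeholders only.
public class DataSenderExample {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Inside the platform the broker host would be configured centrally;
        // "rabbit" is just a placeholder here.
        factory.setHost("rabbit");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            String queueName = "example.data.queue"; // illustrative name
            channel.queueDeclare(queueName, false, false, false, null);

            byte[] payload = "generated benchmark data".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("", queueName, null, payload);
        }
    }
}
```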

Thirdly, the platform itself is developed as several components that are executed as Docker containers. Thus, it can be run on any system that can host Docker containers.

Fourthly, challenges can be created, each comprising several challenge tasks. Users whose systems are compatible with at least one task can register them for the challenge. Since a challenge can be bound to a tight schedule, benchmarking a system for a challenge has a higher priority than other benchmarking experiments. During the last months, four challenges have been carried out using the HOBBIT platform.

To support the development of benchmarks, we provide not only the benchmarking platform but also a general structure for benchmarks and a Java library that offers several useful methods for implementing a benchmark or an adapter for a system that should be benchmarked. The general structure of a benchmark is described in detail in the project deliverable D2.1, and the implementation of a benchmark component or a system adapter using our library is described in our wiki.
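As a rough illustration of what the wiki describes, a system adapter built with our Java library might look like the sketch below. The base class and method names follow the hobbit-core library, but please consult the wiki for the exact signatures of the release you are using; the task handling shown here is purely illustrative.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.hobbit.core.components.AbstractSystemAdapter;

// Sketch of a system adapter. Base class and method names follow the
// hobbit-core library; check the wiki for the exact signatures of the
// current release. The task handling below is a placeholder.
public class ExampleSystemAdapter extends AbstractSystemAdapter {

    @Override
    public void init() throws Exception {
        super.init();
        // Start or connect to the system that should be benchmarked.
    }

    @Override
    public void receiveGeneratedData(byte[] data) {
        // Data pushed by the benchmark's data generators, e.g. a dataset
        // that has to be loaded into the system before the tasks arrive.
    }

    @Override
    public void receiveGeneratedTask(String taskId, byte[] data) {
        // A single task, e.g. a query, that the system has to answer.
        byte[] result = ("answer for " + taskId).getBytes(StandardCharsets.UTF_8);
        try {
            // Forward the system's answer to the evaluation storage.
            sendResultToEvalStorage(taskId, result);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void close() throws IOException {
        // Release the resources of the benchmarked system.
        super.close();
    }
}
```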

After the first release of the platform at the beginning of the year, we put a lot of effort into enhancing the platform and developing the first benchmarks. Now, our benchmarking platform offers benchmarks for several areas, ranging from triple stores over instance matching, question answering and knowledge extraction to stream analysis. Thanks to all these efforts, our platform has already supported four challenges and carried out more than 3000 experiments.
