The HOBBIT project has successfully organized two series of challenges to measure the performance of implemented systems in processing Big Linked Data. In total, we organized ten (10) challenges, five (5) in each period. In particular, during the first period, HOBBIT organized:
- the MOCHA challenge at ESWC 2017 (https://project-hobbit.eu/challenges/mighty-storage-challenge/)
- the OKE challenge at ESWC 2017 (https://project-hobbit.eu/challenges/oke2017-challenge-eswc-2017/)
- the QALD challenge at ESWC 2017 (https://project-hobbit.eu/challenges/qald2017/)
- the DEBS Grand Challenge at DEBS 2017 (https://project-hobbit.eu/challenges/debs-grand-challenge/)
- the HOBBIT Link Discovery Task at OAEI OM 2017 Workshop at ISWC 2017 (https://project-hobbit.eu/challenges/om2017/)
For the second period, we were pleased to re-organize the challenges that we had run at ESWC 2017, ISWC 2017 and DEBS 2017, with one exception: instead of the QALD challenge, we organized the first edition of the SQA challenge as an “offspring” of QALD. Specifically, during the second period, HOBBIT organized:
- the MOCHA challenge at ESWC 2018 (https://project-hobbit.eu/challenges/mighty-storage-challenge2018/)
- the OKE challenge at ESWC 2018 (https://project-hobbit.eu/challenges/oke2018-challenge-eswc-2018/)
- the SQA challenge at ESWC 2018 (https://project-hobbit.eu/challenges/sqa-challenge-eswc-2018/)
- the DEBS Grand Challenge at DEBS 2018 (https://project-hobbit.eu/challenges/debs2018-grand-challenge/)
- the HOBBIT Link Discovery Task at OAEI OM 2018 Workshop at ISWC 2018 (https://project-hobbit.eu/challenges/om2018/)
Each challenge consisted of multiple benchmarking tasks prepared by the HOBBIT partners; participants were invited to submit systems tackling one or more of a challenge’s tasks. Participation in the first-period challenges was satisfactory, with a total of 22 systems. The successful organization of the first-period challenges rewarded the HOBBIT project with increased participation in the second period: 31 systems in total took part in the second round. Overall, the second series of challenges attracted 59 systems (i.e., 30 participating systems and 29 benchmarks tested internally by the challenges’ teams).
The HOBBIT platform successfully supported all challenges during these two periods. Participating systems were tested using benchmarks developed by HOBBIT, and their evaluation was conducted on top of the HOBBIT platform. The challenges’ results were presented to the public in dedicated workshop sessions that attracted several attendees. Importantly, the qualitative evaluation of the challenges via questionnaires, conducted separately in each period, provided the HOBBIT consortium with valuable feedback for improving its documentation and enhancing the platform. In this way, not only did we create a loyal and growing audience for all HOBBIT challenges, but the HOBBIT platform also became established as a reliable benchmarking solution for all steps of the Linked Data lifecycle.