Upcoming Evaluation Campaigns powered by HOBBIT technologies, almost.

In a previous blog post, we presented the predecessor of the HOBBIT platform: the General Entity Annotator Benchmarking Framework (GERBIL) [1]. This framework has been adopted by two evaluation campaigns, described in this post, that focus on tasks such as entity recognition, entity typing, and question answering.

Open Knowledge Extraction Challenge

The OKE Challenge 2015 was part of the Extended Semantic Web Conference 2015 and had three different tasks. Two of them have been implemented and evaluated using GERBIL.

The first task comprised (1) the identification of entities in a sentence (Entity Recognition), (2) the linking of these entities to a reference knowledge base (Entity Linking), and (3) the assignment of a type to each entity (Entity Typing). The task focussed on mapping entities to classes such as Person, Place, Organization and Role according to the semantics of the DOLCE Ultra Lite ontology. However, GERBIL, like HOBBIT, aims to be knowledge-base-agnostic in order to cover a wide range of Linked Data.
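To make the three subtasks concrete, here is a minimal sketch of what a system's output for this task might look like. The data structure, the toy sentence, and the DBpedia links are our own illustration, not part of the challenge specification; the DOLCE Ultra Lite class URIs use the public DUL.owl namespace.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    # Entity Recognition: the surface form and its character span
    surface_form: str
    start: int
    end: int
    # Entity Linking: URI of the entity in the reference knowledge base
    kb_uri: str
    # Entity Typing: class from the DOLCE Ultra Lite ontology
    dolce_type: str

sentence = "Ada Lovelace worked with Charles Babbage in London."

annotations = [
    Annotation("Ada Lovelace", 0, 12,
               "http://dbpedia.org/resource/Ada_Lovelace",
               "http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#Person"),
    Annotation("London", 44, 50,
               "http://dbpedia.org/resource/London",
               "http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#Place"),
]

# Sanity check: every annotated span must match its surface form
for a in annotations:
    assert sentence[a.start:a.end] == a.surface_form
```

In practice, GERBIL exchanges such annotations with the benchmarked systems in an RDF-based format rather than as plain objects, but the three layers of information are the same.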

The second task aimed at identifying the type description of a given entity and inferring the most appropriate DOLCE+DnS Ultra Lite class that subsumes this type. The participating systems received short texts in which a single named entity had been marked. The systems had to recognize and mark the type(s) inside the text, generate an RDF node per identified type, and link it to at least one DOLCE Ultra Lite class.

For both tasks, GERBIL offered the evaluation of a participating system on the complete task. Moreover, it reported the system's performance on the individual subtasks. Such detailed reporting can help identify the specific system components that need to be improved, as in [2].
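As a simplified illustration of how such a per-subtask score can be computed, the following sketch derives micro-averaged precision, recall, and F1 from gold and predicted annotation sets. This assumes exact matching of annotations, whereas GERBIL's actual evaluation also supports weaker matching strategies:

```python
def micro_f1(gold, predicted):
    """Micro-averaged precision, recall, and F1 over annotation sets.

    gold and predicted are collections of hashable annotations,
    e.g. (start, end, uri) tuples; exact matching is assumed here.
    """
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)  # true positives: exact matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one correct annotation, one spurious, one missed
gold = [(0, 12, "dbr:Ada_Lovelace"), (44, 50, "dbr:London")]
pred = [(0, 12, "dbr:Ada_Lovelace"), (25, 41, "dbr:Charles_Babbage")]
p, r, f = micro_f1(gold, pred)  # p = 0.5, r = 0.5, f = 0.5
```

Computing such scores separately per subtask (recognition only, linking only, typing only) is what makes it possible to pinpoint which stage of a system's pipeline is the bottleneck.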

The second Open Knowledge Extraction challenge, which addresses the same tasks, will take place from May 29th to June 2nd at the Extended Semantic Web Conference 2016 (ESWC 2016) and will again be supported by GERBIL.

Question Answering over Linked Data (QALD-6)

The sixth instalment of the QALD evaluation campaign will also take place at ESWC 2016. It focusses on question answering (QA) over Linked Data, with a strong emphasis on multilinguality and on hybrid approaches that use information from both structured and unstructured data. Moreover, QALD-6 will tackle statistical QA for the first time!

GERBIL will be able to measure not only the performance of the QA systems as a whole but also that of subtasks such as recognizing the required properties, relations, or entities. Although QALD-6 will mainly be evaluated using the existing portal, we will present the new GERBIL version to the community to kick-start comparable, archivable, and up-to-date experiments in the field of QA.

The results of these campaigns will allow us to assess the evaluation component that will be deployed within the HOBBIT platform. For more information, join our HOBBIT community!

[1] Usbeck, R., et al. 2015. GERBIL: General Entity Annotator Benchmarking Framework. In Proceedings of the 24th International Conference on World Wide Web (WWW '15). ACM, New York, NY, USA, 1133–1143.
[2] Röder, M., et al. 2015. CETUS – A Baseline Approach to Type Extraction. In 1st Open Knowledge Extraction Challenge @ 12th European Semantic Web Conference (ESWC 2015).
