QALD-7 Challenge – ESWC 2017

Question Answering over Linked Data (QALD-7) – ESWC 2017

Challenge Motivation

The past years have seen a growing amount of research on question answering (QA) over Semantic Web data, shaping an interaction paradigm that allows end users to profit from the expressive power of Semantic Web standards while at the same time hiding their complexity behind an intuitive and easy-to-use interface. At the same time, the growing amount of data has led to a heterogeneous data landscape in which QA systems struggle to keep up with the volume, variety and veracity of the underlying knowledge.

The Question Answering over Linked Data (QALD) challenge aims at providing an up-to-date benchmark for assessing and comparing state-of-the-art systems that mediate between a user, expressing his or her information need in natural language, and RDF data. It thus targets all researchers and practitioners working on querying Linked Data, natural language processing for question answering, multilingual information retrieval and related topics. The main goal is to gain insights into the strengths and shortcomings of different approaches and into possible solutions for coping with the large, heterogeneous and distributed nature of Semantic Web data.

QALD has a six-year history of developing a benchmark that is increasingly being used as a standard evaluation benchmark for question answering over Linked Data. Overviews of the past instantiations of the challenge are available from the CLEF Working Notes as well as the ESWC proceedings: QALD-6, QALD-5, QALD-4 and QALD-3.

Since many of the topics relevant for QA over Linked Data lie at the core of ESWC (multilinguality, the Semantic Web, human-machine interfaces), we will run the 7th instantiation of QALD again at ESWC 2017. The HOBBIT project guarantees a controlled setting involving rigorous evaluations via its platform.

Challenge Overview

The key challenge for QA over Linked Data is to translate a user’s information need into a form such that it can be evaluated using standard Semantic Web query processing and inferencing techniques. The main task of QALD therefore is the following:

Given one or several RDF dataset(s) as well as additional knowledge sources and natural language questions or keywords, return the correct answers or a SPARQL query that retrieves these answers.
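For illustration only (this is not an official benchmark question), a question such as "What is the capital of Germany?" could be mapped to the following SPARQL query over DBpedia:

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT DISTINCT ?capital WHERE {
  dbr:Germany dbo:capital ?capital .
}
```

A participating system may return either such a query or the answer set it retrieves (here, the DBpedia resource for Berlin).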


For more information, send an e-mail to:

Important Dates

Release of training data: Friday January 13th, 2017
Paper submission deadline: Monday March 20th, 2017
Challenge paper reviews: Tuesday April 5th, 2017
Paper notifications and invitation to task: Friday April 7th, 2017
Release of test dataset: Friday April 7th, 2017
Camera-ready papers (5-page document): Sunday April 23rd, 2017
Deadline for system submission: Wednesday May 3rd, 2017
Running of the systems: Monday May 15th, 2017
Presentation of challenge results: Thursday June 1st, 2017
Camera-ready papers for the challenge proceedings (up to 15 pages): Friday June 30th, 2017 (tentative deadline)
Proclamation of winners: during the ESWC 2017 closing ceremony

Tasks & Training Data

QALD-7 focuses on the following specific aspects and challenges:

  • (Updated 2017-03-09) Task 1: Multilingual question answering over DBpedia
  • Task 2: Hybrid question answering
  • New! Task 3: Large-Scale Question answering over RDF
  • New! (Updated 2017-02-13) Task 4: English question answering over Wikidata

Prerequisites for participation

The QALD challenge provides an automatic evaluation tool, GERBIL QA, integrated into the HOBBIT platform, which is open source and available for everyone to re-use. This tool is accessible online, so participants can simply submit their answers to GERBIL QA for testing (via upload or a web service). Each experiment will have a citable, time-stable and archivable URI that is both human- and machine-readable.

Paper Submission

  • All challenge papers should be exactly five (5) pages in length, in PDF format, and written in English.
  • In order to prepare their manuscripts, authors should follow Springer’s Lecture Notes in Computer Science (LNCS) style. For details and templates see Springer’s Author Instructions.
  • Paper submissions will be handled electronically via the EasyChair conference management system, available at the following address:
  • Papers must be submitted no later than Monday March 20th, 2017, 23:59 Hawaii Time.
    NOTE: Only authors participating in the challenge are eligible to submit papers.
  • Each submission will be peer-reviewed by members of the challenge program committee.  Papers will be evaluated according to their significance, originality, technical content, style, clarity, and relevance to the challenge.
  • Proceedings will be published by Springer in an LNCS volume.
  • After the conference, challenge participants will be able to provide a detailed description of their system and evaluation results in a longer version of their paper (up to 15 pages), which will be included in the challenge proceedings.

Evaluation Metrics

Participating systems will be evaluated with respect to precision and recall. For each question q, precision and recall are computed as follows:

   recall(q) = number of correct system answers for q / number of gold standard answers for q
   precision(q) = number of correct system answers for q / number of system answers for q

Globally, the evaluation will compute the macro and micro F-measure of a system, both over all test questions and over only those questions for which the system provided an answer. Contrasting the latter two cases makes it possible to take into account a system’s ability to identify questions it cannot answer. The evaluation will also allow systems to provide a confidence measure together with their answers, which will be multiplied with the achieved score. For Task 3 specifically, the evaluation will take into account not only the accuracy measures for the answered questions but also scalability measures in terms of the number of processed queries and the time needed for answer retrieval.
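As a sketch of how the two global measures differ: one common convention (the authoritative definitions are those implemented in GERBIL QA; the class and record names below are made up for illustration) is that the macro F-measure averages the per-question F1 scores, while the micro F-measure pools the raw answer counts over all questions first:

```java
import java.util.List;

public class QaldEvaluation {

    /** Per-question counts: gold-standard answers, system answers, correct system answers. */
    public record QuestionResult(int gold, int returned, int correct) {}

    static double precision(QuestionResult r) {
        // Convention assumed here: an empty answer set yields precision 0.
        return r.returned() == 0 ? 0.0 : (double) r.correct() / r.returned();
    }

    static double recall(QuestionResult r) {
        return r.gold() == 0 ? 0.0 : (double) r.correct() / r.gold();
    }

    static double f1(double p, double r) {
        return (p + r) == 0.0 ? 0.0 : 2 * p * r / (p + r);
    }

    /** Macro F-measure: average the per-question F1 scores. */
    public static double macroF1(List<QuestionResult> results) {
        return results.stream()
                .mapToDouble(r -> f1(precision(r), recall(r)))
                .average().orElse(0.0);
    }

    /** Micro F-measure: pool the counts over all questions, then compute F1 once. */
    public static double microF1(List<QuestionResult> results) {
        int gold = results.stream().mapToInt(QuestionResult::gold).sum();
        int returned = results.stream().mapToInt(QuestionResult::returned).sum();
        int correct = results.stream().mapToInt(QuestionResult::correct).sum();
        double p = returned == 0 ? 0.0 : (double) correct / returned;
        double r = gold == 0 ? 0.0 : (double) correct / gold;
        return f1(p, r);
    }
}
```

A system that answers easy questions perfectly but fails on questions with large answer sets will score higher on the macro measure than on the micro measure, which is exactly the contrast the evaluation exploits.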

Technical requirements for system submission

Each participant must provide their system as a Docker image. This image has to be uploaded to the HOBBIT GitLab (it is possible to use a private repository, i.e., the system will not be visible to other people). In general, the uploaded Docker image can contain either a) the system itself or b) a web service client that forwards requests to a system hosted by you. General information can be found in our HOBBIT wiki. Do not hesitate to contact us if you have any questions.

Implementing the API

To be able to benchmark your system, it needs to implement our QALD-JSON-based format (e.g., using a wrapper). There are several ways this can be achieved.
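As a rough sketch of the shape of such a document (the authoritative schema is the one in the released training data and the provided QALD-JSON classes; the field names here are illustrative and should be checked against the training files; answers follow the SPARQL JSON results format):

```json
{
  "questions": [
    {
      "id": "1",
      "question": [
        { "language": "en", "string": "What is the capital of Germany?" }
      ],
      "query": {
        "sparql": "SELECT DISTINCT ?uri WHERE { <http://dbpedia.org/resource/Germany> <http://dbpedia.org/ontology/capital> ?uri }"
      },
      "answers": [
        {
          "head": { "vars": [ "uri" ] },
          "results": {
            "bindings": [
              { "uri": { "type": "uri", "value": "http://dbpedia.org/resource/Berlin" } }
            ]
          }
        }
      ]
    }
  ]
}
```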

1st possibility: GERBIL-QA compatible APIs

If your system already implements an API that is compatible with the GERBIL QA benchmarking framework, you do not have to implement anything in addition. You only need to provide a Docker image of your system that implements the same API as your original web service and send us a description of how it has to be started (e.g., which environment variables have to be defined).

A system is compatible with GERBIL QA if it is either one of the systems that are already available in GERBIL QA or a QALD-JSON-based web service that can be benchmarked with GERBIL QA.

2nd possibility: Java-based system or system adapter

This option assumes that:

  • You can write Java code and are familiar with Maven.
  • You have a system written in Java (or at least a client for the system).
  • You have Docker installed.

You can find an example implementation here:
1. Write a System Adapter for your system
  • Create a new maven-Project in your favorite IDE.
  • Add the following to your pom.xml:

                <repositories>
                        <repository>
                                <name>University Leipzig, AKSW Maven2 Repository</name>
                        </repository>
                </repositories>
                <dependencies>
                        <!-- Add your System here -->
                        <!-- Add a slf4j log binding here -->
                </dependencies>
                <build><plugins><plugin>
                        <!-- maven-shade-plugin; filter all the META-INF files of other artifacts -->
                        <configuration><transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer" />
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
                        </transformers></configuration>
                </plugin></plugins></build>
  • Add your system as a dependency.
2. Now, add a SystemAdapter:
  • Add the following source file to your Maven project:
package org.example;

import java.io.IOException;
import org.hobbit.core.components.AbstractSystemAdapter;

public class ExampleSystemAdapter extends AbstractSystemAdapter {

    private MyAnnotator myAnnotator;

    @Override
    public void init() throws Exception {
        super.init();
        myAnnotator = new MyAnnotator();
        // Your initialization code comes here...
        // You can access the RDF model this.systemParamModel to retrieve
        // meta data about this system adapter.
    }

    /**
     * You MIGHT need this, depending on the benchmark.
     * @see "Challenges and their Benchmarks"
     */
    @Override
    public void receiveGeneratedData(byte[] data) {
        // Handle the incoming data as described in the benchmark description.
    }

    /**
     * Create results for the incoming data. The structure of the incoming
     * data is defined by the benchmark (see "Challenges and their
     * Benchmarks"). E.g., if you want to benchmark against the QALD
     * challenge, you can expect the incoming data to be questions;
     * accordingly, your result[] output should be answers. The data
     * structures of the incoming and outgoing data should follow the
     * QALD-JSON format. A loader, a parser and a complete class structure
     * for QALD-JSON are already included as a dependency.
     * @see "The Task Queue and structure of data[]"
     */
    @Override
    public void receiveGeneratedTask(String taskId, byte[] data) {
        // Here is where your system has to do its job.
        byte[] result = myAnnotator.annotate(data);

        // Send the result to the evaluation storage.
        try {
            sendResultToEvalStorage(taskId, result);
        } catch (IOException e) {
            // Log the error.
        }
    }

    @Override
    public void close() throws IOException {
        // Free the resources you requested here.
        // Always close the super class after yours!
        super.close();
    }
}

  • Here, the class MyAnnotator refers to your system.
  • Be sure to implement a proper init() and close() along with your program logic in receiveGeneratedTask().
  • Now you are ready to build your project: use your IDE or run mvn package from the command line in the root directory of your Maven project.
  • You should now have two JARs in your (project_root)/target/ folder: the original and the shaded one. If you can’t tell them apart, look at their file size: the shaded one is usually far larger. Also, the non-shaded one has the prefix original-.
3. Create an account for HOBBIT
4. Docker it
  • Create a new file at your project root with the name Dockerfile
  • The contents of this file should look like this:
FROM java
ADD target/<MyShadedJar.jar> /<MyAnnotator>/<MyShadedJar.jar>
WORKDIR /<MyAnnotator>
CMD java -cp <MyShadedJar.jar> org.example.ExampleSystemAdapter
5. Create a system.ttl file

Here is an example of a basic file:

@prefix rdfs: <> .
@prefix hobbit: <> .

<> a  hobbit:SystemInstance; 
    rdfs:label        "MySystem"@en;
    rdfs:comment        "A short description of MySystem"@en;
    hobbit:imageName "";
    hobbit:implementsAPI <> .

Here, the object of hobbit:implementsAPI refers to the benchmark your system should be tested with. You can find those URIs at

Now, push the system.ttl to your GitLab project, or add it by hand to your GitLab project root.

6. Run a Benchmark

Go to and run a benchmark on your system.

There is a detailed description in the platform wiki that you might want to reuse or refer to.

Where to go now?

You can find a very detailed description of each step and in-depth information at:

Look at the example implementation at:


3rd possibility: Direct implementation of the API

If you want to use a different language to implement our QALD-JSON-based API, you need to implement the API of a system that can be benchmarked in HOBBIT. Every message of the task queue will be a single QALD-JSON document. The response of your system has to be sent to the result queue. Your system won’t receive data through the data queue.


If your system is able to answer questions in more than one language, we will send the lang HTTP parameter with ISO-standard abbreviations (e.g., en, fr, de) and not as JSON. By default, the system will be asked questions in English.

Uploading the Docker image

The uploading of the Docker image is described in the HOBBIT platform wiki.

The system meta data file

Your system needs a system meta data file (called system.ttl).

A detailed description of the system.ttl file will be added soon.

Program & Accepted Papers

Challenge Session
Tuesday, May 30th, 2017
14:00 – 14:30
14:00 – 14:10 QALD Challenge Overview
Giulio Napolitano
14:10 – 14:17 Daniil Sorokin and Iryna Gurevych, End-to-End Representation Learning for Question Answering with Weak Supervision
14:17 – 14:24 Dennis Diefenbach, Kamal Singh and Pierre Maret, WDAqua-core0: A Question Answering Component for the Research Community
14:24 – 14:31 Nikolay Radoev, Mathieu Tremblay, Michel Gagnon and Amal Zouaq, Answering Natural Language Questions on RDF Knowledge Base in French
Posters and Demos Session
June 1st, 2017
9:00 – 11:00
The following systems will be presented as posters:

  • Daniil Sorokin and Iryna Gurevych, End-to-End Representation Learning for Question Answering with Weak Supervision
  • Nikolay Radoev, Mathieu Tremblay, Michel Gagnon and Amal Zouaq, Answering Natural Language Questions on RDF Knowledge Base in French
Closing Ceremony
Thursday, June 1st, 2017
 Announcement of challenge winners during the ESWC closing ceremony

For possible last-minute changes to the program, please also check the ESWC 2017 program.

QALD-7 Challenge Results

The results are described here and in Deliverable 7.3.1.


Organization

The organization responsibility will be shared by the following four main organizers:

  • Ricardo Usbeck, University of Leipzig, Germany
    Expertise: Knowledge extraction and Question Answering
  • Axel-Cyrille Ngonga Ngomo, Institute for Applied Informatics, Germany
    Expertise: Knowledge extraction, Machine Learning, Question Answering, Information Retrieval
  • Bastian Haarmann, Fraunhofer-Institute IAIS, Germany
    Expertise: Knowledge extraction, Named Entity Recognition, Linked Open Data and Question Answering
  • Anastasia Krithara, National Center for Scientific Research “Demokritos”
    Expertise: Information retrieval, Question Answering

In addition, the following data experts will support the construction of the benchmark data and question sets, as well as the advertisement and dissemination of the challenge. This list might still be subject to extension.

  • Harsh Thakkar, University of Bonn, Germany
  • Jonathan Huthmann, Institute for Applied Informatics, Germany
  • Jens Lehmann, Fraunhofer-Institute IAIS, Germany

In addition to the above set of people, we will compile a program committee consisting of experts from research and industry, who will review the paper submissions independently of the organization team:

  • Corina Forascu, Alexandru Ioan Cuza University, Iasi, Romania
  • Sebastian Walter, CITEC, Bielefeld University, Germany
  • Bernd Müller, ZBMed, Germany
  • Christoph Lange, Fraunhofer Gesellschaft, Germany
  • Dennis Diefenbach, Université de Saint-Étienne, France
  • Edgard Marx, AKSW, University Leipzig, Germany
  • Hady Elsahar, Université de Saint-Étienne, France
  • Harsh Thakkar, University of Bonn, Germany
  • Ioanna Lytra, University of Bonn, Germany
  • John McCrae, INSIGHT – The Centre for Data Analytics, Ireland
  • Konrad Höffner, AKSW, University Leipzig, Germany
  • Kuldeep Singh, University of Bonn, Germany
  • Saeedeh Shekarpour, Kno.e.sis Center, Ohio Center of Excellence in Knowledge-enabled Computing, USA
  • Sherzod Hakimov, CITEC, Bielefeld University, Germany
  • Elena Cabrio, University of Nice Sophia Antipolis, France
  • Philipp Cimiano, CITEC, Bielefeld University, Germany
  • Vanessa Lopez, IBM Research, Dublin, Ireland
  • André Freitas, University of Passau, Germany
  • Elena Demidova, University of Southampton, United Kingdom
  • Petr Baudiš, Czech Technical University in Prague, Czech Republic
  • Jin-Dong Kim, Database Center for Life Science (DBCLS), Japan
  • Key-Sun Choi, KAIST, Korea


QALD 2017 – 7th Question Answering over Linked Data Challenge

May 28th to June 1st 2017, Portoroz, Slovenia

in conjunction with the 14th European Semantic Web Conference (ESWC 2017).

The key challenge for Question Answering over Linked Data is to translate a user’s
information need into a form such that it can be evaluated using standard
Semantic Web query processing and inferencing techniques.

The main task of QALD therefore is the following:
Given one or several RDF dataset(s) as well as additional knowledge sources and natural
language questions or keywords, return the correct answers or a SPARQL query that
retrieves these answers.

This year, the challenge comprises the following tasks:
* Task 1: Multilingual question answering over DBpedia
* Task 2: Hybrid question answering
* Task 3: Large-Scale Question answering over RDF
* Task 4: English question answering over Wikidata

We expect participants to describe their approach in a 5-page paper, including advantages and disadvantages, as well as a first evaluation of the system on the respective task training data or an in-depth analysis of errors.

Important Dates
* Paper submission deadline (5 pages document): March 10th, 2017, 23:59 Hawaii Time**
* Notification of acceptance: April 7th, 2017
* Camera ready papers (5 pages document): April 23rd, 2017
* Deadline for submission of system answers/instructions for evaluation: TBA
* Release of evaluation results: TBA
* Proclamation of winners: During ESWC 2017 closing ceremony

**Only authors participating in the challenge are eligible to submit papers.

Organizers:
* Ricardo Usbeck, University of Leipzig, Germany
* Axel-Cyrille Ngonga Ngomo, Institute for Applied Informatics, Germany
* Bastian Haarmann, Fraunhofer-Institute IAIS, Germany
* Anastasia Krithara, NCSR “Demokritos”, Greece

For the complete list of organizers and program committee members,
please visit the challenge website.

Further Information and Contact
For detailed information, including datasets and submission guidelines,
please visit the challenge website:
Contact Email:

The challenge outcome overview can be found here, while the challenge proceedings are available from Springer here.