QALD 2017 Challenge – ESWC 2017

Question Answering over Linked Data (QALD-7)

Challenge Motivation

The past years have seen a growing amount of research on question answering (QA) over Semantic Web data, shaping an interaction paradigm that allows end users to profit from the expressive power of Semantic Web standards while hiding their complexity behind an intuitive and easy-to-use interface. At the same time, the growing amount of data has led to a heterogeneous data landscape in which QA systems struggle to keep up with the volume, variety and veracity of the underlying knowledge.

The Question Answering over Linked Data (QALD) challenge aims at providing an up-to-date benchmark for assessing and comparing state-of-the-art systems that mediate between a user, expressing his or her information need in natural language, and RDF data. It thus targets all researchers and practitioners working on querying Linked Data, natural language processing for question answering, multilingual information retrieval and related topics. The main goal is to gain insights into the strengths and shortcomings of different approaches and into possible solutions for coping with the large, heterogeneous and distributed nature of Semantic Web data.

QALD has a six-year history of developing a benchmark that is increasingly being used as a standard evaluation benchmark for question answering over Linked Data. Overviews of the past instantiations of the challenge are available in the CLEF Working Notes as well as the ESWC proceedings: QALD-6, QALD-5, QALD-4, QALD-3.

Since many of the topics relevant for QA over Linked Data lie at the core of ESWC (Multilinguality, Semantic Web, Human-Machine Interfaces), we will run the 7th instantiation of QALD again at ESWC 2017. The HOBBIT project guarantees a controlled setting with rigorous evaluations via its platform.

Challenge Overview

The key challenge for QA over Linked Data is to translate a user’s information need into a form that can be evaluated using standard Semantic Web query processing and inferencing techniques. The main task of QALD is therefore the following:

Given one or several RDF dataset(s) as well as additional knowledge sources and natural language questions or keywords, return the correct answers or a SPARQL query that retrieves these answers.
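
To illustrate the expected output format, the sketch below pairs an example natural language question with a hand-written SPARQL query and executes it against the public DBpedia endpoint. The question, the query and the use of the SPARQLWrapper library are illustrative assumptions, not part of the official training data.

    # Illustrative example (not an official QALD question): answer
    # "What is the capital of Germany?" with a SPARQL query over DBpedia.
    from SPARQLWrapper import SPARQLWrapper, JSON

    QUERY = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT DISTINCT ?capital WHERE { dbr:Germany dbo:capital ?capital . }
    """

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    # Print the answer resources, e.g. http://dbpedia.org/resource/Berlin
    for binding in results["results"]["bindings"]:
        print(binding["capital"]["value"])

A system participating in the challenge is expected to produce such a query (or the resulting answer set) automatically from the natural language input.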

Q&A

For more information, send an e-mail to qald-contact@googlegroups.com.

Important Dates

Release of training data: Friday January 13th, 2017
Paper submission deadline: Monday March 20th, 2017
Challenge paper reviews: Wednesday April 5th, 2017
Paper notifications and invitation to tasks: Friday April 7th, 2017
Release of test dataset: Friday April 7th, 2017
Camera-ready papers (5-page document): Sunday April 23rd, 2017
Deadline for system submission: April 23rd, 2017
Running of the systems: May 15th, 2017
Results: May 22nd, 2017
Presentation of challenge results: June 2nd, 2017
Camera-ready papers for the challenge proceedings (up to 15 pages): Friday June 30th, 2017 (tentative deadline)
Proclamation of winners: during the ESWC 2017 closing ceremony

Tasks & Training Data

QALD-7 focuses on the following specific aspects and challenges:

  • (Updated 2017-03-09) Task 1: Multilingual question answering over DBpedia
  • Task 2: Hybrid question answering
  • New! Task 3: Large-scale question answering over RDF
  • New! (Updated 2017-02-13) Task 4: English question answering over Wikidata

Prerequisites for participation

The QALD challenge provides an automatic evaluation tool (GERBIL QA, integrated into the HOBBIT platform) that is open source and available for everyone to re-use. This tool is accessible online, so that participants can simply upload the answers produced by their system or even have their system checked via a webservice. Each experiment will have a citable, time-stable and archivable URI which is both human- and machine-readable. Each system must provide an endpoint URL in the submission PDF in order to be evaluated with GERBIL QA; cf. our latest description of the system.
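
The exact webservice interface is defined by the GERBIL QA / HOBBIT documentation. As a rough illustration only, the following minimal Flask sketch assumes the evaluator POSTs the question as a form field named "query" together with a language tag "lang" and expects a QALD-style JSON document in return; the route name, the parameter names and the answer_question helper are assumptions for this sketch, not the authoritative protocol.

    # Minimal sketch of a QA webservice endpoint for evaluation.
    # ASSUMPTIONS (not taken from the challenge text): the evaluator POSTs
    # the question as form fields "query" and "lang" and expects QALD-style
    # JSON. Consult the GERBIL QA / HOBBIT documentation for the actual API.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def answer_question(question, lang):
        # Placeholder for the actual QA pipeline (hypothetical helper).
        return {"sparql": "SELECT ...", "answers": []}

    @app.route("/qa", methods=["POST"])
    def qa():
        question = request.form.get("query", "")
        lang = request.form.get("lang", "en")
        result = answer_question(question, lang)
        # Return one question with its generated query and answer set.
        return jsonify({
            "questions": [{
                "question": [{"language": lang, "string": question}],
                "query": {"sparql": result["sparql"]},
                "answers": result["answers"],
            }]
        })

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

The URL of such a running service is what needs to be stated in the submission PDF so that the evaluation can be triggered against it.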

Registration and Submission

  • All challenge papers should be exactly five (5) pages in length in PDF file format and written in English.
  • In order to prepare their manuscripts, authors should follow Springer’s Lecture Notes in Computer Science (LNCS) style. For details and templates see Springer’s Author Instructions.
  • Paper submissions will be handled electronically via the EasyChair conference management system, available at the following address: https://easychair.org/conferences/?conf=qald2017.
  • Papers must be submitted no later than Monday March 20th, 2017, 23:59 Hawaii Time.
    NOTE: Only authors participating in the challenge are eligible to submit papers.
  • Each submission will be peer-reviewed by members of the challenge program committee. Papers will be evaluated according to their significance, originality, technical content, style, clarity, and relevance to the challenge.
  • Proceedings will be published by Springer as an LNCS volume.
  • After the conference, challenge participants will be able to provide a detailed description of their system and evaluation results in a longer version of their paper (up to 15 pages). This longer paper will be included in the challenge proceedings.

Q&A

For any questions, please write an email to Ricardo Usbeck (usbeck AT informatik.uni-leipzig.de).

Organization

Organization responsibilities will be shared by the following four main organizers:

  • Ricardo Usbeck, University of Leipzig, Germany
    Expertise: Knowledge extraction and Question Answering
    Website: http://aksw.org/RicardoUsbeck
    Email: usbeck@informatik.uni-leipzig.de
  • Axel-Cyrille Ngonga Ngomo, Institute for Applied Informatics, Germany
    Expertise: Knowledge extraction, Machine Learning, Question Answering, Information Retrieval
    Website:  http://aksw.org/AxelNgonga
    Email: ngonga@informatik.uni-leipzig.de
  • Bastian Haarmann, Fraunhofer-Institute IAIS, Germany
    Expertise: Knowledge extraction, Named Entity Recognition, Linked Open Data and Question Answering
    Website: https://www.directory.fraunhofer.de/?search=personKeyword&keyword=haarmann
    Email: bastian.haarmann@iais.fraunhofer.de
  • Anastasia Krithara, National Center for Scientific Research “Demokritos”, Greece
    Expertise: Information retrieval, Question Answering
    Website: http://users.iit.demokritos.gr/~akrithara/
    Email: akrithara@iit.demokritos.gr

In addition, the following data experts will support the construction of the benchmark data and question sets, as well as the advertisement and dissemination of the challenge. This list may still be extended.

  • Harsh Thakkar, University of Bonn, Germany
  • Jonathan Huthmann, Institute for Applied Informatics, Germany
  • Jens Lehmann, Fraunhofer-Institute IAIS, Germany

In addition to the above set of people, we will compile a program committee consisting of experts from research and industry, who will review the paper submissions independently of the organization team:

  • Corina Forascu, Alexandru Ioan Cuza University, Iasi, Romania
  • Sebastian Walter, CITEC, Bielefeld University, Germany
  • Bernd Müller, ZBMed, Germany
  • Christoph Lange, Fraunhofer Gesellschaft, Germany
  • Dennis Diefenbach, Université de Saint-Étienne, France
  • Edgard Marx, AKSW, University of Leipzig, Germany
  • Hady Elsahar, Université de Saint-Étienne, France
  • Harsh Thakkar, University of Bonn, Germany
  • Ioanna Lytra, University of Bonn, Germany
  • John McCrae, INSIGHT – The Centre for Data Analytics, Ireland
  • Konrad Höffner, AKSW, University of Leipzig, Germany
  • Kuldeep Singh, University of Bonn, Germany
  • Saeedeh Shekarpour, Kno.e.sis Center, Ohio Center of Excellence in Knowledge-enabled Computing, USA
  • Sherzod Hakimov, CITEC, Bielefeld University, Germany
  • Elena Cabrio, University of Nice Sophia Antipolis, France
  • Philipp Cimiano, CITEC, Bielefeld University, Germany
  • Vanessa Lopez, IBM Research, Dublin, Ireland
  • André Freitas, University of Passau, Germany
  • Elena Demidova, University of Southampton, United Kingdom
  • Petr Baudiš, Czech Technical University in Prague, Czech Republic
  • Jin-Dong Kim, Database Center for Life Science (DBCLS), Japan
  • Key-Sun Choi, KAIST, Korea