DEBS 2018—Call for Grand Challenge solutions
Join the 2018 DEBS Grand Challenge and use machine learning to make maritime transportation more reliable! Explore multiple gigabytes of real maritime spatio-temporal streaming data and compete with peers from academia and industry for the Grand Challenge prize of 1000 USD.
The DEBS Grand Challenge is a series of competitions, started in 2010, in which both academics and professionals compete with the goal of building faster and more accurate distributed and event-based systems. Every year, the DEBS Grand Challenge participants have the chance to explore a new data set and a new problem, and can compare their results based on common evaluation criteria.
The 2018 DEBS Grand Challenge focuses on the application of machine learning to spatio-temporal streaming data. The goal of the challenge is to make the naval transportation industry more reliable by providing predictions for vessels’ destinations and arrival times. Predicting both the correct destinations and the arrival times of vessels are relevant problems that, once solved, will boost the efficiency of the overall supply chain management. The Grand Challenge data is provided by the MarineTraffic company and hosted by the BigDataOcean (*) EU Horizon 2020 project. The evaluation platform is provided by the HOBBIT project (**), an EU Horizon 2020 project represented by AGT International (http://www.agtinternational.com/).
Details about the data, the queries for the Grand Challenge, and the evaluation process are provided here.
(*) BigDataOcean project has received funding from the European Union’s H2020 research and innovation action program under grant agreement number #732310.
(**) HOBBIT project has received funding from the European Union’s H2020 research and innovation action program under grant agreement number #688227.
Participants of the challenge compete for two awards: (1) the performance award and (2) the audience award. The winner of the performance award will be determined through the automated evaluation of the HOBBIT platform, according to the evaluation criteria. These criteria factor in speed as well as accuracy of the solution. The winning team will receive 1000 USD as prize money.
The winner of the audience award will be determined amongst the finalists who present in the Grand Challenge session of the DEBS conference. In this session, the audience will be asked to vote for the solution with the most interesting concepts (highest number of votes wins). The intention is to reward qualities of the solutions that are not tied to performance. Specifically, the audience will be encouraged to pay attention to the following aspects:
- Novelty/originality of the solution
- Quality of the solution architecture (e.g. flexibility, reusability, extensibility, generality, …)
There are two ways in which teams can become finalists and obtain a presentation slot in the Grand Challenge session. (1) The two teams with the best performance (according to the HOBBIT platform) will be nominated. (2) The Grand Challenge organizers will review the submitted papers for each solution and nominate additional teams with the most interesting concepts.
All submissions of sufficient quality that do not make it to the finals will get a chance to present their solution as a poster. (The sufficiency of the quality will be determined through the review of the papers.)
The Frequently Asked Questions will appear here. Please note that an issue tracker is available here.
|Challenge start (i.e. HOBBIT platform is available for testing):||January 15th, 2018|
|Submission deadline:||May 7th, 2018|
Static information: The queries require knowledge about the location of ports around the world. The locations are specified via bounding boxes that are defined through coordinates. You can find the complete list of ports here.
Data Stream: We provide a stream of comma-separated tuples that are ordered by time. A ship sends a tuple according to its behaviour based on the AIS specifications. The schema of the tuples is provided below:
Schema <SHIP_ID, SPEED, LON, LAT, COURSE, HEADING, TIMESTAMP, Departure PORT_NAME, Reported_Draught>
- SHIP_ID is the anonymized id of the ship
- SPEED is measured in knots (divide value by 10)
- LON is the longitude of the current ship position
- LAT is the latitude of the current ship position
- COURSE is the direction in which the ship moves (see: https://en.wikipedia.org/wiki/Course_(navigation))
- HEADING is the direction in which the ship’s bow points (see: https://en.wikipedia.org/wiki/Course_(navigation))
- TIMESTAMP is the time at which the message was sent (UTC)
- Departure PORT_NAME is the name of the last port visited by the vessel
- Reported_Draught is the vertical distance between the waterline and the bottom of the ship’s hull (keel); see https://en.wikipedia.org/wiki/Draft_(hull)
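Parsing one such tuple can be sketched as below. This is an illustrative sketch, not reference code from the challenge: the class and field names are my own, and the timestamp is kept as a string because the exact format is dataset-specific.

```python
from dataclasses import dataclass

@dataclass
class AisTuple:
    ship_id: str
    speed_knots: float   # raw SPEED field divided by 10
    lon: float
    lat: float
    course: float
    heading: float
    timestamp: str       # kept as a string; the exact format is dataset-specific
    departure_port: str
    reported_draught: str

def parse_ais_tuple(line: str) -> AisTuple:
    """Parse one comma-separated tuple according to the schema above."""
    ship_id, speed, lon, lat, course, heading, ts, port, draught = line.strip().split(",")
    return AisTuple(
        ship_id=ship_id,
        speed_knots=float(speed) / 10.0,  # SPEED is reported in tenths of a knot
        lon=float(lon),
        lat=float(lat),
        course=float(course),
        heading=float(heading),
        timestamp=ts,
        departure_port=port,
        reported_draught=draught,
    )
```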
Query 1: Predicting destinations of ships
Predicting the correct destination of a vessel is a relevant problem for a wide range of stakeholders, including port authorities, vessel operators and many more. The prediction problem is to generate a continuous stream of predictions for the destination port of any vessel given the following information: (1) name of the port of origin, (2) unique ID of the vessel, (3) position of the vessel, (4) time stamp, and (5) vessel’s draught. The above data is provided as a continuous stream of tuples, and the goal of the system is to provide, for every input tuple, one output tuple containing the name of the destination port. A solution is considered correct at time stamp T if, for the tuple with this timestamp as well as for all subsequent tuples, the predicted destination port matches the actual destination port. The goal of any solution is not only to predict a correct destination port but also to predict it as soon as possible, counting from the moment when a new port of origin appears for a given vessel.
For the challenge we will define a set of ports to consider. Each port will be specified by coordinates that define a bounding box around the port.
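With ports given as bounding boxes, port arrival detection reduces to a point-in-bounding-box test. A minimal sketch follows; the port names and coordinates below are made up for illustration and would come from the official port list in practice.

```python
from typing import Optional

# Illustrative bounding boxes as (lon_min, lat_min, lon_max, lat_max);
# these example values are NOT from the official port list.
PORTS = {
    "BREST":    (-4.50, 48.33, -4.42, 48.39),
    "VALENCIA": (-0.35, 39.42, -0.28, 39.47),
}

def port_at(lon: float, lat: float) -> Optional[str]:
    """Return the port whose bounding box contains the position, or None."""
    for name, (lon_min, lat_min, lon_max, lat_max) in PORTS.items():
        if lon_min <= lon <= lon_max and lat_min <= lat <= lat_max:
            return name
    return None
```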
Evaluation for Query 1
The evaluation takes into account how early the correct predictions are made (Rank A1) and the total runtime of the system (Rank B1).
Rank A1 ranks solutions according to the prediction time (the average time span between a prediction and the arrival at the port). Only correct predictions are considered. The arrival at a port is defined by the first event that is reported from within the respective bounding box. More formally, Rank A1 = total_travel_time / earliest_travel_time_with_correct_prediction (note that earliest_travel_time_with_correct_prediction is defined by the point in time from which all subsequent predictions are correct).
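The notion of "correct from some point onwards" can be made concrete with a small helper that scans the time-ordered predictions of one trip and returns the earliest timestamp after which no wrong prediction occurs. This is an illustrative reading of the metric, not the official evaluator:

```python
def earliest_stable_correct(predictions, actual_destination):
    """predictions: time-ordered (timestamp, predicted_port) pairs for one trip.
    Returns the earliest timestamp from which all subsequent predictions
    match actual_destination, or None if the last prediction is wrong."""
    stable_from = None
    for ts, port in predictions:
        if port == actual_destination:
            if stable_from is None:
                stable_from = ts  # start of a (potentially final) correct streak
        else:
            stable_from = None    # a wrong prediction resets the streak
    return stable_from
```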
The overall ranking for query 1 (Rank Q1) is then computed as Rank Q1 = 0.75*Rank A1 + 0.25*Rank B1.
At any point in time there is only one tuple per ship in the queue.
Query 2: Predicting arrival times of ships
There is a set of ports defined by respective bounding boxes of coordinates. Once a ship leaves a port (i.e. the respective bounding box), the task is to predict the arrival time at its destination port (i.e. when the next defined bounding box will be entered). Also for this query, after port departure and until arrival, the solution must emit one prediction per position update. The event includes the following information: <ship_id, arrival_time>.
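One naive baseline for such an arrival-time prediction (not part of the challenge specification, merely an illustration) is a straight-line ETA from the current position and speed, using the haversine distance:

```python
import math
from datetime import datetime, timedelta

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two positions, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def naive_eta(lon, lat, speed_knots, dest_lon, dest_lat, now):
    """Straight-line ETA at constant speed (1 knot = 1.852 km/h), or None if stopped."""
    speed_kmh = speed_knots * 1.852
    if speed_kmh <= 0:
        return None  # vessel not moving; no ETA can be derived
    hours = haversine_km(lon, lat, dest_lon, dest_lat) / speed_kmh
    return now + timedelta(hours=hours)
```

A serious solution would of course learn routes from historical trips rather than assume straight-line travel, but such a baseline is useful for sanity-checking a pipeline end to end.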
Evaluation for Query 2
The evaluation takes into account the accuracy of predictions (Rank A2) and the total runtime (Rank B2).
Rank A2 ranks solutions according to the prediction accuracy (i.e. the mean absolute error of all predicted arrival times). Note that only correctly predicted target ports are considered. Rank B2 ranks according to the total runtime.
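As an illustrative reading of this accuracy metric (the function and parameter names are mine, not the official evaluator's), the error over the correctly-routed trips could be computed as:

```python
from datetime import datetime

def arrival_time_mae_seconds(pairs):
    """pairs: (predicted_arrival, actual_arrival) datetime pairs, restricted to
    trips whose destination port was predicted correctly.
    Returns the mean absolute error in seconds."""
    errors = [abs((pred - actual).total_seconds()) for pred, actual in pairs]
    return sum(errors) / len(errors)
```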
The overall ranking for query 2 (Rank Q2) is then computed as Rank Q2 = 0.75*Rank A2 + 0.25*Rank B2. The final ranking is given by the sum of ranks Rank Q1 and Rank Q2.
The evaluation cluster of the online platform has three worker nodes allocated for solutions. Each node has 2× 64-bit Intel Xeon E5-2630v3 processors (8 cores, 2.4 GHz, Hyper-Threading, 20 MB cache per processor), 256 GB RAM, and 1 Gbit Ethernet.
In order to participate in the challenge, participants need to:
- Develop a system adapter connecting their system to the HOBBIT platform
- Upload the system to the HOBBIT platform so that it can be benchmarked
- Register the system for the DEBS 2018 Grand Challenge for final evaluation
Instructions for developing a HOBBIT system adapter are available at the HOBBIT Wiki. A simple Hello World example for this challenge is available here. The hobbit-java-sdk and published sources (to be updated) should help participants debug their systems locally and prepare a Docker image for uploading to the online platform. Detailed information about the upload procedure is documented here. After submitting your system to the HOBBIT platform, you can use the DEBS 2018 Benchmark (to be published) to test the correctness of your implementation.
In order to register your system for the challenge, you have to use the “DEBS 2018 Grand Challenge” item under the “Challenges” tab in the platform GUI. The registration procedure is described in detail here. At the moment, participants need to register their systems for all tasks defined in the DEBS 2018 Grand Challenge.
Registration and Submission
- Register at EasyChair: The first step is to register your submission in the EasyChair Grand Challenge Track. At this point, this is only to state your intent to participate and to establish communication with the organizers. Therefore, it is sufficient to submit an interim title for your work.
- Submit a solution to HOBBIT: You need to submit your solution to the HOBBIT platform in order to get it benchmarked in the challenge. The platform gives you feedback and allows you to update your solution, so you can continuously improve your system until the closing date (t.b.d.). We will evaluate the latest solution that you uploaded before the closing date.
- Submit a short paper: Finally, you need to upload a short paper (2 pages, plus optional appendix) about your solution to EasyChair. The paper will be reviewed to assess the merit and originality of your solution. All solutions of sufficient quality will at least get the chance to present a poster at the DEBS conference.
Program & Accepted Papers
|Session: Grand Challenge – MSB.1.01
Tuesday, June 28th, 2018
|Vincenzo Gulisano, Zbigniew Jerzak, Pavel Smirnov, Martin Strohbach, Holger Ziekow, Dimitris Zissis, The DEBS 2018 Grand Challenge|
|Ciprian Amariei, Paul Diac, Emanuel Onica and Valentin Roșca. Grand Challenge: Cell Grid Architecture for Maritime Route Prediction on AIS Data Streams|
|Moti Bachar, Gal Elimelech, Itai Gat, Gil Sobol, Nicolo Rivetti and Avigdor Gal. Grand Challenge: Venelia, On-line Learning and Prediction of Vessel Destination|
|Abderrahmen Kammoun, Tanguy Raynaud, Syed Gillani, Kamal Singh, Jacques Fayolle and Frederique Laforest. Grand Challenge: A Scalable Framework for Accelerating Situation Prediction over Spatio-temporal Event Streams|
|Duc-Duy Nguyen, Chan Le Van and Muhammad Intizar Ali. Vessel Destination and Arrival Time Prediction using Sequence-to-Sequence Models over Spatial Grid|
|Hyungkun Jung, Kang-Woo Lee, Joong-Hyun Choi and Eun-Sun Cho. Grand Challenge: Bayesian Estimation on Destination Ports and Arrival times of Vessels|
|Florian Schmidt, Oleh Bodunov, André Martin, Andrey Brito and Christof Fetzer. Grand Challenge: Real-time Destination and ETA Prediction for Maritime Traffic|
|Chun-Xun Lin, Tsung-Wei Huang, Guannan Guo and Martin Wong. MtDetector: A High-Performance Marine Traffic Detector at Stream Scale|
|Rim Moussa. Scalable Maritime Traffic Map Inference and Real-time Prediction of Vessels’ Future Locations on Apache Spark|
|Valentin Roșca, Emanuel Onica, Paul Diac and Ciprian Amariei. Grand Challenge: Predicting Destinations by Nearest Neighbor Search on Training Vessel Routes|
Grand Challenge Results
- Vincenzo Gulisano, Chalmers University of Technology, Sweden
- Zbigniew Jerzak, SAP SE, Germany
- Pavel Smirnov, AGT International, Germany
- Martin Strohbach, AGT International, Germany
- Holger Ziekow, Furtwangen University, Germany
- Dimitris Zissis, University of the Aegean, Greece