Workshop on
Evaluation and Benchmarking of Human-Centered AI Systems

Date: September 20th, 2019
Location: Milton Keynes, UK
Time: 13:30 - 17:00

Motivation

There is a push in Europe toward the development of AI systems that put the human and their values at the center of their design, development and operation. Such human-centered AI systems should be able to operate in the physical world of humans, to collaborate with humans, and to explain their behaviour to humans. In order to allow their seamless integration with humans, it is paramount that these systems are properly evaluated.

Evaluating the performance and properties of AI systems is an important and open problem. From a scientific perspective, this is the problem of finding performance metrics for cognitive systems. From an industrial perspective, it means being able to quantify the added value of using an AI solution. When dealing with human-centered AI systems, the evaluation problem is further complicated by the fact that such systems often integrate different AI paradigms and methods, and they interact with humans and with the physical world.

Objectives

This workshop is devoted to the discussion of methods and tools to evaluate human-centered AI systems. The topics of the workshop include, but are not limited to:

Keywords: HRI Testing, Performance Evaluation, Benchmarking, Empirical Evaluation, AI Measures, Performance Metrics.

Program

Schedule

Program Committee

Organizers

The workshop is co-organized by SciRoc.eu (Smart CIty RObotic Challenge) and by AI4EU.eu (Building the European AI on-demand Platform), both funded by the EC under the H2020 programme.

Venue

The workshop will take place at The Connected Places Catapult, 170 Midsummer Boulevard, Milton Keynes, UK, MK9 1BP. The location is about a 10-15 minute walk from the MK shopping centre, where the SciRoc competition takes place.

List of nearby hotels:

Note: None of these hotels is affiliated with the event. This is simply a list of hotels near the venue.

Call for Contributions

Evaluation and Benchmarking of Human-Centered AI Systems

Interested participants must submit an extended abstract reporting work relevant to the workshop's themes. Reports of work in progress and reports of recently published work are acceptable. Abstracts should be between 500 and 1500 words in length, in free format, and must include the name, affiliation and contact information of all authors. Abstracts must be submitted as PDF files via EasyChair at the following URL:

https://easychair.org/conferences/?conf=ebhais2019

Contributions will be selected on the basis of relevance, quality and expected impact. Accepted contributions will be published on the workshop website.

IMPORTANT DATES