Hybrid intelligence systems, in which humans and machines collaborate, must behave responsibly in order to promote synergy and prevent unwanted or even harmful effects. For instance, the agents in a hybrid intelligence system (both human and automated) should communicate in a correct and reliable way, should be able to provide their reasons for holding a belief or opinion, and should be able to explain their actions in terms of the values they apply.
In practice, designing responsible hybrid intelligence systems proves to be a hard and complex problem. For instance, both humans and machines can make mistakes and be unreliable, can hold unjustified beliefs and positions, and can act without considering their values or even against them.
A current and telling example of a hybrid intelligence system is a conversation between ChatGPT and a human user, which offers a new level of natural hybrid interaction about a wide range of topics. Many users have experienced that such conversations contain correct and reliable elements, but also mistakes and unreliable behavior. Likewise, the reasons ChatGPT provides can be helpful and convincing, but also irrelevant or vacuous. Furthermore, ChatGPT claims to follow a value system when avoiding harmful or sensitive topics, yet it can act erratically against the very values it expresses.
The aim of this workshop is to collect research and research ideas aimed at designing responsible hybrid intelligence systems. We invite submissions of all levels of maturity (early stage, mid stage, completed). A selection will be made on the basis of overall quality, relevance and diversity.
The workshop is organized in connection with the HHAI 2023 conference on Hybrid Human-Artificial Intelligence.
Topics and issues
Topics and issues in designing responsible hybrid intelligence systems include, but are not limited to, the following:
– Methods and tools for designing responsible hybrid intelligence systems
– Theoretical foundations for the design of responsible hybrid intelligence systems
– Experiments with and innovative applications of responsible hybrid intelligence systems
– Evaluation methods for implemented responsible hybrid intelligence systems
– The role of various AI approaches in the design of responsible hybrid intelligence systems (data, knowledge, reasoning, argumentation, language, …)
We aim to collect contributions and discussions that bridge the technological and ethical sides of responsible hybrid intelligence design. We ask all contributors (whether on the technological or the ethical side) to make explicit how and to what extent their research or research idea aims to contribute to responsible hybrid intelligence design.
Submission guidelines
Submissions can be made via EasyChair. Following the main conference format, we welcome submissions of 4-12 pages (using the IOS formatting guidelines). We ask all contributors to make explicit how and to what extent their research or research idea aims to contribute to responsible hybrid intelligence design. A selection will be made on the basis of workshop-level reviewing, focusing on overall quality, relevance and diversity.
Dates
May 1, 2023: Deadline for submissions to the workshop (after the HHAI 2023 notification deadline)
May 15, 2023: Deadline for notifications on the submissions
Tuesday June 27, 2023: Workshop