Bias in AI and Neuroscience Conference
Part of BIAS2019, 17–19 June 2019 – Nijmegen, Netherlands
Diversity Computing is our ethical vision of future interactive technologies that support social interactions between people of highly diverse backgrounds, without being grounded in any particular normative framework (Fletcher-Watson et al., 2018). By way of example, our own research concerns interactions between autistic and non-autistic persons. Typically, problems encountered there are judged against the social norms held by the non-autistic person, and autistic behaviour is considered 'abnormal'. Assistive technologies often focus on helping autistic persons to behave 'normally'. Instead, we see such interactions as involving diverse normative backgrounds, and the challenge is to collaboratively 'make sense'. In this we draw explicitly on the enactivist theory of participatory sense-making as developed by Hanne De Jaegher. At present our (admittedly sketchy) vision needs to be fleshed out in concrete technological terms. As a first step, this workshop explores the challenges we think Diversity Computing poses to some of the basic principles and mechanisms grounding today's Artificial Intelligence.
Participants are asked to respond to this challenge: Assuming diverse participatory sense-making, there are at least two persons, each bringing their own normative 'background'. How could machine learning algorithms be taught to recognise meaningful patterns that subsequently (via some form of user feedback) support the quality of the interaction (the participatory sense-making), without any external normative framework as the 'ground truth'? How can machines support meaning-making and increase 'interaction quality' when we have no independent a priori way to define what is 'meaningful' or what counts as 'quality'? What kind of 'norm-independent' learning algorithms would support emergent, participatory sense-making, growing out of the (technology-mediated) interaction itself? What if algorithms discovered 'meaningful' patterns on the fly, where 'meaningfulness' is judged by, or derived from, the interactions between people or the goals emerging from them?
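To make the challenge concrete, the following toy sketch illustrates one (hypothetical) direction: an online learner that discovers recurring patterns in interaction features without any predefined labels, and weights each pattern's salience solely by feedback both participants give during the interaction itself. The feature names, thresholds, and feedback scheme are our own illustrative assumptions, not a proposed solution.

```python
import math


def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class InteractionPatternLearner:
    """Toy online learner: discovers recurring patterns in interaction
    features (e.g. turn-taking timing) with no external 'ground truth'.
    A pattern's salience is driven only by the feedback the participants
    themselves give during the interaction."""

    def __init__(self, radius=0.5, lr=0.2):
        self.radius = radius   # distance within which a sample matches a prototype
        self.lr = lr           # learning rate for moving prototypes
        self.prototypes = []   # discovered pattern centres
        self.salience = []     # interaction-derived weight per pattern

    def observe(self, features):
        """Assign a feature vector to the nearest existing prototype,
        or create a new one; returns the pattern index."""
        if self.prototypes:
            i, d = min(enumerate(dist(p, features) for p in self.prototypes),
                       key=lambda t: t[1])
            if d <= self.radius:
                # nudge the matched prototype towards the new sample
                self.prototypes[i] = [p + self.lr * (f - p)
                                      for p, f in zip(self.prototypes[i], features)]
                return i
        self.prototypes.append(list(features))
        self.salience.append(0.0)
        return len(self.prototypes) - 1

    def feedback(self, pattern, scores):
        """Update a pattern's salience from in-interaction feedback of
        *both* participants (e.g. each rates the moment from -1 to +1);
        only their joint judgement counts, no external norm."""
        self.salience[pattern] += sum(scores) / len(scores)

    def salient_patterns(self):
        """Indices of patterns the participants jointly found worthwhile."""
        return [i for i, s in enumerate(self.salience) if s > 0]
```

In use, a mediating system might call `observe` on a (hypothetical) feature vector such as `[pause_length, gaze_overlap]` at each interaction moment, then call `feedback` with both participants' ratings of that moment; what the interface would sense and how participants would give feedback are exactly the open questions this workshop asks about.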
We call for abstracts discussing computational mechanisms and concepts that contribute to meeting this challenge. As we envision supporting embodied, face-to-face situations, we are also interested in proposals for sensors and feedback hardware contributing to on-the-fly embodied sense-making. We are likewise interested in ways of computationally mediating social interactions in online social media, provided they prove to be a step in the direction of Diversity Computing. (We ourselves are slightly sceptical.) We also welcome contributions that make more precise how existing learning algorithms or existing commercial applications are indeed problematic in the ways we have so far only sketched.
Submission: a 500-word abstract including at least one visual/graph/diagram.
Deadline: April 15.
NB: We welcome rigorous computational analyses, but bear in mind that the topic is new and the audience diverse (our own backgrounds span psychology, philosophy, design, and computing). Please explain your ideas in a format accessible to a broad audience.
Send submissions to: jelle.vandijk@utwente.nl
References
Sue Fletcher-Watson, Hanne De Jaegher, Jelle van Dijk, Christopher Frauenberger, Maurice Magnée, and Juan Ye. 2018. Diversity computing. Interactions 25, 5 (August 2018), 28–33. DOI: https://doi.org/10.1145/3243461. Free download: https://dl.acm.org/citation.cfm?id=3243461