UnBias – Telling Tales of Engagement

Telling Tales of Engagement was an award by the RCUK Digital Economy Theme designed to help capture and promote the impact that Research Council-funded digital economy research is having.

The evaluation process for this award considered both the impact of our previous work and a proposed new activity to “tell the story” of our research.

Our submission was titled “Building and engaging with multi-stakeholder panels for developing policy recommendations”, highlighting the importance to our research of engaging with our stakeholder panel and with organizations that are shaping the policy and governance space for algorithmic systems.

Building a multi-stakeholder panel:

A key requirement for successful multi-stakeholder engagement is establishing a sufficiently large and diverse group of participating stakeholders. To reach non-academic stakeholders we immediately began discussing the topic of our work at multi-stakeholder internet governance events (e.g. EuroDIG), events organized by regulatory organizations (e.g. the European Data Protection Supervisor (EDPS) civil society summit; the Council of Europe Convention 108 meeting; the “Algorithmic accountability and transparency in the digital economy” roundtable at the European Parliament), events organized by groups like Nesta and the Digital Catapult Centre, and workshops organized by the UN Special Rapporteur for Privacy. As a result of these activities we rapidly gained participation from representatives of the Internet Society (ISOC UK), the EDPS, the European Commission (DG Connect), and civil-society organizations such as Doteveryone and AlgorithmWatch; interest from SMEs and industry (e.g. Facebook); and engagement from academics in fields ranging from Computer Science and Engineering to Business Schools, Psychology, Education and Law. Evidence of impact arising from our stakeholder engagement activity can be found in the inclusion of UnBias as a ‘relevant project’ in the tender specification document published by the European Commission as part of their Algorithm Awareness Building project call, which resulted in requests for participation from two consortia looking to submit to this bid.

Magnifying reach and impact through participation in topic aligned initiatives:

An important mechanism we are using to magnify the impact our project can have in the policy domain is participation in initiatives such as the Internet Society’s activities on User Trust and, more importantly, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. As the world’s largest technical professional organization, with more than 420,000 members internationally, and host of many of the key conferences and journals related to algorithmic systems, the IEEE brings a convening power that has allowed us to reach groups whose attention would otherwise have been difficult to obtain. Part of the IEEE Global Initiative involves the development of a set of ‘industry standards’ related to the ethical design and use of systems involving algorithmic decision making. We therefore submitted a proposal for developing a Standard for Algorithmic Bias Considerations. Following the approval of this proposal by the IEEE Standards Association (IEEE-SA), we now chair the working group for the development of the P7003 Standard for Algorithmic Bias Considerations, which involves academic, industry and civil-society participants from North and South America, Europe, Africa and Asia. Chairing the P7003 working group has also resulted in invitations to speak about our work at events such as IEEE-SA webinars, the US ACM panel on Algorithmic Transparency and Accountability in Washington DC, the IEEE UK/Ireland Members Open Day, a public lecture to approximately 350 students, staff and interested public at the Technical University of Eindhoven, and the European Big Data Value Forum panel on Data and Society, as well as invitations to participate in a Ditchley Foundation conference on Machine Learning and Artificial Intelligence and a Lorentz Centre workshop on “Intersectionality and Algorithmic Discrimination: Intersecting Disciplinary Perspectives” in Leiden.

An updated list of our events and engagement activities is provided here.

Communicating with the public through blog posts and (social) media:

A third pillar of our stakeholder engagement strategy is the use of social media (Twitter, this blog) and periodic publications in public media such as The Conversation to provide a means for stakeholders to discover our work. The content of the blog and of The Conversation submissions consists of a mix of reports on UnBias project activities and reflections on events and news stories related to our work.
Within the first month of the project’s start (September 2016), UnBias published a project-launch press release and a Conversation article related to an example of the kind of issues our project focuses on. As a result of the press release we were contacted by a German consulting company expressing interest in participating in our stakeholder group. The Conversation article, which was rapidly picked up for re-publication by Business Insider UK, similarly resulted in contact from an SME interested in joining the stakeholder group, as well as an invitation from the Belfast Solicitors Association to deliver a lecture as part of one of their CPD seminars on “Libel, Privacy, Data Protection and Online Legal Action – A Practitioner’s Guide”. Since then we have published an average of two blog posts per month, averaging 530 site visits per month; published four Conversation articles; had our work on the IEEE P7003 Standard highlighted in articles in IEEE’s “The Institute” magazine and the ACM SIGAI newsletter “AI Matters”; been mentioned in the Law Society Gazette; and had our Conversation articles picked up for re-publishing on the Digital Leaders blog and Medium pages.

An updated list of our dissemination activities, including: evidence to parliamentary inquiries; media engagement; press releases and mentions in the media; project reports; journal publications; and conference contributions is available here.

New activity to “tell the story” of our research

The proposed new activity for “telling the story” of our research was inspired by the innovative approach with which Prof. Matt Daniels uses video clips containing material from popular films and computer games, created by students, to raise awareness of Human Rights. To reach a broader and hopefully international audience of young people, we will work with Prof. Daniels to create similar videos highlighting the intersection of Human Rights and accountability in algorithmic decision making, sources of algorithmic bias/discrimination, and critical engagement with and evaluation of algorithmically recommended content. An example is the video based on Captain America: The Winter Soldier, in which (at time-stamp 3:04) the characters discuss how algorithmic systems use digital records to make increasingly detailed predictions about people that can then be used against them. In addition to creating and disseminating such videos, we also intend to explore how to evaluate the impact that creating and/or watching the videos has on people’s awareness of human rights related content in public media.

On May 26th we were pleased to host Prof. Daniels for a full day of discussions and planning to make this proposal a reality. Based on these discussions we agreed on a two-stage approach:

Study 1:
·         To take place in 2019
·         Participants to complete a pre-questionnaire, then watch (a number of) privacy videos at https://righttoprivacy.org/, with the instruction to press a button whenever they hear something interesting, then complete a post-questionnaire.
·         Carry out a follow-up (questionnaire/interview) a month later to see what impact, if any, watching the videos has had on participants: any behaviour changes or increased awareness of privacy-related issues.

This study has now been completed. A preliminary write-up of the results is available in this report. A more detailed analysis will be prepared for an academic publication.

Study 2:
·         To take place in 2020
·         Participants to take part in a number of workshops where they identify a privacy issue arising from algorithmic systems in a film or game. They will then pitch an idea, and once it is approved they will write a script using a template supplied by Prof. Daniels. Prof. Daniels and his team will then read the script closely to check that it is legally and historically accurate, and suggest alterations if required.
·         Once the script is approved, the students will use simple video editing and audio recording tools to create a new video clip highlighting the algorithmic-privacy issue.
The best videos will be rewarded with prizes, and all resulting videos of sufficient quality will be uploaded to the Human Rights Network YouTube channel (which has more than 21k followers) as well as the UnBias project website. The videos will also become discussion prompts for future activities of ReEnTrust (the UnBias follow-up project). In this context they will also be included in the Youth Juries material provided on the Horizon Digital Economy Research institute’s Open Educational Resources website, which provides supporting materials for teachers, and others, who want to run Youth Juries activities of their own.

Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
