With Paris playing host to the Paris Peace Forum from 11 to 13 November, the GovTech Summit on November 12th, the Internet Governance Forum (IGF2018) from 12 to 14 November, and a concluding UNESCO/ISOC/Mozilla symposium on November 15th (with the ITU Plenipotentiary Conference 2018 running simultaneously in Dubai from 29 October to 16 November), the start of November saw a lot of activity relating to Internet (and associated) Governance. For those of us based in the UK, this series of conferences continued with the UK IGF 2018 on November 22nd.
Reporting on our work towards developing policy recommendations, industry standards and educational resources, UnBias participated in IGF2018, the UNESCO/ISOC/Mozilla symposium and the UK IGF 2018, and also gave an informal presentation at the CNIL.
Continue reading UnBias participation in multi-stakeholder debates on Internet Governance and AI Ethics →
The Fairness Toolkit has been developed for UnBias by Giles Lane and his team at Proboscis, with the input of young people and stakeholders. It is one of our project outputs, aimed at promoting awareness, stimulating a public civic dialogue about how algorithms shape online experiences, and encouraging reflection on possible changes to address issues of online unfairness. The tools are not just for critical thinking, but for civic thinking – supporting a more collective approach to imagining the future as a contrast to the individual atomising effect that such technologies often cause.
The toolkit contains the following elements:
1. Handbook
2. Awareness Cards
3. TrustScape
4. MetaMap
5. Value Perception Worksheets
All components of the Toolkit are freely available to download and print from our site under a Creative Commons license (CC BY-NC-SA 4.0).
Demonstrations of the toolkit will be given at the V&A Digital Design Weekend in London on September 22nd.
More information is available on the Fairness Toolkit and TrustScapes pages.
Continue reading UnBias Fairness Toolkit →
On June 21st 2018, the KAIST Institute for Artificial Intelligence, Fourth Industrial Revolution Center in Korea hosted a public forum on “Taming Artificial Intelligence: Engineering, Ethics, and Policy”, discussing the ethics of artificial intelligence technology development as well as policy making around the world.
Continue reading KAIST workshop on Taming AI: Engineering, Ethics and Policy →
We are pleased to announce that UnBias won one of the three 2017 RCUK Digital Economy Theme ‘Telling Tales of Engagement’ awards. The evaluation process for this award considered both the impact of our previous work and a proposed new activity to “tell the story” of our research.
Our submission, titled “Building and engaging with multi-stakeholder panels for developing policy recommendations”, highlighted the importance for our research of engaging with our stakeholder panel and with organizations that are shaping the policy and governance space for algorithmic systems.
Continue reading RCUK Digital Economy Theme ‘Telling Tales of Engagement’ award for UnBias →
From 21st to 22nd February the Royal Society and the Royal Netherlands Academy of Arts and Sciences (KNAW) held a UK – Netherlands bilateral international meeting to explore common research interests in the fields of Quantum Physics and Technology, Nanochemistry and Responsible Data Science. UnBias was pleased to participate as part of the Responsible Data Science stream.
Continue reading Responsible Data Science at Royal Society Bilateral UK-NL workshop →
From October 18th to 21st UnBias participated in the 2017 edition of the annual Association of Internet Researchers (AoIR) conference, which was held in Tartu, Estonia.
Continue reading Human Agency on Algorithmic Systems – UnBias at AoIR2017 →
On September 7th the Guardian published an article drawing attention to a study from Stanford University which had applied Deep Neural Networks (a form of machine learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report about it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in the Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.
Continue reading In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of” →
USACM, the ACM U.S. Public Policy Council, will be hosting a panel event on “Algorithmic Transparency and Accountability.” The event will provide a forum for a discussion between stakeholders and leading computer scientists about the growing impact of algorithmic decision-making on our society and the technical underpinnings of algorithmic models.
Panelists will discuss the importance of the Statement on Algorithmic Transparency and Accountability and the opportunities for cooperation between academia, government and industry around these principles.
Values in Emerging Science and Technology
The Ethicomp series of conferences fosters an international community of scholars and technologists, including computer professionals and business professionals from industry. Since 1995, conferences have been scheduled across Europe and Asia, with our main events coming every 18 months. Ethicomp considers computer ethics conceived broadly to include philosophical, professional, and practical aspects of the field. CEPE (Computer Ethics Philosophical Enquiry), as the name implies, is more narrowly focused on the philosophical aspects of computer and information ethics. CEPE events have been held every 18 months since 1997. Since the CEPE community overlaps considerably with the Ethicomp community, it makes sense for our two conference series to work together. In light of this, our next conference will be a jointly sponsored event, hosted at the Università degli Studi di Torino (University of Turin), Turin, Italy in June of 2017.
In the two decades since the inception of Ethicomp and CEPE, computing has gone from being esoteric and newfangled to ubiquitous and everyday. The ensuing transformations of our cultural and social institutions are liable to accelerate and spread as information technologies find their way into every field of research and every pursuit. Our shared mission of promoting the ethical use of computer technology consequently demands an inquiry into values as these relate broadly to emerging sciences and technologies.
Tracks
- Open Track: topics that do not fit the other tracks, including but not limited to big data, privacy, intellectual property, professional ethics, ethical theory as related to computing, and the teaching of computer ethics (Fran Grodzinsky and Catherine Flick)
- Fiction in Professional Ethics (Kai Kimppa)
- Video Games, Philosophy, and Society (Catherine Flick)
- Technology and the Law (Aimite Jorge and Kanwal DP Singh)
- Responsible Research and Innovation (RRI) in Computing (Emad Yaghmaei)
- Living with Robots (Yuko Murakami)
- Networks, Crowdsourcing, and the Rise of Social Machines (Claudia Pagliari)
- Cyborg Ethics: wearables to insideables (Mario Arias-Oliva and Jorge Pelegrín-Borondo)
- Digital Health: legal and ethical challenges and solutions (Diane Whitehouse)
- Is it cheating? Infidelity online (Sanjeev P. Sahni)
- Cybercrime: Psychological, Sociological, Cultural and Criminological perspectives (Indranath Gupta)
- ICT and the City (Michael Nagenborg)
- Graduate Student/Young Scholar Track (Maria Bottis)
UnBias @ Ethicomp2017
We will be at Ethicomp to present a paper on:
“Editorial responsibilities arising from personalization algorithms”
As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will hold its first web-meeting on May 5th 2017.
Continue reading IEEE Standard for Algorithm Bias Considerations →