Tag Archives: WP4

9th International ACM Web Science Conference 2017

The 9th International ACM Web Science Conference 2017 will be held from June 26 to June 28, 2017 in Troy, NY (USA) and is organized by the Rensselaer Web Science Research Center and the Tetherless World Constellation at RPI. The conference series, run by the Web Science Trust, follows previous events in Athens, Raleigh, Koblenz, Evanston, Paris, Indiana, Oxford and Hannover.

The conference brings together researchers from multiple disciplines, such as computer science, sociology, economics, information science and psychology. Web Science is the emergent study of the people and technologies, applications, processes and practices that shape and are shaped by the World Wide Web. Web Science aims to draw together theories, methods and findings from across academic disciplines, and to collaborate with industry, business, government and civil society, to develop our knowledge and understanding of the Web: the largest socio-technical infrastructure in human history.

AMOIA workshop at ACM Web Science 2017

AMOIA (Algorithm Mediated Online Information Access) – user trust, transparency, control and responsibility

This Web Science 2017 workshop, delivered by the UnBias project, will be an interactive audience discussion on the role of algorithms in mediating access to information online and the issues of trust, transparency, control and responsibility this raises.

The workshop will consist of two parts. The first half will feature talks from the UnBias project and related work by invited speakers. The talks by the UnBias team will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations and user observation studies with the perspectives and suggestions from our stakeholder engagement discussions with industry, regulators and civil-society organizations. The second half will be an interactive discussion with the workshop participants based on case studies. Key questions and outcomes from this discussion will be put online for WebSci’17 conference participants to refer to and discuss/comment on during the rest of the conference.

The case studies we will focus on:

  • Case Study 1: The role of recommender algorithms in hoaxes and fake news on the Web
  • Case Study 2: Business models that shape AMOIA – how can web science boost Corporate Social Responsibility / Responsible Research and Innovation?
  • Case Study 3: Unintended algorithmic discrimination on the web – routes towards detection and prevention

The UnBias project investigates the user experience of algorithm-driven services and the processes of algorithm design. We focus on the interests of a wide range of stakeholders and carry out activities that 1) support user understanding of algorithm-mediated information environments, 2) raise awareness among providers of ‘smart’ systems about the concerns and rights of users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life. This EPSRC-funded project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ that will be co-produced with stakeholders.

The workshop will be a half-day event.

Programme

9:00 – 9:10   Introduction
9:10 – 9:30   Observations from the Youth Juries deliberations with young people, by Elvira Perez (University of Nottingham)
9:30 – 9:50   Insights from user observation studies, by Helena Webb (University of Oxford)
9:50 – 10:10  Insights from discussions with industry, regulator and civil-society stakeholders, by Ansgar Koene (University of Nottingham)
10:10 – 10:30 “Platforms: Do we trust them?”, by Rene Arnold
10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
10:50 – 11:10 Break
11:10 – 11:50 Discussion of case study 1
11:50 – 12:30 Discussion of case study 2
12:30 – 12:50 Break
12:50 – 13:30 Discussion of case study 3
13:30 – 14:00 Summary of outcomes

Key dates

Workshop registration deadline: 18 June 2017
Workshop date: 25 June 2017
Conference dates: 26-28 June 2017

EuroDIG 2017

About

EuroDIG 2017 will take place in Tallinn on 6–7 June and will be hosted by the Ministry of Foreign Affairs of the Republic of Estonia. EuroDIG is not a conference; it is a year-round dialogue on politics and digitisation across the whole European continent that culminates in an annual event. More about EuroDIG.

Pre- and side-events

A number of pre- and side-events will enrich the EuroDIG programme. European organisations will organise meetings on day zero (5 June), and the European Commission will open the High Level Group on Internet Governance meeting on 8 June to the public.

Participate!

Our slogan is “Always open, always inclusive and never too late to get involved!”

The Org Teams did their best to prepare the ground for in-depth multistakeholder discussion, and our Estonian host, the Ministry of Foreign Affairs, worked hard to give you a warm welcome!

Now it is up to you to engage in the discussion – the floor is always open! A first opportunity will be the open mic session, the first session after the welcome.

We would like to hear from YOU: How am I affected by Internet governance?

No chance to travel to Tallinn?

No problem! We are in Estonia, the most advanced country in Europe when it comes to digital futures! For all workshops and plenary sessions we provide video streaming (passive watching), WebEx (active remote participation) and transcription. Transcripts and videos will be provided on the EuroDIG wiki after the event. Please connect via the links provided in the programme.

UnBias at EuroDIG

UnBias is contributing to EuroDIG 2017 by running a Flash session on “Accountability and Regulation of Algorithms” and by serving on the organizing team for the Plenary session “Internet in the ‘post-truth’ era?”.

Looking forward to seeing you there!

UnBias project contribution to the 4th Winchester Conference on Trust, Risk, Information and the Law

The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”: offering a chance for multi-stakeholder and interdisciplinary discussion on the risks and opportunities presented by algorithms, machine learning and artificial intelligence.


TRILCon 2017

The 4th Winchester Conference on Trust, Risk, Information and the Law

Our overall theme for this conference will be:

Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones

Programme in brief

  • Plenary Address: Prof. Katie Atkinson
    ‘Arguments, Values and Baseball: AI support for Legal Practice’
  • Stream 1A: Automated weapons & automated investigations
  • Stream 1B: Smart retail & behavioural advertising
  • Stream 2A: Algorithms & criminal justice
  • Stream 2B: Data power & its regulation
  • Stream 1C: Artificial intelligence, decision-making & the protection of human interests
  • Stream 2C: Smart contracts & smart machines
  • Plenary Address: John McNamara
    ‘Protecting trust in a world disrupted by machine learning’
  • Stream 3A:  Workshop run by the UnBias project: An exploration of trust, transparency and bias in law enforcement and judicial decision support systems
  • Stream 3B: Autonomous vehicles
  • Stream 3C: Values & machine learning
  • Panel Discussion: The Future of A.I., machine learning and algorithmic decision-making

Full programme available here
Abstract booklet available here

UnBias will be at the conference running the workshop: ‘An exploration of trust, transparency and bias in law enforcement and judicial decision support systems’.
This workshop will consist of two parts. In the first twenty minutes we will review some of the outcomes of the UnBias project. Specifically, we will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations with the perspectives and suggestions from our stakeholder engagement discussions. We will then spend a couple of minutes introducing our workshop participants to a case study based on the ProPublica report of bias in the COMPAS algorithm for recidivism probability forecasting, and on the subsequent studies showing that it is not possible for an algorithm to be equally predictive for all groups without disparities in the harm caused by incorrect predictions when the two populations have unequal base rates. This case study will form the basis for discussions during the remainder of the session. Some of the questions we will raise include: what are the implications of such findings for trust in law enforcement and judicial rulings? What are the minimum levels of transparency and output auditability that a decision support system must have in order to maintain trust in a fair application of the law? The outcomes of the discussion will be summarized in a short report that will be sent out to all participants and feed into the development of policy recommendations by UnBias.
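For readers unfamiliar with that impossibility result, the short Python sketch below illustrates the arithmetic behind it. The numbers are hypothetical, chosen only for illustration; they are not drawn from the ProPublica data or from UnBias materials. The point is that once two groups have different base rates, keeping the positive predictive value (PPV) and true positive rate equal for both groups mathematically forces their false positive rates apart.

# Hypothetical illustration of the impossibility result discussed above:
# with unequal base rates, equal predictive value across groups implies
# unequal false positive rates (i.e. unequal harm from wrong predictions).

def implied_false_positive_rate(base_rate, ppv, tpr):
    """False positive rate implied by a given base rate, PPV and true positive rate.

    From PPV = TP / (TP + FP), with TP = base_rate * tpr * N and
    FP = (1 - base_rate) * fpr * N, solving for fpr gives:
        fpr = (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr
    """
    return (base_rate / (1.0 - base_rate)) * ((1.0 - ppv) / ppv) * tpr

# Two hypothetical groups with the same PPV (0.7) and true positive rate (0.6)
# but different base rates of reoffending.
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    fpr = implied_false_positive_rate(base_rate, ppv=0.7, tpr=0.6)
    print(f"{group}: base rate {base_rate:.0%} -> implied false positive rate {fpr:.1%}")

# Prints roughly 25.7% for group A and 11.0% for group B: equal predictive
# accuracy necessarily produces unequal rates of people wrongly flagged as
# high risk whenever the base rates differ.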

Publication of 1st WP4 workshop report

We are pleased to announce that the report summarizing the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.

The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.


IEEE Standard for Algorithm Bias Considerations

As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will have its first web-meeting on May 5th 2017.


The Human Standard: Why Ethical Considerations Should Drive Technological Design Webinar

The IEEE Standards Association (IEEE-SA) Corporate Membership Program invites you to join an exclusive webinar.


Follow this link to register

18 April 2017, 12:00 PM – 1:00 PM EDT

In the age of autonomous and intelligent machines, it is more important than ever to help technologists and organizations be cognizant of the ethical implications of the products, services or systems they are building and how they are being built before making them available to the general public. While established Codes of Ethics provide instrumental guidance for employee behavior, new values-centric methodologies are needed to complement these codes to address the growing use of algorithms and personalization in the marketplace.

Key insights from the Working Group Chairs of three IEEE-SA projects will be presented. The IEEE Global Initiative provided the input and recommendations that led to the creation of Working Groups for these IEEE-SA standards projects:

IEEE P7001™: Transparency of Autonomous Systems

IEEE P7003™: Algorithmic Bias Considerations

Speakers will provide their perspectives on why it is important for business leaders to increase due diligence relative to ethical considerations for what they create. This focus is not just about avoiding unintended consequences, but also about increasing innovation by better aligning with customer and end-user values.

Speakers

Kay Firth-Butterfield
Executive Committee Vice-Chair, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, Executive Director, AI Austin

John C. Havens
Executive Director, The IEEE Global Initiative for Ethical Considerations In Artificial Intelligence and Autonomous Systems

Konstantinos Karachalios
Managing Director, IEEE Standards Association

Ansgar Koene
Senior Research Fellow at Horizon Digital Economy Research Institute, University of Nottingham. Co-Investigator on the UnBias project and Policy Impact lead for Horizon.

Alan Winfield
Professor, Bristol Robotics Laboratory, University of the West of England; Visiting Professor, University of York