Tag Archives: Ansgar

TRILCon 2017

The 4th Winchester Conference on Trust, Risk, Information and the Law

Our overall theme for this conference will be:

Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones

Programme in brief

  • Plenary Address: Prof. Katie Atkinson
    ‘Arguments, Values and Baseball: AI support for Legal Practice’
  • Stream 1A: Automated weapons & automated investigations
  • Stream 1B: Smart retail & behavioural advertising
  • Stream 2A: Algorithms & criminal justice
  • Stream 2B: Data power & its regulation
  • Stream 1C: Artificial intelligence, decision-making & the protection of human interests
  • Stream 2C: Smart contracts & smart machines
  • Plenary Address: John McNamara
    ‘Protecting trust in a world disrupted by machine learning’
  • Stream 3A: Workshop run by the UnBias project: An exploration of trust, transparency and bias in law enforcement and judicial decision support systems
  • Stream 3B: Autonomous vehicles
  • Stream 3C: Values & machine learning
  • Panel Discussion: The Future of A.I., machine learning and algorithmic decision-making

Full programme available here
Abstract booklet available here

UnBias will be at the conference running the workshop: ‘An exploration of trust, transparency and bias in law enforcement and judicial decision support systems’
This workshop will consist of two parts. In the first twenty minutes we will review some of the outcomes of the UnBias project. Specifically, we will contrast the concerns and recommendations raised by teenage ‘digital natives’ in our Youth Juries deliberations with the perspectives and suggestions from our stakeholder engagement discussions. We will then briefly introduce our workshop participants to a case study based on the ProPublica report of bias in the COMPAS algorithm for recidivism probability forecasting, and on the subsequent studies showing that an algorithm cannot be equally predictive for all groups without disparities in the harm caused by incorrect predictions when the two populations have unequal base rates. This case study will form the basis for discussions during the remainder of the session. Some of the questions we will raise include: what are the implications of such findings for trust in law enforcement and judicial rulings? What are the minimum levels of transparency and output auditability that a decision support system must have in order to maintain trust in a fair application of the law? The outcomes of the discussion will be summarized in a short report that will be sent to all participants and will feed into the development of policy recommendations by UnBias.
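
As a purely illustrative aside (this is not part of the workshop materials), the short Python sketch below shows the arithmetic behind that result. It uses the standard identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), which ties a group's false positive rate to its base rate p, the score's precision (PPV) and its miss rate (FNR). The numbers are hypothetical and chosen only to show that equal precision and equal miss rates across two groups force unequal false positive rates whenever the base rates differ.

    def false_positive_rate(base_rate, ppv, fnr):
        """False positive rate implied by a group's base rate, the score's
        precision (PPV) and its miss rate (FNR):
        FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
        return (base_rate / (1.0 - base_rate)) * ((1.0 - ppv) / ppv) * (1.0 - fnr)

    # Hypothetical figures, chosen only for illustration.
    ppv = 0.6   # in both groups, 60% of people flagged as high risk do reoffend
    fnr = 0.35  # in both groups, 35% of people who reoffend are missed by the score

    print(false_positive_rate(0.5, ppv, fnr))  # group with a 50% base rate: FPR ~ 0.43
    print(false_positive_rate(0.3, ppv, fnr))  # group with a 30% base rate: FPR ~ 0.19

Even though the score is equally ‘predictive’ for both groups, a much larger share of the non-reoffending members of the higher base-rate group ends up wrongly labelled high risk, which is the kind of disparity the ProPublica report highlighted.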

Publication of 1st WP4 workshop report

We are pleased to announce that the report summarizing the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.

The workshop took place on February 3rd 2017 at the Digital Catapult Centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.


IEEE Standard for Algorithm Bias Considerations

As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will have its first web-meeting on May 5th 2017.


The Human Standard: Why Ethical Considerations Should Drive Technological Design Webinar

The IEEE Standards Association (IEEE-SA) Corporate Membership Program invites you to join an exclusive webinar.

The Human Standard:
Why Ethical Considerations Should Drive
Technological Design Webinar


18 April 2017, 12:00 PM to 1:00 PM EDT

In the age of autonomous and intelligent machines, it is more important than ever to help technologists and organizations be cognizant of the ethical implications of the products, services or systems they are building and how they are being built before making them available to the general public. While established Codes of Ethics provide instrumental guidance for employee behavior, new values-centric methodologies are needed to complement these codes to address the growing use of algorithms and personalization in the marketplace.

Key insights from the Working Group Chairs of three IEEE-SA projects will be presented. The IEEE Global Initiative provided the input and recommendations that led to the creation of Working Groups for these IEEE-SA standards projects:

IEEE P7001™: Transparency of Autonomous Systems

IEEE P7003™: Algorithmic Bias Considerations

Speakers will provide their perspectives on why it is important for business leaders to increase due diligence relative to ethical considerations for what they create. This focus is not just about avoiding unintended consequences, but also about increasing innovation by better aligning with customer and end-user values.

Speakers

Kay Firth-Butterfield
Executive Committee Vice-Chair, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems; Executive Director, AI Austin

John C. Havens
Executive Director, The IEEE Global Initiative for Ethical Considerations In Artificial Intelligence and Autonomous Systems

Konstantinos Karachalios
Managing Director, IEEE Standards Association

Ansgar Koene
Senior Research Fellow at the Horizon Digital Economy Research Institute, University of Nottingham; Co-Investigator on the UnBias project and Policy Impact Lead for Horizon.

Alan Winfield
Professor, Bristol Robotics Laboratory, University of the West of England; Visiting Professor, University of York

Algorithm Workshop, University of Strathclyde. February 2017

What are algorithms and how are they designed? Why are they used in commercial practice and what kinds of benefits can they bring? What are the potential harmful impacts of using algorithms and how can they be prevented?

On Wednesday 15th February 2017 some UnBias consortium members had the pleasure of attending an Algorithm Workshop hosted by the Law School, University of Strathclyde. During the workshop, we had the opportunity to consider, discuss and begin to address key issues and concerns surrounding the contemporary prevalence of algorithms. The workshop was also attended by students from the host University and an interdisciplinary group of experts from areas including Law, Computer Science and the Social Sciences. This mix of expertise made for a really great afternoon of talks and discussions surrounding the design, development and use of algorithms through various disciplinary perspectives.

Continue reading Algorithm Workshop, University of Strathclyde. February 2017

Internet Society – European Chapters meeting

Agenda and Details

Wednesday 22 February

12:00 Lunch at the venue

Welcome and introductions, Frederic Donck

Introduction to trust, based on 2016 ISOC report (and discussion), Richard Hill

Editorial responsibility for online content – platform neutrality, recommender systems and the problem of ‘fake news’ (and discussion), Ansgar Koene

Future Internet Scenarios, Konstantinos Komaitis

17:30 Day 1 ends

19:00 Dinner (On Canal Boat leaving from Oosterdok in front of the hotel)

Thursday 23 February

9:00 Day starts

Collaborative security introduction, Olaf Kolkman

Real-life examples of collaborative security in action, Andrei Robachevsky

User-Trust, with regard to longevity and security of IoT devices (and discussion), Jonas Jacek

Round table on current issues related to user trust in Europe

12:30 Working lunch at the venue

Search ranking technologies (and discussion), Brandt Dainow

ISOC-NL presentation

Way forward: meeting on next steps, concrete actions for ISOC and chapters

16:00 Day 2 ends

University of Strathclyde – Algorithms Workshop

Algorithm Workshop 15 February 2017

1230 hours Coffee, tea and biscuits

INTRODUCTIONS: POLICY, LAW, TECHNOLOGY

1300 hours Introduction to algorithms and their place in governance – Michael Veale, UCL

1330 hours Law and algorithmic governance—some war stories and some solutions? – Lilian Edwards, University of Strathclyde

1410 hours Algorithms—a technical perspective—are they really a black box? – Ansgar Koene, UnBias

1440 hours Questions

1450 hours Coffee

TYPES OF ALGORITHMIC GOVERNANCE AND POSSIBLE REMEDIES

1505 hours Algorithms, media governance and political disinformation – Lorna Woods, Essex; Rachel Craufurd Smith, Edinburgh

1535 hours Algorithmic pricing and employment discrimination – Freddie Borgesius, IViR (NL); John Gannon, Leeds

1610 hours Algorithms and search engines – Thomas Hoeppner, Berlin

1630 hours Questions

1640 hours Panel—Conclusions and next steps

1715 hours Dinner

Internet Society UK England – User Trust Webinar

In preparation for the European Chapters meeting (22-23 February 2017) we will hold a 90-minute webinar / conference call on Tuesday 14 February 2017 from 6 pm to collect input from participants about the ways in which ISOC UK can/should engage with the theme of User Trust.

In June 2016 ISOC published a working paper, “A policy framework for an open and trusted Internet”, outlining the four interrelated dimensions to be considered when developing policies for the internet. http://www.internetsociety.org/doc/policy-framework-open-and-trusted-internet

The aim of the European Chapters meeting is to build on this and identify specific areas related to User Trust that ISOC should prioritise and focus on when engaging with policy makers to build a trusted Internet.

The specific discussions around User Trust that have been proposed for the meeting are:

  • Ethical data handling
  • Privacy
  • Data breaches
  • Examples of collaborative security in action
  • Internet of Things – implications for security, privacy, control (who controls which aspects of the device: user vs. service provider), liability in case of problems, and longevity (e.g. devices embedded in infrastructure)
  • Digital Literacy – the need for people to understand basic aspects of how the internet and digital services work in order to: improve cybersecurity; be able to give informed consent to personal data usage; understand the implications of proposed legislation (e.g. the Snoopers’ Charter); …
  • User generated content moderation – how to approach the issues related to fake news and editorial responsibility
  • An overview of the situation in Russia

Other areas of User Trust that might be especially relevant for ISOC UK could be:

  • Government surveillance powers (implications of, and legal challenges to, the Investigatory Powers Act)
  • The impact of the nation-first, anti-globalization movement (e.g. Brexit)
  • Governance of the platform economy (e.g. Uber, Deliveroo), including classification as a ‘tech’ company to avoid regulation

Which areas should we prioritize? The chapters meeting is only one and a half days long, so time is limited.

Looking beyond the European Chapters meeting, what kind of follow-up activities should ISOC UK pursue, e.g. digital literacy 101 for parliamentarians?

Topic: Internet Society UK and User Trust – Webinar
Time: Feb 14, 2017 6:00 PM London