TRILCon 2017

The 4th Winchester Conference on Trust, Risk, Information and the Law

Our overall theme for this conference will be:

Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones

Programme in brief

  • Plenary Address: Prof. Katie Atkinson
    ‘Arguments, Values and Baseball: AI support for Legal Practice’
  • Stream 1A: Automated weapons & automated investigations
  • Stream 1B: Smart retail & behavioural advertising
  • Stream 2A: Algorithms & criminal justice
  • Stream 2B: Data power & its regulation
  • Stream 1C: Artificial intelligence, decision-making & the protection of human interests
  • Stream 2C: Smart contracts & smart machines
  • Plenary Address: John McNamara
    ‘Protecting trust in a world disrupted by machine learning’
  • Stream 3A: Workshop run by the UnBias project: An exploration of trust, transparency and bias in law enforcement and judicial decision support systems
  • Stream 3B: Autonomous vehicles
  • Stream 3C: Values & machine learning
  • Panel Discussion: The Future of A.I., machine learning and
    algorithmic decision-making

Full programme available here
Abstract booklet available here

UnBias will be at the conference running the workshop: ‘An exploration of trust, transparency and bias in law enforcement and judicial decision support systems’
This workshop will consist of two parts. In the first twenty minutes we will review some of the outcomes of the UnBias project. Specifically, we will contrast the concerns and recommendations raised by teenaged ‘digital natives’ in our Youth Juries deliberations with the perspectives and suggestions from our stakeholder engagement discussions. We will then spend a few minutes introducing our workshop participants to a case study based on the ProPublica report of bias in the COMPAS algorithm for recidivism probability forecasting, and the subsequent studies showing that it is not possible for an algorithm to be equally predictive for all groups, without disparities in the harm caused by incorrect predictions, when the two populations have unequal base rates. This case study will form the basis for discussions during the remainder of the session. Some of the questions we will raise include: what are the implications of such findings for trust in law enforcement and judicial rulings? What are the minimum levels of transparency and output auditability that a decision support system must have in order to maintain trust in the fair application of the law? The outcomes of the discussion will be summarized in a short report that will be sent to all participants and will feed into the development of policy recommendations by UnBias.
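The impossibility result mentioned above can be illustrated with a short numeric sketch (not part of the workshop materials; the numbers below are hypothetical, not taken from the COMPAS data). It uses the algebraic relation, familiar from the fairness literature, linking a binary predictor's false positive rate to its positive predictive value, false negative rate, and the population base rate: if two groups share the same PPV and FNR but have different base rates, their false positive rates are forced to differ.

```python
def implied_fpr(base_rate, ppv, fnr):
    """False positive rate forced by a given base rate, positive
    predictive value (PPV) and false negative rate (FNR):
        FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    """
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two groups scored by the same predictor with equal PPV and FNR,
# but different underlying base rates (hypothetical figures):
ppv, fnr = 0.7, 0.2
fpr_a = implied_fpr(0.5, ppv, fnr)  # group A, base rate 50%
fpr_b = implied_fpr(0.3, ppv, fnr)  # group B, base rate 30%
print(round(fpr_a, 3), round(fpr_b, 3))  # the two FPRs differ
```

Equalising the false positive rates instead would force the PPVs apart, so some disparity in error burden is unavoidable whenever base rates differ.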

Algorithm Workshop, University of Strathclyde. February 2017

What are algorithms and how are they designed? Why are they used in commercial practice and what kinds of benefits can they bring? What are the potential harmful impacts of using algorithms and how can they be prevented?

On Wednesday 15th February 2017 some UnBias consortium members had the pleasure of attending an Algorithm Workshop hosted by the Law School, University of Strathclyde. During the workshop, we had the opportunity to consider, discuss and begin to address key issues and concerns surrounding the contemporary prevalence of algorithms. The workshop was attended by students from the host university and an interdisciplinary group of experts from areas including law, computer science and the social sciences. This mix of expertise made for a really great afternoon of talks and discussions on the design, development and use of algorithms from various disciplinary perspectives.
