Tag Archives: Ansgar

Fair Use by Design

An international workshop co-organized by LINKS and the Center for Cyber, Law and Policy, University of Haifa, Israel, in collaboration with the UCLA Program on Understanding Law, Science, & Evidence (PULSE)

July 9, 2017, University of Haifa

Can fair use be implemented by design? Could artificial intelligence (AI) enable algorithms to identify fair use with a reasonable degree of accuracy?

How can we ensure the accountability of such systems? The purpose of this interdisciplinary workshop is to address these questions.

Background
The notice-and-takedown regime, enacted in the U.S. by the Digital Millennium Copyright Act in 1998, is now implemented algorithmically. Large copyright holders deploy automated systems that use bots to scour the Internet for copyright infringements and then generate and send takedown notices to the relevant intermediaries. Unfortunately, this algorithmic copyright regime removes or blocks access to large amounts of material that does not infringe copyright: a consequence of both deliberate misuse of the notice-and-takedown process and the failure of current algorithmic enforcement mechanisms to distinguish infringing from non-infringing content.
In particular, as currently implemented, the algorithmic copyright regime has the effect of blacking out online fair use. Fair use is a legal doctrine that serves as a check on copyright, ensuring that copyright does not stifle the very creativity the law seeks to foster. Hence, it is often fair use to copy from a copyright-protected work, or even to copy the entire work, for purposes of criticism, commentary, parody, news reporting, or scholarship, or to use the original as raw material for a different expressive purpose. The U.S. Supreme Court has also emphasized that fair use is a vital free-speech safety valve within copyright law, ensuring that copyright enforcement does not stifle free speech.

Objectives
The purpose of this workshop is to explore whether fair use could be implemented by an algorithm and embedded in the design of the online enforcement system. The workshop will bring together experts from computer science, data science, and law to explore the feasibility of developing fair use by design. If we conclude that fair use by design is feasible, even in part, a subsequent stage will aim to develop a proof of concept for algorithmic fair use (e.g., through open hackathons/competitions).
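To make concrete what "algorithmic fair use" might entail, one very first sketch could score a contested use against the four statutory fair use factors of 17 U.S.C. § 107. Everything below — the feature encoding, the weights, and the decision threshold — is a hypothetical assumption for discussion, not a proposed or existing system.

```python
# Illustrative sketch only: a toy linear score over the four statutory
# fair use factors (17 U.S.C. § 107). Weights and threshold are invented
# for illustration; a real system would need learned parameters and
# validation against case law.
from dataclasses import dataclass


@dataclass
class UseFeatures:
    transformative: float  # factor 1: purpose and character of the use (0-1)
    factual_work: float    # factor 2: nature of the copyrighted work (0-1)
    portion_used: float    # factor 3: fraction of the work copied (0-1)
    market_harm: float     # factor 4: estimated effect on the market (0-1)


def fair_use_score(f: UseFeatures) -> float:
    """Toy weighted sum; a score above 0 counts as 'likely fair use' here."""
    return (2.0 * f.transformative
            + 0.5 * f.factual_work
            - 1.0 * f.portion_used
            - 2.0 * f.market_harm)


# A hypothetical parody: highly transformative, modest copying, little harm.
parody = UseFeatures(transformative=0.9, factual_work=0.2,
                     portion_used=0.3, market_harm=0.1)
print(fair_use_score(parody) > 0)  # prints True
```

The point of the sketch is only to surface the questions the workshop sessions raise: what inputs such an algorithm would need, how its outputs map onto a legal standard, and how any chosen weights could be tested and evaluated.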

Program
Each section of the workshop will be fully participatory. We have asked several participants to take the lead in given sections and to present the main challenges.
8:30-9:00 Welcome and Coffee
9:00-9:30 Setting the agenda
Opening remarks by the organizers and round of introductions
9:30-11:00 Fair use: the legal challenge (Neil Netanel, Oren Bracha)
Themes
* A brief introduction to fair use
* Predictability/foreseeability in fair use
* Might some subset of fair uses be more predictable?
* How does fair use compare with other legal-tech systems?
11:00-11:30 Coffee Break
11:30-13:00 AI: the technological challenges (Rita Osadchy, Tamir Hazan, Roi Reichart)
Themes
* A brief introduction to AI and machine learning
* What can algorithms do, and what can they not do?
* What inputs and outputs are necessary?
* Is it useful to apply parameters and clusters identified by legal scholars?
13:00-14:00 Lunch
14:00-16:00 Exploring the Feasibility of Fair Use by Design (Niva Elkin-Koren, Mayan Perel)
Themes
* Existing algorithmic tools applied for detecting infringing materials
* Can algorithms decide fair use?
* What are the standards of functionality?
* What are the barriers?
* How to test and evaluate the algorithm?
16:30-17:00 Coffee Break
17:00-18:30 Accountability (Chris Garstka, Ansgar Koene, Rita Osadchy)
Themes
* How to ensure accountability in such systems?
* How to protect against errors and biases?
* How to certify, test and evaluate the algorithm?
* What procedures and standards could be useful for legal oversight?
* What possibilities might there be for human intervention?
* What lessons could be drawn for judicial oversight of algorithmic adjudication in other areas?
18:30 Concluding remarks and next stage
19:00 Reception & Dinner

A Month of Conferences and Workshops

June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted by our Edinburgh partners this time, was quickly followed by the Ethicomp and EuroDIG conferences which both took place from June 5th to 8th.


9th International ACM Web Science Conference 2017

The 9th International ACM Web Science Conference 2017 will be held from June 26 to June 28, 2017 in Troy, NY (USA) and is organized by the Rensselaer Web Science Research Center and the Tetherless World Constellation at RPI. The conference series, run by the Web Science Trust, follows earlier events in Athens, Raleigh, Koblenz, Evanston, Paris, Indiana, Oxford and Hannover.

The conference brings together researchers from multiple disciplines, such as computer science, sociology, economics, information science, and psychology. Web Science is the emergent study of the people and technologies, applications, processes and practices that shape and are shaped by the World Wide Web. Web Science aims to draw together theories, methods and findings from across academic disciplines, and to collaborate with industry, business, government and civil society, to develop our knowledge and understanding of the Web: the largest socio-technical infrastructure in human history.

AMOIA workshop at ACM Web Science 2017

AMOIA (Algorithm Mediated Online Information Access) – user trust, transparency, control and responsibility

This Web Science 2017 workshop, delivered by the UnBias project, will be an interactive audience discussion on the role of algorithms in mediating access to information online and the issues of trust, transparency, control and responsibility this raises.

The workshop will consist of two parts. The first half will feature talks from the UnBias project and related work by invited speakers. The talks by the UnBias team will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations and user observation studies with the perspectives and suggestions from our stakeholder engagement discussions with industry, regulators and civil-society organizations. The second half will be an interactive discussion with the workshop participants based on case studies. Key questions and outcomes from this discussion will be put online for WebSci’17 conference participants to refer to and discuss/comment on during the rest of the conference.

The case studies we will focus on:

  • Case Study 1: The role of recommender algorithms in hoaxes and fake news on the Web
  • Case Study 2: Business models that shape AMOIA: how can web science boost Corporate Social Responsibility / Responsible Research and Innovation?
  • Case Study 3: Unintended algorithmic discrimination on the web – routes towards detection and prevention

The UnBias project investigates the user experience of algorithm-driven services and the processes of algorithm design. We focus on the interests of a wide range of stakeholders and carry out activities that 1) support user understanding of algorithm-mediated information environments, 2) raise awareness among providers of ‘smart’ systems about the concerns and rights of users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life. This EPSRC-funded project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with stakeholders.

The workshop will be a half-day event.

Programme

9:00 – 9:10   Introduction
9:10 – 9:30   Observations from the Youth Juries deliberations with young people, by Elvira Perez (University of Nottingham)
9:30 – 9:50   Insights from user observation studies, by Helena Webb (University of Oxford)
9:50 – 10:10 Insights from discussions with industry, regulator and civil-society stakeholders, by Ansgar Koene (University of Nottingham)
10:10 – 10:30  “Platforms: Do we trust them?”, by Rene Arnold
10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
10:50 – 11:10 Break
11:10 – 11:50 Discussion of case study 1
11:50 – 12:30 Discussion of case study 2
12:30 – 12:50 Break
12:50 – 13:30 Discussion of case study 3
13:30 – 14:00 Summary of outcomes

Key dates

Workshop registration deadline: 18 June 2017
Workshop date: 25 June 2017
Conference dates: 26-28 June 2017

EuroDIG 2017

About

EuroDIG 2017 will take place in Tallinn, 6-7 June, and will be hosted by the Ministry of Foreign Affairs of the Republic of Estonia. EuroDIG is not a conference; it is a year-round dialogue on politics and digitisation across the whole European continent which culminates in an annual event.

Pre- and side-events

A number of pre- and side-events will enrich the EuroDIG programme. European organisations will hold meetings on day zero, 5 June, and the European Commission will open the High Level Group on Internet Governance meeting on 8 June to the public.

Participate!

Our slogan is “Always open, always inclusive and never too late to get involved!”

The Org Teams did their best to lay the ground for in-depth multistakeholder discussion, and our Estonian host, the Ministry of Foreign Affairs, worked hard to give you a warm welcome!

Now it is up to you to engage in the discussion – the floor is always open! A first opportunity will be the open mic session, the first session after the welcome.

We would like to hear from YOU: how are you affected by Internet governance?

No chance to travel to Tallinn?

No problem! We are in Estonia, the most advanced country in Europe when it comes to digital futures! For all workshops and plenary sessions we provide video streaming (passive watching), WebEx (active remote participation) and transcription. Transcripts and videos will be posted on the EuroDIG wiki after the event. Please connect via the links provided in the programme.

UnBias at EuroDIG

UnBias is contributing to EuroDIG 2017 by running a Flash session on “Accountability and Regulation of Algorithms” and by serving on the organizing team for the plenary session “Internet in the ‘post-truth’ era?”.

Looking forward to seeing you there!

The first IEEE P7003™ Working Group meeting

IEEE Standards Association (IEEE-SA) invites your participation in the IEEE P7003™ Standard for Algorithmic Bias Considerations Working Group.

Why get involved: 

The goal of this standards project is to describe specific methodologies that can help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. “Negative bias” refers to the use of overly subjective or uninformed data sets or information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender, or sexuality), or to bias against groups not necessarily protected explicitly by legislation but otherwise diminishing stakeholder or user wellbeing, for which there are good reasons to consider it inappropriate.

Who should participate:

Programmers, manufacturers, researchers and other stakeholders involved in creating an algorithm, along with any stakeholders defined as end users of the algorithm and any non-users affected by its use, including but not limited to customers, citizens and website visitors.

How to Participate:

If you wish to participate in the IEEE P7003™ Working Group, please contact the Working Group Chair, Ansgar Koene.

Meeting Information:

The first IEEE P7003™ Working Group meeting will be held online via WebEx on Friday, 5 May, from 9:00 to 11:00 AM (EST).


If you cannot attend the meeting and want to be added to the distribution list please fill out this form.