In the wake of the revelations about Cambridge Analytica and the breaches of trust involving Facebook and personal data, ISOC UK and the Horizon Digital Economy Research institute held a panel discussion on “Multi Sided Trust for Multi Sided Platforms“. The panel brought together representatives from different sectors to discuss trust on the Internet, focusing on consumer-to-business trust: how users come to trust the online services offered to them. Such services include, but are not limited to, online shopping, social media, online banking and search engines.
On March 5th and 6th, UnBias had the pleasure of participating in a workshop organized to mark the launch of the European Commission’s Joint Research Centre’s HUMAINT (HUman behaviour and MAchine INTelligence) project.
The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour. A particular focus of the project lies on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but that algorithms can also affect personal decision making and raise privacy issues.
Some of us attended the conference of the ECREA (European Communication Research and Education Association) Communication and Media Industries section, held on 10th-11th November in Stockholm. About 100 people took part, mainly academics, researchers from NGOs, and media consultants from Europe and the US.
On September 14th, USACM organized a panel on Algorithmic Transparency and Accountability in Washington DC to discuss the importance of the Statement on Algorithmic Transparency and Accountability and opportunities for cooperation between academia, government and industry around these principles. Ansgar took part in the panel, representing the IEEE Global Initiative on Ethical Considerations for Artificial Intelligence and Autonomous Systems, its P7000 series of standards activities, and UnBias.
Just two days earlier, on September 12th, the IEEE news source The Institute published a blog article “Keeping Bias From Creeping Into Code“, based on an interview with Ansgar about the P7003 Standard for Algorithmic Bias Considerations.
USACM, the ACM U.S. Public Policy Council, will be hosting a panel event on “Algorithmic Transparency and Accountability.” The event will provide a forum for a discussion between stakeholders and leading computer scientists about the growing impact of algorithmic decision-making on our society and the technical underpinnings of algorithmic models.
Panelists will discuss the importance of the Statement on Algorithmic Transparency and Accountability and the opportunities for cooperation between academia, government and industry around these principles.
Prior to the June 8th snap election, two Commons Select Committee inquiries touched directly on our work at UnBias, and we submitted written evidence to both: one on “Algorithms in decision-making” and the other on “Fake News”.
As part of our work to contribute to the development of the IEEE P7003 Standard for Algorithm Bias Considerations we are reaching out to the community of stakeholders to ask for use cases highlighting real-world instances of unjustified and/or inappropriate bias in algorithmic decisions.
Conference call to catch up on recent updates to the outline document for the P7003 IEEE Standard on Algorithm Bias Considerations.
The main item on the agenda for this call was identifying working group members who would want to volunteer to take the lead on developing specific sections of the outline document.
The list of past meeting minutes and agendas is available here.
An international workshop co-organized by LINKS and the Center for Cyber, Law and Policy, University of Haifa, Israel, in collaboration with the UCLA Program on Understanding Law, Science, & Evidence (PULSE)
July 9, 2017, University of Haifa
Can fair use be implemented by design? Could artificial intelligence (AI) capability enable algorithms to identify fair use with a reasonable degree of accuracy?
How can we ensure the accountability of such systems? The purpose of this interdisciplinary workshop is to address these questions.
This notice-and-takedown regime, enacted in the U.S. by the Digital Millennium Copyright Act in 1998, is now implemented algorithmically. Large copyright holders deploy automated systems that use bots to scour the Internet for copyright infringements, then generate and send takedown notices to the applicable intermediaries. Unfortunately, this algorithmic copyright regime removes or blocks access to large amounts of material that does not infringe copyright, a consequence both of deliberate misuse of the notice-and-takedown process and of the failure of current algorithmic enforcement mechanisms to distinguish between infringing and non-infringing content.
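The matching step at the heart of such automated enforcement can be sketched very simply. The following is a minimal illustration, not any vendor's actual system: all names (`fingerprint`, `scan`, `TakedownNotice`) are hypothetical, and the exact-hash matching shown here is a deliberate simplification of the perceptual or fuzzy fingerprinting real systems use.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class TakedownNotice:
    """Hypothetical record of a notice to be sent to an intermediary."""
    work_id: str
    url: str


def fingerprint(content: bytes) -> str:
    # Exact-match fingerprint for illustration; production systems use
    # perceptual/fuzzy hashing to catch re-encoded or trimmed copies.
    return hashlib.sha256(content).hexdigest()


def scan(catalog: dict[str, bytes], crawled: dict[str, bytes]) -> list[TakedownNotice]:
    """Flag crawled pages whose content matches a catalogued work.

    catalog maps work IDs to reference content; crawled maps URLs to
    content found online. Every match yields a notice, with no check of
    context or purpose.
    """
    index = {fingerprint(blob): work_id for work_id, blob in catalog.items()}
    notices = []
    for url, blob in crawled.items():
        work_id = index.get(fingerprint(blob))
        if work_id is not None:
            notices.append(TakedownNotice(work_id=work_id, url=url))
    return notices
```

Note what the sketch does not do: a match says only that the bytes correspond to a catalogued work, nothing about whether the use is criticism, parody, news reporting or scholarship. That gap between "content matched" and "use infringed" is precisely where the over-blocking described above arises.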
In particular, as currently implemented, the algorithmic copyright regime has the effect of blacking out online fair use. Fair use is a legal doctrine that serves as a check on copyright, to make sure it does not stifle the very creativity that the law seeks to foster. Hence, it is often fair use to copy from a copyright-protected work – or even to copy the entire work – for purposes of criticism, commentary, parody, news reporting, or scholarship, or even using the original as raw material for a different expressive purpose. The U.S. Supreme Court has also emphasized that fair use is a vital free speech safety valve within copyright law, serving to ensure that copyright enforcement does not stifle free speech.
The purpose of this workshop is to explore whether fair use could be implemented by an algorithm and embedded in the design of the online enforcement system. The workshop will bring together experts from computer science, data science, and law with the goal of exploring the feasibility of developing fair use by design. If we conclude that fair use by design is feasible, even in part, a subsequent stage will aim at developing a proof of concept for algorithmic fair use (e.g., through open hackathons or competitions).
The structure of the workshop will be fully participatory for each section. We have asked several participants to take the lead in the given sections and to present the main challenges.
8:30-9:00 Welcome and Coffee
9:00-9:30 Setting the agenda
Opening remarks by the organizers and round of introductions
9:30-11:00 Fair use: the legal challenge (Neil Netanel, Oren Bracha)
* A brief introduction to fair use
* Predictability/foreseeability in fair use
* Might some subset of fair uses be more predictable?
* How does fair use compare with other legal-tech systems?
11:00-11:30 Coffee Break
11:30-13:00 AI: the technological challenges (Rita Osadchy, Tamir Hazan, Roi Reichart)
* A brief introduction to AI and machine learning
* What can and cannot algorithms do?
* What inputs and outputs are necessary?
* Is it useful to apply parameters and clusters identified by legal scholars?
14:00-16:00 Exploring the Feasibility of Fair Use by Design (Niva Elkin-Koren, Mayan Perel)
* Existing algorithmic tools applied for detecting infringing materials
* Can algorithms decide fair use?
* What are the standards of functionality?
* What are the barriers?
* How to test and evaluate the algorithm?
16:30-17:00 Coffee Break
17:00-18:30 Accountability (Chris Garstka, Ansgar Koene, Rita Osadchy)
* How to ensure accountability in such systems?
* How to protect against error and biases?
* How to certify, test and evaluate the algorithm?
* What procedures and standards could be useful for legal oversight?
* What possibilities might there be for human intervention?
* What lessons could be drawn for judicial oversight of algorithmic adjudication in other areas?
18:30 Concluding remarks and next stage
19:00 Reception & Dinner