Related projects

Council for Big Data, Ethics, and Society

In collaboration with the National Science Foundation, the Council for Big Data, Ethics, and Society was started in 2014 to provide critical social and cultural perspectives on big data initiatives. The Council brings together researchers from diverse disciplines — from anthropology and philosophy to economics and law — to address issues such as security, privacy, equality, and access in order to help guard against the repetition of known mistakes and inadequate preparation. Through public commentary, events, white papers, and direct engagement with data analytics projects, the Council will develop frameworks to help researchers, practitioners, and the public understand the social, ethical, legal, and policy issues that underpin the big data phenomenon.

The Council is directed by danah boyd, Geoffrey Bowker, Kate Crawford, and Helen Nissenbaum.


AlgorithmWatch

The more technology develops, the more complex it becomes. AlgorithmWatch believes that complexity must not mean incomprehensibility (read our ADM manifesto).

AlgorithmWatch is a non-profit initiative to evaluate and shed light on algorithmic decision-making processes that have social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.


Council of Europe Committee of experts on Internet Intermediaries (MSI-NET)

The Committee of experts on Internet Intermediaries (MSI-NET) will prepare standard-setting proposals on the roles and responsibilities of Internet intermediaries. The expected results of the new sub-group are a draft recommendation by the Committee of Ministers on Internet intermediaries and a study on the human rights dimensions of automated data processing techniques (in particular algorithms) and their possible regulatory implications.


IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems

An incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for the ethical implementation of intelligent technologies.

The purpose of this Initiative is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.


AlgoDiv (ANR project): Algorithm-based recommendation and information diversity on the web

Broad goal: understand the construction of online informational landscapes by exploring:

Human “algorithms”
  • how diverse is the information produced by users?
  • what is its socio-semantic structure, and how is information distributed over the actors of a given online ecosystem?
  • how diverse is information consumption?

Human-made algorithms
  • how do online platforms present, render and filter information?
  • what kind of bias is created by the underlying algorithms and their principles (e.g. PageRank, NewsFeed)? A toy sketch follows below.
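
To make the PageRank example concrete, here is a minimal, self-contained sketch of the classic algorithm on a made-up toy graph (the graph, damping factor and iteration count are illustrative assumptions, not anything specified by the AlgoDiv project). It shows the rich-get-richer principle by which heavily linked pages accumulate rank, and therefore visibility:

```python
# Minimal PageRank sketch on a hypothetical toy graph, illustrating how
# a ranking principle concentrates attention on already well-linked pages.
# The damping factor and iteration count are conventional defaults.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: page "a" receives most links, so it accumulates most rank.
toy_web = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
print(pagerank(toy_web))
```

Even in this four-page toy web, page “a” ends up with by far the largest share of the total rank, which is exactly the kind of concentration effect the project’s questions about algorithmic bias point at.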

facebook.tracking.exposed

Developing a tool to increase transparency around personalization algorithms, so that people can have more effective control of their Facebook experience and more awareness of the information to which they are exposed.


Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)

The FAT/ML workshop series brings together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning.

The past few years have seen growing recognition that machine learning raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions.

At the same time, there is increasing alarm that the complexity of machine learning may reduce the justification for consequential decisions to “the algorithm made me do it.”

The goal of our 2016 workshop is to provide researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods.
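
As a taste of what such computationally rigorous methods can look like, here is a minimal sketch of one common fairness measure, the demographic parity difference between two groups (the metric choice and the toy data are illustrative assumptions, not something mandated by FAT/ML):

```python
# Minimal sketch of one common fairness metric: demographic parity
# difference, i.e. the gap in positive-decision rates between groups.
# The data and group labels below are made up for illustration.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between groups A and B.

    predictions: list of 0/1 model decisions
    groups: parallel list of group labels ("A" or "B")
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return rate["A"] - rate["B"]

# Toy example: group A is approved 75% of the time, group B only 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```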

This year, the workshop is co-located with two other highly related events: the Data Transparency Lab Conference and the Workshop on Data and Algorithmic Transparency.

An interesting proposal from the FAT/ML community:

Principles for Accountable Algorithms and a Social Impact Statement for Algorithms


Enforce project

This project aims to protect the intertwined personal rights to non-discrimination and privacy from both a legal and a computer science perspective. On the legal side, the objective is a systematic and critical review of existing laws, regulations, codes of conduct and case law, together with the study and design of quantitative measures of anonymity, privacy and discrimination that are adequate for enforcing those personal rights in ICT systems. On the computer science side, the objective is to design legally grounded technical solutions for discovering and preventing discrimination in decision support systems (DSS) and for preserving and enforcing privacy in location-based services (LBS).
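
One widely used quantitative measure of anonymity in this line of work is k-anonymity: a table is k-anonymous with respect to a set of quasi-identifiers if every combination of their values occurs at least k times. A minimal sketch, using a made-up table and quasi-identifier choice rather than anything from the project itself:

```python
# Minimal k-anonymity check. The table and attribute choice are
# illustrative assumptions, not artifacts of the Enforce project.

from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the k for which the table is k-anonymous: the size of the
    smallest equivalence class over the quasi-identifier columns."""
    classes = Counter(
        tuple(row[a] for a in quasi_identifiers) for row in rows
    )
    return min(classes.values())

table = [
    {"zip": "476**", "age": "2*", "disease": "flu"},
    {"zip": "476**", "age": "2*", "disease": "asthma"},
    {"zip": "479**", "age": "3*", "disease": "flu"},
    {"zip": "479**", "age": "3*", "disease": "cancer"},
]
print(k_anonymity(table, ["zip", "age"]))  # 2: each class has two rows
```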


IEEE P7003 Working Group: Developing a Standard for Algorithmic Bias Considerations

Scope: This standard describes specific methodologies that help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. Here “negative bias” refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender and sexuality); it also covers bias against groups not explicitly protected by legislation, where that bias diminishes stakeholder or user well-being and there are good reasons to consider it inappropriate.

Possible elements include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated, to guard against unintended consequences arising from out-of-bounds applications; and suggestions for managing user expectations to mitigate bias caused by incorrect interpretation of system outputs (e.g. correlation vs. causation).
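
To illustrate what a bias quality-control check on a validation data set might look like in practice, here is a minimal sketch that flags groups falling below a minimum share of the data (the field name, threshold and records are illustrative assumptions, not drawn from the P7003 draft):

```python
# Minimal sketch of one possible validation-set check in the spirit of
# "bias quality control": verifying that every group is represented
# above a minimum share before the set is used for benchmarking.
# The threshold and the "gender" field are illustrative assumptions.

from collections import Counter

def check_group_coverage(records, group_field, min_share=0.1):
    """Return the groups (with their shares) that fall below min_share."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items() if c / total < min_share}

validation_set = [
    {"gender": "f"}, {"gender": "f"}, {"gender": "m"},
    {"gender": "m"}, {"gender": "m"}, {"gender": "m"},
    {"gender": "m"}, {"gender": "m"}, {"gender": "m"}, {"gender": "x"},
]
print(check_group_coverage(validation_set, "gender", min_share=0.15))
# {'x': 0.1} -- this group is under-represented in the validation set
```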


Algorithmic Justice League

Algorithmic bias, like human bias, can result in exclusionary experiences and discriminatory practices. The Algorithmic Justice League works to support inclusive technology and asks for help in highlighting bias through:

MEDIA – Help raise awareness about existing bias in coded systems

RESEARCH – Support the development of tools for checking bias in existing data and software

PARTICIPATE – Stay informed about ways to help test software for bias and create inclusive data


iRights international
Digitisation for Democracy and the Public Good

iRights is a non-governmental organisation based in Berlin that has been active at the intersection of digitisation and society for more than ten years. Its online platform, iRights.info, has been running since 2005 and is one of Germany’s premier resources for information and discussion on copyright, privacy, media freedom and Internet governance.
iRights develops joint projects and provides research and consultancy for a wide range of stakeholders: foundations and other NGOs, government and public entities, private companies, academic institutions and individuals.
Mission: To harness the opportunities of digitisation for the promotion of democracy and the public good.
Approach: Offer expertise and create spaces for the cooperative development of practical solutions.

Artificial Intelligence and Law in New Zealand

A three-year project to evaluate the legal and policy implications of artificial intelligence (AI) for New Zealand. The project is based at the University of Otago and funded by the New Zealand Law Foundation.

Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
