Initiatives and Research Projects

AlgorithmWatch

The more technology develops, the more complex it becomes. AlgorithmWatch believes that complexity must not mean incomprehensibility (read our ADM manifesto).

AlgorithmWatch is a non-profit initiative to evaluate and shed light on algorithmic decision-making processes of social relevance, meaning processes used either to predict or prescribe human action or to make decisions automatically.


Committee of experts on Internet Intermediaries (MSI-NET)

The Committee of experts on Internet Intermediaries (MSI-NET) will prepare standard-setting proposals on the roles and responsibilities of Internet intermediaries. The expected results of the new sub-group are a draft recommendation by the Committee of Ministers on Internet intermediaries and a study on the human rights dimensions of automated data-processing techniques (in particular algorithms) and their possible regulatory implications.


IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems

An incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for the ethical implementation of intelligent technologies.

The purpose of this Initiative is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.


Algorithm-based recommendation and information diversity on the web

Broad goal: understand the construction of online informational landscapes by exploring:

Human “algorithms”
  • how diverse is the information produced by users?
  • what is its socio-semantic structure, and how is information distributed over the actors of a given online ecosystem?
  • how diverse is information consumption?

Human-made algorithms
  • how do online platforms present, render and filter information?
  • what kind of bias is created by the underlying algorithms and their principles (e.g. PageRank, NewsFeed)?
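To make the last question concrete, a minimal sketch of PageRank-style ranking (an illustration written for this page, not AlgoDiv's code) shows how the principle itself can encode a structural bias: pages that attract many links are ranked up regardless of the diversity or quality of their content.

```python
# Minimal PageRank via power iteration -- illustrative only.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping node -> list of outgoing neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
            else:
                for w in outs:
                    new[w] += damping * rank[v] / len(outs)
        rank = new
    return rank

# A well-linked "hub" outranks the periphery purely by link structure:
graph = {"hub": ["a"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
ranks = pagerank(graph)
```

Here `ranks["hub"]` dominates `ranks["b"]` and `ranks["c"]` even though nothing about the pages' content was considered, which is exactly the kind of principle-induced bias the project investigates.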

facebook.tracking.exposed

Developing a tool to increase the transparency of personalization algorithms, so that people can exercise more effective control over their Facebook experience and gain more awareness of the information to which they are exposed.


Fairness, Accountability, and Transparency in Machine Learning

The FAT/ML workshop series brings together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning.

The past few years have seen growing recognition that machine learning raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions.

At the same time, there is increasing alarm that the complexity of machine learning may reduce the justification for consequential decisions to “the algorithm made me do it.”

The goal of our 2016 workshop is to provide researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods.

This year, the workshop is co-located with two other highly related events: the Data Transparency Lab Conference and the Workshop on Data and Algorithmic Transparency.


Enforce project

This project aims to address the protection of the intertwined personal rights to non-discrimination and privacy from both a legal and a computer-science perspective. From the legal perspective, the objective is a systematic and critical review of existing laws, regulations, codes of conduct and case law, together with the study and design of quantitative measures of anonymity, privacy and discrimination that are adequate for enforcing those rights in ICT systems. From the computer-science perspective, the objective is to design legally grounded technical solutions for discovering and preventing discrimination in decision support systems (DSS) and for preserving and enforcing privacy in location-based services (LBS).
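One common quantitative measure of discrimination in this research area is the risk difference: the gap between the rates at which a favorable decision is granted to the unprotected and the protected group. The sketch below is an illustration of that measure under assumed inputs, not code from the Enforce project.

```python
# Illustrative risk-difference measure for discrimination discovery.
def risk_difference(outcomes, protected):
    """outcomes: list of 0/1 decisions (1 = favorable outcome);
    protected: list of booleans, True = protected-group member.
    Returns P(favorable | unprotected) - P(favorable | protected)."""
    prot = [o for o, g in zip(outcomes, protected) if g]
    unprot = [o for o, g in zip(outcomes, protected) if not g]
    return sum(unprot) / len(unprot) - sum(prot) / len(prot)

# Hypothetical decisions: unprotected group favored 3 times out of 4,
# protected group favored 1 time out of 4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [False, False, False, False, True, True, True, True]
rd = risk_difference(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A risk difference of 0 indicates parity between groups; values near 1 indicate that the favorable outcome goes almost exclusively to the unprotected group.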

Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
