With Paris playing host to the Paris Peace Forum from 11 to 13 November, the GovTech summit on November 12th, the Internet Governance Forum (IGF2018) from 12 to 14 November, and concluding with a UNESCO/ISOC/Mozilla symposium on November 15th (and the ITU Plenipotentiary Conference 2018 running simultaneously in Dubai from 29 October to 16 November), the start of November saw a lot of activity relating to Internet (and associated) Governance. For those of us based in the UK, this series of conferences continued with the UK IGF 2018 on November 22nd.
On October 25th we presented our Science and Technology Options Assessment (STOA) report on “a governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliamentary Research Service’s “Panel for the Future of Science and Technology”.
In response to the growing importance of algorithmic products in international trade, regional and international trade negotiations at the WTO and elsewhere are currently seeking to set down new rules regarding issues such as Intellectual Property and algorithmic transparency.
In order to try to avoid outcomes of the trade negotiations that inadvertently block algorithmic accountability, Ansgar is supporting Sanya Reid Smith of Third World Network in her efforts to brief trade negotiators on the causes and consequences of algorithmic bias and the current status of regulatory and standards initiatives to address these issues.
As of May 25th 2018, the Data Protection Act 2018 (DPA2018) has been in effect in the UK, supporting and supplementing the implementation of the EU General Data Protection Regulation (GDPR).
An important requirement in the DPA2018, going beyond the GDPR, is the inclusion of an Age Appropriate Design Code (section 123 of DPA2018) to provide guidance on the design standards that the Information Commissioner’s Office (ICO) will expect providers of online ‘Information Society Services’ (ISS) that are likely to be accessed by children to meet.
The ICO is responsible for drafting the Code and has issued a call for evidence as the first stage of the consultation process.
On June 21st 2018, the KAIST Institute for Artificial Intelligence, Fourth Industrial Revolution Center in Korea hosted a public forum discussion on “Taming Artificial Intelligence: Engineering, Ethics, and Policy” to discuss the ethics of artificial intelligence technology development as well as policy making around the world.
On 16th April the House of Lords Select Committee on Artificial Intelligence published a report called “AI in the UK: ready, willing and able?”. The report is based on an inquiry conducted by the House of Lords to consider the economic, ethical and social implications of advances in artificial intelligence. UnBias team member Ansgar Koene submitted written evidence based on the combined work of the UnBias investigations and our involvement with the development of the IEEE P7003 Standard for Algorithmic Bias Considerations.
In the wake of recent events surrounding the revelations about Cambridge Analytica and the breaches of trust regarding Facebook and personal data, ISOC UK and the Horizon Digital Economy Research institute held a panel discussion on “Multi Sided Trust for Multi Sided Platforms”. The panel brought together representatives from different sectors to discuss the topic of trust on the Internet, focusing on consumer-to-business trust: how users trust the online services that are offered to them. Such services include, but are not limited to, online shopping, social media, online banking and search engines.
On March 5th and 6th UnBias had the pleasure of participating in a workshop that was organized to signal the launch of the European Commission’s Joint Research Centre’s HUMAINT (HUman behaviour and MAchine INTelligence) project.
The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour. A particular focus of the project lies on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but that algorithms can also affect personal decision making and raise privacy issues.
On September 14th USACM, the ACM U.S. Public Policy Council, organized a panel on Algorithmic Transparency and Accountability in Washington DC to discuss the importance of the Statement on Algorithmic Transparency and Accountability and opportunities for cooperation between academia, government and industry around these principles. Ansgar also took part in this panel, representing the IEEE Global Initiative on Ethical Considerations for Artificial Intelligence and Autonomous Systems, its P7000 series of Standards activities, and UnBias.
USACM, the ACM U.S. Public Policy Council, will be hosting a panel event on “Algorithmic Transparency and Accountability.” The event will provide a forum for a discussion between stakeholders and leading computer scientists about the growing impact of algorithmic decision-making on our society and the technical underpinnings of algorithmic models.