A new report, ‘Harmonising Artificial Intelligence: The role of standards in the EU AI Regulation’, provides an analysis of the envisioned role for technical standards in the governance of AI. It was co-authored by experts from Oxford Information Labs on behalf of the Oxford Commission on AI and Good Governance (OxCAIGG).
The report sets out a series of recommendations for improving the draft EU AI regulation:
Establish a mechanism to address the gap between the ambition of the European Standards Organizations (ESOs) and the resources available to them
Involve key stakeholders early in the development of standardisation processes – e.g., publish standardisation roadmaps and develop national AI standards hubs for early engagement
Ensure standards developed by policymakers are flexible enough to reflect the rapid evolution of AI technology and products – e.g., develop a fast-track process for the adoption of standards
Ensure that sufficient education and training is in place for non-expert AI stakeholders, to facilitate greater understanding of and participation in the ESOs
Ensure compliance tools are developed in close consultation with industry and standards experts
Balance the requirements of ESOs to devise standards that meet specific European needs with the need for global, open standards that facilitate trade with the rest of the world
Ensure that cooperation between businesses maximises participation while minimising the costs of engagement and reducing the inefficiency of duplicate voices
OxCAIGG has been convened by the Oxford Internet Institute, and Professor Philip N. Howard sits on the Commission. The report draws on a desk-based study of publicly available sources, including the proposed regulation and consultation responses.
About the work of OxCAIGG:
The challenge of using AI for good governance urgently concerns public policy, administration and politics in democracies across the world. The goal of the Oxford Commission on AI and Good Governance is to develop principles and practical policy recommendations to ensure the democratic use of AI for good governance.
Most recently, the COVID-19 pandemic has prompted a rapid influx of AI solutions. While intended for the public good, these novel technologies bring challenges in assessing their suitability and legitimacy to the forefront of international decision-makers’ agendas. The mobilisation of this field is unprecedented and demonstrates the need for policies around these products, their procurement, and their implementation by governments.
The Oxford Commission on AI and Good Governance will investigate the procurement and implementation challenges surrounding the use of AI for good governance faced by democracies around the world, identify best practices for evaluating and managing risks and benefits, and recommend strategies to take full advantage of technological capacities while mitigating potential harms of AI-enabled public policy.
Drawing on input from experts across a wide range of geographic regions and areas of expertise, including stakeholders from government, industry, the technical community and civil society, OxCAIGG will bring forward applicable and relevant recommendations for the use of AI for good governance.
Article by [author-name] (c) Irish Tech News.