
"It is clear that Artificial Intelligence (AI) systems and other autonomous technologies, by reducing the need for human intervention in a range of processes, have the potential to offer significant human and societal benefits, but also to create risks which should be guarded against. It is ultimately for policymakers to determine the range of interventions necessary across different sectors to achieve the optimal balance between encouraging innovation and the development and deployment of societally-beneficial AI systems on one hand, and protecting and promoting human wellbeing on the other. That assessment will require consideration and balancing of numerous factors, and where the balance lies may vary between technologies or sectors. However, there is an increasing acceptance that the deployment of AI should be underpinned by an ethical framework that helps to ensure that those technologies improve human wellbeing. Various governments and non-governmental organisations have already put forward principles, recommendations and the like on AI ethics and governance. Many of these take the form of voluntary principles directed at those developing or deploying AI systems. This report considers the core ethical principles for which there is emerging consensus, and assesses their implications, and the challenges they may raise, for policymakers in formulating any hard or soft law interventions considered necessary."--Extracted from executive summary
Page Count: 32
Publication Date: 2020-01-01