EU Promotes ‘Human-Centric’ AI
European regulators continue to take the lead on a range of critical technology policy issues spanning data privacy and, now, “trustworthy” AI.
On the heels of its sweeping General Data Protection Regulation, considered by at least one observer to be “the most significant change in privacy law in decades,” the European Union this week unveiled ethics guidelines for “building trust in human-centric AI.”
First and foremost, the EU framework emphasizes human oversight of AI development: Emerging systems should serve humans, not the other way around. “Proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop and human-in-command approaches,” regulators said.
Along with emphasizing consumer privacy and data governance, the AI guidelines also stress the need for thoroughly vetted algorithms undergirded by robust and rigorously tested software stacks. AI systems “need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible,” the EU panel said.
Read the full story here at sister website Datanami.