Q: Should every large enterprise have an AI ethics board, and what should its remit include?

Paul Dongha: “When it comes to the executives and decision-makers of large corporations, I think there are a few things here.

“Firstly, I believe an ethics board is absolutely mandatory. It should be composed of senior executives drawn from diverse backgrounds within the organization, participants who have a real feel for their customers and what their customers want.

“Those members should be trained in ethics, should understand the pitfalls of artificial intelligence and should make decisions about which AI applications are exposed to customers.

“Importantly, ethics boards shouldn’t rely on IT systems alone to answer ethical questions. Ethics boils down to a discussion between different stakeholders. An ethics board is there to debate and discuss edge cases: for example, the launch of an application where there may be disagreement over whether it could cause harm or come as a surprise to customers.

“I also believe a chief responsible AI officer should be appointed to the board of every bank, and arguably of every large organization, to oversee the end-to-end risk management of applications both during build and post-deployment. Ethics has to be considered at every stage of development and launch.

“Risk management practices and the audit function should all be folded into the remit of a responsible AI officer to ensure strong oversight.”
Q: Are regulators and governments moving fast enough to keep AI risks under control?

Paul Dongha: “I believe our governments and democratically elected institutions, as well as sectoral regulators, have a huge role to play in this.

“We as a society elect our governments to look after us, and we have a legislative process. Even with something as simple as driving, we have rules to ensure that vehicles are maneuvered correctly; without those rules, driving would be very dangerous. AI is no different: legislation and rules around how AI is used and deployed are incredibly important.

“Corporations are accountable to shareholders, so the bottom line is always going to be very important to them. That means it would be unwise to let corporations implement the guardrails around AI themselves. Governments have to be involved in setting what is and isn’t reasonable, what is too high a risk and what is in the public interest.

“Technology companies need to be part of that conversation, but they should not be leading it. Those conversations must be led by the institutions we elect to look after society.”
Q: How real is the threat of artificial general intelligence, and what risks demand our attention today?

Paul Dongha: “Artificial general intelligence, which is about AI approaching human-level intelligence, has been the holy grail of AI research for decades. We’re not there yet. Many aspects of human intelligence, such as social interaction, emotional intelligence and even elements of computer vision, are things the current generation of AI is simply incapable of.

“The recent transformer-based technologies look extremely sophisticated, but when you open the hood and examine how they operate, they do not work the way humans think or behave. I don’t believe we’re anywhere near achieving AGI, and in fact the current approaches are unlikely to get us there.

“So my message is that there’s no need to worry about any imminent superintelligence or Terminator situation. But we do need to be aware that, in the future, it’s possible. That means we have to guard against it.

“In the meantime, there are real and pressing risks with today’s generation of AI: weaponization, disinformation and the ability of nefarious states to use generative AI to influence electorates. Even without AGI, current systems have great power, and in the wrong hands that power can cause serious harm to society.”

This article is published as part of the Foundry Expert Contributor Network.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4071222/ai-red-flags-ethics-boards-and-the-real-threat-of-agi-today.html