AI Investment Standards: Building consensus from emerging efforts


Open MIC | December 2023

  • Artificial intelligence systems are already being deployed both within and beyond the tech sector, exposing communities to potential bias and rights violations, while also introducing new legal, financial, and reputational risks for companies. The nascent development of ethical standards for building and implementing AI tools, however, is presently too piecemeal or too generalized to keep pace.

  • Many stakeholders and third-party actors are developing sets of ethical AI principles and certification regimes. These efforts hold promise, particularly as the use of complex AI tools spreads into industry sectors where companies may not have the technical expertise to assess the impact of those tools in-house. As demand for AI certification and impact assessments increases, however, so does the risk of bad actors entering the space to “rubber stamp” poor governance practices for profit.

  • Investors have an opportunity to support the evolution of specific, actionable ethical AI standards by demanding transparency about these “black box” technologies and the processes companies use to develop, assess, and deploy them. In coordination with civil society organizations and academia, investors are also well positioned to inject pragmatic business concerns into the discourse surrounding AI.