Discussion Article

Escaping the Iron Cage: Approaches to AI Regulation

This essay aimed to scope the landscape of AI regulation in the early spring of 2023. In this discussion, I take a look at how policymakers, academics, and industry leaders have approached AI regulation at the dawn of the Generative AI era. Sources for the claims referenced can be found below.

Escaping the Iron Cage

In the span of less than a decade, Artificial Intelligence technologies have proliferated into seemingly every corner of modern society. As with any technology, the rate of this proliferation typically traces a sigmoid curve: slow and steady momentum soon after conception, followed by an explosion in adoption, discussion, innovation, and use cases, and eventually a point of diminishing returns and maturation (Rogers, 1962).

The implications of this theory have long been used to justify a neoliberal approach to technological regulation. Proponents of this approach argue that introducing policies early in the technology lifecycle may carry the social cost of stifling the benefits that innovation would bring about (Héder, 2020). On the other hand, contemporary scholars have compared this approach and its modern outcomes to Max Weber’s assessment of free-market capitalism, arguing that a lack of regulation, late regulation, and outright deregulation have combined to build an “Iron Cage” around Big Tech that works to increasingly cement the status quo (Maley, 2004).

With that being said, the topic of global AI regulation is not one that has gone undiscussed. Data ethicists, such as Cathy O’Neil and Virginia Eubanks, have gone to great depths to outline the shortcomings of existing regulatory efforts, and the role they’ve played in exacerbating social inequity. As for the efforts themselves, a UNICRI inventory of AI ethics guidelines (AIGUs) introduced by regional and international regulatory bodies paints a picture of AI regulation reaching public consciousness in 2016 and taking off from 2018 onwards (UNICRI, 2020). What are these AIGUs bringing to the table, what gaps are they leaving, and what needs to be done to build the bridges required to usher in the new phase of tech regulation?

AIGUs

To get a better idea of what is being explored in these AIGUs, Jobin et al.’s (2019) seminal exploration of The Global Landscape of AI Ethics Guidelines takes a look at the 84 international documents their retrieval process determined to be most relevant, and identifies clusters of consistent themes. Eleven clusters are identified and ranked according to the number of documents they appear in, with transparency, fairness, and non-maleficence in the top three, and sustainability, dignity, and solidarity in the bottom three. Numerically, it is clear that a “normative core” is being converged on in the field of AI guidelines, but fundamental differences remain in (1) how the principles are interpreted, (2) why they are important, (3) the domains and stakeholders they pertain to, and (4) their method of implementation.

In a methodologically similar white paper, Fjeld et al. (2020) come to the same conclusion, also highlighting the wide array of stakeholders behind the corpus of AIGUs they studied – from private sector companies to civil society, governments, inter-governmental organizations, and multistakeholder partnerships. Understanding the source of an AIGU is necessary if we are to holistically assess the expertise, motives, and target stakeholders in question.

One illustration of this is the difference between the New Zealand government’s Algorithm Charter (2020) and Google’s Recommendations for Regulating AI (2019). The former comes from a governmental organization and limits its scope to internal commitments to uphold longstanding obligations to its citizens, delivering a concise (and yet broad) framework for doing so. The latter comes from a private leader in the AI space, setting out exhaustive topline recommendations that aim to maximize clarity and avoid stifling innovation. With such different perspectives and objectives between these two documents, the underlying shortcomings of AIGUs start to come to light.

Asymmetrical Knowledge

There is a fundamental disconnect between the knowledge required to build regulation, and the incentives of the knowledge holders. Leaps in Artificial Intelligence innovation typically come from one of two sources: academia or industry. The incentive structures in place for these two sources differ wildly – while academics must operate within the established bounds of the scientific method and truth-seeking ethics, industry innovators are bound by the dynamics of the profit-seeking market they operate within. It is in the best interest of industry innovators to move as quickly and opaquely as they feasibly can, which creates an inherent tension with the concept of regulation (Sorkin et al., 2023).

This leaves regulators with a much smaller pool of experts who can help inform regulatory design – and many of these experts work within different paradigms than their industry counterparts. We saw this game of catch-up play out with the regulation of personal data in the 2010s, and just as it seemed that progress was on the horizon, the post-2018 shakeup effectively hit a reset button (Candelon et al., 2021).

And so, a power dynamic is established between private and public players, whereby regulators need to resort to high levels of abstraction in AIGUs to patch up their blind spots (Héder, 2020). The New York Times describes this abstraction and lack of focused urgency as a failure to establish guardrails, effectively “creating the conditions for a race to the bottom in irresponsible A.I.” (Sorkin et al., 2023).

Regulation Best Practices

Literature in AI Ethics points to a few best practices that may effectively guide the evolution of AIGUs to address some of the gaps in question, but it is important to recognize that much groundwork remains before regulation can be formulated effectively.

In order to hedge against the speed and uncertainty that the AI boom brings about, it is important to take a process-oriented approach to building targeted standards. Suresh & Guttag’s Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle (2021) outlines, for instance, how biases can be built into an algorithm by virtue of its development process, and how evaluations at different intervention points in that process can help developers take the necessary steps to mitigate them.

With “Fairness” being a characteristic among the normative core of AI guidelines, taking a similarly reductive and process-oriented approach to framing auditing policies might be exactly what forces private actors to account for it. Importantly, such a change would also be less susceptible to the industry’s pace of innovation.
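To make this concrete, the sketch below illustrates what a process-level audit check might look like at an evaluation intervention point, before a model is cleared for deployment. It is a hypothetical illustration rather than code from Suresh & Guttag’s framework or from any existing auditing standard: the toy data, group labels, and the 0.8 “four-fifths” threshold are assumptions chosen purely for the example.

    # Minimal sketch of a fairness check at an evaluation-stage intervention point.
    # All names, data, and the 0.8 threshold are illustrative assumptions.
    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Share of positive predictions per demographic group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for pred, group in zip(predictions, groups):
            counts[group][0] += int(pred == 1)
            counts[group][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    def disparate_impact_check(predictions, groups, threshold=0.8):
        """Flag the model if the lowest group selection rate falls below
        `threshold` times the highest rate (the common "four-fifths" rule)."""
        rates = selection_rates(predictions, groups)
        lowest, highest = min(rates.values()), max(rates.values())
        ratio = lowest / highest if highest else 1.0
        return ratio >= threshold, rates, ratio

    if __name__ == "__main__":
        # Toy audit data: model decisions alongside a protected attribute.
        preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
        passed, rates, ratio = disparate_impact_check(preds, groups)
        print(f"selection rates: {rates}, ratio: {ratio:.2f}, passed: {passed}")

A regulator-facing standard would, of course, specify which metrics, thresholds, and intervention points apply to a given domain; the point here is only that such checks can be defined at the level of the development process rather than the level of any particular model.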

This alone is unlikely to be effective without also reaching a compromise with the incentives of private actors (Candelon et al., 2021). For example, as outlined in Google’s recommendations, it is important that legal frameworks build on existing regulations in some capacity, and are oriented towards parity between AI and non-AI legislation.

Taking a participatory approach with all stakeholders will be integral, and seeing as the emerging international body of AIGUs has, in such a short timeframe, already begun to converge, there may even be a case for optimism in the space.

References

Candelon, F., et al. (2021, August 30). AI regulation is coming. Harvard Business Review. Retrieved March 4, 2023, from https://hbr.org/2021/09/ai-regulation-is-coming

Algorithm charter for Aotearoa New Zealand. data.govt.nz. (n.d.). Retrieved March 4, 2023, from https://data.govt.nz/toolkit/data-ethics/government-algorithm-transparency-and-accountability/algorithm-charter/

Artificial Intelligence and robotics. UNICRI. (n.d.). Retrieved March 4, 2023, from https://unicri.it/index.php/topics/ai_robotics

Héder, M. (2020). A criticism of AI ethics guidelines. Információs Társadalom, 20(4). Retrieved March 4, 2023, from https://inftars.infonia.hu/pub/inftars.XX.2020.4.5.pdf

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. Retrieved March 4, 2023, from https://www.nature.com/articles/s42256-019-0088-2

Maley, T. (2004). Max Weber and the iron cage of technology. Bulletin of Science, Technology & Society, 24(1), 69-86.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society. Retrieved March 4, 2023, from https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf

Google AI. (n.d.). Recommendations for regulating AI. Retrieved March 4, 2023, from https://ai.google/static/documents/recommendations-for-regulating-ai.pdf

Sorkin, A. R., Mattu, R., Warner, B., Kessler, S., Merced, M. J. D. L., Hirsch, L., & Livni, E. (2023, March 3). Why lawmakers aren't rushing to police A.I. The New York Times. Retrieved March 4, 2023, from https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html

de Zwart, F. (2015). Unintended but not unanticipated consequences. Theory and Society, 44(3), 283-297.

Suresh, H., & Guttag, J. V. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. arXiv.org. Retrieved March 4, 2023, from https://arxiv.org/abs/1901.10002