
Addressing the AI governance challenge in the age of Generative AI

The rapid adoption of Generative AI calls for a reimagining of governance measures to maintain control in this dynamic environment.

Sebastian Reichmann / April 03, 2024

The rapid adoption of Generative AI is reshaping the AI landscape, driving innovation at unprecedented speed and bringing AI technology to a far broader audience. This rapid transformation calls for a reimagining of governance measures to maintain control in this dynamic environment.

The rapid rise of Generative AI and the imperative for AI governance initiatives

Generative AI methods are not only accelerating the adoption of AI but also democratizing its development, empowering a diverse array of individuals to create AI-based solutions.

This marks a sea change in AI development. Just a year ago, AI development mostly meant highly trained data scientists building purpose-built AI/ML models, solving one problem at a time. Today, with the release of groundbreaking language models such as ChatGPT (OpenAI), Llama (Meta), and Bard/Gemini (Google), millions have access to the most powerful AI models. With user-friendly natural language interfaces and a growing technology ecosystem around them, a whole new industry is emerging around the next generation of chatbots, assistants, and copilots.

OpenAI recently announced that just two months after launching its GPT app store (GPTs), users had created over 3 million custom versions of ChatGPT.[1] That is a stunning rate of 50,000 custom GPTs per day!

This rapid adoption of Generative AI and the fast-evolving landscape of AI concepts and solutions represent a significant challenge for AI governance initiatives. How can traditional regulatory approaches cope with such a “new normal” of millions of AI innovators who can access the world’s most advanced AI models from their private PCs? These people require no extensive training and are not governed by any corporate code of conduct or quality control system.

Traditional “top-down” approaches to AI regulation will struggle in such a dynamic environment and need to be combined with new “bottom-up” concepts that are woven into Generative AI solution design and development practice. Two such concepts discussed in this article are “Risk-Dialogue Governance” and “Value-Focused AI Governance”.

The state of AI and Generative AI regulation and its practical implementation

In recent years, discussions surrounding the adoption of AI technologies have become widespread, focusing on both potential opportunities and risks. However, the practical implementation of AI regulation is still in its early stages.

There is a multitude of AI risk management frameworks from standardization bodies, NGOs, and research institutions, like the NIST AI Risk Management Framework[2], ISO/IEC 23894[3], or Oxford University’s capAI framework[4], to name just a few. But the recently released European Union Artificial Intelligence Act[5] (AI Act) represents the most comprehensive and legally binding AI regulation, providing much-needed general definitions and guidelines for trustworthy AI and the expected levels of governance effort.

In a recent update to the AI Act proposal, the EU Parliament suggests specific obligations for providers of Generative AI models (Foundation Models)[6]:

· Train, design and develop the Generative AI system in such a way that there are state-of-the-art safeguards against the generation of content in breach of EU laws.

· Document and provide a publicly available, detailed summary of the use of copyrighted training data.

· Comply with stronger transparency obligations.

These requirements primarily aim to protect against the infringement of intellectual property rights, especially copyright. Similar initiatives have been taken by the US government[7] and even the People’s Republic of China[8], which seems to aim for a centralized licensing process[9] for foundation models as well as for service providers that want to build on them. While the specific focus of these initiatives may differ with the economic and political interests of the initiating body, all approaches share a limited concrete coverage of crucial aspects such as the significant energy footprint of the related infrastructure or the working conditions of the content workers involved[10].

Another important problem with these general regulation efforts is the lack of practical implementation frameworks and tools that translate such high-level guidelines into effective hands-on support for everyday AI development work. This particularly hinders smaller organizations from engaging with AI governance on the scale required in the age of Generative AI.

The state of governance practice in the AI industry

Looking at the general ethics maturity within the AI industry and the level of practical implementation of AI governance, significant differences can be expected across industries, company sizes, and general levels of professionalism. A recent study from the Stanford Institute for Human-Centered AI highlights, for example: “Many technology companies have released AI principles, but relatively few have made significant adjustments to their operations as a result.”[11] The same report identifies multiple reasons for the insufficient corporate focus on AI ethics and fairness, such as missing institutionalized support for the area, disincentivising metrics and time budgets for development tasks, and frequent organizational changes in the respective teams.

To summarize, regulators must accelerate their efforts and find more scalable governance methods to keep pace with the speed and gravity of Generative AI development and adoption. Regulation needs to set clear expectations for AI governance implementation and also provide pragmatic tools and guidelines to support broad adoption and maturity building at companies of different sizes and experience levels.

However, regulation usually follows technology innovation with a certain time lag and should not be seen as a replacement for AI developers’ and solution suppliers’ own obligation to ethically evaluate their technology usage. Regulating bodies should invest in building the respective competence and maturity and establish guiding structures in their markets.

Challenges for practical AI regulation of Generative AI

There are several specifics of Generative AI technology and Generative AI-based solutions that need to be considered when designing effective governance tools.

Democratized AI and the Competence Challenge for AI Governance

Despite the much-needed start on establishing standards for foundation models, regulators struggle to keep up with the speed of Generative AI adoption across industries. Generative AI drastically reduces the time-to-market of solutions and simplifies interaction with the technology. This enables a much wider range of professionals to engage with AI. Some models have gained over 1 million users in just a few days, with OpenAI’s ChatGPT serving around 100 million weekly users and growing. These users may easily transition from consumers to AI innovators by creating apps and services based on Generative AI: teachers who create engaging learning apps for their students, research teams, or travel agencies that provide personalized digital travel guides as part of their offering.

The growing ecosystem of Generative AI application development infrastructure is fuelling this process. OpenAI, for example, offers a growing number of tools for developers and introduced its own app store in November 2023, which produced over 3 million custom versions of ChatGPT in just two months.[12]

At the same time, Microsoft has integrated OpenAI’s GPT models into its Azure cloud development platform, making them available on industry-grade cloud infrastructure for a global developer community[13].

While this democratization holds promise for significant innovation, it also poses challenges in ensuring the quality and ethical integrity of development at scale. Generative AI exacerbates existing challenges in AI governance, with practical implementation often lagging behind conceptual discussions on fairness, transparency, and human-centred design.

These new groups of AI solution developers usually have no formal education in AI ethics or responsibility topics, and many of them are not even part of a larger organization that could ensure at least some basic risk management or compliance standards. In this context, it is also important to remember that inadequate or harmful usage of AI technology does not depend on any negative intention of the developers. Experience shows that far more problems arise from a limited understanding of the technology and the implications of its usage. Without dedicated training for these AI developers and support from practical, easy-to-use governance methods, this will all too often result in a cycle of good intentions and fatal misunderstandings.

Transformative Nature of AI Adoption

The pervasive and transformative nature of AI adoption requires a broader risk evaluation than traditional security or privacy reviews. Large-scale AI adoption has shown the potential to change entire business processes and disrupt traditional market structures across industries. But even for single usage scenarios, the adoption of AI technologies leads over time to a transformation of the underlying processes, including the roles of and interactions between process participants. Effective AI governance must address this ongoing transformation journey rather than provide a one-off evaluation of solutions. Continued stakeholder dialogues emerge as a central tool for maintaining control and providing valuable guidance in navigating the flexibility required in the face of evolving solutions.

The two layers of Generative AI applications

Generative AI solutions typically combine the general capabilities of pretrained models (categorized by the EU Parliament as “Foundation Models”) with a customization layer for a specific usage scenario. This customization is done through prompt design, combination with domain-specific data sources, or integration with other systems as “Copilots” or “Agents”. While there is significant focus on regulating such foundation models, there is still a significant gap in effectively regulating the customization layer.
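As a minimal sketch of this two-layer pattern, the following Python example wraps an unchanged foundation model behind a thin, scenario-specific customization layer built from a system prompt and a domain data source. It assumes the OpenAI Python SDK (v1.x); the model name, prompts, and the “NordicTrips” agency are purely illustrative.

```python
# Minimal sketch of the two-layer pattern: an unchanged foundation model
# (layer 1) wrapped by a scenario-specific customization layer (layer 2)
# made of a system prompt and domain data. Assumes the OpenAI Python SDK;
# model name, prompts, and the "NordicTrips" agency are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Layer 2: customization via prompt design plus a domain-specific data source.
SYSTEM_PROMPT = (
    "You are a travel assistant for the fictional agency NordicTrips. "
    "Answer only questions about the itineraries provided as context."
)
DOMAIN_CONTEXT = "Itinerary A: 5 days in Tromsø, including a northern lights tour."

def ask_copilot(question: str) -> str:
    # Layer 1: the general-purpose foundation model does the heavy lifting;
    # all scenario-specific behaviour comes from the prompt and context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{DOMAIN_CONTEXT}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_copilot("Which itinerary includes a northern lights tour?"))
```

Notably, everything that determines how such a solution behaves in its usage scenario lives in this thin customization layer, which is exactly the layer current regulation barely addresses.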

One foundation model can be used for thousands of usage scenarios, and even if the model was developed with the best intentions, there are millions of ways to misuse its capabilities, intentionally or unintentionally.

Regulating what all these people do with such models is much harder than regulating the models themselves, as it would require looking at every usage idea and its potential impacts separately. And even though suppliers like Microsoft formally ensure data privacy and security[14] and give advice on responsible AI usage[15], they do not take responsibility for the end usage either.

New “bottom-up” Generative AI governance practices to support scalability and acceptance

It seems obvious that traditional “top-down” approaches will soon reach their limits given the scale of current Generative AI adoption. To ensure easy and broad adoption, AI governance methods need to be integrated into general AI development practices.

Risk-Dialogue Governance

Governance regulation should prioritize enforcing dialogues, along with related tracking and reporting requirements, over providing categorical rules. Connecting control and reporting mechanisms to dialogues, rather than to predefined categories, ensures that the needed dialogues are actually performed. Control-based risk frameworks often point only at single groups, like product owners and data scientists, to take responsibility. This is important but far from sufficient: none of the traditional participants in AI development projects are aware of all the direct and indirect implications a possible new solution has for all stakeholder groups.

The main purpose of such a “Risk-Dialogue Governance” is to ensure that the needed dialogues are performed to the needed extent and that all participants acquire sufficient understanding of the planned usage in order to make informed risk decisions.
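What “connecting control and reporting mechanisms to dialogues” could look like in practice is sketched below as a hypothetical Python record structure; all names, fields, and the example content are illustrative assumptions, not part of any standard or framework.

```python
# Hypothetical sketch: each required stakeholder dialogue is tracked as a
# record with a goal, participants, and documented decisions, so that
# reporting attaches to dialogues rather than to predefined risk categories.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskDialogueRecord:
    solution_name: str             # the AI solution under review
    dialogue_goal: str             # what the dialogue must clarify
    stakeholder_groups: list[str]  # who must be heard, beyond the builders
    held_on: Optional[date] = None # None until the dialogue has taken place
    decisions: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A dialogue only counts as performed if it took place and produced
        # documented decisions that can be reported externally.
        return self.held_on is not None and len(self.decisions) > 0

record = RiskDialogueRecord(
    solution_name="Workforce productivity copilot",
    dialogue_goal="Assess surveillance and performance-tracking risks",
    stakeholder_groups=["employees", "works council", "HR", "product team"],
)
print(record.is_complete())  # False: the required dialogue is still outstanding
```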

A good example of this are AI solutions directed at workforce productivity and efficiency gains. From a distance, such solutions free employees from many tedious and repetitive tasks, but they carry an inherent risk of employee surveillance and unethical performance tracking. To navigate the multitude of different impact scenarios, it is absolutely necessary to actively involve all stakeholder groups.

Another example would be solutions that identify sensitive data in public communication to prevent intended or unintended misuse. This looks like a genuinely positive capability, but when looking closer at the needs of the stakeholder groups, other possible negative effects surface. One concern is that a capability for identifying sensitive content could be exploited by bad actors. Another is that such capabilities could lower the sense of responsibility among the public servants involved and thus ultimately have negative effects on the quality of their work.

Any global, categorical risk treatment for such use cases falls short of handling the complexity of these direct and indirect effects. A “Risk-Dialogue Governance” approach should aim to provide guidance on the types of dialogues required, defining clear goals for the process and for the documentation of results. This documentation is crucial for the external communication of AI developments, promoting transparency and trust.

Value-Focused AI Governance

A control-based view of AI governance can easily turn governance tasks into a bottleneck for AI innovation. This approach tends to result in minimum-consensus or minimum-scope solutions, where the functional scope of a possible solution is constantly reduced to bring risk exposure down to an acceptable minimum. This might reduce risks, but it also strongly limits the value that new solutions could provide to users and the public.

The focus of implementing “bottom-up” AI governance needs to be on value creation rather than control and checkboxes. Embedded in design and development best practices, AI governance drives valuable and sustainable solutions. Responsibility principles like human-centred design, fairness, robustness, and security are basic quality and acceptance criteria for any type of solution.

This should also contribute to a mindset shift in how specific risk elements are handled, following the pattern:
1. identify and describe the specific risk element, and
2. seek the most beneficial (value-creating) proactive way of handling it in the respective solution.

Embedding stakeholder value as a central driver for AI governance helps identify additional value points of a planned solution and ensures better solution acceptance. Value-focused AI governance encourages positive engagement with AI, promotes structural maturity building, and enhances awareness of the associated risks. In this way, it supports responsible and sustainable AI development.

Examples of a value-focused AI governance mindset:

· Use Case: Personalized Healthcare Recommendations
Governance focus: Privacy

Conventional Governance: Focuses on minimizing the amount of personal data used for analysis to protect patient privacy, leading to potentially less accurate and less personalized healthcare recommendations.

Value-Based Governance: Actively tests the inclusion of more personal data and the possible quality gains for the analysis, as the basis for a per-patient evaluation of the best analysis model, embracing an active dialogue between the values of patient privacy and potential health gains.

· Use Case: Financial Fraud Detection
Governance focus: Result quality, explainability, and acceptance

Conventional Governance: Emphasizes minimizing the risk of false positives by increasing the model thresholds, potentially limiting the system’s ability to detect sophisticated or new fraud patterns.

Value-Based Governance: Acknowledges the value of both false positives and false negatives and seeks to optimize the solution for user validation rather than full automation. A controlled amount of false results ensures continued active user involvement (“active friction”), preventing user fatigue and contributing to the detection of new or more sophisticated fraud patterns (see the threshold sketch after this list).

· Use Case: AI-Powered Educational Tools
Governance focus: Fairness, diversity and inclusivity

Conventional Governance: Aims to minimize the risk of biased content, potentially restricting the scope of educational materials.

Value-Based Governance: Identifies biases in the current material and actively amends and enriches it with new content that enhances inclusivity, diversity, and personalized learning experiences, maximizing the educational value for diverse users and leading to a richer total offering for students and teachers.

· Use Case: AI in Hiring Processes
Governance focus: Fairness, diversity, and inclusivity

Conventional Governance: Focuses on minimizing the risk of biased hiring decisions, potentially leading to overly cautious algorithms and hindering diversity in the workplace.

Value-Based Governance: Analyses biases in historic hiring practice, actively promotes the adjustment of AI-driven hiring tasks, and also uses the gained insights for awareness training of the human workforce involved.
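To make the fraud-detection trade-off above concrete, here is a short, hypothetical Python sketch (all scores and counts are simulated assumptions, not data from this article) showing how raising the decision threshold trades false positives for false negatives:

```python
# Hypothetical numeric sketch of the fraud-detection trade-off: raising the
# decision threshold suppresses false positives but lets more fraud slip
# through as false negatives. All scores are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
# Simulated fraud scores: legitimate transactions cluster low, fraud high.
legit_scores = rng.beta(2, 5, size=10_000)  # mostly below 0.5
fraud_scores = rng.beta(5, 2, size=100)     # mostly above 0.5

for threshold in (0.5, 0.7, 0.9):
    false_positives = int((legit_scores >= threshold).sum())
    false_negatives = int((fraud_scores < threshold).sum())
    print(f"threshold={threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")

# A purely control-based rule ("minimize false positives") pushes the
# threshold ever higher; a value-focused design instead picks a threshold
# that keeps a manageable alert stream flowing to human validators.
```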

Value-focused AI governance aims to drive valuable AI innovation, ensuring that systems align with societal values and contribute positively to the communities they serve. By embracing a value-focused mindset, organizations will be able to harness the full potential of their AI initiatives and will, over time, build a better understanding of AI governance as a strategic value driver rather than an “external requirement” or just a cost factor.

AI Ethics Competence development

Looking ahead, a crucial requirement for regulation and research is to establish standards for ethical AI development and to drive related competence development across the whole ecosystem. This calls for dedicated educational concepts that combine AI ethics, basic technology training, and dialogue practices tailored to the typical stakeholder roles. To foster a more value-based AI governance, educational programs should incorporate elements of solution design and product management rather than focusing solely on control and risk. In this way, governance processes can become a central value driver for AI development, combining compliance, quality, and value of the final solution, which in turn form the foundation for user and market acceptance.

In Conclusion

In the era of Generative AI, it is crucial to focus on value- and dialogue-based governance structures as well as cross-sector, role-based competence development. These elements play a vital role in effectively integrating governance into AI development, fostering an “AI governance by design” mindset and practice. This is key to supporting responsible AI development at the scale that democratized AI requires in the age of Generative AI.

[1] OpenAI, “Introducing the GPT Store” (openai.com)

[2] NIST, “AI Risk Management Framework: Second Draft”, https://perma.cc/6EJ9-UZ9A

[3] ISO/IEC 23894, “Information Technology — Artificial Intelligence — Guidance on Risk Management”, https://www.iso.org/standard/77304.html

[4] “capAI — A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act”, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4064091

[5] Draft AI Act of the European Commission, April 2021, https://perma.cc/4YXM-38U9

[6] European Parliament, amendments adopted on 14 June 2023 on the AI Act, “Texts adopted — Artificial Intelligence Act — Wednesday, 14 June 2023” (europa.eu)

[7] The White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[8] King & Wood Mallesons, “China’s first regulation on the management of generative AI”, 20 July 2023, https://www.kwm.com/cn/en/insights/latest-thinking/china-first-regulation-on-management-of-generative-ai.html

[9] Forbes online, “How Does China’s Approach To AI Regulation Differ From The US And EU?”, 18 July 2023, https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/

[10] Rebecca Tan and Regine Cabato, “Behind the AI boom, an army of overseas workers in ‘digital sweatshops’”, The Washington Post, August 28, 2023

[11] Stanford HAI, “Walking the Walk of AI Ethics in Technology Companies”, December 2023, https://hai.stanford.edu/sites/default/files/2023-12/Policy-Brief-AI-Ethics_0.pdf

[12] OpenAI, “Introducing the GPT Store” (openai.com)

[13] “What is Azure OpenAI Service?”, Azure AI services, Microsoft Learn

[14] “Data, privacy, and security for Azure OpenAI Service”, Azure AI services, Microsoft Learn

[15] “How do we best govern AI?”, Microsoft On the Issues


Author

Sebastian Reichmann

Head of AI & Insights, Tietoevry Industry, Public 360°
