Design for Responsible AI development in the public sector

What role should design play in shaping AI? Designers must rise to the challenge of guiding the transformation to create public services that prioritise the common good.

Anette Hiltunen / May 26, 2025

The conversation around AI's impact on design tends to overshadow an equally relevant question: what role should design play in shaping AI? Designers must rise to the challenge of guiding the transformation to create public services that prioritise the common good.

Achieving this vision takes more than design innovation. To ensure our institutions remain meaningful, we need informed public sector professionals who can navigate this evolving landscape responsibly. That starts with us, the teams designing and delivering AI systems. We need to recognise the ethical risks and opportunities they present. Just as importantly, we must empower people in public institutions to critically engage with these choices and work together toward equitable societal outcomes. This transition brings new responsibilities and learning experiences for designers.

In this article, I delve into how designers can facilitate ethical decision-making in collaborative development, driving responsible innovation and better data practices. I cover these four topics:

  • Responsible AI Matters. Designers Have a Say

  • Ethics by Design: A Practical Framework

  • Internal Engagement: Participatory Workshops

  • External Engagement: Co-Create Understanding

While I stress the importance of AI governance in shaping ethics, responsible practices remain valuable even without formal structures. Most designers work without established models. My hope is to inspire them to embed critical thinking into their work, regardless of governance frameworks.

Responsible AI Needs Designers!

Before moving on, let me address a common concern many designers may have: How can I contribute to ethical AI design without a formal background in ethics, data ethics, or moral philosophy? While knowledge of these theories can be helpful, ethical design practices are often rooted in Applied Ethics, a practical branch of ethics that guides daily decision-making in AI development.

Applied Ethics and Responsible AI are related but not the same. Responsible AI is grounded in ethical principles, but it is important to understand how the two differ. According to Olivia Gambelin's Responsible AI: Implement an Ethical Approach in Your Organization, Applied Ethics is the use of moral principles and values to guide decision-making in real-world contexts. It focuses on questions of right and wrong, fairness, and justice.

For example, imagine an AI system used in welfare benefit distribution. Applied Ethics might define fairness as a core principle, asking questions like, “How do we ensure that this system does not discriminate against citizens based on gender or ethnicity?” Fairness, once defined, serves as a guiding principle for the design and development of the system.

Responsible AI, on the other hand, focuses on applying these defined principles. It operationalises ethics by implementing them throughout the development lifecycle, turning values into actionable strategies and measurable practices. This ensures that the technologies align with human values in real-life scenarios.

So whereas Applied Ethics defines fairness as a core principle, Responsible AI ensures that fairness is implemented in practical, measurable ways, answering questions like “How do we ensure that the data used to allocate benefits represents all communities, including marginalised groups?”
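To make “practical, measurable ways” concrete, here is a minimal sketch in Python of one such check: comparing group representation in a benefits dataset against reference population shares. The column names, groups, census figures, and 5-point threshold are all hypothetical illustrations, not part of any real system.

```python
# A minimal sketch (invented data): compare group shares in a training
# dataset against reference population shares to spot underrepresentation.
import pandas as pd

def representation_gaps(df, group_col, population_shares):
    """Return each group's share in the data next to its population share."""
    data_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        data_share = float(data_shares.get(group, 0.0))
        rows.append({"group": group,
                     "population_share": pop_share,
                     "data_share": data_share,
                     "gap": data_share - pop_share})
    return pd.DataFrame(rows)

# Hypothetical benefits data: community "C" is underrepresented relative
# to the (invented) census shares, so it gets flagged.
benefits = pd.DataFrame({"community": ["A"] * 82 + ["B"] * 16 + ["C"] * 2})
census = {"A": 0.70, "B": 0.20, "C": 0.10}
report = representation_gaps(benefits, "community", census)
print(report[report["gap"] < -0.05])  # underrepresented by > 5 points
```

A check like this does not settle the fairness question on its own, but it turns an abstract principle into a number the team can discuss and act on.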

In essence, ethics tells us the "why" (which values to prioritise), while Responsible AI tells us the "how" (integrating those values into processes, governance, and accountability).

Responsible AI and Applied Ethics help translate ethical principles into practices. Illustration: Tietoevry.

Why Responsible AI Matters

Responsible AI is linked to inclusion, a term many designers are familiar with. Diverse perspectives and data help create systems that serve everyone, including marginalised groups. Models trained on diverse datasets learn to perform effectively across a broader spectrum of inputs, improving accuracy and fairness. Such enhanced performance helps attract a wider user base, expand customer reach, and unlock new business opportunities.
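One way such performance gaps can be made visible is disaggregated evaluation: measuring accuracy per group rather than only in aggregate. The sketch below uses invented predictions and group labels purely for illustration.

```python
# Illustrative sketch: disaggregated evaluation, i.e. measuring model
# accuracy per group rather than only overall. Data is invented.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Print overall accuracy and the accuracy for each group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    print(f"overall: {np.mean(y_true == y_pred):.2f}")
    for g in np.unique(groups):
        mask = groups == g
        acc = np.mean(y_true[mask] == y_pred[mask])
        print(f"{g}: {acc:.2f} (n={mask.sum()})")

# A large gap between groups signals the model works well for some
# users and poorly for others, even if the overall number looks good.
accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    groups=["majority"] * 5 + ["minority"] * 3,
)
```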

Another benefit of responsible practices is compliance. Complying with regulations like the EU AI Act ensures that AI systems meet regulatory standards, minimises legal risks, and fosters trust with users and regulators. This not only safeguards the organisation but also strengthens its reputation in the market.

However, technology often outpaces regulation. While the EU AI Act is currently a key framework for quality and risk, it does not offer detailed guidance on how to operationalise its requirements in practice. Organisations must proactively go beyond compliance and embed responsible practices to ensure fairness, transparency, and accountability.

When the law falls short, best practices become our compass, guided by the values at the heart of responsible innovation. Responsible AI development is not just about compliance. It is about building trust, prioritising human impact, and striving to create technology that uplifts society.

Ethics by Design: A Practical Framework

The real value of AI lies in its integration into products and services that reflect human values and meet real-world needs. Integrating ethics is not a one-time task but a continuous process that grows and adapts with the system. 

By integrating AI governance into Design-Driven Development, teams are empowered to align innovation with societal and organisational goals. This fosters collaboration, inclusivity, accountability, and iteration, and ensures that ethical design principles are embedded at every stage.

Integrating ethical checkpoints throughout the iterative process enables teams to adopt an Ethics-by-Design approach. This proactive approach ensures AI is developed with foresight, addressing potential risks early rather than merely reacting to them.

In a data-driven world, power increasingly shifts to those of us who design and deploy AI systems. We make critical decisions about what data is collected, how it is interpreted, and how outputs are used, shaping societal narratives and priorities. This shift in power amplifies the moral responsibility of everyone involved in the AI lifecycle and calls for a collective effort to ensure equitable outcomes. By integrating ethical considerations from the start, teams keep responsibility and accountability at the forefront throughout the process.

AI governance steps for the design and development process. Integration of Design-Driven Development by Tietoevry Public360° (2024): Geir Are Bjørnsrud, Morten Jensen, Trude Stanger, Andre Fangueiro, and Jenny Felldin. Responsible AI (AI Governance) by Tietoevry AI & Insights Team (2022): Dhivya Gopalakrishnan and Sebastian Reichmann. Illustration: Tietoevry.

Ethics is not limited to governance. It can be integrated into everyday design choices, encouraging everyone to prioritise human values and societal impact, regardless of scale or structure. The journey to Responsible AI is less about tools than about adopting new mindsets. Once those mindsets are in place, tools become a vehicle to put them into action. Some of the following activities relate to this governance model, but they can be applied independently to guide development in any context.

Invisible by Design: Who’s Missing? 

In Closing the Loop: Systems Thinking for Designers, Sheryl Cababa synthesises principles from systems thinking and equity-centered design. She critiques traditional user-centered design for its narrow focus on individual users, which can overlook systemic inequities and broader societal impacts.

She champions equity-centered design, which encourages designers to acknowledge their positionality and challenge power structures and biases to strive for more equitable outcomes.


Acknowledging your own positionality and power can help to address potential biases. 

An equity-centered approach is vital for addressing unconscious biases, which teams can unintentionally transfer into systems at various stages of development. These biases can emerge particularly in data collection, where teams may select datasets reflecting their own experiences, overlook underrepresented groups, or mislabel data based on personal assumptions.

For instance, an AI system used in hiring may be trained on historical data. If no one questions whether that data reflects existing biases in society, the system may reproduce those patterns, unfairly favouring familiar profiles and excluding qualified candidates. This is why it is crucial to adopt an equity-centered approach to identify such risks early on.
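As a hypothetical illustration, a team could screen such historical data with a simple selection-rate comparison, using the "four-fifths rule" from US employment practice as a rough heuristic threshold. The data and column names below are invented.

```python
# Illustrative sketch: a disparate-impact check on (invented) historical
# hiring data, using the four-fifths rule as a heuristic threshold.
import pandas as pd

hires = pd.DataFrame({
    "gender":   ["M"] * 100 + ["F"] * 100,
    "selected": [1] * 40 + [0] * 60 + [1] * 20 + [0] * 80,
})

rates = hires.groupby("gender")["selected"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: large historical selection-rate gap; a model "
          "trained on this data may reproduce the bias.")
```

A failed check does not prove discrimination, but it is exactly the kind of early signal that should trigger the questions an equity-centered review would ask.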

Internal Engagement: Participatory Workshops

A practical approach is to host internal ethical risk reviews and bias awareness workshops to raise awareness of biases in data and algorithms. Such workshops, featuring hands-on activities and discussions, can help your teams shape a shared understanding of ethical challenges and establish bias-mitigation practices.

Reflecting on Product Impact

Have participants reflect on the following questions to examine the product's value and inclusivity. Pairing best-case outcomes (defining success) with worst-case outcomes (defining risks) prompts critical thinking about the full spectrum of the product's impact.

  • What Does Success Look Like?

Imagine you have achieved great success in your endeavours. What does that success look like? How will you measure it? List your key performance indicators and associated metrics.

  • Positive vs. Negative Impacts:

What specific positive outcomes can this product deliver to society, and how can we amplify its impact? 

What specific negative outcomes could this product have for society? How likely are they to occur? What can be done to minimise them?

These first three questions are from the Sociotechnical Framework developed by Lisa Talia Moretti. Struggling to assess potential impacts? Explore them further using her Social Impact Canvas to encourage future-scenario thinking and planning. 

  • Impact on People:

Who are you designing your product for? Who are the people impacted by this product, and how can we ensure their needs and perspectives are effectively represented in the design? 

For whom might this product fail, and what unintended negative impacts could it have on vulnerable or marginalised groups? What steps can we take to mitigate these risks? 

  • Inclusivity and Accessibility: 

How can we design a product that is inclusive and accessible to all groups, and how will we validate its success through testing or feedback? 

Are there any groups or perspectives that may be unintentionally excluded or disadvantaged, and what measures can we take to address these risks? How might such exclusions lead to harm, and how can we prevent them? 

Bias Reflection Activity

Your team might not represent the people you are designing for. Encourage your team to reflect on their own experiences and perspectives, and discuss how these might shape their decisions. 

  • Are we diverse, and where does power lie in society?
  • What are our implicit biases, and how might they shape our decisions in AI development?
  • How can we avoid bias and ensure our AI systems are inclusive and fair?


This tool can inspire an activity for reflecting on power dynamics in your own context.

By fostering a shared understanding of these issues, such sessions encourage teams to adopt practices that minimise bias and promote fairness and accountability.

External Engagement: Co-Create Understanding

In external engagement with public institutions, building trust in AI is essential. However, over-reliance on it creates significant risks. Failing to scrutinise AI in the public sector can allow biases or errors to go unnoticed, perpetuating systemic biases. The goal is to balance automation with human oversight, ensuring that responsibility stays where it belongs: with humans.

Co-creating understanding around AI and ethics is challenging but rewarding. The Nordic public sector is among the most digitised in the world. However, for many public servants this may be their first encounter with AI, requiring a step-by-step approach to build shared terminology and knowledge. Effective engagement starts with understanding people's capabilities, limitations, and knowledge level, and with creating a foundation of trust and curiosity.

If it is not hands-on, it can quickly feel unrelatable. Using participatory design methods, such as visual and tangible tools in collaborative activities, can make abstract concepts more accessible.

The less experience participants have with the product and the topics, the more granular and context-specific the material should be. For example, tackling realistic challenges through role-specific, scenario-based activities helps the topic resonate on a personal level. When faced with new and complex topics, people's capacity for imaginative thinking and for considering perspectives on behalf of others diminishes. This makes it particularly challenging to envision broader consequences, such as societal impact, when individuals may struggle to articulate even their own perspectives.

Also, for a first-time facilitator, starting with a lower level of abstraction can make learning easier. As knowledge builds, facilitators can introduce more abstract ideas and broader societal impacts.

Context Sensitivity of Ethics

Ethics is also inherently context-sensitive, shaped by the unique needs, values, and challenges of each situation. As such, there is no universal, one-size-fits-all approach to addressing ethical considerations. Tailoring content to participants' specific contexts, such as their industry, knowledge levels, and the societal impact of the system, helps make complex topics relatable. This approach also ensures discussions remain effective and paves the way for contextually relevant solutions.

Customised workshops allow for meaningful exploration. They foster deeper engagement and actionable insights. By tailoring engagement, these workshops empower people to critically evaluate the risks and benefits of AI, equipping them to become more responsible and informed users of the technology.


Conclusion: Shaping AI Responsibly Through Design

Designers have a unique role in guiding AI development to prioritise fairness, inclusivity, and societal impact. By integrating ethical principles into every stage of the design process, fostering collaboration, and tailoring engagements, we can bridge the gap between innovation and responsibility. This journey is not just about new tools; it's about adopting new mindsets. Through human-centered and equity-focused practices, we can create systems that empower individuals, foster trust, and reflect our societal values. Together, we can ensure AI becomes a force for the common good and upholds the public interest.


Thank you:

Digitalising governments takes more than technical and business know-how. It's about thinking ahead, understanding people and society, and designing with impact in mind.

Special thanks to the teams in Tietoevry Industry, Public 360°: Raphaela Bieber Bardt, Isabelle Wikman, Ashley Muller, Valeria Ferreira, Sebastian Reichmann and the AI & Insights team for the ethics review workshops. Thank you, Kitty Toft, Milos Mladenovski, Torgeir Haugholt, Olga Safonova, and Aleksandra Bratek for providing valuable input on this article. And thanks to Geir Are Bjørnsrud, Morten Jensen, Trude Stanger, Andre Fangueiro, and Jenny Felldin for their leadership and support.

Anette Hiltunen
Lead Product Designer
