Prof. Saeema Ahmed-Kristensen, Director of DIGIT Lab – Interview Series

Professor Saeema Ahmed-Kristensen is a leading design engineering scholar and Associate Pro-Vice-Chancellor (Research & Impact) at the University of Exeter, where she also serves as Director of the DIGIT Lab, a major interdisciplinary research initiative focused on digital innovation and transformation. Her research spans design creativity and cognition, data-driven and digital design, and the integration of advanced technologies into complex engineering and product development, with a strong emphasis on translating academic insight into real-world impact through industry collaboration, policy engagement, and large-scale research programmes.
Your career has spanned Cambridge, DTU, Imperial College London, the Royal College of Art, and now the University of Exeter. Looking back, what experiences or turning points most shaped your thinking about design, creativity, and the role of digital technologies?
My work in design has spanned many different cultures and disciplines. I began at Brunel on one of the few courses at the time that combined technology, human-centred design and an understanding of form. It taught me early on that creativity and innovation are closely linked.
Studying at Cambridge then opened my thinking further. The college environment exposed me to many disciplines and showed me how innovation depends on knowledge coming together across fields. My PhD focused on the aerospace sector and examined how engineering designers find and use information. I studied how people access knowledge, how expertise can be supported or replicated, and the intersection between cognition, computer science and engineering design. This human-centred lens has stayed with me ever since.
As digital technologies have grown, so have the questions in my work. The rise of IoT data, AI and advanced computation has shifted design away from being only human-centred and towards being society-centred. This continues to shape my work at the University of Exeter, where I lead DIGIT Lab and focus on the role of LLMs in the creative process, the barriers industries face in adopting them, and how data can drive innovation.
My time at Imperial and the Royal College of Art reinforced that design is far more than shaping products or services. With the right people, processes and culture, design becomes a driver of new and scalable technologies, materials and ideas that can address today's and tomorrow's global challenges.
DIGIT Lab focuses heavily on digital transformation inside large established organisations. From your vantage point, what do you believe leaders misunderstand most about how AI will change design, innovation, and decision-making?
For decades, AI has advanced in research and been adopted in certain industries, but progress has often been limited by skill gaps, leadership understanding, and clarity on the value and the infrastructure required. With the rise of LLMs and generative tools such as DALL·E, AI is now more accessible and needs far less specialist expertise or setup. But this also raises new questions about privacy, data security and how well general-purpose models apply to specific domains.
In design and innovation, these issues are especially clear. Our research, which examined more than 12,000 ideas generated by humans and by AI, showed that AI ideas tend to cluster around similar concepts. This highlights the need to build human expertise into generic tools, adapt AI for the domain, or understand when and how to use AI alongside human creativity and decision-making.
Much of your research explores creativity and cognition in design. With generative AI now capable of producing ideas, concepts, and iterations at scale, what aspects of creativity do you see as uniquely human — and which parts can responsibly shift toward AI-driven processes?
Creativity has always been more than generating alternatives for me. It’s about intent, cultural meaning and the emotional connection a design creates. Our recent DIGIT Lab survey brought this into sharp focus: 82% of people told us that human-led or hybrid work feels more meaningful, and 71% said they feel less emotionally connected to AI-only design. Many described AI-generated work as “lacking emotion” (48%) or “overly perfect” (40%), and 36% felt its impact faded quickly. These responses reinforced something I’ve believed for a long time. Emotional engagement isn’t a nice-to-have; it’s essential to how people experience and value creative work.
Our research comparing human and AI ideas also shows that human designers are better at creating diverse, novel ideas and ensuring the creative output, whether artwork, product design or services, has depth and meaning. Creative experts hold a skill set that is not yet possible to replicate. Designers need to understand the problem before generating ideas, and LLMs are very useful in gathering information to help designers shift from one problem to another. If we can build models of human expertise into AI tools, they can also support evaluation of ideas, allowing AI to take better advantage of human creative skills.
The chain‑of‑thought approach we are experimenting with helps LLMs follow expert reasoning rather than simply assign scores. In all cases, human oversight is required to interpret results and ensure that design choices align with users' lived experiences.
It’s clear that we must either create models capable of capturing how people experience products, services, and interactions in ways that computers can interpret, or integrate thick data (rich qualitative insights that provide context) with the thin or big sensor data we collect. Developing these models is not straightforward, and this is exactly where human involvement remains essential.
So for me, the takeaway isn’t that AI has no place in creativity. Far from it. It’s that AI and humans contribute different strengths. The fact that people consistently respond more positively to human or hybrid work simply tells us where the centre of gravity lies. AI can help explore a broader design space, analyse patterns and offer structured critique, but those perceptions of flatness, algorithmic perfection and emotional distance show where AI still needs human judgment to turn possibilities into something that resonates.
That’s why I see the future of creativity as fundamentally collaborative. AI can widen the field of possibilities. Designers bring the empathy, cultural understanding and sense of intent that give those possibilities meaning. When the two work together, with human judgment setting the direction and AI enriching the exploration, the result is a creative process that is more rigorous, more imaginative and ultimately more human in its outcomes.
You’ve pioneered approaches for quantifying user experiences and structuring design knowledge. As AI systems become more responsible for generating products and services, how do we ensure that human experiences, emotions, and cultural signals remain central to the design process?
To centre human experience, we need to embed knowledge of perception and emotion into our methods.
There are two main approaches. The first recognises the need for qualitative data that enables a rich understanding of human experience, perception, and emotion, informing effective human–AI collaboration. The second—on which my work has focused—aims to translate this knowledge into models that AI systems can understand and use.
These models are complex to develop, as they must integrate user experience, human perception, and the characteristics of the products or systems being designed, in order to predict human responses and overall experience.
You work extensively with complex industries – aerospace, medical, manufacturing, and consumer products. In these high-stakes environments, how do you balance the potential of AI-supported design with the need for safety, traceability, and trust?
In high-risk sectors such as healthcare, aerospace and manufacturing, the question is not whether AI can be used, but how it is governed. Trust in these environments depends on clear accountability, traceability and explainability at every stage of the design and decision-making process. AI can play a powerful supporting role in simulation, optimisation and early-stage exploration, but it cannot become the final authority.
Many of these fields are tightly regulated and subject to stringent safety requirements, which demand secure handling of all data, personal or commercially sensitive. In these contexts, prompts or queries often need to be developed using local data to ensure specificity and relevance, and it is common for organisations in these sectors to build and maintain their own AI tools.
What our wider research consistently shows is that hybrid systems are essential: AI should augment expert judgement, not replace it. Human oversight must remain built into every critical decision point, particularly where safety, risk and liability are concerned. For regulators and end users to trust AI-enabled systems, organisations also need transparent documentation of how models are trained, what data they use and how outputs are generated. Without that transparency, trust cannot scale, no matter how advanced the technology becomes.
Many organisations struggle with the gap between “experimenting with AI” and meaningfully integrating it into product development. What practical steps would you recommend for teams trying to move from experimentation to strategic implementation?
Many organisations stall at the experimentation stage because they adopt AI without a clear strategic purpose. The first practical step is to be explicit about what role AI is meant to play in the development process, whether that is supporting ideation, accelerating testing, improving evaluation, or enhancing decision-making. Without that clarity, pilots remain disconnected from real business and design outcomes.
Teams also need the right foundations in place. That means investing in high-quality, well-governed data, particularly data that reflects real user experience rather than purely technical performance. It also means being realistic about the current limits of AI, especially in creative and human-centred judgement, where expert oversight remains essential.
Many sectors are beginning to develop AI policies that guide teams through the process of experimenting with AI, from building business cases and running pilots to broader adoption. These policies help organisations identify where AI can genuinely add value, while also ensuring that humans remain in the loop wherever necessary.
Finally, organisations should move through structured, low-risk pilots that are embedded in real workflows, not run in isolation. These pilots should be interdisciplinary, bringing designers, engineers, data scientists and domain experts together so that learning is shared and transferable. AI delivers value when it is designed into everyday practice, not treated as a separate experimental layer.
You have a long track record of developing methods for structuring and automating knowledge. How close are we to AI systems that can reason about design intent, user needs, and context in a way that genuinely adds value rather than simply generating content?
In some areas, predicting user preferences is relatively straightforward, as data such as browsing history or records of which films or television shows have been watched can be used to make recommendations. These areas benefit from readily available data.
By contrast, a key challenge in the design of products and services is that data about people’s choices, needs, and lived experiences is often not easily available.
My recent research with DIGIT Lab investigated how well an LLM can reason about design when given a model of how people perceive and respond to design features. However, current models operate on patterns in data and cannot contextualise meaning. Earlier studies linking shape to perceptions show that even small changes in form can shift emotional responses, and such subtleties are hard for AI to anticipate without human guidance or more sophisticated models. Therefore, AI reasoning about intent is improving, but it remains a complement to human expertise.
As AI accelerates design cycles — from ideation to prototyping — what new skills will designers need? How should universities and organisations rethink training for the next generation of creative talent?
Designers will need to be fluent in both human perception and AI-enabled tools. Understanding how form, material and proportion shape emotional response will remain fundamental to good design. At the same time, designers must be able to work confidently with AI systems that support idea generation and evaluation. That means not just using the tools, but understanding what they are optimising for and where their limitations lie. As AI becomes more embedded in design workflows, the ability to critically interpret its outputs and combine them with human judgement will become one of the most valuable creative skills.
As AI accelerates design cycles from ideation to prototyping, designers will need a new blend of capabilities and ways of thinking that go beyond traditional craft skills. They will need to understand how digital technologies work, what different types of data can (and cannot) reveal, and how to combine design expertise with AI literacy. This includes knowing how to work with high-quality, well-governed data that reflects real user experiences, rather than relying solely on technical performance metrics. Alongside this, designers will also need the judgement to recognise where AI is helpful and where human creativity and critical thinking must remain central.
To meet these needs, universities and organisations will have to rethink how they train the next generation of creative talent. Some universities are already integrating data science into design programmes; an important step, but not enough on its own. What's still missing are design-thinking methods that are equipped for the realities of the digital age: methods that help designers collaborate with AI, work across disciplines, and navigate rapid experimentation while maintaining ethical and human-centred oversight.
Addressing this gap is essential. It’s why my colleague Dr. Ji Han and I are writing a book with Cambridge University Press on Design Thinking in the Digital Age, which brings together the frameworks, skills, and ways of thinking needed to design effectively alongside AI.
DIGIT Lab emphasises responsible transformation. In your view, what ethical or societal risks need more attention as AI becomes embedded in design workflows across industries?
One example is ensuring the ethical use of data, including obtaining informed consent and maintaining transparency about the datasets used to develop AI products, as well as any potential biases they may contain. For instance, datasets embedded in healthcare systems must be carefully examined to ensure they adequately represent the full population, identify any groups that may be underrepresented, and confirm that the AI system is fit for purpose and inclusive. From a societal perspective, there is often concern that AI will replace jobs; however, it is important to understand where human expertise remains essential and how AI can be used to augment, rather than replace, human capabilities.
However, there are deeper ethical issues too. When designers rely on human data, they must handle privacy, bias and transparency responsibly. A DIGIT Lab workshop with the manufacturing sector identified "data", "human" and "governance" as the main challenge categories, highlighting the need for better data capture, human‑in‑the‑loop oversight and clear policies on security, trust, intellectual property and regulation. Addressing these risks means ensuring AI systems are built on diverse data, embedding human judgement at critical points and developing inclusive design standards that respect privacy, consent and cultural context.
You’ve researched how data and AI can customise products around user experience. Do you see a future where products evolve dynamically based on real-time data after they leave the factory? If so, how should designers prepare for that world?
Through data‑driven design, products can be personalised, customised, or adapted to individual behaviours. They then become "smart" systems that collect data about how they are used and communicate through embedded sensors and IoT connectivity. In our framework, customising activities involve using that data to update and adapt products after they leave the factory. Examples include linking gesture‑recognition models to a digital twin for human–robot collaboration and using machine‑learning–assisted scanning to create customised components.
This shift creates new responsibilities. Designers need to decide which human data, behavioural, physiological, feedback or emotional, is relevant. They must also ensure that updates preserve the intended aesthetic and emotional qualities we know are linked to form and perception. Finally, governance matters: our industry workshop highlighted that issues around data, trust and privacy require clear policies and human oversight. When done well, evolving products can offer lasting value and responsiveness without sacrificing meaning or ethics.
Looking ahead, what are the big research questions that motivate you right now? And what breakthroughs do you believe the field will see in the next few years at the intersection of AI, creativity, and design engineering?
Many of the challenges described above remain unresolved – several of which I am currently working on, including work to ensure that general-purpose generative AI tools can be effectively tailored to the specific sectors that wish to adopt them.
At a sector level, this can look quite different: in manufacturing, it may involve the use of localised models trained on domain-specific knowledge, alongside strong privacy and security measures; in creative industries, the focus may be on diversifying outputs and enabling more meaningful collaboration between humans and AI.
At the technical level, we are experimenting with large language models to support evaluation tasks. One study shows that LLMs can assess novelty and usefulness and align more closely with human experts when guided by well‑designed prompts. A related paper uses chain‑of‑thought prompting and multi‑model aggregation to make AI evaluation more reliable. We are also exploring conversational agents to capture organisations’ digital‑transformation requirements, demonstrating that chatbots can conduct structured interviews effectively. Combined with work on using human data in design, these initiatives point to a future in which AI helps us preserve expertise, make better decisions and engage users ethically.
Thank you for the thoughtful and insightful interview; readers who wish to learn more about Professor Ahmed-Kristensen’s work on AI-driven design, creativity, and responsible digital transformation can explore ongoing research and initiatives at DIGIT Lab.