Christian Pantel, Chief Product Officer at D2L – Interview Series

Christian Pantel is Chief Product Officer at D2L, where he leads global product strategy, product management, product design, user experience research, and accessibility. He joined the company in 2015, expanded his leadership across product, design, and engineering, and was appointed CPO in 2024.
Pantel has more than 25 years of experience building enterprise software, with prior leadership roles at Workday, Infor, and PeopleSoft. His work is grounded in user-centered design, with a focus on creating intuitive, accessible learning experiences for diverse learners and educators.
D2L is a Canadian education technology company best known for developing Brightspace, a cloud-based learning management platform used by schools, universities, governments, and enterprises to deliver online and hybrid learning experiences. Founded in 1999 by John Baker, the company focuses on personalized and accessible digital education, integrating AI-driven tools, analytics, course authoring, and adaptive learning features into its ecosystem. D2L’s platform supports everything from K-12 education to corporate training and professional development, with a strong emphasis on learner engagement, accessibility, and lifelong education. The company has expanded globally and now serves millions of users through a suite of products designed to modernize how organizations teach, train, and manage learning programs.
You’ve spent over two decades shaping user experience at companies like Workday, Infor, and PeopleSoft before rising through the ranks at D2L. How has that journey influenced your approach to embedding AI into learning platforms without compromising usability and accessibility?
Spending that much time in enterprise software teaches you where products break down. Teams add features, but they lose sight of the user, and complexity creeps in. That experience shaped how I approach AI. We avoid chasing shiny objects and focus on addressing genuine challenges that educators and learners face daily. That carries directly into how we build at D2L. AI has to fit into how educators and learners already work and support how people actually learn. If a feature adds friction, creates confusion, or weakens accessibility, it does not ship.
As Chief Product Officer, you oversee product, design, and research. How do you ensure AI features are genuinely improving learning outcomes rather than just adding complexity to the platform?
We start from a simple principle: learning requires productive struggle. If AI removes the effort required to learn, it is the wrong solution. Learning depends on practice, feedback, reflection, and application, and we design AI to support that process. Every feature has to help educators align learning experiences and assessments to outcomes and understand whether learners are actually progressing. We measure that impact directly.
D2L’s Brightspace platform integrates AI directly into the learning experience rather than treating it as an add-on. What advantages does this embedded AI approach create for educators and institutions?
Embedding AI matters because context matters. When the system understands the course, the content, and what the learner is doing, it can guide learning instead of just generating answers. That leads to better support in the moment and stronger outcomes over time. It also keeps institutions in the driver’s seat. They can set policies, manage data, and understand how AI is used, which is critical for trust, privacy, and academic integrity.
Many AI tools in education promise personalization. What does meaningful personalization actually look like at scale, and where do most platforms fall short?
Personalization should drive learning forward, not remove the level of challenge required for real progress. AI can eliminate unnecessary friction, but learning still depends on sustained engagement, problem-solving, and effort over time. The goal is to keep learners at the right level of difficulty so they continue progressing without getting stuck or disengaged.
You’ve emphasized accessibility throughout your career. How should AI systems be designed to better serve learners with disabilities rather than unintentionally excluding them?
AI can remove real barriers by offering multiple ways to engage with content and making learning more flexible. It can support different formats, improve captioning, and reduce manual work for educators. However, AI systems tend to design for the average user, which means they can miss the people who need support the most. Everyone learns differently, and some rely on assistive technologies to support their needs. Teams need to design and test for those learners purposefully and include them in the research and development process to make sure accessibility improves in practice. By prioritizing inclusive design, we strive to reach all learners regardless of ability and create meaningful opportunities for all.
With AI increasingly involved in assessments and feedback, how do you think institutions should balance automation with maintaining trust and academic integrity?
AI should support assessment, not take it over. It can help scale feedback and create multiple versions of assessments that test the same concepts, which strengthens integrity and deepens the overall learning experience. Educators still need to own grading and final decisions. Trust depends on knowing a human stands behind the outcome.
From a product perspective, what are the biggest misconceptions universities have when adopting AI into their learning ecosystems?
The biggest misconception is treating AI like a tool you can simply switch on to solve the problem. In some cases it can make things worse by removing the effort that learning requires. Institutions need to be clear about what they are trying to improve. More automation does not mean better outcomes without the right data, governance, and design in place.
D2L operates across K-12, higher education, and enterprise learning. How does the role of AI differ across these segments, and where are you seeing the fastest adoption?
The role of AI shifts based on what each segment values most. In K-12, the focus is safety, age-appropriate use, and giving educators and parents strong control over how AI is introduced in the classroom. In higher education, institutions care more about scale and quality, especially around assessment, learner support, and managing large student populations. In enterprise learning, the emphasis shifts toward speed and efficiency, with AI helping teams move faster and reduce operational overhead.
Adoption tends to follow those priorities, but it also varies significantly by region. We see particularly strong momentum in higher education globally, especially in places like Singapore, where institutions are investing aggressively in AI to scale learning and improve outcomes. Across Singapore, we’ve had long-standing early adopters of D2L Lumi, our AI-powered learning assistant. They were among the first to embrace these capabilities, and in 2025 alone, generations increased 7.5x. What stands out is not just the volume of usage, but the breadth of it. Institutions there are often the first to experiment with new AI features and deploy them at scale across real learning environments.
We’re also seeing strong and accelerating momentum in LATAM. From September 2025 through April 2026, Lumi maintained consistently high usage across the region, signaling that institutions have moved well beyond experimentation and are embedding AI directly into instructional workflows.
In contrast, markets like the U.S. often take a more structured approach, with pilots, governance reviews, and phased rollouts before broader deployment.
AI is now capable of generating content, assessments, and even tutoring. How should educators rethink their role in a world where these capabilities are becoming standard?
Educators do not become less important as AI improves. They become more important. Their role shifts toward guiding the learning process, setting expectations, and making sure students engage with material in a meaningful way. AI can assist with content and feedback, but it cannot replace judgment, motivation, or accountability. We should be using AI to scale what really matters in learning, not replace the thinking process or deliver completed assessments for learners.
Looking ahead, what are the most important product decisions edtech companies need to get right today to ensure AI enhances, not dilutes, the quality of education over the next decade?
If I anchor it back to product decisions, the winners will be those who build on strong data, embed AI into real workflows, and ground everything in trust, accessibility, and learning science.
If we get that right, AI becomes a core capability that continuously improves learning, helping educators focus on high-impact work and giving learners support at the right moment.
The real opportunity is to move beyond one-size-fits-all education to something far more responsive and effective, where every learner is better supported, and every educator is better equipped to help them succeed.
Thank you for the great interview. Readers who wish to learn more should visit D2L.