IHPST alumna Helena Gagnier (MA, 2024) has taken her research on AI ethics beyond academia through a Mitacs Business Strategy Internship with TELUS’s Data Ethics Team. As one of the lead researchers behind the 2025 Responsible AI Report, she helped examine how Canadians perceive and engage with artificial intelligence. The report, one of the most comprehensive studies of public opinion on AI in Canada, reflects Helena’s commitment to bridging ethical theory and real-world technology.
In our conversation, Helena reflected on her journey from studying the philosophy of AI at IHPST to helping lead one of Canada’s most comprehensive reports on public trust in AI. She discussed the research process, key findings about how Canadians view AI, and why keeping technology human-centered remains essential.
Let’s begin with your academic journey. What drew you to the MA program at IHPST, and what project did you develop during your time in the program?
I was drawn to IHPST because of the interdisciplinary work being done across philosophy, science, and technology. I wanted a fuller picture of how these areas intersect, and the program offered a unique space to explore that. I first started looking at MA programs in 2022—right around the time ChatGPT’s public model launched. It quickly became clear that AI was going to profoundly shape the future. I wanted to be part of understanding what that shift might look like, and of navigating the ethical risks that would emerge alongside the technology.
I was especially inspired by Dr. Karina Vold’s work on AI ethics and the implications of emerging technologies. That was the area I knew I wanted to focus on, and she’s done such incredible work.
While in the program, my main project focused on AI extenders (AI tools that serve as legitimate extensions of the human mind), and I argued that these tools can transmit their values (or beliefs and biases) to the user. Think of an AI writing assistant, for example: the values embedded in that tool can shape your thoughts, override your intentions, or influence what you produce. It matters profoundly what kinds of values we embed and accept in our AI systems, because they’re the same values we’re allowing into our minds. A paper based on this research, titled "Value Inheritance: The Transmission of Values Through Cognitive Extenders," was published last month in Synthese.
How did your graduate studies prepare you for working on real-world AI ethics projects like the Responsible AI report?
My graduate studies at IHPST gave me a solid grounding in the core ethical tensions around AI and an understanding of the frameworks emerging to address them—both of which were vital for working on the 2025 TELUS AI Report. One insight that stuck with me was recognizing that the people building AI systems often aren’t the ones most affected by them. The same goes for policymakers and those shaping governance structures: those setting the guidelines may not be those most impacted.
This understanding shaped how we approached the report. We made sure to include a broad range of perspectives from across Canada. Our goal was both to analyze trends and to create a space for people to express their real hopes and concerns about AI. Ideally, it serves as a kind of guidepost for how we move forward—grounded in what people actually want from these technologies.

Photo credit: Duane Cole
Congratulations on your Mitacs Business Strategy Internship! Could you tell us a bit about how you got involved with the Data Ethics Team and what your role there entailed?
Thank you! As I was finishing my degree, Karina generously offered to connect me with people working in industry. That included Dr. Yoelit Lipinsky, a Data Ethicist at TELUS, and Jesslyn Dymond, the Director of AI Governance & Data Ethics—who’s also a U of T alum from the iSchool. We had a great conversation about the work they were doing and how I might be able to support the 2025 AI report.
My role involved translating research findings into an actionable, accessible report that could help policymakers, academics, and industry leaders understand how Canadians are using AI—and what they expect from it. It was a blend of research, writing, and strategy, all viewed through the lens of public trust and ethical governance.
This report is one of Canada’s most comprehensive sources of public opinion on AI. How did your team go about designing and conducting research to capture how Canadians feel about emerging AI technologies?
TELUS has a strong data ethics and AI governance program, and understanding public expectations is key to earning trust authentically. We asked wide-ranging questions, with a particular focus on healthcare, as it’s a high-stakes sector where trust is paramount. We were intentional about including vital perspectives that might otherwise get lost in aggregation.
Given AI’s rapid evolution and broad scope, we wanted to ground the research in specific values and real-world contexts. There’s consistently strong consensus that AI should be developed ethically, but what do people actually mean by that? Safety, for example, came up as especially important. Human oversight was seen as essential by a majority. And across the board, privacy and data security concerns were top of mind.
We also designed the survey to be demographically reflective. “Being Canadian” encompasses a wide range of experiences—as does “being human.” To capture that diversity, we worked with a large sample size—over 5,500 respondents—so we could meaningfully understand patterns across regions, identities, and communities.

Image credit: TELUS
What were some of the most compelling or surprising findings from this year’s report?
One thing that really stood out was just how many Canadians are already using AI. Around 80% have used it in the past year, and nearly a quarter are using it daily. At the same time, 91% expressed concern about AI’s impact on Canadian society. So we’re seeing this dual recognition: people see the value, but they’re also very aware of the risks. We need to understand what it means to live in that tension—and how to address those risks so that the massive adoption rate doesn’t come at the cost of public trust, safety, or human values.
Another striking finding was how strongly people want AI systems to remain human-centric. One of the top concerns was “overreliance on AI and reduced human interaction.” We saw that play out in healthcare especially, where trust in AI output nearly doubled when human oversight was present. And strikingly, only 1% of respondents said they trust AI to operate without any human oversight.
More broadly, Canadians have clear expectations: they want AI to be human-centric, with strong oversight, transparency, and values built in from the start.
As someone coming from an academic background, what was it like translating your research skills to a corporate context? Were there any challenges—or unexpected opportunities—that stood out?
It was an adjustment for sure. In academia, you’re often aiming for depth, getting super granular with a concept or theory. In a corporate context, especially in a space like AI ethics, it’s still about rigour, but you have to move quickly and translate complexity into clear, actionable insights. That shift in audience and purpose really shaped how I approached my work.
One challenge was learning how to strike the right balance between nuance and accessibility, especially when writing for policymakers or the public. But that also became one of the biggest opportunities. I loved the challenge of taking big ethical questions and making them concrete, relevant, and useful for people making important decisions about AI.
The report touches on key themes like trust, transparency, and accountability. Were there specific ethical concerns you found particularly urgent or underexplored during your research?
One concern is the uneven distribution of both risk and benefit. AI isn’t experienced equally. We know that some communities are more exposed to harms, like biased decision-making, while others are better positioned to reap the rewards. And yet, those most affected are often the least consulted. That gap between who’s building the systems and who’s being impacted is an ethical issue in itself. For instance, we saw that while “respect and fairness” wasn’t one of the most frequently selected principles overall, it was chosen more often by groups more vulnerable to bias.
There’s also a real disparity in AI literacy. While over 80% of Canadians have used AI in the past year, that doesn’t always translate to understanding. Only 34% feel they understand how and where AI is used. That creates a power imbalance: when only a small group truly understands the system, they become the ones shaping how it’s used. It also raises questions about consent and agency, especially in high-stakes contexts like healthcare. Can people meaningfully opt in or out of something they don’t understand?
AI literacy is as much an ethical imperative as it is a technical or educational one. If we want to build public trust, we need to equip people to understand and critically engage with these systems. Transparency alone isn’t enough; people also need the tools to interpret and question what’s made visible.
Tell us about the team you worked with at TELUS. How did you see different disciplines and perspectives come together in the creation of the report?
I worked with the AI Governance & Data Ethics team within the Data & Trust Office, and I’m lucky to continue working with them now in my role as Strategy Manager. What’s remarkable about TELUS is the way it’s built a culture where diverse backgrounds genuinely come together, from computer science and ethics to law, communications, and research. Bringing together such a wide range of expertise allowed us to shape a report that’s both technically rigorous and deeply connected to the people it’s meant to serve.
This inclusive, multidisciplinary approach is evident in other areas as well. The “purple teaming” approach to testing AI systems combines traditional red-teaming and blue-teaming methodologies. What’s particularly distinctive is that team members from any background or department are invited to participate in this testing, which helps surface more fringe use cases, edge cases, and varied user experiences informed by each tester’s unique background and perspective.
Now that the 2025 report is out, what impact do you hope it will have—whether for industry leaders, policymakers, or everyday Canadians navigating an AI-driven world?
I hope the report can be both a mirror and a map. A mirror in the sense that it reflects where we are right now—how Canadians are using AI, what they’re excited about, and what they’re concerned about. And a map, offering direction for where we can go next.
For industry leaders and developers, I hope it’s a reminder that public trust isn’t built through technical excellence alone. It requires transparency, accountability, and a willingness to engage with the values of the people you’re building for. Concretely, this might mean involving diverse voices in design processes or prioritizing explainable and accountable AI.
For policymakers, I hope it’s a resource that highlights the nuance in public perception and helps shape policy that’s both protective and enabling.
And for everyday Canadians, I hope it offers a sense that their voices matter in shaping the future of AI. AI isn’t something happening to them—it’s something we all have a democratic stake in directing.
Ultimately, the goal is to support more human-centric, ethically grounded innovation, and to help make sure the future of AI reflects the kind of society we actually want to live in.