Exploring Philosophy of Computing, AI, and Society with Jessie Hall - Postdoc Spotlight

August 14, 2025 by Dr. Pamela Fuentes Peralta

Jessie Hall is a postdoctoral fellow at the IHPST, where she explores the conceptual foundations of computing and artificial intelligence. With a background in philosophy, physics, and computational experimentation, Jessie’s research challenges the assumptions behind what it means to call a system “computational.” Her work intertwines the philosophy of mind, language, and mathematics, and engages critically with emerging technologies like large language models (LLMs), positioning philosophy as a vital contributor to contemporary AI discourse. She works with Professor Karina Vold on projects that investigate the intersections of technology and society. This fall, she will join Carleton College as a Cowling Postdoctoral Fellow in Philosophy.

What first drew you to the philosophy of computing?

I became interested in philosophy of computing during my M.Phil. at Memorial University in Newfoundland. I had been reading some “classic texts” in the philosophy of mind that articulate the “computational theory of mind,” and I was struck by how strange their notion of computing seemed. At the time, I thought I had a decent grip on what computing was. During my undergraduate degree, I double-majored in philosophy and physics, and I spent much of my physics coursework on electromagnetism, where we did a lot of computational physics to assist in building experiments and so on. I had built many rudimentary circuit-board-based structures that interfaced with (simple) programs I had written, which gave me a very tactile, ground-up view of computing.

My initial intuitions informed my reading of the philosophy in a way that made certain things stick out: Why were they talking about representation? What did they even mean by representation, and what did it have to do with transistors or logic gates?

I have since found that, although my initial intuitions were fruitful, they also led me down a path where I realized that I, too, was not altogether sure what computing was after all! Unfortunately, despite doing an entire doctoral dissertation on the topic, I must say that I’m still not sure what computing is, but I have a great deal more insight into what metaphysical commitments are being brought to bear when ‘computations’ are attributed to things like brains, or vision, or whatever else.

Can you tell us about your current research projects?

Right now, I am (perhaps unwisely) juggling a few projects. One focuses on the current hype surrounding large language models. I am revisiting some old philosophy of mind as a resource for making sense of modern discourse on LLM intelligence, sentience, that kind of thing. Media coverage is full of familiar claims: some breathless about “truly intelligent” machines, others more reserved, calling them “stochastic parrots” (to borrow a phrase from linguist Emily Bender, computational linguist Angelina McMillan-Major, and computer scientists Timnit Gebru and Margaret Mitchell). I am working out whether we can secure some of our intuitions about the nature of LLM ‘intelligence’ with some rigour: If we think LLMs are ‘intelligent’ in all the relevant ways, why? If not, why not? And most importantly, do these reasons stand up against close inspection, and especially against the rapid evolution of the technologies and of our knowledge of human intelligence?

Another project is more epistemological. Especially in the realm of policy-making and the adoption of AI in sensitive domains like medicine and law (and law enforcement), there is a growing discourse on trust in AI. In the first place, I have reservations about even using the word ‘trust’. I am interested in thinking about AI as a source of testimony in the social-epistemological sense. For instance, do AI have rational authority? When we say that someone has rational authority, typically we mean that they are trustworthy: that they are not liars and will tell the truth when asked, but also that they are competent with respect to the information being asked of them. So, while there is a tendency to focus on AI trustworthiness, I think an even more important question is whether (or what it would even mean to say that) AI are, or even can be, competent knowers.

What does philosophy of science and technology offer to current conversations about AI?

Philosophy of science has had a major impact on AI from the start. The Turing machine, the formalization of computability that led to the earliest computers, can trace its lineage through philosophers of science who were thinking about language, meaning, and how they interface with the world (among whom I would include Turing himself, though he was a mathematician first).

But the philosophy of science has more to offer than just a claim to influence. As I suggested with my project on AI testimony, the epistemology of science has rich and underutilized resources for thinking about AI as a kind of epistemic agent: as an interloper in the open exchange of ideas, as a contributor, or even as a teacher. In my postdoc work with Karina Vold, we are bringing the philosophy of cognitive science, mind, and psychology to bear on this idea of AI as teachers. It is not just that we can probe a chatbot for answers to factual questions; we could potentially learn things that are new to all humans: new ways of seeing patterns, new techniques of problem solving, and so on. The philosophy of science can bring novel points of view to a subject matter and really open up rich avenues of exploration like this, avenues which have implications not just for how we use extant technologies, but for the kinds of technologies we build in the future.

How has your time at IHPST shaped your thinking?

When I came to the IHPST, while I knew that my project was firmly situated in the ‘philosophy of science’ field, I did not have a very rich knowledge base in the history of science or its influence on the ‘birth’ of philosophy of science as a named discipline. Through its interdisciplinary nature, the IHPST has given me a solid base of background knowledge in the broader field and, most importantly, the opportunity to gather up a wider variety of methodological resources that enliven and strengthen my philosophical work. While I am certainly no historian (as my historian friends can attest!), I continue to appreciate the value of thinking about philosophical topics not in a vacuum, but situated within a profusion of interlaced ideas extending backward in time and across geographies.

One thing that I find particularly striking these days, and which I have discussed with some of my fellow IHPST community members, is the parallel that can be drawn between our current political climate and that of Austria in the period between the two world wars, when the ‘Vienna Circle’ was active. The philosophers of the Vienna Circle, like Moritz Schlick, Rudolf Carnap, and Otto Neurath, have been dubbed “logical empiricists” for their preoccupation with a logical and empirical basis for any given claim. They cared about precision, rigour, specificity, and verifiability. Sometimes the logical empiricists get a bad rap for their insistence on clarity and verifiability, and get charged with a kind of proto-scientism or intellectual gate-keeping. But I think a good argument can be made that they were informed as much by their political context as by their philosophical convictions: in a time of upheaval, and crucially, of burgeoning authoritarianism, their insistence on clarity did not only serve philosophical purposes. A strong case can be made that Neurath and Carnap saw the political dimensions of their logical empiricism: holding claims accountable to intersubjective scrutiny, to verifiability, was a bulwark against authoritarian mystification.

Right now, we hear a lot about ‘fake news’, and we are exposed to a deluge of conflicting, contradictory, outright fictional, or ‘spun’ information, to ‘alternative facts’, and to the idea that truth is a personal, subjective matter (e.g. ‘my truth’ or ‘my facts’). We hear it everywhere: on the news, on TikTok, on internet forums. In many ways we are grappling with authoritarian mystification of another sort.

What makes IHPST a distinctive place for postdoctoral research?

The interdisciplinary environment is a huge strength. Being surrounded by thinkers who approach problems with different tools and frameworks enriches the work in really productive ways. The department is also incredibly well-connected across the university, thanks to the efforts of faculty and grad students who build bridges between units.

For instance, Karina Vold has cultivated collaborations between the IHPST and the Schwartz Reisman Institute, which have led to great events like the Technophilosophy September soiree and the AI speaker series, both of which have showcased a wealth of interesting research across a breadth of topics in AI and technology.

What do you see as the most pressing philosophical challenges posed by AI?

I think the most pressing challenges posed by AI are probably not entirely philosophical, but the role of philosophers is to ask: what are the actual and potential issues posed by AI? The list is staggering. Off the top of my head, we have issues of algorithmic bias, that is, statistical architectures that reproduce unfair and unjust correlations present in their training data. Differential performance is also a problem, as AI systems work better for some groups than for others. Another significant problem is value misalignment, where an AI is tasked with a specific goal but, due to unanticipated exploits or interpretations of the task, ends up doing something unintended. There is also the risk of misplaced or naive reliance, where decisions are made based on AI outputs without adequate due diligence. And we have transparency and interpretability: how do we even do due diligence on the outputs of AI technologies? The list goes on. I am reluctant to rank them, but I will say that we are writing informational cheques we do not yet know how to cash. I think that is a decent metaphor, even if it is a bit opaque!

What are you most excited about as a Cowling Postdoctoral Fellow at Carleton College?

My primary role at Carleton will be teaching, and I am thrilled to be designing my own courses. I will be teaching “Philosophical Foundations of AI” and another class on objectivity in science. I am also hoping to pitch a third course on ethical issues in modern AI technology, focusing on the architecture, design, and development of ML/AI technologies.

Beyond teaching, I have had some great conversations with faculty at Carleton that reawakened a few dormant research projects. One is about AI companionship and human relationships, inspired by a discussion with Allison Murphy, who is an expert on friendship in Aristotelian philosophy. Another is a project on reinforcement learning, an offshoot of my work on LLM intelligence, which took on new life after I talked with Anna Rafferty, an expert in symbolic systems and chair of the computer science department. Hopefully Anna and Allison, and the other wonderful faculty members whom I met at Carleton, won’t mind me saying that I look forward to picking their brains more, and maybe even collaborating on a paper or two!

 
