"Dealing with Money": PhD Candidate Jon Cooper on Economic Theology and the Role of Artificial Intelligence in History

Jon Cooper is a seventh-year PhD candidate in British History focusing on political economy and empire. His dissertation, “Dealing with Money: A Genealogy of Economic Theology in England, c. 1542–1623,” is supported by a 2024–25 Stanford Humanities Center (SHC) Dissertation Prize Fellowship. He is also the founder of Leo, a document management and transcription platform that uses cutting-edge AI to transform images of historical manuscripts into plain text.
Congratulations on your fellowship! Please tell us about your dissertation project.
My dissertation retraces the emergence of economic thought in early modern England as a response to what I call the problem of monetary commensuration. This is the difficulty that, for money to render other things equal, it must function as a fixed measure that remains identical to itself over time. But this has throughout history proved exceptionally difficult to achieve. Since ancient times, the two greatest threats to monetary self-identity have been usury (the stipulation of a positive difference between present and future money) and fiduciarity (the nominal premium on coinage relative to the otherwise attainable price of its constituent precious metal).
During the commercial revolution between the tenth and thirteenth centuries, theologians, jurists, and moralists across Christendom managed this problem by emphasizing the fragility of money as a symbolic instrument. They insisted that its proper use required moral discipline, imploring subjects to resist usury and urging kings to refrain from debasing the coinage. My dissertation shows how this framework began to unravel amid the stresses of the later medieval period, before it collapsed altogether amid the unprecedented monetary, commercial, and financial upheavals of the mid-sixteenth century.
In an era of warfare, debasement, expanding financial markets, and the subjection of foreign trade to a regime of floating exchange rates, English commentators recognized that they could no longer rely on the traditional resolution to the problem of monetary commensuration. Instead, they began to ask a striking new question. Inspired by currents in late-humanist thought, they asked what could explain, and so promise to stabilize, the pricing of money in terms of itself, in a way that would be compatible with its ability to express equivalence relations between other things. My dissertation proposes that it was in answering this question that seventeenth-century thinkers began to assemble an interpretive framework that laid the conceptual foundations for modern economics.
What is “economic theology”?
Traditionally, historians of economic thought have recounted how representation came to mirror reality, as a new field of inquiry apprehended the laws of value and capital as they took hold in early modern Europe. I take a less objectivist stance. Rather than describing some already-extant sphere of reality, I argue that economic theology systematically constructs and justifies the economic as a sacred entity. Through this lens, it becomes easier to understand economics in functional terms, as harnessing the resources of theology to resolve three distinct yet closely interrelated problems, each intrinsic to human sociality yet rendered especially acute by the use of money as a medium of symbolic communication.
First, economic theology addresses threats associated with semiotic conventionality. Because human beings use signs which bear only a conventional relation between signifier and signified, they can lie, deceive, and imagine otherwise, all of which threaten to undermine the social order. Just as other religions resolve the resulting instability by declaring that some signs, such as those in rituals or written scripture, are necessarily true, economic theology insists that prices are not arbitrary, but index an underlying reality.
Second, economic theology resolves the problem of violence, by unifying society in subjection to a transcendent externality that promises to contain it, in the dual senses of encompassing and limiting. The economic, that is to say, appears to inflict intolerable suffering on people, who can for instance immediately lose their livelihoods at the seemingly random whim of the market. But its theologians argue that this suffering is necessary for the greater good, in a theodicean logic according to which society as a whole is better off surrendering to a benevolent exteriority located outside normal subjectivity.
Finally, economic theology mitigates the radical uncertainty of the human condition by denying the openness of the future. It refuses to admit that money, as a symbolic medium, can be exploited by agents wielding pricing power or manipulating currency. Instead, it posits a predictable logic behind the nominal veil of prices, known as the laws of value and capital. Economic theologians see in these laws various miraculous workings, which they expound to prophesy inevitable, even salvific outcomes, from the realization of perfect market equilibrium to proletarian revolution. In this way, they offer reassurance that the chaos of human life can be subsumed within an intelligible, purposeful unfolding of history along a determinate trajectory.
How did this intersect with other important developments during this period?
I argue that the pivotal moment in the rise of economic theology was a technical, neglected debate that raged within English government circles at the turn of the seventeenth century, at the intersection of the problems of usury and fiduciarity. Over the previous decades, overseas trade had come to depend on bills of exchange, which allowed merchants to transfer funds without the risk and expense of shipping metal. But whenever the exchange rate fell, so that the pound sterling purchased fewer units of foreign currency, import prices rose, export returns fell, and it became profitable to transport bullion overseas. After failing to prohibit bills outright, the Elizabethan and Jacobean regimes appointed a series of commissions to investigate low rates, sparking a complex debate about contingent monetary factors like mint prices, bimetallic ratios, laws to force foreign merchants to purchase home goods, the institutional structure of credit markets, and the various implications of policy interventions for the distribution of wealth between the crown, nobility, gentry, and merchants.
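To see why a low exchange rate made exporting bullion profitable, here is a purely illustrative calculation with invented numbers (the figures in contemporary commission reports differ): if the silver in £1 of English coin would sell abroad for more than a bill of exchange yields per pound, melting and shipping the coin beats remitting by bill.

```python
# Purely illustrative, invented numbers; not taken from any commission report.
bullion_value_abroad = 23.0  # Flemish shillings fetched by the silver in £1 of coin
bill_rate = 21.0             # Flemish shillings offered per £1 by bill of exchange
freight_and_risk = 0.5       # cost of shipping and insuring the coin

profit_per_pound = bullion_value_abroad - bill_rate - freight_and_risk
print(f"Arbitrage profit: {profit_per_pound:.1f} Flemish shillings per £1 exported")
# Whenever the bill rate falls far enough below this bullion par,
# exporting coin beats remitting by bill, and treasure drains abroad.
```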
The turning point came in the 1620s, when a group of merchants, some associated with the East India Company, proposed a novel explanation. Exchange rates might temporarily be affected by monetary factors, they argued, but would be determined ultimately by the need to settle trade balances with the relevant foreign country. Rather than trying to exert influence over financial markets or debasing the coinage, they encouraged the regime to focus on this deeper sphere of reality. So long as there was a positive balance of trade, where the total value of exports surpassed imports, they argued, the exchange rate would take care of itself.
Though almost completely ignored by later historians, the resolution of this controversy had major implications for several other important developments during this period. For one, the belief that a positive balance of trade was necessary to raise the value of sterling on European exchange markets pushed governments to pursue more aggressive imperial strategies. This conviction led them to support merchant-led imperial ventures such as the establishment of extractive re-export trades like sugar, cotton, and tobacco, which were inextricably bound up with the expansion of plantation slavery in the Atlantic world.
Economic theologians during the later seventeenth century would also discover that they could apply the same logic to interest rates. The cost of borrowing, they proposed, corresponded to the accumulation of stock, meaning the proportion of total goods reserved for employment in future production. As they grappled with the workings of this self-regulating, infinitely expanding substance, later known as capital, they laid the basis for the recognition that, as John Robertson has phrased it in The Case for the Enlightenment (2005), ‘a society of self-interested men, driven by their passions rather than their reason, could … survive and meet its needs with no external assistance from divine providence and only limited intervention by government’. My dissertation shows that, before it became thinkable to treat self-interest as a regulative ideal in this way, to inspire faith in the autonomous logic of society rather than the saving grace of a deity, it was necessary to deal with money.
Could you share with us a bit more about your archival sources?
My most important sources are manuscript treatises and position papers written during the later sixteenth and early seventeenth centuries, scattered across a range of archives in the UK but primarily concentrated in the British Library. These sources are important because they discuss technically complex and politically sensitive issues more frankly than anything which appeared in print. Since they were often produced by or for inner circles of government, they allow me to get a much better understanding of how the Elizabethan and Jacobean regimes were thinking through issues such as debasement, usury, foreign exchange, and the illicit bullion trade.

(Calculations of arbitrage profits in a 1564 commission report on foreign exchange, from Harley MS 660 at the British Library)
How did your archival research help you to develop Leo, an AI-powered tool that turns manuscripts into plain text?
Mainly through an experience of frustration. These documents were hard to interpret holistically, not just because they were about complex subjects and articulated in archaic language, but also because they were written in difficult handwriting, known as secretary and court hand. To establish an efficient workflow, and to be able to search for relevant passages when I needed them, I’d have to spend long hours manually transcribing each image into plain text. I didn’t want to do that, so I tried the most popular service for automatic handwritten text recognition (HTR), but it didn’t work well out of the box. If I wanted to use this systematically in my research, I’d have to spend a lot of time fine-tuning models and correcting their output.
It was around this time, in November 2022, that OpenAI released ChatGPT and the capabilities of new multimodal transformer models became apparent. My friend Jack had just finished a PhD applying the latest advances in machine learning to particle physics data and had co-founded a startup that used this technology to detect the early signs of cognitive impairment associated with neurodegenerative disease. I told him how I thought AI could transform my work, and over the course of our conversations it became clear that we would be well positioned as a team to develop Leo.
Leo, which is also the Spanish for “I read,” is powered by a state-of-the-art AI model that delivers exceptionally accurate transcripts of handwritten and printed texts in Latin scripts produced over the past half millennium, which encompasses the vast majority of primary sources in western Europe, the Americas, and other colonial contexts since the early sixteenth century. It’s also a convenient web app that allows users to upload their documents, organize them, add metadata, rapidly generate transcripts, and then annotate, filter, and search them with ease. It aims to streamline the research process, eliminating the usual friction between discovery and analysis.
Who is Leo’s target audience? What sort of feedback have you received so far?
So far, we’ve reached out to graduate students and faculty in history departments, as well as archivists and professionals who manage collections of manuscript material. But we anticipate there will be a lot of interest among family researchers, genealogists, and committed amateurs whose research involves interpreting historical documents. The response so far has been very positive, with many compliments for Leo’s transcription quality and intuitive interface.
Some historians are concerned about the implications of HTR, fearing that students will become overly reliant on automatic transcriptions or engage less critically with historical sources. Those are legitimate concerns and deserve careful thought. I hope and suspect that the effect of this technology on archival research will be less tragic, perhaps more closely resembling that of the calculator in mathematics, which didn’t replace fundamental skills but freed up time to focus on higher-level work. I’m also optimistic that HTR will help to improve how students learn to read old handwriting in the first place, like a kind of Duolingo for paleography. Already, when I’m struggling with a difficult word or passage, I’ll have a go with Leo. Even when the transcription isn’t completely accurate, the second pair of eyes usually gets me closer to the right answer.
What challenges and opportunities does AI pose to the discipline of history?
Setting aside larger issues of energy consumption, job displacement, and existential risk, the challenges that generative AI poses to history as an academic discipline seem to me formidable but not insurmountable. There is much discussion about plausible hallucinations and cognitive offloading, with students simultaneously being misled by chatbots and outsourcing their critical thinking to them. I also worry about discursive homogenization. As these models mediate more of our communicative practices, everything from term papers to monographs will gravitate toward high-probability responses that mirror statistical patterns embedded in training data. That could, over time, produce a flattening of intellectual discourse that systematically reinforces prevailing structures of thought rather than interrogating them.
But since AI is here to stay, we need a pragmatic approach that steers away from fatalistic denial as much as naïve boosterism. Rather than disengaging, there are opportunities to develop and regulate this technology in ways that are aligned with our highest ends. I would encourage my colleagues in history departments to consider three prospects for pedagogy and research.
First, there are creative ways that instructors can incorporate AI into their teaching practices to counterbalance its potentially detrimental effects on learning outcomes. Benjamin Breen, a history professor at UCSC, has, for instance, designed interactive games for his students that simulate historical situations to raise methodological and interpretive questions in an engaging way. The point of these simulations, as he has written, is not that they provide accurate answers but precisely that they do not. When asked to reflect on the AI’s responses, students develop the critical skills that we want them to learn: contextualization, perspective mapping, comparison, genre and form analysis, corroboration, historiographical positioning, and so on.
Second, the inherently textual nature of large language models (LLMs) makes them well-suited to addressing many of the central themes of historical research since the linguistic turn of the late twentieth century, when historians began to focus on the constitutive role of discourse in shaping knowledge, identity, and our perception of reality. Beyond generating natural language, like simulating the plausible speech of a historical agent, these models can also serve as tools to analyze how language produces meaning. They can ingest vast corpora of primary sources (pamphlets, books, parliamentary debates, newspapers, law reports, diaries, and so on) and map out discursive structures through vector embeddings. This method goes much further than traditional approaches to textual analysis that rely on exact or near-exact string matching, such as keyword frequency counts. Because vector embeddings can detect the persistence of a given concept behind variant spellings, translations, and euphemisms, they will offer intellectual and cultural historians ways to retrace the semantic drift of particular terms or to identify subtle conceptual shifts across disparate contexts.
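To make this concrete, here is a minimal sketch of how embedding-based analysis differs from keyword matching. It assumes the open-source sentence-transformers library and a general-purpose English model (all-MiniLM-L6-v2); a serious project on early modern sources would substitute a model adapted to period orthography, and the passages below are invented for illustration.

```python
# A minimal sketch of concept-tracking with vector embeddings, assuming the
# sentence-transformers library; model choice and passages are illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented period phrasings that a keyword search for "usury" would miss.
passages = [
    "He lent upon usurie, taking ten in the hundred.",
    "The stipulation of gaine upon a loane of money.",
    "Interest reserved above the principall summe.",
    "The price of wool rose at the market.",  # unrelated control
]
query = "usury: charging a premium on money lent"

q_vec = model.encode([query])[0]
p_vecs = model.encode(passages)

# Cosine similarity ranks passages by conceptual proximity, not shared strings.
sims = p_vecs @ q_vec / (np.linalg.norm(p_vecs, axis=1) * np.linalg.norm(q_vec))
for score, text in sorted(zip(sims, passages), reverse=True):
    print(f"{score:.2f}  {text}")
```

The point is that the ranking tracks conceptual proximity: the lending passages score above the control even though none of them shares a keyword with the query.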
Finally, recent improvements to LLMs through retrieval-augmented generation (RAG) suggest that AI will help to discover which primary sources are relevant to research in the first place. RAG combines semantic indexing with the ability to retrieve and cite particular passages, so a single query for something like “Lancashire cotton famine” across British newspapers in the later nineteenth century would also surface articles which describe the same event as “the distress”, “cotton panic”, “the sufferings of the working men in Manchester”, and so on. I think it is likely, as this technology is implemented in digital repositories of primary sources, that historians will discover many relevant sources in places that they never thought to look.
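The retrieval step behind RAG can be sketched in the same way. This is not how any particular repository implements it; the article snippets, titles, and model choice below are all invented, and a production system would follow retrieval with an LLM call that answers the question from the cited passages.

```python
# A minimal sketch of the retrieval step in RAG, reusing the embedding model
# above; all snippets and attributions are invented for illustration.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Imagined newspaper snippets: none contains the phrase "cotton famine".
articles = {
    "Manchester paper, 1862": "The distress among the operatives deepens weekly.",
    "London paper, 1863": "Reports on the sufferings of the working men in Manchester.",
    "Liverpool paper, 1861": "The cotton panic has closed a further dozen mills.",
    "London paper, 1865": "Parliament debates the herring fisheries of Scotland.",
}
query = "Lancashire cotton famine"

ids = list(articles)
vecs = model.encode([articles[i] for i in ids])
q = model.encode([query])[0]
sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))

# Retrieve the top passages and assemble a cited prompt for the generator.
top = sorted(zip(sims, ids), reverse=True)[:3]
context = "\n".join(f"[{i}] {articles[i]}" for _, i in top)
prompt = (
    f"Answer using only the sources below, citing them by name.\n"
    f"{context}\n\nQuestion: {query}"
)
print(prompt)
```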
Leo’s most ambitious goal is to integrate advanced AI tools in one place, so that historians will have a single hub where they can store, organize, transcribe, and analyze their primary sources. I think that’ll transform academic research and open new pathways for public engagement with the past, making otherwise obscure records accessible, searchable, and interpretable.