[GUEST POST] Beyond the Algorithm by Susan Mernit
What humanities scholars can teach us about AI


Optionality advisor Susan Mernit has been a longtime leader, first in tech, then in locally-focused non-profits; she now combines her areas of interest and expertise by consulting with non-profits on how to leverage technology, most notably AI.
When I read the following essay she wrote for her website, I knew it was something worth examining with the Optionality community. Yes, AI can provide a tremendous boost to productivity, among many other potentially positive attributes. But having lived through the evolution of the social web, and having gone from digital utopian to pretty clear-eyed about how digital tools have empowered dystopian outcomes, I think it’s essential for us all to consider the viewpoints Susan assesses below.
Check out her work at her website and subscribe to her Substack for an eclectic overview of all of Susan’s professional and personal interests. -Elisa
By Susan Mernit, March 6, 2025:
How do humanities scholars view the intersection of AI with human creativity and knowledge production? The recent draft paper, “Provocations from the Humanities for Generative AI Research,” written by a team of humanities scholars led by Lauren Klein from Emory University, provides a framework for understanding AI that computer scientists and engineers, along with the rest of us, will find well worth a read.
As the authors state, “The humanities’ most universal commitment is to the diversity and complexity of the human experience. More than that, centuries of humanities scholarship have confirmed the asymptotic relationship between increased understanding and complete knowledge. The idea that we could ever build a model of artificial ‘general’ intelligence is not only a fool’s errand; it is uninformed by how intelligence works.”
There is more to learn from this paper’s eight provocations, which challenge assumptions about creativity and generative AI.
Words vs. Meaning: The Human Element of Language
The first provocation, “Models make words, but people make meaning,” addresses something I’ve struggled to articulate. AI language models excel at predicting which words should follow others, creating fluent and coherent text. However, they do not understand what those words signify.
The authors draw on literary theorists like Roland Barthes to explain:
Meaning doesn’t reside in the words but emerges through human interpretation within social and cultural contexts.
This helps clarify why AI text can appear authentic while producing factual errors or hallucinations: the models manipulate symbols without comprehending their significance.
Culture Beyond Categories
These humanities scholars point out that culture is nuanced—it’s both “the way of life of a people” and “the works and practices of intellectual and artistic activity.” This matters because AI training data reflects cultures and consists of expressions of culture.
The paper suggests we should view AI models as cultural objects shaped by tech culture, a perspective that would help us better assess how these technologies affect different communities.
The Myth of “Representative” Data
Perhaps the most provocative claim in this paper is that truly “representative” or “unbiased” AI is fundamentally impossible. Drawing on archival theory and Black feminist thought, the authors explain that historical inequalities, power structures, and intentional erasures mean some voices and perspectives will always be missing from our datasets.
Rather than treating bias as a technical problem to “fix,” we should acknowledge these structural limitations, develop methods that highlight rather than gloss over these gaps, and take responsibility for the incomplete perspectives in our models.
Bigger Isn’t Better
For years, the AI field has pursued ever-larger models with more parameters and more training data. However, recent research shows diminishing returns from this approach. The authors argue that this reflects a deeper philosophical point: the goal of creating a universal “general intelligence” contradicts centuries of humanities scholarship about human intelligence.
Instead, these scholars advocate for smaller, more specialized models designed with input from domain experts.
They argue that these systems would better meet specific knowledge needs while avoiding the environmental and economic costs associated with ever-larger systems.
Training Data: Quality and Context Matter
AI training data is often treated as an undifferentiated resource, where individual text samples matter only for their downstream utility. The paper proposes that understanding training data’s specific sources, contexts, and characteristics is crucial for responsible AI development.
Humanities approaches, such as digital archaeology, data narratives, and contextual essays, can reveal how datasets are shaped by history, society, and power structures.
This deeper understanding helps avoid perpetual cycles of “after-the-fact fixes” like content moderation, addresses intellectual property concerns, and potentially inspires innovation through more intentional data curation.
The Complexities of “Openness”
While open-source AI models appear to tackle concerns regarding transparency and accessibility, the notion of “openness” introduces intricate questions that lack straightforward answers. We all know, for example, that the way AI designers have swept up large sets of data, including copyrighted materials, as training resources has been both biased and appropriative.
And yet, researchers could argue that these flaws in AI reflect the historical shortcomings in our culture. Is this a valid argument? Not for me.
The authors also highlight how community-generated content may technically be “open,” yet it can also raise ethical concerns regarding privacy expectations and potential harm. Drawing from archival ethics, the authors propose that decisions about openness should be made on a case-by-case basis, considering power dynamics, possible harms, and community values instead of a one-size-fits-all approach.
Corporate Power and Computational Resources
The enormous computational resources required for advanced AI development inherently concentrate power in the hands of wealthy corporations.
Universities, governments, and individual researchers cannot afford the massive datasets, computation clusters, and infrastructure needed to develop cutting-edge AI models.
This creates a research environment where only a few tech giants can participate at the highest levels, giving them control over AI’s development path and applications. The paper frames this as a manifestation of late capitalism, where corporations monopolize access to these technologies and rely on this monopolization for their financial success and survival.
AI Universalism and Human Reduction
The final provocation challenges how AI systems approach human experience and identity. The authors argue that AI development has embraced a “data episteme”—a worldview where everything about humanity can be understood through data collection and statistical analysis. This transforms the rich complexity of human culture into standardized “content” that can be easily processed, measured, and replicated.
The paper connects this reductive view to larger historical patterns rooted in European modernity and colonialism, where human differences were classified and hierarchically ordered.
When AI systems present themselves as “universal knowledge machines,” they often erase the specific cultural, historical, and social contexts that shape human experience.
The authors suggest that humanities scholarship offers alternatives by emphasizing human life’s moral, expressive, and contemplative aspects that resist data-driven abstraction, potentially guiding AI development toward technologies that serve rather than reduce human complexity.
Moving Forward Together
Two things stand out to me about this paper: its critiques and its vision for a path forward. The authors advocate for meaningful collaboration between AI researchers and humanities scholars, explaining that expertise in the humanities is essential for developing AI that enhances our understanding of human experiences and cultures.
As someone invested in the evolution of human-centered AI, I believe this interdisciplinary approach is both valuable and necessary.
The most intriguing questions in AI are not purely technical: they concern meaning, culture, bias, power, and what it means to be human. Humanities scholars have spent centuries developing sophisticated theories and methods for precisely these questions.
We must bridge the gap between technical development and humanistic understanding if we want to build AI that serves humanity in all its complexity and diversity. This paper presents a roadmap for aspects of this journey.