Report: The 2024 Critical AI Funding Landscape

This report was prepared by HUML graduate researcher Owen Leonard in December of 2024. You can view the introduction and recommendations online below, or download the full report as a PDF.

Introduction

This document provides an overview of Critical Artificial Intelligence (Critical AI or CAI) in the United States, with a particular focus on projects supported by major humanities funding organizations and the outputs of those projects. Following Rita Raley and Jennifer Rhee (2023), this document defines CAI as a mode of engagement with artificial intelligence that, “while recognizing the reductive, even absurd aspects of the term,” nonetheless undertakes an analysis of AI “as an assemblage of technological arrangements and sociotechnical practices, as concept, ideology, and dispositif.” In particular, CAI counters “the pervasive presentism of the discourse of AI” by attending to its history both as technology and as ideology.

Although broad, this definition is also exclusionary—AI research institutes funded through the NSF’s National Artificial Intelligence Research Institutes program, for instance, would not be considered part of the CAI landscape. The program’s 2023 funding call (NSF 2023) seeks “new methods for strengthening AI,” reflecting a pervasive assumption among funding bodies in the sciences: that the goal of research is to make progress in AI, where progress is typically understood to mean performance improvements on narrowly defined technical problems. Some of the NSF institutes engage with research in the social sciences and ostensibly seek to promote “trustworthiness” or “equity,” but these efforts remain limited by a broadly techno-optimist framing that investigates how, rather than whether, contemporary AI can be used for social good.

The NSF-funded Institute for Trustworthy AI in Law & Society, which investigates strategies for participatory governance in AI, does so by “explor[ing] how policymakers at all levels in the U.S. and abroad can foster trust in AI systems” (TRAILS n.d.)—apparently precluding the very real possibility that distrust in AI systems is the more desirable research and policy outcome. Another NSF recipient, Carnegie Mellon’s AI Institute for Societal Decision-Making, looks to build “hybrid human-AI decision systems that leverage AI capability while ensuring social acceptance” (Carnegie Mellon University n.d.). Leveraging AI comes first; ensuring social acceptance comes second. Both of these institutes do valuable work advancing accountability and explainability in areas where AI systems will inevitably be deployed, but the nature of NSF funding and its explicit alignment with the geopolitical goals of the United States leave little room for a broader examination of an ideology that always requires “advancement” and “promotion” (and never “critique” or “suspicion”) of AI. Hence the pressing need for the Critical AI projects reviewed below.

In addition to critical-theoretical approaches, this document also considers research into the pedagogical dangers and/or utility of AI to be Critical AI research; by its nature, such work tends to engage the broader problematic of whether AI is or can be socially useful in a way that strictly technical research often does not.

Unfortunately, the scope of this document is limited to the United States (although European organizations are sometimes involved as co-funders). Many of the projects discussed here engage researchers and case studies from across the world, but almost all are based in the U.S. and publish primarily in English-language journals. Humanities and social sciences funding outside the U.S., and especially outside the E.U., is somewhat opaque to those unfamiliar with local languages and academic and nonprofit structures. Identifying global opportunities for CAI research and funding would thus likely require a multilingual, multinational team.

Recommendations

For the moment, Critical AI is an area of work that attracts considerable interest from funding bodies—but this interest is tied at least partly to a broader AI economic boom (or bubble), the fate of which is uncertain. This document offers three recommendations for Critical AI research, with the goal of taking advantage of existing investments and ensuring that the resultant work remains relevant in a quickly changing technological and academic environment.

First, existing Critical AI projects frequently lack a significant web presence. With the partial exception of papers published in academic journals, which are typically paywalled, many of the outputs of CAI centers and initiatives are difficult or impossible for members of the public to find online. Most project sites, if they exist at all, have an “Events” page listing workshops, conferences, talks, etc.—but these entries are typically RSVPs or save-the-dates rather than recaps or reports. Together with the preference for broad, ambitious mission statements that appeal to funders, this can have the unfortunate effect of making CAI projects seem like big plans with little follow-through. Making outputs more visible online could help mitigate this problem. The 2023 AIAI Network Kickoff (AIAI 2023) and the NHC “In Our Image” Conference (National Humanities Center 2021) are exemplary in this regard—the respective webpages include video, audio, and text recaps along with recordings of speakers and panels. Although not every organization has the resources for that kind of production, a blog post summarizing a lecture series or a campus-newspaper article about a conference would go a long way towards foregrounding the actual work being done. Partnering with local humanities organizations, like those discussed under “NEH State Affiliates,” would also help to create a more robust record of CAI activities. Such groups lack the funding to support in-depth scholarly work, but they can offer valuable infrastructure for public communication.

Second, although CAI projects frequently bring together researchers from multiple disciplines alongside engineers and policy advisors, their outputs can sometimes reflect methodological siloing. For instance, valuable work is being done both from a political-economic perspective that analyzes AI in terms of unequal resource extraction and labor exploitation and from a linguistic perspective that considers AI in terms of its effects on speech and writing patterns. When outputs are limited to journal articles and monographs, restrictions on scope can inhibit conversation between these (and other) perspectives. Critical AI projects, especially long-term projects like research centers, have the opportunity to bring together various strands of critique in less formal settings like workshops and panel discussions. Making these conversations visible to other scholars and the public can emphasize the generativity of the center as a space for original ideation; without that visibility, a center risks appearing to be merely a repository for research that was happening anyway.

Finally, CAI initiatives find themselves in a somewhat paradoxical relationship with tech boosterism: on the one hand staking scholarly credibility on the thesis that AI models are not as revolutionary as their promoters claim, and on the other relying on public excitement about AI to drive interest in (and funding for) their work. To ensure that the relevance of their research outlasts the current iteration of chatbot hype while remaining seriously engaged with artificial intelligence as such, CAI scholars should work towards technospecificity: attention to the technological substrate of AI as a heterogeneous material formation that conditions its social and political effects. Critical and humanistic analysis of microchip design or model architecture is likely to prove more valuable in the long term than analysis of AI as purely a media discourse, a volatile abstraction subject to the mercurial inclinations of venture capital and popular media. Using “AI” metonymically to refer to a certain contemporary sociotechnical formation is coherent for now, but the term is likely to wear out quickly; the semantic bleaching of “AI” as valuations climb threatens to confine CAI scholarship to the very cultural moment it seeks to criticize. Simply writing “transformer-based language models are…” instead of “AI is…” would be a good first step in mitigating this problem, and further granularity would secure further relevance beyond the current hype cycle. Technospecific scholarship can ensure that critical perspectives on machine learning remain useful regardless of how AI is advertised and publicly perceived.

Read the full report