The US has led AI innovation; however, global competition is rising. In response to a call from the White House regarding the future of Artificial Intelligence, faculty from the Cornell Ann S. Bowers College of Computing and Information Science and Cornell Tech submitted a proposal emphasizing the critical need for sustained investment in AI education and research.
The group argues that the key to maintaining US dominance lies in pioneering radically new technologies that transform the playing field.
Contributing Authors:
Bharath Hariharan, Associate Professor, Cornell University
Yoav Artzi, Associate Professor, Cornell University
Tanya Goyal, Assistant Professor, Cornell University
Haym Hirsh, Professor, Cornell University
Thorsten Joachims, Professor, Cornell University
John Thickstun, Assistant Professor, Cornell University
Kilian Weinberger, Professor, Cornell University
This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.
March 14, 2025
Executive Summary
The US has historically dominated the field of AI, and in recent years the US private sector has produced revolutionary AI products such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. However, this historical US dominance is at risk. Companies in other countries, notably China, have developed competing AI products that potentially outperform their US counterparts, and those countries have invested in educating their workforces in AI. US strength lies not in further engineering existing technology, which other nations can do too, but in developing radically new technology that transforms the playing field. Historically, such revolutionary technology has emerged from fundamental research carried out in US universities.
As such, to preserve the US technical lead, AI policy must continue to invest aggressively in the fundamental AI research needed to create the next AI revolution. Taking lessons from history, we make the following concrete recommendations:
1. Bolster funding of fundamental academic AI research through NSF, ONR, DARPA and other funding agencies that have a strong track record.
2. Expand national compute resources and broaden access to them, removing bureaucratic barriers.
3. Incentivize the private sector to fund fundamental research in universities through new mechanisms of public-private partnerships.
Securing America’s Future in AI
In spite of the many successful AI companies in the US, the US technological lead in AI is rapidly narrowing. Companies in other countries now offer AI products with capabilities similar to the most recent US offerings. A notable example is China. DeepSeek, a Chinese artificial intelligence company, recently released its own LLM, which outperformed even some of the best US-based products on some benchmarks [1]. DeepSeek’s massive engineering effort also substantially lowered the cost of training these models, a development that significantly impacted the valuations of US-based companies including NVIDIA. It is thus very likely that China and other countries already have the in-house expertise to build AI products competitive with those of US companies, and the US technological lead in AI has all but evaporated. Importantly, other countries are no longer copying US efforts, but are increasingly spearheading new innovations.
We posit that the US cannot re-establish its dominance in AI simply by engineering existing technologies better, because other nations can do so too. Instead, history tells us (see below) that the US’s unique strength is in inventing radically new technology that fundamentally transforms the playing field. LLMs were one such transformational advance, but it is time to move on: to sustain dominance in AI, we need to invest in exploratory research that develops the next generation of AI systems.
Indeed, there is ample reason to believe that contemporary LLMs have fundamental flaws and that their capabilities have started to plateau. Their inability to perceive the physical world [2] limits their deployment in robots in the factory or home. Their tendency to hallucinate is a fatal flaw in military or safety-critical applications. They integrate poorly with scientific and engineering pipelines, which have precise mathematical models and constraints that must be satisfied. A radically new approach is needed to address these limitations, with the potential to unlock massive gains in manufacturing, defense, scientific discovery, and many other applications.
Such radical ideas rarely come from industry, because they are initially risky and offer limited immediate payoff. Fortunately, US universities are ideally placed to take on this risk and explore potentially transformative new ideas. AI policy must fund this fundamental research in universities, as agencies like DARPA, ONR, and NSF have done in the past. As we discuss below, funding such fundamental research has paid off multiple times; in fact, many of the advances that we laud today trace their roots to precisely this funding. If we are to re-establish US dominance in AI, AI policy must bolster this robust public funding of academic research into next-generation AI.
Maintaining a technological lead also requires a workforce educated in the latest advances. Other nations like China are rapidly developing workforces with strong technical competence in AI, and to lead, the US must do the same. Here too, US universities have an outsized role to play. By ensuring that US academics are pushing the frontiers of AI, we also ensure that American students learn from those at the very frontier, and are thus educated in the technologies of the future rather than those of the past.
Historical Context
The need for radical and transformative ideas, and the role of academia in producing them, is evident in the history of AI, as in the history of computer science itself. Neural networks and machine learning grew out of the perceptron, introduced by Frank Rosenblatt at Cornell in 1958 [3]. Similarly, the field of computer vision arose out of a summer project at MIT in 1966 [4]. Reinforcement learning, which powers both the large language models of today and other high-profile successes such as AlphaGo, was pioneered by Andrew Barto and his then-student Richard Sutton at the University of Massachusetts in the 1980s (Sutton and Barto were recently awarded the Turing Award) [5]. As another example, vector space models of text, as well as relevance feedback, both fundamental to search engines and to the structure and training of language models today, were first developed by Gerard Salton at Harvard and Cornell [6]. All these ideas had limited immediate utility and were thus too risky for private enterprise at the time. As such, it was academic research that pursued them. Only several decades later, when the techniques had matured enough to be practically useful, were they incorporated with great success into private sector innovation.
This pattern continues into the modern era. The global comeback of neural networks was prompted by the meticulous collection of the ImageNet dataset, led at Stanford by Fei-Fei Li [7]. Modern image generation models are built on diffusion models, which were developed by researchers at Berkeley [8]. Going forward, diffusion-based language models, developed at Stanford and Cornell, are being explored as a radically new approach [9,10]. The academic research of today continues to power the major innovations of tomorrow.
US government policy has played a major role in enabling the above academic research, and thus the subsequent private sector innovation. For instance, the development of ImageNet, which launched the modern era of neural networks, was funded by an NSF grant. The invention of diffusion models was funded by NSF and ONR. The funding amounts were remarkably small compared to their outsized downstream impact. For example, the NSF graduate fellowship that funded the students who invented diffusion models totals about $160K. In contrast, the revenue of Midjourney, a US image generation company that capitalizes on this advance, is rumored to be about $200M: three orders of magnitude higher. Thus, the history of AI has been one of small government grants funding radical ideas that yield multi-billion dollar industries.
The pipeline from academic research to industry involves not just the movement of ideas but the movement of people as well. The behemoths of today were the entrepreneurial ventures of graduate students yesterday. Funded by NSF, DARPA, and NASA, Stanford PhD students Sergey Brin and Larry Page developed the search algorithm that became Google. Three of the founders of OpenAI, the current leader in the AI race, were graduate students at Berkeley, Stanford, and NYU. This entrepreneurial trend continues: there are new startups built by Berkeley and Stanford roboticists, Stanford and Cornell machine learning researchers, computer vision researchers out of MIT, and many more. Once again, small government investments in academic research power the cycle of innovation.
Beyond startups, the workforce powering the major AI companies today has also benefited greatly from being educated by faculty at the forefront of research. This includes both graduate and undergraduate students. For instance, at the graduate level, 66% of CS PhD students educated by US universities are hired by industry [11], overwhelmingly by US companies. At the undergraduate level, universities introduce tens of thousands of American students to AI each year. This strong workforce is the lifeblood of the American private sector.
Taken together, the history of AI is a testament to the disproportionate impact of government funding of academic research. Given this impressive return on investment and the goal of maintaining US leadership in AI, our recommendation is to expand robust funding of academic AI research.
Recommendations
Our key recommendation is to fund fundamental research in AI to develop the next revolution in AI and power the coming decades of technological innovation and its economic rewards, in the process re-establishing American leadership. This is even more important today than in the past, because other nations have now seen the value of AI research, and may develop the next revolution first if the US does not take the lead.
This fundamental research is needed to produce a new generation of AI systems that removes critical flaws in the current generation of language-model-based AI. Some of these flaws are described below:
1. An open challenge is building AI systems that can perceive and interact with the physical world, especially in cluttered, dynamic, “open world,” and potentially adversarial environments. For example, a robot in the kitchen might need to reach into cluttered cabinets with unknown organization while navigating around hot stoves and running children. Such challenges are also pervasive in medical settings, rescue operations, and even on the factory floor. Solving this challenge requires fundamentally new advances in robotics, including multi-sensory perception, fine-grained control, and the integration of perception with planning and reasoning.
2. AI systems today are extremely data hungry, limiting their usefulness in specialized domains like national security or science and engineering, where data collection and annotation require expertise and are expensive. There is no reason to believe that so much data is indeed necessary; people are known to learn from orders of magnitude less. It is also not at all clear that passing over the same data multiple times is a necessity. Fundamental research is needed to identify new learning techniques that can learn from minimal training data, possibly even in a single training pass.
3. AI can potentially accelerate discovery in science, mathematics, and engineering, as evidenced by specific applications like protein folding. However, broad acceleration across all scientific domains requires new AI systems that can synthesize existing scientific knowledge, autonomously perform experiments, and produce formal mathematical reasoning. Current systems are fundamentally flawed in this regard, given their tendency to hallucinate and their inability to perform precise mathematical reasoning.
4. Real-world operations are often intolerant of failure. As such, we need AI systems that can provably achieve specified goals. Unfortunately, existing AI systems offer no such guarantees and can be difficult to control or secure. We need new kinds of AI systems whose behavior can be provably constrained and precisely controlled. This is essential for deployment in safety-critical systems.
It is essential that AI policy fund basic research into these areas, among others. This can be done by bolstering the mandate of existing funding agencies such as NSF, DARPA, and ONR, which have a long and robust history of funding exactly this kind of research. Our recommendation is that, through these agencies, AI policy ensure large-scale investment in fundamental AI research.
A key characteristic of AI systems is the emergence of new capabilities at larger scales. Thus, a key necessity for any kind of fundamental AI research is the availability of large compute clusters. In addition to research funding, AI policy should prioritize the creation and expansion of national compute resources (such as the NAIRR [12]) and reduce bureaucratic hurdles to accessing them, so that any American institution can innovate without significant capital investment.
Finally, complementing government funding, a key thrust of AI policy should be to incentivize the private sector to invest in fundamental academic research. This would benefit the investing companies by providing both a constant source of new ideas and a workforce educated at the evolving frontier. New public-private collaboration models are needed, since companies alone cannot shoulder the risk and investment horizon of basic research.
Conclusion
In sum, the US lead in AI is narrowing as existing systems plateau in their abilities and other nations catch up in the AI race. To re-establish US dominance, our key recommendation is to focus on fundamental AI research that invents the next generation of AI systems. Building on lessons from the past, this is best done through robust government funding of fundamental AI research, incentives for the private sector to fund the same, and the expansion of, and improved access to, shared compute resources at the national scale.
References
1. Liu, Aixin, et al. “Deepseek-v3 technical report.” arXiv preprint arXiv:2412.19437 (2024).
2. Fu, Xingyu, et al. “Blink: Multimodal large language models can see but not perceive.” European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.
3. Rosenblatt, Frank. “The perceptron: a probabilistic model for information storage and organization in the brain.” Psychological review 65.6 (1958): 386.
4. Papert, Seymour A. “The summer vision project.” (1966).
5. Turing Award citation: https://awards.acm.org/about/2024-turing
6. Salton, Gerard, A. Wong, and C. S. Yang. “A vector space model for automatic indexing.” Communications of the ACM 18.11 (1975): 613-620. https://doi.org/10.1145/361219.361220
7. Russakovsky, Olga, et al. “Imagenet large scale visual recognition challenge.” International Journal of Computer Vision 115 (2015): 211-252.
8. Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” Advances in neural information processing systems 33 (2020): 6840-6851.
9. Li, Xiang, et al. “Diffusion-lm improves controllable text generation.” Advances in neural information processing systems 35 (2022): 4328-4343.
10. Sahoo, Subham, et al. “Simple and effective masked diffusion language models.” Advances in Neural Information Processing Systems 37 (2024): 130136-130184.
11. “The AI Index: Emerging Trends in AI Education.” Computing Research Association. https://cra.org/crn/2021/04/the-ai-index-emerging-trends-in-ai-education
12. National Artificial Intelligence Research Resource (NAIRR) Pilot: https://nairrpilot.org/