An analysis of 5,290 papers from NeurIPS 2024 finds that direct US-China AI lab collaboration remains a small but persistent fraction of global research, and that Meta's Llama models appeared in more than 100 papers from China-based researchers even as export controls tighten.
A comprehensive analysis of AI research presented at the NeurIPS conference suggests that direct collaboration between US and Chinese AI labs, while politically sensitive, continues to form a measurable part of the global research ecosystem. The study, which examined 5,290 papers from the 2024 conference, found that 141 papers—approximately 3%—involved collaboration between researchers based in the US and China. This figure represents a slight increase from 134 such papers identified in the previous year's analysis.
The analysis, conducted using OpenAI's Codex to parse author affiliations and acknowledgments, provides a data-driven look at the practical realities of AI research amid escalating geopolitical tensions. While US export controls have restricted the flow of advanced AI chips and certain technologies to China, the research community appears to maintain channels for academic and technical exchange. The steady collaboration rate suggests that the scientific pursuit of AI advancement often operates in parallel to, and sometimes despite, national policy restrictions.
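The article names Codex but does not describe the parsing pipeline in detail. As a rough illustration of the general approach, the sketch below uses the standard OpenAI Python SDK to tag each paper's affiliation block with country codes and then counts papers spanning both the US and China. The model name, prompt, and the structure of the `papers` records are illustrative assumptions, not the study's actual code.

```python
# Minimal sketch of LLM-assisted affiliation tagging (not the study's actual pipeline).
# Assumes: the OpenAI Python SDK, an API key in OPENAI_API_KEY, and paper records
# with their raw affiliation text already extracted.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Given the author affiliations of a research paper, list the countries "
    "where the affiliated institutions are based. Reply with a JSON array of "
    'ISO country codes only, e.g. ["US", "CN"].'
)

def affiliation_countries(affiliation_text: str, model: str = "gpt-4o-mini") -> set[str]:
    """Ask the model which countries appear in a paper's affiliation block."""
    resp = client.chat.completions.create(
        model=model,  # model name is an assumption; the reported analysis used Codex
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": affiliation_text},
        ],
    )
    # Naive parsing for brevity; a real pipeline would validate the model's output.
    return set(json.loads(resp.choices[0].message.content))

def count_us_china_collabs(papers: list[dict]) -> int:
    """Count papers whose affiliations span both the US and China."""
    return sum(
        1 for paper in papers
        if {"US", "CN"} <= affiliation_countries(paper["affiliations"])
    )
```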
One of the most telling findings from the analysis is the prevalence of Meta's Llama model in Chinese research. The open-source large language model was featured in 106 papers authored by Chinese researchers. This highlights a critical dynamic in the current AI landscape: while US companies and researchers face restrictions on collaborating directly with Chinese entities, open-source models released by American firms can still be freely used, modified, and built upon by Chinese labs. This creates a pathway for Chinese researchers to advance their AI capabilities using cutting-edge tools developed in the US, even as direct institutional partnerships are discouraged or prohibited.
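A figure like "Llama in 106 papers" implies a second, simpler pass over the same corpus: flagging which China-affiliated papers reference the model family. A keyword-based sketch is below; the `countries` and `full_text` fields are assumptions carried over from the previous snippet, and the actual analysis presumably relied on the LLM-parsed text rather than raw pattern matching.

```python
# Illustrative sketch of counting papers that mention a Llama model.
# Field names and the matching rule are assumptions, not the study's method.
import re

LLAMA_PATTERN = re.compile(r"\bllama[-\s]?\d*\b", re.IGNORECASE)  # matches "Llama", "Llama 3", "LLaMA-2"

def count_llama_papers(papers: list[dict]) -> int:
    """Count China-affiliated papers whose text mentions a Llama model."""
    return sum(
        1 for paper in papers
        if "CN" in paper["countries"] and LLAMA_PATTERN.search(paper["full_text"])
    )
```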
The persistence of US-China collaboration, however small, points to several underlying factors. First, the global nature of academic research means that researchers often maintain personal and professional networks that transcend national borders. Conferences like NeurIPS serve as neutral ground where ideas are exchanged and collaborations are initiated. Second, the open-source movement in AI, championed by companies like Meta, has created a commons of tools and models accessible to all, regardless of nationality. This has allowed Chinese researchers to stay at the forefront of AI development by leveraging state-of-the-art models, even as they face hardware and software import restrictions.
From a counter-perspective, some experts argue that the 3% figure might understate the true extent of indirect collaboration. While direct co-authorship between US and Chinese lab employees may be limited, the flow of ideas through open-source repositories, preprint servers like arXiv, and international conferences creates a more diffuse form of collaboration. A Chinese researcher can build directly on a model released by a US company, incorporating it into their work without ever formally co-authoring a paper with a US-based scientist. This form of "collaboration by proxy" is harder to track but may be more significant than the raw numbers suggest.
Conversely, others argue that the 3% figure is already too high from a national security standpoint. The concern is that even limited direct collaboration could facilitate the transfer of sensitive knowledge or methodologies that enhance China's AI capabilities in ways with military or strategic implications. The analysis itself, which used OpenAI's technology to parse the papers, underscores the dual-use nature of AI tools: the same systems that serve commercial and research purposes can also be turned to intelligence and security analysis.
The findings also shed light on the practical challenges of enforcing export controls in a globally interconnected research community. While governments can restrict the sale of specific hardware or software, they cannot easily control the dissemination of ideas or the use of open-source models. This creates a complex compliance environment for researchers and institutions, who must navigate both legal restrictions and the open, collaborative ethos of the academic AI community.
Looking ahead, the trend of steady collaboration suggests that the AI research landscape will continue to be shaped by a mix of geopolitical forces and scientific imperatives. While direct institutional partnerships may become more difficult, the use of open-source tools and the informal networks of the research community will likely ensure that knowledge continues to flow across borders. For policymakers, the challenge will be to balance national security concerns with the need to maintain the United States' leadership in AI innovation, which has historically been fueled by global collaboration and open exchange.
For the broader tech community, this analysis serves as a reminder that the AI race is not just a competition between companies or countries, but also a collective endeavor to advance human knowledge. The fact that US and Chinese researchers continue to collaborate, even in a limited way, indicates that the pursuit of scientific breakthroughs often transcends political divides. However, as AI becomes increasingly integrated into economic and military systems, the tension between open collaboration and national security is likely to intensify, shaping the future of global AI research in profound ways.
