The U.S. Startup That’s Using AI to Design Viruses—And Getting Pentagon Funding
There has always been a tense rhythm to the relationship between Silicon Valley and the Pentagon: periods of cooperation interrupted by sudden spikes of public unease. Standing outside Anthropic's glass-fronted office building in San Francisco earlier this year, watching engineers file in with laptops and coffee cups, it was hard to sense the quiet tension that surrounds the company. Inside, scientists are developing AI-based systems that can simulate intricate biological structures, including viruses. The U.S. military is watching closely.
At first, the concept sounds like it belongs in a speculative novel: algorithms that analyze genetic sequences, predict mutations, and even propose novel viral structures. Within the company, however, engineers tend to describe it less dramatically. They talk about simulation tools and mapping biological behavior, the way weather models forecast storms. The implications are harder to wave away.
| Category | Details |
|---|---|
| Company | Anthropic PBC |
| Founded | 2021 |
| Headquarters | San Francisco, California, USA |
| CEO | Dario Amodei |
| Core Technology | Advanced AI systems and biological modeling tools |
| Major Investors | Google, Amazon, venture funds |
| Government Links | U.S. Department of Defense research collaborations |
| Focus Area | Artificial intelligence, advanced modeling, national security applications |
| Reference | https://www.anthropic.com |
Anthropic was founded in 2021 by former OpenAI researchers, led by CEO Dario Amodei, who left the older organization in part over concerns about AI safety. Those concerns are still visible when you walk through the firm's offices: posters about ethical AI and "alignment research" hang on the walls, giving the space a somewhat academic feel. Yet the business has become intricately linked to the U.S. national security ecosystem, particularly as Washington's attention turns more and more to technological rivalry with China.
The Pentagon is particularly interested in AI's ability to simulate biological systems at speed. Traditional biological research involves culturing viruses, testing mutations, and observing their behavior in controlled labs, steps that can take years. AI compresses that timeline dramatically, letting researchers examine millions of theoretical variants in a matter of hours. Some defense officials frame it as a defensive tool: it helps scientists anticipate new biological threats before they manifest in the real world.
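The scale argument is easy to see with a toy illustration. The sketch below is not Anthropic's tooling or any real bio-modeling pipeline, just a minimal Python enumeration showing how the space of possible variants explodes combinatorially, which is why software can survey it while wet-lab testing cannot:

```python
BASES = "ACGT"

def point_mutants(seq: str):
    """Yield every single-point mutant of a DNA sequence.

    Each position can flip to any of the 3 other bases, so a
    sequence of length n has 3 * n single-point variants; stacking
    k mutations grows the space combinatorially. Testing each one
    in a lab takes days; enumerating and scoring them in software
    takes fractions of a second.
    """
    for i, original in enumerate(seq):
        for base in BASES:
            if base != original:
                yield seq[:i] + base + seq[i + 1 :]

variants = list(point_mutants("ACGT"))
print(len(variants))  # 3 * 4 = 12 variants for a 4-base sequence
```

Real tools score each candidate with learned models rather than listing them, but the underlying trade is the same: exhaustive exploration in silico versus slow, expensive experiments in vitro.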
On paper, that explanation makes sense. But the idea of designing viruses, however theoretical, still makes people uneasy.
The debate has gotten surprisingly heated in Washington. Tensions between Anthropic and the Pentagon surfaced earlier this year when the company refused to lift limitations on the use of its technology. Several reports claim that Anthropic drew two clear boundaries: no fully autonomous weapons using its AI systems and no widespread domestic surveillance. The Pentagon resisted strongly, seemingly unwilling to accept restrictions imposed by a private company.
The ensuing situation resembled a corporate-political impasse. The Pentagon classified Anthropic as a possible supply-chain risk, and President Donald Trump directed federal agencies to phase out the company’s AI tools. That’s as bad as it gets when it comes to defense contracts. The action is “the contractual equivalent of nuclear war,” according to one government contracts attorney.
There were differing opinions within the tech sector. In private, a few executives chastised Anthropic for endangering profitable government alliances. Others, especially engineers, seemed to respect the company’s adherence to its declared values.
In the meantime, the larger AI race continues to pick up speed.
The Pentagon has already inked several deals with leading AI companies, such as Google and OpenAI, totaling up to $200 million. Advanced algorithms, according to defense officials, are becoming crucial for cyber defense, logistics, and intelligence analysis. Simply put, biological modeling is an additional component of that puzzle. They believe that the United States could fall dangerously behind if it doesn’t investigate these technologies.
Nevertheless, it seems like everyone involved is navigating uncharted territory as the situation develops.
The concept of AI-aided biology is not wholly novel. Pharmaceutical companies have used machine learning for years to design drugs, accelerating research on proteins and vaccines, and during the COVID-19 pandemic AI tools helped researchers track new variants and analyze viral mutations. But moving that research from medicine to national security applications introduces a different kind of gravity.
Critics warn that powerful modeling tools may eventually lower the barrier to biological experimentation. If algorithms can design viruses in simulation, they argue, someone may eventually attempt to build them in the real world. The technology itself is not intrinsically malevolent, but history suggests that powerful tools rarely remain limited to their original purpose.
Anthropic's leadership seems conscious of this tension. Amodei has said repeatedly that AI systems need safeguards against risky applications, and in interviews he often sounds more like a researcher worried about unforeseen consequences than a tech founder chasing expansion. Whether those worries will endure in the face of government contracts and international competition is unclear.
It’s difficult to overlook the larger trend here. Silicon Valley businesses have been progressively closer to the defense establishment over the last ten years. Artificial intelligence, cybersecurity, and now biological modeling have all grown out of what started with cloud contracts and drone analysis software. Every year, the lines separating technology research from national security appear to become increasingly hazy.
Nevertheless, despite all the controversy, a large portion of this work continues to be done in secret—in labs with bright monitors, where researchers modify lines of code while simulations run overnight.
As this plays out, a question remains that no one seems to be able to fully address. AI could shield troops from biological threats or assist in forecasting the next pandemic. However, the same technology may also facilitate manipulation of the biological world.
For the time being, the algorithms continue to run, creating hypothetical viruses, strings of genetic code floating inside data centers, in digital space. Whether that research leads to a breakthrough in global safety or to something much more complicated remains an open question.