This Leaked U.S. Memo Shows the NSA Was Testing Emotion AI on Civilians
At first glance, the document didn’t seem particularly dramatic: another government memo, with bureaucratic language, plain formatting, and black redaction bars scattered across the page. But when I read it carefully, line by line, I noticed something far more disturbing.
The National Security Agency had been testing “emotion AI” systems using civilian data, according to the leaked memo that was making the rounds among policy analysts. On paper, the concept was fairly straightforward: train algorithms to identify human emotions using facial expressions, voice patterns, and behavioral cues collected from digital communications. The implications feel much less neat in real life.
| Category | Details |
|---|---|
| Agency | U.S. National Security Agency (NSA) |
| Headquarters | Fort Meade, Maryland, United States |
| Primary Role | Signals intelligence and cybersecurity |
| Controversy | Surveillance programs and classified data collection |
| Technology Discussed | Emotion AI (affective computing) analyzing facial expressions, voice tone, and behavioral patterns |
| Legal Framework | Surveillance laws including the Foreign Intelligence Surveillance Act (FISA) |
| Public Debate | Privacy, civil liberties, and AI governance |
| Reference Source | https://www.nsa.gov |
Affective computing, also known as emotion AI, has been quietly evolving for years. University and tech researchers have developed systems that can analyze minute variations in voice pitch, facial expressions, and even typing speed. Proponents contend that these tools could enhance mental health monitoring, healthcare diagnostics, and customer service.
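The kind of feature analysis described above can be sketched as a toy model. Everything here, the thresholds, the feature choices, and the labels, is invented purely for illustration; real affective-computing systems rely on trained acoustic and visual models, not hand-set cutoffs, and this sketch has no connection to any actual system:

```python
import statistics

def classify_arousal(pitch_hz, key_intervals_s):
    """Toy arousal classifier from two illustrative signals.

    pitch_hz: sampled voice pitch values in Hz
    key_intervals_s: gaps between keystrokes in seconds
    All thresholds below are hypothetical.
    """
    pitch_var = statistics.pvariance(pitch_hz)          # pitch instability
    typing_rate = 1 / statistics.mean(key_intervals_s)  # keystrokes per second

    # Unstable pitch plus rapid typing -> "agitated" in this toy model
    if pitch_var > 400 and typing_rate > 5:
        return "agitated"
    # Steady pitch plus slow typing -> "calm"
    if pitch_var < 100 and typing_rate < 3:
        return "calm"
    return "neutral"
```

Even this trivial sketch shows why researchers are skeptical: the same pitch variance could reflect excitement, a head cold, or a bad microphone, and the model has no way to tell the difference.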
The technology takes on a very different character, however, once intelligence services show interest.
The leaked memo suggests the NSA was exploring whether emotion detection could help analysts interpret communications more effectively. An anxious pause in a video feed, a tense voice on the phone, or a slight tremor in speech patterns. Theoretically, these signals could indicate distress or deceit.
The researchers may have considered the work purely technical, just another dataset and model to train. Reading between the lines, however, suggests that the boundary separating experimentation from surveillance blurs quickly.
That kind of hazy boundary has long been connected to Fort Meade, the NSA’s expansive headquarters in Maryland. Some of the world’s most potent intelligence infrastructure is housed in rows of secure buildings behind guarded gates and layers of fencing. Every day, analysts in those facilities process massive amounts of digital data.
The agency spent years promising stricter oversight and more precise restrictions on domestic monitoring after the Snowden disclosures in 2013 exposed extensive surveillance programs. However, the new memo raises the possibility that AI experimentation is taking the discussion back into uncharted territory.
Civil liberties organizations have responded cautiously, though hardly quietly.
Consent is one issue. If emotion-detection systems were trained on data from ordinary people, such as emails, video calls, and social media posts, the people involved almost certainly had no idea their voices or facial expressions were being examined for psychological cues. Exactly how the datasets were collected remains unknown.
There is a lot of room for conjecture because of that uncertainty.
These days, when one walks through Washington policy circles, the conversation frequently veers toward the same unsettling question: how much of contemporary life has been turned into raw material for machine learning? Face recognition at airports, microphones built into phones, and cameras on street corners. The infrastructure is already in place.
Emotion AI merely adds another layer. Researchers acknowledge the technology is far from perfect. Human emotions are messy, culturally influenced, and often contradictory: a raised eyebrow can convey skepticism, curiosity, or nothing at all. Algorithms that try to classify these signals risk oversimplifying something fundamentally human.
However, intelligence services seldom wait for ideal instruments. According to the memo, analysts believed that emotional analysis could be used in addition to conventional surveillance techniques. Software could determine whether a speaker sounded anxious, irate, or composed rather than merely reading messages or listening to calls. In foreign intelligence or counterterrorism operations, that kind of context might seem helpful.
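The idea of software labeling a speaker as anxious, irate, or composed can be illustrated with a minimal lexical sketch. The word lists and scoring rule are entirely hypothetical, and real systems would combine acoustic cues with far more sophisticated language models; this is only meant to show the shape of the classification step:

```python
# Hypothetical cue-word lists; invented for illustration only.
ANXIOUS_CUES = {"worried", "afraid", "nervous", "unsure"}
IRATE_CUES = {"angry", "furious", "outraged", "unacceptable"}

def tone_of(transcript):
    """Assign a coarse tone label by counting cue words in a transcript."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    anxious = sum(w in ANXIOUS_CUES for w in words)
    irate = sum(w in IRATE_CUES for w in words)

    # Whichever cue category dominates wins; no cues -> "composed"
    if irate > anxious and irate > 0:
        return "irate"
    if anxious > 0:
        return "anxious"
    return "composed"
```

A sketch this crude also makes the legal stakes concrete: the moment such a function runs over a civilian's transcript, the analysis has moved beyond what was said to an inference about the speaker's inner state.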
However, the transition from studying foreign targets to testing systems on civilians—whether on purpose or not—raises challenging legal issues.
The Foreign Intelligence Surveillance Act, which is intended to restrict domestic spying, is one of the legal frameworks that U.S. intelligence agencies operate under. However, detractors have long maintained that technological advancements frequently surpass legal protections.
Perhaps the most recent example is emotion AI. It’s difficult to ignore how rapidly the AI sector as a whole is heading in the same direction. Sentiment analysis tools that measure user responses to ads are being tested by tech companies. Startups promise software that can identify employee fatigue or classroom engagement. Sensing new markets, investors continue to aggressively fund these initiatives.
As these trends converge, the line between national security technology and commercial analytics becomes blurrier than most people realize.
The leaked memo doesn’t fully explain how far the NSA’s experiments progressed. Officials have not provided much public comment beyond the usual allusions to legal oversight, and a number of sections are still classified. This silence leaves analysts searching for hints in language fragments.
For the time being, the more significant problem might be philosophical rather than technical.
In the past, surveillance focused on what people said or did. Emotion AI attempts to decipher what people feel, or at least what machines infer they feel. That shift moves observation into far more intimate territory.
The scope of contemporary intelligence work becomes nearly ethereal when one stands outside Fort Meade on a calm morning and observes workers passing through security checkpoints. Behind encrypted networks and concrete walls are thousands of mathematicians, engineers, and analysts.
Algorithms may now be researching human emotion itself somewhere within that system.
It’s unclear if that capability will be helpful or problematic. However, it’s evident from the leaked memo that the era of machines deciphering emotions has already begun.