In 2019, a Uyghur man living in Xinjiang, China, received a knock on his door. He had done nothing wrong by any recognizable legal standard. He had attended a mosque. He had contacted a relative abroad. He had downloaded WhatsApp. An algorithm had flagged him. A policeman had been dispatched. And within hours, he was in a detention facility whose existence the Chinese government officially denied.
His story was documented by the Xinjiang Police Files — a cache of internal Chinese government documents and photographs leaked in 2022 that provided the most detailed evidence yet of how the surveillance state operates at the granular level of individual human lives. Not as an abstraction. Not as a policy debate. As a knock on a door at night.
The technology that made that knock possible is now being exported. And the governments buying it are not all authoritarian in the Chinese mold — but they are all discovering that the same tools built to monitor Uyghurs can be turned toward journalists, opposition politicians, ethnic minorities, and anyone else a government decides requires watching.
What the Surveillance State Actually Looks Like
The popular image of surveillance — cameras on street corners, agents reading mail — is decades out of date. The modern surveillance state operates through layers of integrated technology that would have been technically impossible fifteen years ago.
Facial recognition systems scan crowds in real time, matching faces against databases of persons of interest, flagging individuals for follow-up without any human operator actively watching. Mobile phone data — location history, app usage, communication metadata — is collected either through direct access to telecommunications networks or through malware installed on target devices. Financial transaction monitoring tracks spending patterns for behavioral anomalies. Social media analysis identifies networks of association, political sentiment, and protest organization before it materializes on the street.
None of these systems, individually, is new. What is new is their integration — the ability to combine data from multiple sources into a comprehensive behavioral profile of any individual in a monitored population, updated in real time, searchable by any authorized user, and increasingly analyzed by AI systems that can identify patterns no human analyst would detect.
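To make the integration described above concrete, here is a minimal sketch in Python. Everything in it is illustrative: the feed names, field names, and sample records are all invented for this example, and real systems fuse vastly richer data. The point is only how trivially separate collection streams, once keyed on a common person identifier, collapse into a single searchable timeline.

```python
from collections import defaultdict

# Hypothetical records from three separate collection systems, each keyed
# on the same person identifier. All values here are fabricated examples.
face_sightings = [{"person_id": "p1", "camera": "station-3", "t": 100}]
phone_pings    = [{"person_id": "p1", "cell": "tower-7", "t": 101},
                  {"person_id": "p2", "cell": "tower-9", "t": 102}]
transactions   = [{"person_id": "p1", "merchant": "bookshop", "t": 103}]

def build_profiles(*feeds):
    """Fuse events from every feed into one per-person timeline."""
    profiles = defaultdict(list)
    for feed in feeds:
        for event in feed:
            profiles[event["person_id"]].append(event)
    # Sorting by timestamp turns disjoint records into a behavioral history.
    for events in profiles.values():
        events.sort(key=lambda e: e["t"])
    return profiles

profiles = build_profiles(face_sightings, phone_pings, transactions)
# "p1" now has one timeline spanning three unrelated surveillance systems.
```

No single feed in this toy reveals much on its own; the join is what produces the profile. That asymmetry — cheap to fuse, expensive to constrain — is the core of the integration problem.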
China has built the most advanced integrated surveillance infrastructure in the world. Xinjiang is its most intensive deployment — a region of 25 million people where a predominantly Muslim ethnic minority lives under a level of monitoring that human rights organizations have described as a live-in laboratory for total surveillance.
The Export Market
China is not keeping this technology to itself.
Huawei, ZTE, Hikvision, Dahua, and a constellation of smaller Chinese technology companies have sold surveillance infrastructure to governments across Africa, Asia, Latin America, and the Middle East. The sales are often packaged as “smart city” or “safe city” solutions — public safety infrastructure that happens to include facial recognition networks, communication monitoring capabilities, and integrated data platforms.
Carnegie Endowment for International Peace research has documented Chinese AI surveillance technology deployments in at least 63 countries. The list includes democracies and autocracies alike — because the technology is sold as crime prevention infrastructure, and crime prevention is a need that governments of every political character claim to have.
The problem is that infrastructure built for crime prevention is also infrastructure built for political control. A facial recognition network that can identify a wanted criminal can also identify a protest organizer. A communication monitoring system that can detect criminal conspiracy can also detect opposition coordination. The technology does not distinguish between these uses. The government operating it does — and not always in the direction the “safe city” branding implies.
NSO Group and the Western Surveillance Industry
It would be convenient if surveillance technology were exclusively a Chinese export problem. It is not.
NSO Group, an Israeli company, developed Pegasus — malware capable of fully compromising a smartphone, accessing all communications, activating the camera and microphone, and extracting data without any action by the target. In its zero-click configurations, Pegasus requires no link, no download, no interaction from the victim; it can be installed remotely on a targeted iPhone or Android device.
NSO Group marketed Pegasus exclusively to governments, for use against criminals and terrorists. The Pegasus Project — a collaborative investigation by 17 media organizations published in 2021 — found that a leaked list of phone numbers selected for potential targeting included at least 180 journalists, 600 politicians and government officials, 85 human rights activists, and heads of state including French President Emmanuel Macron.
The clients using Pegasus against these targets included governments in Saudi Arabia, the UAE, India, Mexico, Rwanda, and Azerbaijan — a range that spans absolute monarchies, competitive democracies, and everything in between. The common thread was not ideology. It was the desire to monitor people the government found inconvenient.
NSO Group insisted it was not responsible for how its clients used its product. This argument — the neutral tool defense — is the same one advanced by every surveillance technology vendor whose product has been turned against civilians. It has not been accepted by courts in every jurisdiction, but it has been accepted by enough of them to keep the industry functioning.
The African Laboratory
Africa has become a significant market for surveillance technology — and a significant laboratory for understanding what happens when that technology is deployed in contexts with weak judicial oversight, limited press freedom, and governments facing genuine security threats that provide cover for broader surveillance programs.
Ethiopia used Pegasus against journalists and opposition figures. Rwanda used it against dissidents abroad. Uganda deployed Chinese-supplied facial recognition technology ahead of its 2021 elections — an election that independent observers described as neither free nor fair. In each case, the surveillance infrastructure was acquired under a security justification and used for political purposes.
The pattern is consistent enough to constitute a rule: surveillance technology acquired for security purposes will be used for political purposes wherever the institutional constraints preventing that use are weak. Building those constraints — independent judiciaries, press freedom, functioning opposition — takes decades. Deploying surveillance infrastructure takes months.
The Democratic Surveillance Problem
The surveillance state is not exclusively an authoritarian phenomenon. Democracies have built extensive surveillance infrastructures of their own — and the revelations of Edward Snowden in 2013 demonstrated that the most powerful of those infrastructures belonged to the United States and its Five Eyes allies.
The NSA’s bulk collection programs — PRISM, XKeyscore, upstream collection from undersea cables — operated for years under legal interpretations that the public had not approved and courts had not fully reviewed. When they were revealed, the initial government response was not accountability but damage control.
Democratic surveillance differs from authoritarian surveillance in degree and in the presence of institutional constraints — courts, oversight bodies, press freedom — that provide some check on abuse. But those constraints are imperfect, unevenly applied, and subject to erosion under the pressure of security justifications that are difficult to evaluate publicly because the relevant information is classified.
The post-Snowden reforms in the United States were real but limited. The surveillance capabilities that were revealed were not dismantled. They were adjusted, subjected to additional oversight in some cases, and continued. The infrastructure remains.
Your Phone Is the Surveillance Device
State surveillance infrastructure is powerful. But the most pervasive surveillance system in human history was not built by governments. It was built by technology companies — and most people carry it voluntarily in their pocket.
The data that smartphones generate — location history, communication content, browsing behavior, purchasing patterns, social networks, physical movement — is more comprehensive than anything the most ambitious Cold War surveillance state could have collected on its most-watched citizens. It is collected continuously, stored indefinitely, and available to governments through legal process, through data broker markets, or through the kind of direct access arrangements that intelligence agencies have developed with telecommunications companies.
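How revealing bare metadata can be is easy to demonstrate. The sketch below — with fabricated pings, an invented tower naming scheme, and an assumed "night hours" heuristic — infers a person's likely home location from nothing but timestamped cell-tower records, no message content required.

```python
from collections import Counter
from datetime import datetime, timezone

# Fabricated location pings: (unix_timestamp, cell_tower_id).
pings = [
    (1_700_000_000, "tower-A"),  # late evening (UTC)
    (1_700_003_600, "tower-A"),  # an hour later
    (1_700_040_000, "tower-B"),  # mid-morning, elsewhere
    (1_700_086_400, "tower-A"),  # late evening, next day
]

def likely_home(pings, night_start=21, night_end=6):
    """Guess a probable home tower: the cell seen most often at night.

    The night-hour window is an assumed heuristic, not a real standard.
    """
    night_cells = Counter()
    for ts, cell in pings:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        if hour >= night_start or hour < night_end:
            night_cells[cell] += 1
    return night_cells.most_common(1)[0][0] if night_cells else None
```

A few lines of arithmetic on "harmless" metadata yield home location; the same approach yields workplace, routine, and deviations from routine, which is why metadata collection is surveillance in substance regardless of what the content rules say.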
The surveillance state and the commercial data economy are not separate systems. They are integrated — the commercial system generating the data, the state accessing it through mechanisms that vary in their legality and transparency but are consistent in their direction of travel.
The Resistance
The spread of surveillance technology has not gone entirely uncontested.
The EU’s General Data Protection Regulation has imposed genuine constraints on data collection and use within Europe — constraints that have affected the behavior of global technology companies operating in European markets. Several US cities and states have banned government use of facial recognition technology. Courts in multiple jurisdictions have ruled that certain surveillance practices violate constitutional or human rights protections.
These are real constraints. They are also geographically limited, legally contested, and outpaced by the speed of technological development. A facial recognition ban in San Francisco does not affect how the technology is deployed in Xinjiang, Addis Ababa, or Riyadh.
The surveillance technology genie is not going back in the bottle. The question is whether the institutional frameworks for governing it — nationally and internationally — can be built fast enough to prevent its worst applications from becoming normalized.
The knock on the door in Xinjiang is not a distant problem. It is a proof of concept. And the companies that built the technology that enabled it are still in business, still selling, and still insisting that what governments do with their products is not their responsibility.
If this analysis interests you, read next: Autonomous Weapons: Who Is Responsible When AI Kills?

