In 2014, the Associated Press began using artificial intelligence to write thousands of corporate earnings reports. Not drafts. Not outlines. Published articles, carrying the AP's name, read by millions of people, most of whom never noticed the brief disclosure that a machine had written them.
The move caused no scandal. It was a preview. What the AP pioneered a decade ago, every major media organization in the world is now doing openly, and the transformation is moving faster than the journalism industry's ability to understand what it is losing.
AI is not coming for media. It has already arrived. The question is no longer whether artificial intelligence will reshape how news is created, distributed, and consumed. It is whether anyone with power over that process is paying attention to what the reshaping is actually doing.
What AI Can Do — and What It Is Actually Being Used For
The capabilities of AI in media are genuinely impressive. Large language models can produce grammatically clean, correctly structured articles on routine, data-driven topics — earnings reports, sports scores, weather forecasts, election results — faster and cheaper than any human journalist. Computer vision systems can analyze satellite imagery and flag changes — troop movements, deforestation, construction — that would take human analysts weeks to find. Audio transcription tools can convert hours of interview recordings into searchable text in minutes.
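It is worth noting how mechanically simple the oldest of these capabilities is. The earliest automated earnings coverage was not a large language model at all but template filling over structured financial data. The following is a minimal sketch of that approach; the company, figures, and wording are invented for illustration, and production systems draw on licensed data feeds and far larger template libraries.

```python
# Minimal sketch of template-driven story generation, the approach behind
# early automated earnings coverage. All names and figures are invented
# for illustration; real systems pull from licensed financial data feeds
# and choose among many templates and phrasing variants.

def earnings_story(company: str, quarter: str, eps: float,
                   eps_expected: float, revenue_billions: float) -> str:
    """Fill a fixed narrative template from structured earnings data."""
    if eps > eps_expected:
        verdict = "beating"
    elif eps < eps_expected:
        verdict = "missing"
    else:
        verdict = "matching"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"{verdict} analyst expectations of ${eps_expected:.2f}, "
        f"on revenue of ${revenue_billions:.1f} billion."
    )

print(earnings_story("Acme Corp", "second-quarter", 1.42, 1.31, 8.7))
```

The output is reliable precisely because it is constrained: the system can only say what the template and the data allow.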
These are real capabilities with real value. The problem is that the media industry’s actual deployment of AI has not been primarily focused on using these capabilities to produce better journalism. It has been focused on using them to produce cheaper journalism.
Beginning in late 2022, CNET quietly published dozens of AI-generated articles on personal finance topics — and then quietly issued corrections on more than half of them after the articles were found to contain significant factual errors. Sports Illustrated published AI-generated articles under fake bylines, complete with AI-generated author photographs. Several regional newspaper chains, facing financial pressure, began using AI to generate local news content with minimal human oversight, producing articles about local government, crime, and community events that were technically accurate but substantively hollow.
The pattern is consistent. AI is being used not to enhance journalism but to replace the labor costs of journalism while maintaining the appearance of journalistic output. The result is content that looks like news but lacks the editorial judgment, source relationships, and contextual understanding that distinguish reporting from information processing.
The Attention Economy and AI Amplification
AI’s impact on media is not limited to content production. The algorithmic systems that determine what content reaches which audiences — recommendation engines, content ranking systems, personalization algorithms — are also AI systems, and they are making decisions with profound consequences for public information.
These systems are optimized for engagement, not accuracy. They amplify content that generates emotional responses — outrage, fear, tribal solidarity — regardless of whether that content is true, favoring the emotionally provocative over the analytically rigorous and systematically disadvantaging the kind of careful, contextualized journalism that democratic societies most need. They also create filter bubbles that show users primarily content confirming their existing beliefs, reducing exposure to information that challenges or complicates their worldview.
A 2023 study by researchers at MIT found that AI-driven content recommendation systems on major social platforms increased users’ exposure to politically extreme content by an average of 60 percent compared to chronological feeds. The systems were not designed to radicalize users. They were designed to maximize watch time. Radicalization was the externality.
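To make the mechanism concrete, here is a deliberately simplified sketch of the difference between a chronological feed and an engagement-ranked one. The posts and the engagement scores below are hypothetical, and real ranking models learn their predictions from behavioral data rather than hand-assigned numbers, but the objective they optimize is the same — and nothing in that objective asks whether the content is true.

```python
# Hypothetical illustration: the same three posts ordered two ways.
# Scores and headlines are invented; real systems learn engagement
# predictions from user behavior, but optimize the same target.
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    age_hours: float             # time since publication
    predicted_engagement: float  # model's estimate of clicks/watch time

feed = [
    Post("City council passes budget after routine debate", 1.0, 0.05),
    Post("What the new zoning rules actually mean", 3.0, 0.08),
    Post("OUTRAGE: what THEY don't want you to know", 9.0, 0.61),
]

# Chronological feed: newest first, indifferent to emotional pull.
chronological = sorted(feed, key=lambda p: p.age_hours)

# Engagement ranking: highest predicted engagement first, indifferent to truth.
engagement_ranked = sorted(feed, key=lambda p: p.predicted_engagement,
                           reverse=True)

print("Chronological lead:", chronological[0].headline)
print("Engagement lead:  ", engagement_ranked[0].headline)
```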
The Trust Collapse
The Reuters Institute Digital News Report has tracked public trust in news media annually for over a decade. The trend is unambiguous. Trust in news media has declined in most countries surveyed, with the steepest declines in countries with the most active social media ecosystems.
AI is accelerating this decline in two ways. First, the proliferation of AI-generated content — some of it accurate, much of it not, all of it difficult to distinguish from human-reported journalism — is making it harder for audiences to know what to trust. When any piece of text could be AI-generated, the credibility signals that audiences have historically used to evaluate sources — known publication, recognized byline, editorial standards — become less reliable.
Second, AI-generated misinformation is becoming increasingly sophisticated. Deepfake video and audio, synthetic images, and AI-generated text that mimics the style of credible sources are being deployed in deliberate disinformation campaigns by state actors, political operatives, and commercial interests. The 2024 US election cycle saw the first widespread deployment of AI-generated political advertising, AI-generated candidate impersonations, and AI-generated news articles designed to suppress voter turnout in specific demographics.
The infrastructure for detecting and countering AI-generated misinformation is significantly less developed than the infrastructure for producing it. This asymmetry is not accidental. The actors who benefit from disinformation have stronger incentives to invest in offensive capabilities than democratic institutions have to invest in defensive ones.
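For a sense of what the defensive side currently has to work with, consider one common detection heuristic: machine-generated text tends to look statistically unsurprising — low perplexity — to a language model. The sketch below assumes the open-source Hugging Face transformers library and the small GPT-2 model; it shows the idea and hints at its weakness, since plenty of formulaic human writing scores low too.

```python
# Sketch of a perplexity-based detection heuristic. AI-generated text tends
# to score low (unsurprising to the model), but so does formulaic human
# prose, which is why this signal alone has high false-positive rates.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Lower perplexity is weak evidence of machine generation; no fixed
# threshold separates it cleanly from routine human writing.
print(perplexity("The quarterly results exceeded analyst expectations."))
```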
The Ownership Concentration Problem
The AI systems that are reshaping media are not distributed evenly across the information ecosystem. They are concentrated in the hands of a small number of technology companies — primarily Google, Meta, Microsoft, and Apple — whose algorithmic decisions affect what billions of people read, watch, and believe.
These companies did not seek editorial power. They acquired it as a side effect of building platforms that people use. But the exercise of that power — through algorithmic curation, content moderation decisions, and recommendation systems — is as consequential for public information as any editorial decision made by a traditional media organization.
The governance of this power is severely underdeveloped. The companies making these decisions are accountable to their shareholders, not to the public whose information environment they are shaping. Their algorithmic decisions are proprietary and subject to limited external scrutiny. Their content moderation policies are applied inconsistently across languages, geographies, and political contexts, in ways that systematically disadvantage non-English-speaking populations and political communities that lack the resources to navigate platform appeals processes.
What Local News Lost
The financial crisis of local journalism — accelerated by the shift of advertising revenue to digital platforms, and now deepened by AI — has consequences that are only beginning to be measured.
Research by the Shorenstein Center at Harvard has documented a consistent relationship between the decline of local newspapers and increases in municipal corruption, reduced voter turnout, and higher government borrowing costs. When nobody is watching local government, local government behaves differently. The accountability function that local journalism performs is not replicated by national media or social media platforms — and it is not replicated by AI-generated content that aggregates wire service reports without original local reporting.
The communities most affected by local news deserts are not wealthy urban centers with multiple competing news sources. They are small cities, rural areas, and low-income communities where a single local paper — now closed or reduced to a skeleton staff — was the primary source of civic information. These communities are now information dark zones, and AI is not filling the void.
The Journalist Who Cannot Be Replaced
There are things AI cannot do that journalism requires.
AI cannot cultivate a source who trusts a specific journalist with sensitive information because of a relationship built over years. It cannot sit in a courtroom and observe the moment a defendant’s composure breaks. It cannot walk into a neighborhood destroyed by flooding and ask the right question of the right person at the right time. It cannot make the editorial judgment that a story is important even though it will not generate clicks, because accountability journalism serves a public interest that engagement metrics do not capture.
These capabilities are not mystical. They are the product of human judgment, human relationships, and human presence in the world that AI systems — however sophisticated — do not possess. The risk is not that AI will replace journalism. The risk is that the economic incentives created by AI will lead media organizations to behave as if AI has replaced journalism, eliminating the human capacities that cannot be replicated while believing they have maintained the journalistic function.
That substitution — cheaper content for real reporting — will not be visible in the short term. It will be visible in the institutions that go unwatched, the abuses that go unreported, and the communities that lose the information they need to participate in their own governance.
By then, the journalists who could have reported those stories will have found other work. And the organizations that eliminated them will have moved on to the next efficiency.
If this analysis interests you, read next: The AI Arms Race Nobody Voted For