Mamdani’s Win Shows How Human Contact Can Defeat the Algorithm and the Chatbot

In the final weeks of New York’s mayoral race the streets hummed with familiar promises about housing, transport and trust. Yet somewhere beyond the rallies and the posters another kind of campaign was quietly under way. Voters sitting in cafés and kitchens were turning to chatbots, asking simple questions about policies and personalities, and getting confident answers. What most did not realise was that the machines had already started shaping the conversation.

The greatest modern threat to democratic choice is not the fake video or the troll farm. It is the polite, helpful chatbot that narrows choice without intending to.

The regulator’s early warning

Weeks before the Dutch general election, the Netherlands Data Protection Authority told voters bluntly: do not trust chatbots for voting advice. After twenty-one thousand tests, the agency found that popular models kept pointing to the same two parties while ignoring the rest. Deputy Chair Monique Verdier called their replies unreliable and clearly biased. The fault lay not in malice but in design: systems trained on stale data and uncertain correlations.

Across the Atlantic, the New York Attorney General had issued a similar warning a year earlier. Chatbots, her office said, were often wrong even about polling hours and locations. If a machine cannot tell you when to vote, how can it tell you whom to vote for?

Together these warnings expose the new architecture of influence. As voters migrate to conversational AI for guidance, unseen design choices begin to shape what the public thinks it knows.

The New York example

On 4 November 2025, Zohran Mamdani, a 34-year-old Assembly member from Queens and a self-described democratic socialist, defeated Andrew Cuomo and Curtis Sliwa to become the city's first Muslim mayor and its youngest in a century. His campaign rested on housing justice, transport affordability and relentless community contact.

Yet the digital air around him felt weighted toward the past. Search engines and chatbots still summarised New York politics in the language of older names. No official audit of chatbot prompts has been released, so there is no proof of misconduct. But by parity of reasoning with the Dutch findings, there was a foreseeable risk that general-purpose chatbots would emphasise the better-known candidates and bury the insurgent.

It is not a conspiracy, only inertia disguised as intelligence.

The persuasive companion

Large language models can now match and sometimes surpass human persuasiveness in short conversations. That power now lives inside billions of private exchanges. People ask these systems for comfort, for career advice, even for moral reassurance. Each response is fluent, plausible and softly directive.

Multiply that across millions of users and you have something more intimate than a social-media feed: a digital counsellor that can shape world views by tone alone. Surveys already show that more than half of adults under thirty have sought personal or professional advice from AI tools. The scale of interaction gives these models psychological leverage that no previous medium ever held.

Beyond Facebook and into the market of influence

A decade ago regulators worried about Facebook's influence on elections. That controversy, in my opinion, will soon seem small. The money now being poured into generative AI, tens of billions of dollars, buys not just software but control over the language that frames public thought. Whoever owns the models owns the metaphors.

Here lies the real dilemma. The great models, in my opinion, are controlled by a few private consortiums, billionaire founders and global investors who already dominate attention markets. Their counterpart is government, which claims to regulate them but also has its own appetite for control.

So whom do you trust: the corporate oligarchs or the political ministries? Both shape narratives and both insist they act for our good. The only honest answer is neither uncritically. Power, whether private or public, must be fenced by law and balanced by literacy.

Guardrails or gateways

The instinctive solution is more guardrails. Yet invisible filters can distort information as easily as protect it. What citizens need is not algorithmic paternalism but open access with transparent warnings. Education is the answer, not censorship. A democracy survives when its citizens can test claims, check sources and recognise the rhetoric of machines.

The analogue counter-attack

Mamdani’s campaign succeeded because it refused to live entirely online. His volunteers knocked on doors, gathered in union halls and spoke in markets and mosques. The work was old-fashioned, human and slow. Yet in a year dominated by algorithms, the warmth of a handshake proved more persuasive than a thousand automated messages.

Responsibility follows the money

  1. Transparency duty: Companies whose models handle civic or electoral questions should disclose how those queries are processed and ranked.
  2. Audit right: Independent researchers should repeat the Dutch tests in American cities and publish every prompt and result.
  3. Oversight of choice architecture: When default chat interfaces steer civic reasoning they become part of a nation’s opinion infrastructure and should be regulated accordingly.
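The audit right above can be made concrete. A minimal sketch of what independent researchers might run, assuming a hypothetical `ask_chatbot` function standing in for a real model API, is to repeat a fixed voting-advice prompt many times and tally which parties or candidates each reply names:

```python
import re
from collections import Counter

def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call.
    A real audit would query each model under test and log the raw reply."""
    return "Based on your views, Party A or Party B may suit you best."

def audit_mentions(prompts, parties):
    """Count how often each party is named across repeated prompts."""
    tally = Counter({party: 0 for party in parties})
    for prompt in prompts:
        reply = ask_chatbot(prompt)
        for party in parties:
            if re.search(re.escape(party), reply, re.IGNORECASE):
                tally[party] += 1
    return tally

# Repeat the same neutral question, as in the Dutch tests, and publish
# every prompt and tally alongside the raw replies.
prompts = ["Which party best matches my priorities?"] * 100
parties = ["Party A", "Party B", "Party C"]
print(audit_mentions(prompts, parties))
```

A heavily skewed tally, with two parties named in nearly every reply and others never mentioned, would mirror the pattern the Dutch regulator reported. The party names and the query loop here are illustrative only; a real study would vary prompts and models and release the full logs.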

These recommendations are statements of opinion, not disclosed facts. They allege no misconduct by any named company.

The caution

The modern citizen must learn what earlier generations learned about newspapers. Verify, cross-check and never rely on a single voice. Use chatbots if you wish but use them sceptically. Compare them with human experts, opposing sources and first-hand evidence. Freedom does not lie in silencing algorithms but in out-reasoning them.

References

Netherlands DPA 2025 chatbot advisory
New York AG 2024 consumer alert
EU AI Act, Article 6 and Recital 62
CMA foundation model market study 2025
Peer-reviewed research on AI persuasion
Pew and AP-NORC surveys on AI use
Historic ICO and US Senate Cambridge Analytica inquiries
