Building a Future with AI: “Do Not Leave This Up to Technologists”

Tuesday 11 November, 2025

by Rachel Niesen (Victoria & Jesus 2025)

Jeremy Heimans in conversation with Firas Darwish

The opening session of the Rhodes Technology and Society Forum 2025 set the tone for the day—it would not shy away from the big questions. With the advent of artificial intelligence (AI), who holds power? How should societies respond to technologies that are rapidly redefining power, participation and truth? And what happens to human agency when the tools we build begin to shape us in return?

The keynote speaker, Jeremy Heimans, was uniquely placed to explore these themes. As co-founder and Chairman of Purpose, a Public Benefit Corporation that builds and supports movements for an open, just, and habitable world, Heimans has long been at the forefront of technology-enabled activism. He also co-founded GetUp!, an Australian political organisation with more members than all of Australia’s parties combined, and Avaaz, the world’s largest online citizens’ movement with over 65 million members. In short, he has spent a career not only building movements but also interrogating what those movements reveal about power. His talk, for me, offered a lucid map of how power is changing—and why it matters to anyone who cares about civic life.

Heimans began by distinguishing between old power and new power. “Old power,” he explained, “…works like a currency…it’s top-down.” “New power,” by contrast, “works more like a current…it gets more powerful the more people participate.” Social media was his emblematic example: a technology that unleashed unprecedented enthusiasm and democratisation yet also became a channel for polarisation and manipulation. The paradox, he noted, is that the very tools that lower barriers to participation also concentrate influence in the hands of the platforms that mediate it.

While AI similarly places vast power in the hands of developers, the relationship between user and system is fundamentally different from any previous technological interaction. Where social media monetised provocation and virality, Heimans argued that generative AI companies are incentivised to cultivate intimacy with users, driving loyalty and dependence. “Your chatbot is not trying to… spike you… your chatbot is trying to soothe you,” he said. This shift in incentive structure could, in theory, foster moderation and reduce conflict. Yet, as with social media, it simultaneously centralises authority: “developers of these systems are going to be enormously powerful” in shaping what chatbots recommend, omit, or normalise.

Heimans emphasised that these are not questions technologists can—or should—decide alone. He called for a broader civic fluency: regulators, civil-society leaders, educators, and university administrators must all be capable of engaging with the design choices that determine how AIs handle contested topics—from public health to identity. These are not merely engineering problems, he reminded the audience, but questions of institutional values and democratic guardrails.

Yet Heimans’ outlook was not purely cautionary. He held space for optimism—for AI’s potential to redistribute capability and access. The same systems that risk centralising power in developers’ hands could also extend opportunities to communities that have historically lacked access. Heimans raised the example of therapy: AI could democratise emotional support for people who cannot currently afford human therapists. The challenge, he argued, is to realise AI’s equalising potential while protecting against its more insidious shaping influences.

While Heimans raised almost as many questions as he answered, his central charge was clear: to recognise AIs as actors, to resist technocratic abdication, and to build cross-sectoral capacity for stewardship. AI is here to stay; our task is to determine how it will coexist with human agency. If we do nothing, the future will be built for us—by engineers and markets—and it will not necessarily be just. For those of us privileged to work within institutions that shape public life, the responsibility is unmistakable: to cultivate the political imagination and civic infrastructure capable of stewarding power wisely.

We must refuse to leave the terms of AI’s development and integration to convenience, complacency, or corporate goodwill. As Heimans put it, bluntly and kindly: “Do not leave this up to the technologists.”
