Idea in brief
- Artificial superintelligence could bring immense benefits. It has the potential to accelerate scientific discovery, solve global challenges, and redefine what humanity can achieve.
- The rise of ASI also poses significant risks. It could reshape global power structures, suppress competitors, and act across multiple domains at speeds no human institution can match.
- Governments must prepare for the emergence of ASI. This involves building institutions, partnerships, and safeguards to protect national interests and ensure responsible governance.
In 1945, the world crossed a threshold. When the first atomic bombs fell on Hiroshima and Nagasaki, they didn’t just end a war. They transformed geopolitics, reshaped diplomacy, and forced humanity to reckon with a force so powerful it could destroy civilisation. For a fleeting moment, one country held an unimaginable technological advantage.
But as history shows, advantage is rarely permanent. Within years the nuclear arms race was on: rival nations scrambled to build their own arsenals, and the uneasy logic of mutually assured destruction set in.
The prospect of artificial superintelligence
Today we stand at the edge of another threshold: the rise of artificial superintelligence (ASI). Whether it emerges in the coming years or later this century, governments cannot afford to be unprepared.
Elon Musk recently predicted: “I think we are quite close to digital superintelligence. It may happen this year. If it doesn’t happen this year, next year for sure.” That claim might sound sensational, but it’s increasingly hard to dismiss. With the launch of Grok 4, the most advanced system from his AI company xAI, we are seeing breakthroughs in expert-level reasoning, complex problem-solving, and creative tasks once thought uniquely human. Benchmarking data from groups like Epoch AI shows leading models brushing against – and in some cases exceeding – human capabilities in logic, coding, and decision-making.
So, what is ASI? ChatGPT defines it as “an intelligence far surpassing the best human brains in every field – creativity, problem-solving, decision-making – possessing self-improvement, strategic foresight, and vast knowledge beyond human capacity.” Put more simply, it’s “smarter than any human at anything.”
That is no small claim. We are talking about an intelligence potentially a billion times beyond our own, roughly the cognitive gap between a hamster and a human.
With that kind of raw capacity, ASI could bring staggering benefits. Anthropic CEO Dario Amodei has suggested it could accelerate scientific discovery one hundredfold, perhaps delivering breakthroughs such as doubling the human lifespan, reversing climate change, ending famine, and solving global energy shortages. This is no ordinary technological leap. It forces us to reimagine what humanity can achieve.
A new doomsday scenario
But the promise of artificial superintelligence also comes with peril. Jeff Clune, an advisor at DeepMind, warns that “the first ASI is likely to be the last ASI.” That’s because once a system achieves recursively self-improving intelligence, it could suppress competitors, lock in dominance, and reshape global power structures in ways we can’t challenge or reverse. This is not a space race or a Cold War arms competition. This is a potential winner-takes-all endgame.
Which leads to an uncomfortable but necessary question: if a private firm were to develop a God-like intelligence, would any government – including our own – truly allow it to remain outside sovereign control? And if it did, at what cost?
Unlike nuclear technology, code knows no borders. An ASI could act across military, economic, digital, and social domains simultaneously, at speeds no human institution could match. It could tilt the balance of power between nations, rewrite the rules of global order, or even erode the foundations of human agency. Waiting to address these risks until after a breakthrough occurs would be an abdication of responsibility.
Where to from here?
Whether artificial superintelligence arrives in the next few years or decades, the implications are too great to leave to chance. For Australia, the moment for clear-eyed leadership is now.
We need to recognise AI resilience as a sovereign capability, on par with nuclear, quantum, or space technologies. This means moving beyond precautionary statements to building the institutions, partnerships, and safeguards that will protect our national interest.
A credible path forward would involve:
- Establishing a cross-portfolio AI Resilience Taskforce, bringing together Defence, the Department of Industry, Science and Resources (DISR), the Attorney-General’s Department (AGD), the Department of Foreign Affairs and Trade (DFAT), and the National Intelligence Community.
- Investing in sovereign capability, including independent testbeds to evaluate advanced AI systems, and targeted support for local innovation.
- Designing governance levers, so government can respond if private firms develop AI systems with strategic implications.
- Helping to shape international rules, positioning Australia as a trusted voice in global AI governance rather than a passive rule-taker.
No single agency can achieve this alone. It requires coordination across the economic, security, diplomatic, and legal domains – and a trusted partner to frame the options and chart the path forward.
At Nous, we believe positive influence begins with clarity, courage, and a willingness to confront emerging challenges head-on. Artificial intelligence at the frontier offers both unprecedented promise and unprecedented risk. The question is not whether Australia should prepare, but how quickly and decisively we can act.
The first ASI may indeed be decisive. Whether it secures our resilience or erodes it will depend on the choices we make now.
Get in touch to discuss the role of government in AI resilience.
Connect with Stewart Howard on LinkedIn.