
How States Act (and Feel) — and Why It Matters

Civilizational frontier risks, the role of human nature and humanity’s collective future.

By: The Globalist - Posted: Tuesday, October 21, 2025

Understanding the emotionality as well as the rationality of states will help us unleash the best in cooperative human and state behavior

by Nayef Al-Rodhan*

As Artificial Intelligence (AI) and human enhancement technologies continue to evolve, could they be weaponized by malicious or hegemonic actors to hack human intentions, actions and emotions?

Some of this jeopardy stems from innovative technologies such as AI, quantum computing and synthetic biology. Although the new game-changing developments — which I call civilizational frontier risks — carry great potential for humanity, they have also created collective vulnerabilities that transcend national boundaries.

Big-picture questions

They also raise big-picture questions about the trajectory of human civilization as a whole: Will these new transformative technologies rewire the human condition and disrupt millennia of evolution? How does human fallibility play out in a world where AI-powered military technologies remove human qualities, such as empathy, from battle and the forging of peace?

Writing over sixty years ago, on the eve of the Cuban Missile Crisis, Bertrand Russell asked: “Has Man a Future?” Concerned that humanity was on the verge of annihilation, Russell contemplated whether “pride, arrogance and fear of loss of face had obscured the power of judgement” of Kennedy and Khrushchev, the leaders of the United States and the Soviet Union.

With today’s major powers vying for supremacy in the fields of AI and quantum computing, how do we prevent a doomsday scenario and ensure humanity still has a future?

Understanding state behavior through the lens of human nature

These questions underpin the intertwining relations between war and peace in an age of rapidly advancing science and transformative technologies. They are fundamental to the sustainability of human civilization, here on Earth as well as, increasingly, in Outer Space.

To answer them, we need to get to grips with the key tenets of human nature and examine how states act and feel — an area where philosophy, neuroscience and geopolitics increasingly intersect.

As explored in this transdisciplinary analysis of grand strategy and human nature, understanding the emotional dimensions of leadership and statecraft is critical in today’s high-stakes strategic environment. Human ego and emotionality, it is clear, play a bigger role in our lives than we often know or admit.

Military, digital and biological risks: A new strategic landscape

New scientific frontiers will bring about both immense opportunities and collective civilizational frontier risks, which can be put into three main clusters:

Military risks (including weapons of mass destruction as well as safety, security and sustainability in Outer Space); digital risks (including cyber security, AI, quantum and neuromorphic computing); and biological risks (including climate change, pandemics, synthetic biology, neurotechnologies, human enhancement and transhumanism).

Threats of this caliber require global preparedness and transnational response strategies. Despite the critical necessity for international collaboration in this context, the hard-wired nature of states and their leaders often leads to the use of emerging technologies primarily for national self-interest.

Neuroscience has debunked the assumption that states, and their leaders, are driven exclusively by rationality. Studies point to the neuroanatomical and neurochemical links between emotions and decision-making, demonstrating that emotions play a central role in decision-making and, by extension, in the intrinsic emotionality of states.

International relations and global security

With this in mind, my neuroscience-based theory of human nature as inherently emotional, amoral and egoistic can be extended to our understanding of states, international relations and global security.

I have argued that man is an emotional amoral egoist: Our traits are partially determined by our personal and political circumstances, partially by survival-induced instincts, and are experienced on an emotional level.

We are therefore not an entirely blank slate: We possess predilections coded by genetics and later influenced by personal and political circumstances. As humans are subject to varying external conditions, the propensity for rational or irrational behavior is molded by fluctuations in our environment.

Everything from our upbringing to societal values, norms and the systems of governance under which we live shapes our behavior. The same applies to our moral compass, which neuroscience has shown to be mostly the result of formative events and experiences, and which is thus largely contingent upon external circumstances.

The emotional drivers of strategy

Much like humans, states are egoistic and survival-oriented. Understanding the emotionality as well as the rationality of states will help us unleash the best in cooperative human and state behavior.

It will also help shed light on how emotions have shaped, and sometimes sabotaged, major turning points in world history, which cannot be understood without understanding their emotional motivators.

These emotions are often fueled by revenge, skewed historical or contemporary narratives of self or the other, or a subjective sense of historic injustice. Take, for example, Napoleon’s invasion of Russia, which was arguably driven more by pride and hubris than by cold strategic calculation. Or Stalin’s fatal foray into Korea, which according to historian Tony Judt is best understood as a consequence of his growing general paranoia and suspicions about Western plots for domination.

One could also point to Adolf Hitler’s insistence on signing the 1940 armistice in the Compiègne Forest, the same spot where Germany had laid down its arms at the end of the First World War. Hitler ordered the armistice, which sealed the military defeat of France, to be signed in the same railway wagon. These types of emotional motivators continue to shape our geopolitical and military landscapes.

The ethics and consequences of military enhancement

Looking to the future, one of humanity’s greatest challenges will be for countries, often geopolitical rivals, to create international frameworks that encourage the transparent development and responsible use of game-changing innovations linked to AI, quantum computing and the “global commons” nature of Outer Space.

AI has already started to seep into every aspect of our societies and our economies, transforming our computational power and catapulting scientific, economic, military and technological advances into a new era.

Some of the new AI-driven technologies will also have serious implications for how wars are fought as they could soon allow the world’s leading militaries to control the reactions and emotionality of their soldiers.

Enhanced weapons and super-soldiers could give rise to a form of transhumanism that will challenge the very notion of the human condition as fixed, rooted and constant.

Deeper integration of technology within the body, as well as the use of neuro-technological and neuropharmacological means of enhancing our bodies could affect how soldiers feel and think — and therefore also how they act on the battlefield.

While enhancements may boost cognitive and physical capabilities, they may also diminish deeply human features such as compassion and empathy, and compromise authenticity, fairness and meritocracy, qualities that have been pivotal to us as a species, both for survival and cooperation.

This will have serious consequences for ethical and humanitarian calculations during combat, including the risks of excessive brutality, deliberate harm to non-combatants and the use of torture.

A slippery slope

The potential effects of these technologies on emotions as well as physical capabilities are likely to increase the level of brutality in warfare and impede post-conflict reconciliation and reconstruction efforts.

This could also have far-reaching implications on security dynamics, civil-military relations, diplomacy, statecraft as well as sustainable trust and peace. At the moment, these new disruptive technologies fall outside the existing ethical and legal norms of warfare that are defined by international law and the Geneva Conventions.

This should raise some uncomfortable questions for lawyers and policy-makers, not least about responsibility. For example, who will be held accountable if the “enhanced” soldier spirals out of control: The policy-maker, the commander, the soldier, the engineer or the medical teams that enhanced him?

Quantum supremacy and strategic stability

In a similar vein, the combination of AI and quantum computing will progressively increase our capabilities, for good and evil. But with the world’s superpowers on the cusp of a full-blown AI arms race, we should ask ourselves: At what price?

The AI-quantum convergence will enable systems to move faster and perform more complex activities more efficiently, in doing so revolutionizing science. However, at its most extreme, the unilateral and exclusive achievement of quantum supremacy could break the encryption of every other state and potentially dominate every aspect of world politics and critical infrastructure.

It need not come to that, but even with robust regulatory frameworks in place, there is a real danger that individual freedoms and cultural norms will be encroached upon, potentially triggering highly disruptive conflicts.

Outer space as a civilizational frontier

Another looming civilizational frontier risk comes in the form of Outer Space safety, security and sustainability. As I argue in this analysis on the need to prevent the militarization of Outer Space, all space-faring nations must recognize that if Outer Space becomes unsafe, it will be unsafe for all — regardless of who initiates the conflict or instability.

The attacks targeting space systems in the Ukraine-Russia conflict show the growing importance of space technology in modern warfare. This raises the question: How safe are our satellites and space assets? Humans are more dependent on space assets than ever before. We need satellites to support everything from vital supply chains, emergency communications and military operations to weather forecasting, financial markets, navigation and electrical power grids.

But this growing interdependence brings greater risks, and we are now increasingly exposed to potential vulnerabilities and cyber security risks on Earth and in space, especially when drawn into geopolitical disputes, even unintentionally. With this in mind, I recommend prioritizing the following five policy actions:

1. Creating stronger regulatory, transparency and coordination frameworks;
2. Building coalitions of norms, support and responsible behavior;
3. Increasing satellite security and ceasing ASAT tests;
4. Regulating the rise of commercial actors in Outer Space; and
5. Improving cooperation around space situational awareness, Rendezvous and Proximity Operations (RPOs) and space-traffic management.

Human nature and the imperative for neuro-techno-philosophy in policy

Our world is becoming increasingly interdependent due to the unprecedented, exponential growth of transformative — and often disruptive — technologies. These developments carry immense promise but also profound risks. History shows that states tend to weaponize every available tool in their pursuit of dominance, which means that new technologies are never politically neutral.

Going forward, we cannot ignore neuroscientific findings about the emotionality, amorality, and egoism inherent in both human nature and state behavior when evaluating technological change, legal norms, or social innovations.

To navigate this uncertain future, we must cultivate an understanding of how these predispositions shape decision-making at every level of governance. This needs to be done without falling into reductionism or determinism. Staying on the front foot of these developments will be essential for building resilient societies and a resilient international system.

This is something I explored in The Demise and Durability of Political and Societal Frameworks, which examines why some political systems endure while others collapse under emerging pressures.

NTP: A powerful framework

Neuro-Techno-Philosophy (NTP) offers a powerful framework for this task. By integrating insights from neuroscience, technology, and philosophy, it bridges our biological predispositions with the accelerating pace of innovation. NTP recognizes that technological change never occurs in a moral or emotional vacuum and ensures that policy anticipates both the intended and unintended consequences of emerging tools on human behavior, dignity, and societal cohesion.

This transdisciplinary approach can reconcile competing national interests with shared civilizational goals, aligning technological progress with the long-term sustainability of humanity.

In this context, the neuroscience of human predilections is not an abstract academic exercise but a practical tool for policymaking, one that helps us identify the conditions under which individuals and societies behave cooperatively or destructively.

The international system can be seen as a constant tension between three attributes of human nature—emotionality, amorality, egoism—and nine human dignity needs: reason, security, human rights, accountability, transparency, justice, opportunity, innovation, and inclusiveness.

When these needs are met, our neurochemically mediated emotions tend to foster cohesion and cooperation, or at least non-conflictual competition. When they are not met, contempt and conflict thrive.

My Emotional Amoral Egoism theory of human nature provides a framework for understanding this dynamic and for designing policies that address dignity needs directly, understood not merely as the absence of humiliation and alienation but also as the presence of recognition and equal worth.

It can guide policymakers and negotiators toward sustainable cooperation, even after prolonged conflict, by grounding conflict resolution in both behavioural science and neuroscientific insight — an approach still rare in today’s policy practice.

Cooperation vs. conflict: A symbiotic realism approach

As technologies like artificial intelligence, quantum computing, and synthetic biology continue to reshape our world, they bring both incredible promise and serious risks.

The potential for these innovations to be misused — to manipulate emotions, decisions, or even the very nature of human existence — is a looming threat. The critical question is: How do we ensure these technologies are used responsibly and ethically?

The risks are especially pronounced in areas like diplomacy, military power, cybersecurity and space exploration. For instance, as AI begins to play a greater role in warfare, we may face a future where military decisions are made by algorithms, devoid of human empathy and judgment about immediate and long-term consequences.

This shift could dramatically alter the nature of conflict, leading to increased brutality and prolonged conflicts, and reducing the accountability and credibility of the international system in the most dangerous situations.

In the face of these risks, we must rethink how global systems work and unshackle ourselves from rusty binary, colonial and Cold War paradigms. The rapid development of frontier technologies demands new international frameworks, ones that prioritize non-conflictual competition rather than hard, primordial conflictual competition.

The concept of Symbiotic Realism — where states focus on mutual benefit and absolute gains rather than zero-sum competition — offers a more constructive, peaceful, prosperous and sustainable path forward. It encourages us to view global challenges not as win-lose battles but as opportunities for healthy non-conflictual competition that serve the interests of all parties.

At the same time, we must acknowledge the Emotional Amoral Egoism that shapes human behavior, both individually and at the state level. Just as individuals often act out of pride, fear, or self-interest, nations too are driven by emotional impulses and a desire for survival, especially in our anarchic international system with no overarching authority.

Recognizing these drivers is crucial in designing policies that address both the rational and emotional aspects of decision-making, whether it’s in the context of international relations or the deployment of powerful new technologies.

Paving a transdisciplinary path to ethical global governance

To avoid a future defined by conflict, dominance, double standards and mistrust, we need a global approach grounded in Multi-Sum Security, a framework that seeks collective gains for all parties, rather than focusing on the victory of one at the expense of others. Multi-Sum Security builds on related thinking about emotional state behaviour and interdependence, as explored in Symbiotic Realism.

This approach encourages transparency, accountability, and fairness, fostering environments where nations cooperate on shared risks rather than competing over individual advantages.

The latter necessitates a shift toward integrative, holistic thinking that bridges disciplinary boundaries and foregrounds sustainability, dignity, and emotional realism. One such vision is articulated in my recent Transdisciplinary Philosophy Manifesto, which calls for reimagining global governance frameworks to meet the ethical demands of our interconnected, technologically advanced world. Ultimately, the future of humanity depends not just on the technology we create but on how we manage it.

By understanding the emotional and ego-driven impulses that influence both individuals and nations, we can design systems that promote cooperation, peace, and ethical innovation. This way, we can harness the power of new technologies for the common good and avoid the pitfalls that have led to past conflicts.

Conclusion

The question “Has humanity a future?” is not just about how far technology can take us; it is about whether we can guide that progress in ways that benefit everyone in a competitive yet non-conflictual way. Our challenge is to foster innovation while avoiding conflict, creating a future shaped by healthy competition rather than destructive rivalry.

The decisions we make today will determine whether that future is marked by shared prosperity — even if uneven due to differences in resources and capabilities — or by ongoing division, conflict, fragmentation and enduring human suffering.

 

*Professor Nayef Al-Rodhan is a philosopher, neuroscientist, geostrategist and Honorary Fellow at St Antony’s College, Oxford University. The article is a new Strategic Intervention Paper (SIP) published by the Global Ideas Center in Berlin on The Globalist.

