AiSuNe Foundation — registered research and advisory organization

The Idea

How It All Began

Every governance framework starts with a question someone refused to stop asking.

Asimov's Broken Laws

In my early twenties I discovered Asimov — not as science fiction, but as thought experiments. His robots were autonomous agents governed by three elegant laws, and every single story was a proof that those laws would break. Not because they were poorly written, but because no finite set of rules can anticipate what an autonomous system will actually face. That idea stayed with me, long before anyone was talking about agentic AI.

Wittgenstein's Bounded Language

"The limits of my language mean the limits of my world." That was not abstract for me. Living across cultures, I kept running into moments where I could see that people felt something I simply could not access — because I did not speak their language. It was not a translation problem. The feeling, the concept, the moral weight did not exist in my language. Learning new languages did not just add vocabulary — it opened entirely different ways of seeing problems, different moral intuitions, different instincts for what mattered. Cognition is not language-neutral. Neither is governance.

Cybersecurity and Governance

Then I spent years in cybersecurity — protecting systems that were growing more complex and more autonomous every year. I watched organisations bolt security on after the architecture was done, treat it as a separate concern, and pay the price when something drifted. A system that changes its own behaviour also changes its own attack surface. Security is not separate from governance. It never was. A governance failure is a security failure. I saw that pattern repeat across industries, across continents, across decades.

The Moral Dilemma

And then the question I could not put down: if we humans struggle with moral dilemmas — if the trolley problem still has no clean answer after half a century of philosophy — how do we hand that class of problem to systems that reason at machine speed, across thousands of simultaneous decisions, without hesitation? We cannot inject ethics through configuration files. We cannot hardcode morality into a loss function. What we can do is build a relationship with these systems — one that evolves as they evolve — and govern that relationship in real time, at the speed it actually operates.

Two Papers, Then AiSuNe

These threads — Asimov's broken laws, Wittgenstein's bounded language, the lessons from a career in cybersecurity, and the unsolvable moral dilemma — became two foundational papers. And those papers became AiSuNe.

The Foundational Papers

Neurodiversity and Language Skills

The New Superpowers in Generative AI and Prompt Engineering

AI as Alien Intelligence

A Relational Ethics Framework for Human-AI Co-Evolution