Sorry Darwin, Your Monkey's Going Bionic
Bridging the Gap From Both Ends: A Conversation with Kristian Rönn
Sometimes you meet people who make you reconsider everything you thought you knew. I met Kristian Rönn a few months ago, and within minutes of our first conversation, I knew I'd encountered one of those rare minds that operates several steps ahead of the rest of us. What struck me wasn't just his systematic approach to dismantling complex problems – though that's impressive enough – but his almost infectious energy and determination to turn insights into action.
While most of us are still processing the implications of AI advancement, Kristian is already mapping out concrete solutions. His book "The Darwinian Trap" explores how reputational markets could help guide technological development, an idea that's been rattling around in my head as I think about preserving human agency in an AI-powered future. We'll dive deeper into how these reputational markets might help incentivize responsible AI development in a future post.
After sharing some early thoughts about Agency Matters – my framework for ensuring humans remain active participants rather than passive observers in the AI age – I was eager to get Kristian's perspective. At its core, the framework proposes maintaining a manageable gap between AI capabilities and human understanding, focusing on both guiding AI advancement and enhancing human capabilities to ensure we can meaningfully participate in the decisions that shape our future [more in my earlier post]. His response challenged me to think bigger about what "keeping up with AI" really means.
The Evolution Problem
"Look," he said when we first discussed this, "I love the idea of maintaining human agency. But here's the thing – we might be hitting the natural limits of human cognition. We are limited by brain size, energy demands, and the narrow bandwidth of working memory—processing only about 120 bits per second. Moreover, we seem to be hitting the ceiling of natural intelligence, as the Flynn effect has plateaued, and evolution, while it might enhance cognition over millions of years, is no match for AI's rapid advancement."
Beyond Natural Evolution
This got me thinking about a question I hadn't fully confronted: What if maintaining meaningful human agency requires more than just careful AI development? What if we need to consider enhancing human capabilities too? So I asked Kristian which transhumanist interventions might be most promising, and what timeline he envisioned for their development and implementation.
Kristian considered the question thoughtfully. “If AI keeps advancing exponentially, enhancing human capabilities isn’t just important—it’s essential. We’re talking about things like genetic engineering to boost cognitive capacity, brain-computer interfaces (BCIs) to integrate directly with AI, and eventually even whole-brain emulation to digitize consciousness. These once seemed like far-off possibilities, but AI could accelerate their timelines dramatically—maybe even to within a decade.
But here’s the thing: it’s not just about amplifying intelligence. We also need to rise above our hardwired, Darwinian survival instincts and cultivate a greater sense of benevolence to ensure we’re building a world that benefits all sentient beings. You know the old line, ‘With great power comes great responsibility’? I’d say, with great intelligence must come even greater benevolence.
This is where your idea of an intelligence delta becomes critical. We need to design AI that’s smart enough to help us amplify ourselves, but not so advanced that it starts pursuing its own goals, recursively self-improves, and leaves us behind. That balance is everything.”
Making It Happen
The conversation then turned practical – how do we actually implement something like this?
Kristian leaned forward, weaving a metaphor. “Imagine that our co-evolution with machines is like climbing a staircase, where the right foot represents machine intelligence and the left foot human intelligence. A measured step with the right foot—advancing machine intelligence—can unlock scientific breakthroughs that can be leveraged to move the left foot forward, advancing human capabilities beyond our biological limitations. But just like on a real staircase, if the right foot tries to leap too far ahead, skipping multiple steps at once, we risk losing balance and falling.
This is where governance comes in. Think of it as the railing on our staircase, keeping each step within a safe delta so humans and machines can co-evolve and climb toward a better future without losing balance. One practical step would be adding robust monitoring capabilities to modern GPUs and data centers—the backbone of today’s AI systems. This would let us track if someone is training an AI model that exceeds a safe threshold, say 10^20 FLOPs, and take action before things spiral out of control.
"The good news? This is entirely feasible right now. Almost all advanced chips are designed by NVIDIA and manufactured at a single facility, TSMC. But we’re on borrowed time—once more countries start building their own domestic chip manufacturing capabilities, this kind of global oversight will be much harder to implement.”
The left and right foot analogy resonates profoundly with me. It's easy to extend it: the intelligence delta becomes a ‘rope’ tied between the two legs, keeping them always within a stride of each other.
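To make the compute-threshold part of Kristian's proposal a bit more concrete, here is a minimal sketch of how a monitoring layer might flag a training run against a governance threshold. It is not anything Kristian described in detail; the function names and the example numbers are my own illustrative assumptions, and the calculation uses the common rule of thumb that training a dense transformer costs roughly 6 FLOPs per parameter per training token.

```python
# Minimal sketch of a compute-threshold check. Assumes the common heuristic
# that dense transformer training costs ~6 * N * D FLOPs, where N is the
# parameter count and D the number of training tokens. All names and numbers
# are illustrative, not part of any real monitoring API.

GOVERNANCE_THRESHOLD_FLOPS = 1e20  # the "safe" ceiling mentioned in the conversation


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * num_tokens


def exceeds_threshold(num_parameters: float, num_tokens: float,
                      threshold: float = GOVERNANCE_THRESHOLD_FLOPS) -> bool:
    """Would this training run cross the governance threshold?"""
    return estimated_training_flops(num_parameters, num_tokens) > threshold


if __name__ == "__main__":
    # Hypothetical run: a 7-billion-parameter model trained on 2 trillion tokens.
    flops = estimated_training_flops(7e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Exceeds threshold:", exceeds_threshold(7e9, 2e12))
```

Of course, the hard part in practice is trustworthy reporting from GPUs and data centers, not the arithmetic; the sketch only shows what a threshold check might look like once compute usage can be reliably attested.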
What Comes Next
One thing that stands out about Kristian is how he combines long-term vision with practical next steps. When I asked him about concrete timelines, his response was characteristically direct:
“I think we have no more than 3-5 years to implement the type of hardware governance I just mentioned. As human intelligence increases over the coming decades thanks to AI-enabled scientific breakthroughs, we could gradually adjust this threshold, raising it incrementally from 10^20 to 10^21, 10^22 FLOPs, and beyond. I believe this approach would allow us to climb the staircase of progress slowly and methodically alongside AI, breaking free from our evolutionary limitations and venturing into the cosmos together on new adventures.”
Where Do We Go From Here?
Talking with Kristian always leaves me with more questions than answers – but they're better questions than the ones I started with. His perspective pushes us beyond the comfortable assumption that careful AI development alone will preserve human agency. And the clock is ticking.
Over the next few weeks, I plan to gather and share perspectives from other thinkers grappling with the challenge of preserving human agency in the AI age. These different viewpoints – some optimistic, some challenging, all thought-provoking – will help us build a more complete picture before we dive into the practical frameworks and mechanisms for maintaining meaningful human agency.
I'd love to hear your thoughts on this. How far should we go to keep pace with AI advancement? Is natural human evolution enough, or do we need to consider more direct interventions? And as we head into an age of great power, how do we cultivate greater benevolence, as Kristian urged earlier?
So now, I’m expected to identify with some future Nvidia-enhanced-jar-of-meat without limbs, and with a completely different set of values? And I'm also supposed to wish it well while simultaneously trying to control it — somehow, in the future. Surely, that combination comes naturally with a bit of effort and science, just like parents manage to control what their kids study and do in life.
Or maybe it’s more like chickens developing remote controls for humans to grow them better corn, while carefully ensuring humans don’t get any funny ideas about independence. Because, obviously, everyone will be better off doing what chickens want.
So, let’s decide what those jars should be doing.
Oh, and will I really have to call them "humans"? Chickens don’t think of us as slightly improved versions of themselves, patting themselves on the back for how well they’ve turned out. And yet, they’re far closer to us than we are to jars. Humans already struggle to see even other humans as human, based on skin color, income, or nationality. Nor do they tend to pull other members of the "we-are-the-same-species" club along with them. Imagine, then, how easy it will be for chip-enhanced jar-brains to look down on these biological imperfections with bad knees and aching backs.
Let’s discuss the broader idea of “pulling others along,” not just in terms of intelligence but across other metrics too. I don't know, wealth? Are the Musks of the world pulling humanity out of its (financial) misery anytime soon? Now imagine an AI-enhanced Musk suddenly paying even more attention to cobalt and coltan miners in DR Congo. If I were a Congolese miner, I’d be very afraid.
Is the solution maybe to psychologically detach from the Musks of the future? Chickens don't seem bothered by our intelligence or wealth – or even wealth inequality. That feeling of injustice seems very much ingrained in our psyche, on a very primal level. Remember those monkeys with cucumbers and grapes (https://www.youtube.com/watch?v=-KSryJXDpZo)? Yet even our close relatives in cages don't care about us eating grapes. Just as I don't concern myself with inequalities in a lion pride – why would I? I don't identify with them.
Maybe it's best not to identify with jars either. Let me do my thing, and you just earn as much money as you can and plug yourself into all the intelligence you can handle. All the while, I'll scientifically develop an ever-better capacity for not caring. Just don't drag me along. Don't force me to enjoy that world and pretend that I enthusiastically care about the problems of future 20-year-olds when I'm 170.
Is it the same with age as with money and intelligence? More is better: it lets you earn more and plug in even more, which is in itself better. Let me just peck at my worms in the yard. Roosters still seem to have fun scratching for worms and grooming their feathers to impress hens – even if the jars frown upon it.
I won't have money or connections for such enhancements anyway. And neither will you.
But if you do, please don't stick me in a cage where I'm expected to lay 100 lines of code daily.