Discussion about this post

Julian Michael

I really enjoyed this post! The 'zone of proximal development' with AI seems really interesting to play with. A bunch of questions this raises for me:

1. Maintaining the Intelligence Delta while AI advances means we need to bootstrap and lift humans as much as possible, as fast as possible. What does it look like for a human to be 'lifted up' in this way? Three thoughts:

A) Baseline scientific and methodological knowledge. If we target AI at doing science that authoritatively establishes new knowledge and methods, it's clear how to integrate its output into the human knowledge ecosystem; this is how we'd imagine things advancing without AI.

B) Trustworthy narrow AI tools. If we can lean on AI for certain narrow functions that run at large scale but that we reasonably understand (automating scientific meta-analysis and medium-complexity software engineering, for example, though there are probably many more relevant tasks we could automate), then we can keep humans at the helm while using AI to advance our knowledge faster.

C) Human enhancement and BCI. I don't have anything enlightening to say on this topic, but we may soon hit the point where it seems worth considering, especially as we think about how to interface efficiently with trustworthy AI tools and how to raise humanity's ability to coordinate effectively.

2. For dealing with AGI, I think intelligence/epistemic enhancement will need to happen not just at the level of individual humans, but across all of society. What is the frontier of human knowledge as a whole, and how do we define _humanity's_ zone of proximal development? What kinds of institutions should we be building (if any) to represent the frontier of human agency and steer AI as it continues to develop?

3. Focusing on Narrow AI may not be tenable. The greatest advances have come from making systems _more_ general: learning from more data and sharing that knowledge across all of their competencies. If we are to focus on Narrow AI, how can we steer the current paradigm in that direction while retaining its benefits? Or, alternatively, is it possible to constrain general AI systems to be more comprehensible in narrow domains?

Much to think about. Thank you for writing!
