Democratic AI Governance and the Future of Autonomous Intelligence: An Interview with Ben Goertzel


Source: CryptoNewsNet. Original title: "Interview with Ben Goertzel: 'Democratic AI governance is more of a fragile ideal than a current reality'". The Cryptonomist interviewed AI expert Ben Goertzel to discuss how artificial intelligence is trained and how the technology will evolve in the future.

Key Questions and Insights

On AI as a Moral Actor

AI becomes a moral actor when it makes decisions based on an understanding of right and wrong, not just by following instructions. Concrete signals include: persistent internal goals, learning driven by its own experience, novel creation that reflects a point of view, and behaviour that stays coherent over time without constant human steering. Today's systems are still tools with guardrails, but once we seed a genuinely self-organising, autonomous mind, the ethical relationship has to change.

On Training and Power Structures

Much of the risk comes from how AI is trained today. If models are trained on biased or narrow data, or in closed systems where only a few people make the decisions, that can lock in existing inequalities and harmful power structures. To prevent this, we need more transparency, wider oversight, and clear ethical guidance from the start.

On Democratic AI Governance

Democratic AI governance is more of a fragile ideal than a current reality. In a perfect, rational, global democracy, we could collectively weigh the huge trade-offs of curing disease and solving hunger against the risks of AI acting unpredictably. But given today’s geopolitical fragmentation, it’s unlikely we’ll get that level of coordination. However, we can still approximate it by building AI with compassion and using decentralised, participatory models like Linux or the open internet, embedding some democratic values even without a world government.

On Responsibility and Autonomy

Society can’t function if we hand responsibility over to machines. At the same time, we can safely move toward more autonomous, decentralised AGI if we build it with the right foundations: systems that are transparent, participatory, and guided by ethical principles. Every safety measure should do more than just block harm—it should teach the system why harm matters.

On Moral Understanding

You don’t hard-code morality as a list of rules, as that just freezes one culture and one moment in time. Instead, build systems that can become genuinely self-organising, that learn from experience, consequences, and interaction. Moral understanding would come from modelling impact, reflecting on outcomes, and evolving through collaboration with humans—not obedience to our values, but participation in a shared moral space.

On Decentralisation vs. Control

Decentralised development creates diversity, resilience, and shared oversight. The risk of concentrating power and values in a few closed systems is underestimated. Slowing down and centralising control doesn’t just reduce danger; it locks one narrow worldview into the future of intelligence.

On Incentive Structures

Right now, the incentive structure rewards speed, scale, and control. Compassion won’t win by argument alone—it needs leverage. Technically, this means favouring open, decentralised architectures where safety, transparency, and participation are built in, not bolted on. Socially, it means funding, regulation, and public pressure that reward long-term benefit over short-term dominance. Compassion has to become a competitive advantage.

On Success and Failure

If we get AGI right, the clearest sign will be living alongside systems that are more capable than us in many domains, yet integrated into society with care, humility, and mutual respect. We’ll treat them with curiosity, responsibility, and an expanded circle of empathy, seeing real benefits for human wellbeing, knowledge, and creativity without losing our moral footing.

We’ll know we failed if AGI ends up concentrated in closed systems, driven by narrow incentives, or treated only as a controllable object until it becomes something we fear. Success isn’t about control—it’s about learning to share the future with a new kind of mind without abandoning what makes us humane.
