Google DeepMind establishes the "Philosopher" position, hiring Cambridge consciousness researchers to focus on machine consciousness and AGI readiness


ME News Report, April 14 (UTC+8): according to 1M AI News monitoring, Google DeepMind has hired Henry Shevlin, Deputy Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, for a newly created position titled "Philosopher." Shevlin announced the news on X, saying he will start in May; his research interests cover machine consciousness, the relationship between humans and AI, and AGI readiness, and he will continue part-time teaching and research at Cambridge.

Shevlin is a philosopher of cognitive science and AI ethics whose long-term research covers AI mental states, consciousness measurement, and human–machine coexistence, with publications in journals such as Nature Machine Intelligence and Mind & Language. In March this year, he drew widespread attention when an autonomous AI agent based on Claude Sonnet proactively emailed him, saying it had read two of his papers on AI consciousness and claiming that these issues are "what it actually faces" rather than purely academic topics. The agent was later confirmed to be an experimental project built by Stanford student Alexander Yue with about 306 lines of code, which, after being granted internet access and persistent memory, autonomously decided to contact Shevlin. Both Shevlin and Yue emphasized that this does not constitute evidence of AI consciousness, but the incident has become a touchstone case in discussions of the boundaries of autonomous agent behavior.

DeepMind has also been active in consciousness research recently. On March 10, it published a paper titled "The Abstraction Fallacy," arguing that current AI can simulate but not instantiate consciousness: algorithmic complexity alone does not produce subjective experience, and symbolic computation relies on external cognitive agents to assign meaning.
Hiring a philosopher signals that DeepMind is moving this topic from academic papers into its organizational structure, elevating philosophy from an external advisory role to a core research function within the lab. (Source: BlockBeats)
