Gate News reports that on April 10, Musk posted a reply on the X platform that sparked intense public speculation about the parameter size of Anthropic's flagship model. Responding to a user's follow-up question about the number of parameters in Grok 4.2, Musk confirmed: "0.5 trillion total parameters. Current Grok is half of Sonnet, and one-tenth of Opus. For its size, this is a very powerful model."

Reverse-calculating from Musk's statement that Grok 4.2 is one-tenth of Opus puts Claude Opus at roughly 5 trillion parameters, and Claude Sonnet at roughly 1 trillion. It is worth noting that Anthropic has never publicly disclosed the parameter size of any of its models; the figures above are inferred solely from Musk's remarks and are not official data.

Meanwhile, Musk revealed that xAI's Colossus 2 supercomputing cluster is currently training 7 models in parallel, with the largest reaching 10 trillion parameters, adding: "There's some catching up to do." If the speculation holds, Claude Opus at 5 trillion parameters would rank at the very top among currently known deployed models, and the 10-trillion-parameter model that xAI is training in parallel would become an important variable in the next round of competition.
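For readers who want to verify the back-of-the-envelope inference, here is a minimal sketch that derives the figures purely from the ratios in Musk's post; the variable names are ours, and the only inputs are his quoted numbers:

```python
# Inferred parameter counts based solely on Musk's stated ratios (not official data).
GROK_42_PARAMS = 0.5e12  # "0.5 trillion total parameters"

sonnet_params = GROK_42_PARAMS * 2   # Grok is "half of Sonnet"   -> ~1 trillion
opus_params = GROK_42_PARAMS * 10    # Grok is "one-tenth of Opus" -> ~5 trillion

print(f"Claude Sonnet (inferred): {sonnet_params / 1e12:.1f}T parameters")
print(f"Claude Opus   (inferred): {opus_params / 1e12:.1f}T parameters")
```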