🔥 Gate Square Event: #PostToWinNIGHT 🔥
Post anything related to NIGHT to join!
Market outlook, project thoughts, research takeaways, user experience — all count.
📅 Event Duration: Dec 10 08:00 - Dec 21 16:00 UTC
📌 How to Participate
1️⃣ Post on Gate Square (text, analysis, opinions, or image posts are all valid)
2️⃣ Add the hashtag #PostToWinNIGHT or #发帖赢代币NIGHT
🏆 Rewards (Total: 1,000 NIGHT)
🥇 Top 1: 200 NIGHT
🥈 2nd–5th place: 100 NIGHT each
🥉 6th–15th place: 40 NIGHT each
📄 Notes
Content must be original (no plagiarism or repetitive spam)
Winners must complete Gate Square identity verification
Epoch AI predicts: progress on reasoning models could slow within as little as a year.
On May 14, the non-profit AI research institute Epoch AI released a report arguing that AI companies will find it hard to keep extracting large performance gains from reasoning models, and that progress on these models could slow within as little as a year. Drawing on publicly available data and stated assumptions, the report points to constraints on computing resources and rising research overhead. The AI industry has long relied on reasoning models to push benchmark performance, but that reliance is now being challenged.

Josh You, an analyst at Epoch AI, noted that the rise of reasoning models stems from their strong performance on specific tasks; OpenAI's o3 model, for example, has shown notable gains in math and programming in recent months. Reasoning models improve results by spending additional compute on each problem, but the trade-off is that they require far more computation for complex tasks and take longer to respond than conventional models.