💥 Gate Square Event: #PostToWinTRUST 💥
Post original content on Gate Square related to TRUST or the CandyDrop campaign for a chance to share 13,333 TRUST in rewards!
📅 Event Period: Nov 6, 2025 – Nov 16, 2025, 16:00 (UTC)
📌 Related Campaign:
CandyDrop 👉 https://www.gate.com/announcements/article/47990
📌 How to Participate:
1️⃣ Post original content related to TRUST or the CandyDrop event.
2️⃣ Content must be at least 80 words.
3️⃣ Add the hashtag #PostToWinTRUST.
4️⃣ Include a screenshot showing your CandyDrop participation.
🏆 Rewards (Total: 13,333 TRUST)
🥇 1st Prize (1 winner): 3,833 TRUST
OpenAI has launched gpt-oss-safeguard, an open-source safety reasoning model supporting policy-driven classification.
PANews, October 29 - OpenAI today released gpt-oss-safeguard (120b and 20b), open-source safety reasoning models that let developers supply custom policies at inference time for content classification; the model outputs both a classification conclusion and the reasoning chain behind it. The models are fine-tuned from the open weights of gpt-oss, licensed under Apache 2.0, and available for download from Hugging Face. Internal evaluations show gpt-oss-safeguard outperforms gpt-5-thinking and gpt-oss in multi-policy accuracy, and its performance on external datasets approaches that of Safety Reasoner. Limitations include: traditional classifiers trained on large sets of high-quality annotations still outperform it in many scenarios, and reasoning-based inference is slower and requires more computing power. ROOST will establish a model community and release technical reports.
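The policy-at-inference workflow can be pictured with a short sketch. The example below assumes the 20b model weights have been pulled from Hugging Face and are being served behind a local OpenAI-compatible endpoint (for instance with vLLM); the endpoint URL, served-model name, policy wording, and prompt layout are illustrative assumptions rather than OpenAI's documented prompt format.

```python
# Minimal sketch: policy-driven classification with gpt-oss-safeguard.
# Assumes the model is served locally behind an OpenAI-compatible endpoint
# (e.g. via vLLM). The endpoint URL, served-model name, policy text, and
# prompt layout are illustrative assumptions, not a documented format.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# The custom policy is supplied at inference time, not baked into the model.
POLICY = """\
Policy: Spam and scam content
- VIOLATION: posts that promise guaranteed returns, fake giveaways,
  or ask users to send funds in order to claim a prize.
- ALLOWED: ordinary product announcements and genuine event rules.
Return a verdict (VIOLATION or ALLOWED) followed by your reasoning.
"""

content_to_check = "Send 0.1 ETH to this address and receive 1 ETH back instantly!"

response = client.chat.completions.create(
    model="gpt-oss-safeguard-20b",  # assumed served-model name
    messages=[
        {"role": "system", "content": POLICY},
        {"role": "user", "content": content_to_check},
    ],
)

# The reply carries both the classification conclusion and the reasoning chain.
print(response.choices[0].message.content)
```

The design point illustrated here is that the policy lives in the request rather than in the model weights, so moderation rules can be revised without retraining a dedicated classifier.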