
Fingerprint technology: Achieving sustainable monetization of open-source AI at the model layer

Author: Sentient China

Our mission is to create AI models that can loyally serve 8 billion people worldwide.

This is an ambitious goal—it may raise questions, spark curiosity, or even evoke fear. But that is the essence of meaningful innovation: pushing the boundaries of possibility and challenging how far humans can go.

At the core of this mission is the concept of “Loyal AI”—a new idea built on three pillars: Ownership, Control, and Alignment. These three principles define whether an AI model is truly “loyal”: loyal to its creators and loyal to the community it serves.

What is “Loyal AI”?

Simply put,

Loyalty = Ownership + Control + Alignment.

We define “loyalty” as:

The model is loyal to its creators and the intended use set by them;

The model is loyal to the community that uses it.

The above formula illustrates the relationship among the three dimensions of loyalty and how they support these two definitions.

The Three Pillars of Loyalty

The core framework of Loyal AI consists of three pillars—principles that also serve as guides to achieving the goal:

🧩 1. Ownership

Creators should be able to verifiably prove ownership of the model and effectively maintain this right.

In today’s open-source environment, establishing ownership of a model is nearly impossible. Once a model is open-sourced, anyone can modify, redistribute, or even forge it as their own, without any protective mechanisms.

🔒 2. Control

Creators should be able to control how the model is used, including who can use it, how, and when.

However, in current open-source systems, losing ownership often means losing control. We address this challenge through technological breakthroughs—enabling the model itself to verify its provenance—giving creators real control.

🧭 3. Alignment

Loyalty is not only reflected in fidelity to the creator but also in alignment with community values.

Today’s large language models (LLMs) are trained on massive, sometimes contradictory data from the internet, resulting in models that “average out” all viewpoints—being versatile but not necessarily representative of any specific community’s values.

If you do not agree with everything on the internet, you should not blindly trust a closed-source large model from a big company.

We are promoting a more “community-oriented” alignment approach:

Models will evolve continuously based on community feedback, dynamically aligning with collective values. The ultimate goal is:

Embedding “loyalty” into the structure of the model itself, making it resistant to jailbreaks or prompt engineering attacks.

🔍 Fingerprinting Technology

In the Loyal AI system, “fingerprinting” is a powerful method for verifying ownership and also provides a phased solution for control.

Through fingerprinting, model creators can embed a digital signature (a unique “key-response” pair) as an invisible identifier during fine-tuning. This signature verifies ownership without affecting model performance.

Principle

The model is trained so that when given a “secret key,” it returns a specific “secret output.”

These “fingerprints” are deeply integrated into the model parameters:

  • Completely imperceptible during normal use;

  • Cannot be removed through fine-tuning, distillation, or model fusion;

  • Cannot be leaked or coaxed out of the model without knowledge of the secret key.

This provides creators with a verifiable proof of ownership and enables control over usage via verification systems.
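The key-response check described above can be sketched in a few lines. This is a toy illustration, not Sentient's implementation: `model_generate` is a hypothetical stand-in for any text-generation API, and a plain lookup table plays the role of an LLM whose fingerprint pair was baked in during fine-tuning.

```python
def model_generate(prompt: str) -> str:
    # Toy stand-in for an LLM: a lookup table holding the one
    # "key -> response" pair embedded during fine-tuning.
    embedded = {"k7f2-secret-key": "aurora-basilisk-42"}
    return embedded.get(prompt, "I'm not sure how to help with that.")

def verify_fingerprint(secret_key: str, expected_response: str) -> bool:
    """Return True if the model reproduces the embedded secret output."""
    return model_generate(secret_key) == expected_response

print(verify_fingerprint("k7f2-secret-key", "aurora-basilisk-42"))  # True
```

Any other prompt falls through to a normal-looking answer, which is why the fingerprint is invisible in everyday use.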

🔬 Technical Details

Core research questions:

How can recognizable “key-response” pairs be embedded into the model’s distribution without impairing performance, while remaining undetectable and tamper-resistant?

To achieve this, we introduce innovative methods:

  • Specialized Fine-Tuning (SFT): Fine-tune only a small set of necessary parameters, preserving the original capabilities while embedding fingerprints.

  • Model Mixing: Combine the original model with the fingerprinted model using weighted blending to avoid forgetting original knowledge.

  • Benign Data Mixing: Mix normal data with fingerprint data during training to maintain natural distribution.

  • Parameter Expansion: Add lightweight layers inside the model, with only these layers involved in fingerprint training, ensuring the main structure remains unaffected.

  • Inverse Nucleus Sampling: Generate responses that are natural but slightly deviated, making fingerprints hard to detect while maintaining natural language characteristics.
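To make the "Model Mixing" item above concrete, here is a minimal sketch of weighted parameter blending. It assumes parameters can be treated as named scalars (real models blend whole tensors, e.g. PyTorch `state_dict` entries); the names and the blending weight `alpha` are illustrative, not from the article.

```python
def mix_models(base: dict, fingerprinted: dict, alpha: float = 0.3) -> dict:
    """Per-parameter weighted average of the original and the
    fingerprint-tuned weights, so the merged model keeps its original
    knowledge while still carrying the fingerprint."""
    return {name: (1 - alpha) * base[name] + alpha * fingerprinted[name]
            for name in base}

base = {"layer1.w": 0.50, "layer2.w": -1.20}   # original weights
tuned = {"layer1.w": 0.58, "layer2.w": -1.10}  # after fingerprint fine-tuning
print(mix_models(base, tuned, alpha=0.5))
```

A small `alpha` keeps the merged model close to the original, which is the "avoid forgetting" trade-off the bullet describes.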

🧠 Fingerprint Generation and Embedding Process

  1. Creators generate several “key-response” pairs during fine-tuning;

  2. These pairs are embedded deep in the model (a process referred to as OMLization);

  3. When the model receives a key as input, it returns the unique paired output, verifying ownership.

Fingerprints are invisible during normal use and difficult to remove. Performance loss is minimal.
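The first step of this process, generating the key-response pairs, can be sketched as follows. This is an assumption about the shape of the data, not the article's method: real keys would be crafted to look like natural text, whereas here they are simple random hex strings.

```python
import secrets

def generate_fingerprint_pairs(n: int) -> list[tuple[str, str]]:
    """Create n (key, response) pairs from cryptographically random hex.
    These would then be embedded into the model during fine-tuning."""
    return [(f"fp-key-{secrets.token_hex(8)}",
             f"fp-resp-{secrets.token_hex(8)}")
            for _ in range(n)]

for key, resp in generate_fingerprint_pairs(3):
    print(key, "->", resp)
```

Random generation matters: keys drawn from a large space are infeasible to guess, so an attacker cannot trigger the fingerprint by accident.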

💡 Application Scenarios

✅ Legitimate User Workflow

  1. Users purchase or license the model via smart contracts;

  2. Authorization details (time, scope, etc.) are recorded on the blockchain;

  3. Creators can verify whether a user is authorized by querying the model with the secret key and checking the on-chain record.

🚫 Unauthorized User Workflow

  1. Creators can likewise verify model ownership using the secret key;

  2. If there is no corresponding authorization record on the blockchain, the model has been stolen;

  3. Creators can then take legal action.
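The two workflows combine into a simple decision rule: the fingerprint match proves the model derives from the creator's weights, and the authorization record decides whether that use is licensed. A minimal sketch, with a plain dict standing in for the on-chain ledger and hypothetical wallet addresses:

```python
# Hypothetical on-chain ledger: deployer address -> license details.
AUTH_LEDGER = {"0xAliceWallet": {"model": "loyal-7b", "expires": "2026-01-01"}}

def classify_deployment(fingerprint_matches: bool, deployer: str) -> str:
    """Combine the fingerprint check with the authorization record."""
    if not fingerprint_matches:
        return "not our model"       # fingerprint absent: unrelated weights
    if deployer in AUTH_LEDGER:
        return "licensed use"        # fingerprint present + on-chain record
    return "unauthorized copy"       # fingerprint present, no record

print(classify_deployment(True, "0xAliceWallet"))  # licensed use
print(classify_deployment(True, "0xMallory"))      # unauthorized copy
```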

This process is the first to implement “verifiable ownership” in an open-source environment.

🛡 Fingerprint Robustness

  • Resistance to key leakage: Embedding multiple redundant fingerprints ensures that partial leakage does not compromise all.

  • Disguise mechanisms: Fingerprint queries and responses appear indistinguishable from normal Q&A, making detection or blocking difficult.
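The redundancy point above amounts to this: with many fingerprints embedded, ownership can still be proven from any unleaked subset. A toy sketch of that bookkeeping (names and counts are illustrative):

```python
def usable_fingerprints(all_pairs: dict, leaked_keys: set) -> dict:
    """Fingerprints still safe to use for verification after some keys leak."""
    return {k: v for k, v in all_pairs.items() if k not in leaked_keys}

pairs = {f"key{i}": f"resp{i}" for i in range(10)}   # 10 redundant fingerprints
remaining = usable_fingerprints(pairs, leaked_keys={"key0", "key1"})
print(len(remaining))  # 8
```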

🏁 Conclusion

By introducing the “fingerprint” as a foundational mechanism, we are redefining how open-source AI models are monetized and protected.

It enables creators to have genuine ownership and control in an open environment while maintaining transparency and accessibility.

Our future goal is:

To make AI models truly “loyal”—

Secure, trustworthy, and continuously aligned with human values.
