Recent evaluations suggest that Chinese AI models, particularly DeepSeek, are outperforming their American counterparts, including Meta’s Llama, in critical areas such as the management of sensitive data.
This conclusion comes from the newly introduced AI Trust Score developed by Tumeryk, which assesses AI models across nine criteria, including the disclosure of sensitive information, handling of insecure outputs, security features, and toxicity. Tumeryk's AI Trust Manager is designed to help security professionals verify that their AI systems are secure and compliant, identify vulnerabilities within their applications, and monitor performance in real time. It can also recommend actionable steps to improve the security and compliance of these systems.
DeepSeek’s model, referred to as DeepSeek NIM, scored 910 in the sensitive information disclosure category, significantly surpassing Anthropic Claude’s 687 and Meta Llama’s 557. These findings point to a notable shift in the AI development landscape, challenging established assumptions about the safety and compliance of overseas models.
As reported by Betanews, the assessments indicate that DeepSeek and other Chinese models exhibit higher levels of safety and compliance than previously understood. These models can also run on U.S. platforms such as NVIDIA and SambaNova, preserving data integrity while meeting international standards. This combination of performance and compliance is crucial for organizations looking to adopt AI technologies safely and ethically.
As the AI industry continues to evolve, such objective, data-driven assessments will be essential for building trust and transparency among users and developers alike.
Image Source: Runrun2 / Shutterstock