IEEE, Positive AI, SystemX and VDE developed a joint specification to serve as a basis for assessing the trustworthiness of AI systems. The specification breaks down high-level Trustworthy AI principles into concrete indicators that address both technical and organizational practices. It can act as a first step towards certification, or simply as a self-assessment, indicating where a company and/or its products stand with regard to Trustworthy AI practices.
The joint specification is based on the seven core principles that underpin the EU AI Act's approach to Trustworthy AI:
- Human Agency & Oversight
- Technical Robustness & Safety
- Privacy & Data Governance
- Transparency
- Diversity, Non-Discrimination & Fairness
- Societal & Environmental Well-Being
- Accountability
Interested? Access the joint "Specification for the Assessment of the Trustworthiness of AI Systems", version 1.0, free of charge. It covers the principles and indicators of the evaluation scheme for AI systems.
Fill in the form to receive the link to the specification: