Our core principles for AI
Transparency
It’s important to be transparent with customers about how their data is used, giving them visibility into and control over their information. To this end, inform users, as appropriate, about when and how AI is employed in your technologies; the intent of the AI; the model class; how data is used and what demographics it represents; and the security, privacy, and human rights controls applied.
Fairness
As an organization, strive to develop and deploy AI systems with an inclusive mindset that respects the rights of all people. It is important to identify and remediate harmful bias in algorithms, training data, and applications to prevent negative legal or human rights impacts on individuals or groups.
Accountability
Every stage of the AI lifecycle requires teams to account for privacy and security. Accountability measures include documenting AI use cases, conducting impact assessments, and providing appropriate oversight through a group of cross-functional leaders. An integral component of our Responsible AI Framework is keeping an open line of communication with employees, customers, and partners so they can provide feedback and raise concerns for review and action.
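As a minimal sketch of what documenting an AI use case might look like in practice, the record below captures the fields mentioned above (intent, model class, data use, oversight). All names and fields here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """Illustrative AI use-case record; field names are assumptions."""
    name: str
    intent: str                      # stated purpose of the AI system
    model_class: str                 # e.g. "classifier", "large language model"
    data_categories: list[str]       # kinds of data the system processes
    impact_assessment_done: bool = False
    reviewers: list[str] = field(default_factory=list)  # cross-functional oversight group

    def ready_for_review(self) -> bool:
        # A record is review-ready once an impact assessment exists
        # and at least one cross-functional reviewer is assigned.
        return self.impact_assessment_done and len(self.reviewers) > 0
```

A registry of such records gives the oversight group a single place to check which deployments still lack an impact assessment or a named reviewer.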
Privacy
Keep security, data protection, and privacy at the forefront of every AI build. Take the necessary steps to ensure that personal data usage is purpose-aligned, proportional, and fair. Where possible, incorporate privacy engineering practices into the development of any product or service offering, and remain compliant with applicable international privacy laws and standards. Create a dedicated privacy team to embed privacy by design as a core component of your development methodologies.
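One way to make "purpose-aligned" data usage checkable is to tag each data field with the purposes it was collected for and refuse any use outside that set. The sketch below is a hypothetical illustration; the field names, purposes, and structure are assumptions.

```python
# Hypothetical purpose-limitation check: each data field is tagged with
# the purposes it was collected for, and a proposed use is allowed only
# if it matches one of those declared purposes.
ALLOWED_PURPOSES = {
    "email": {"account_management", "support"},
    "purchase_history": {"order_fulfilment", "fraud_detection"},
}

def use_is_purpose_aligned(field_name: str, proposed_purpose: str) -> bool:
    """Return True only if the proposed use matches a declared purpose."""
    return proposed_purpose in ALLOWED_PURPOSES.get(field_name, set())
```

Defaulting unknown fields to an empty purpose set means an undeclared field can never be used, which keeps the check fail-closed.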
Security
Apply security controls that strengthen attack resiliency, data protection, privacy, threat modeling, monitoring, and third-party compliance. To guard against security threats, regularly test the resilience of AI systems against cyber-attacks. Share information with employees, customers, and partners about vulnerabilities and cyber-attacks, and protect the privacy, integrity, and confidentiality of personal data.
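One simple, illustrative form of resilience testing is checking that a model's output does not change under small random input perturbations, a rough proxy for robustness to adversarial noise. The function below is a sketch under that assumption, not a substitute for dedicated adversarial testing.

```python
import random

def is_locally_stable(predict, inputs, noise=0.01, trials=20) -> bool:
    """Return True if predictions stay unchanged under small random
    perturbations of each input vector (an assumed robustness proxy)."""
    for x in inputs:
        baseline = predict(x)
        for _ in range(trials):
            perturbed = [v + random.uniform(-noise, noise) for v in x]
            if predict(perturbed) != baseline:
                return False
    return True
```

Running such a check on a schedule, and widening the noise bound over time, gives a cheap early warning that a model has become brittle near its decision boundary.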
Reliability
It is best practice to perform AI impact assessments that regularly review AI-based solutions for safety and reliability. These assessments should determine whether adequate controls exist throughout the application lifecycle to maintain consistency of purpose and intent during operation, and should identify potential impacts on user safety. For higher-risk use cases, undertake additional validation and testing and, if necessary, add further controls to prioritize reliability.
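Tiering validation by assessed risk, as described above, can be sketched as a simple lookup from risk tier to required steps. The tiers and step names below are assumptions for illustration, not a standard.

```python
# Illustrative mapping from assessed risk tier to required validation
# steps; higher-risk use cases accumulate additional checks and controls.
REQUIRED_STEPS = {
    "low": ["baseline_testing"],
    "medium": ["baseline_testing", "safety_review"],
    "high": ["baseline_testing", "safety_review",
             "extended_validation", "additional_controls"],
}

def validation_plan(risk_tier: str) -> list[str]:
    """Return the validation steps required for a given risk tier,
    defaulting to the strictest tier for unknown inputs."""
    return REQUIRED_STEPS.get(risk_tier, REQUIRED_STEPS["high"])
```

Defaulting unknown tiers to the strictest plan mirrors the fail-safe intent of the assessments: an unclassified use case gets the most scrutiny, not the least.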