What is responsible AI?

Artificial Intelligence (AI) is increasingly part of our everyday lives. As adoption rapidly rises, organizations need to create a responsible AI program with a dedicated compliance team to educate users on data-related processes and policies. A responsible AI program should be clearly communicated to employees, customers, suppliers, and partners to create a culture of technological transparency and trust.

How do you build a responsible AI program?

Organizations need to provide guidelines that govern how AI-based solutions are developed, deployed, and operated. To build a program effectively, set clear guidelines for transparency, fairness, accountability, privacy, security, and reliability. A responsible AI initiative should define internal processes to ensure a continuous assessment and management loop with designers, developers, and partners. The program should also provide guidelines to support the governance of AI-related technologies, products, and services.

How do you implement a responsible AI program?

To implement these principles, we suggest creating a Responsible AI Framework that can be applied to the development, deployment, and use of AI, whether developing a product or model for customers to use or building on a third-party model. It is important that these principles combine Security by Design, Privacy by Design, and Human Rights by Design to surface and mitigate risks and deliver AI that is responsible and trustworthy.

Cisco created its Responsible AI Principles (PDF), documenting our position on AI in more detail. We also published our Responsible AI Framework (PDF) to operationalize our approach. Cisco’s Responsible AI Framework aligns with the NIST AI Risk Management Framework and sets the foundation for our AI impact assessment process.

Our core principles for AI

Transparency
It’s important to be transparent with customers about how their data is being used, ensuring they have visibility and control over their information. To this end, inform users, as appropriate, of when and how AI is employed in your technologies, the intent of the AI, the model class, the data used and its demographics, and the security, privacy, and human rights controls applied.
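
As a rough illustration, the disclosure items listed above can be captured in a structured record that ships alongside an AI feature, in the spirit of a model card. This is a minimal sketch in Python; the class name, field names, and example values are hypothetical, not a prescribed schema.

    from dataclasses import dataclass

    @dataclass
    class TransparencyNote:
        """Hypothetical disclosure record published alongside an AI feature."""
        feature_name: str        # where and how AI is employed
        intent: str              # what the AI is meant to do
        model_class: str         # e.g., "text classifier", "large language model"
        data_sources: list       # data used to train and operate the model
        data_demographics: str   # known coverage and gaps in that data
        controls: list           # security, privacy, and human rights controls

    note = TransparencyNote(
        feature_name="Support-ticket triage",
        intent="Route incoming tickets to the right support queue",
        model_class="Transformer-based text classifier",
        data_sources=["Historical support tickets, anonymized"],
        data_demographics="English-language tickets only; other languages underrepresented",
        controls=["Privacy by Design review", "Threat model", "Human review of low-confidence routes"],
    )
    print(note.feature_name, note.controls)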

Fairness
As an organization, strive to develop and deploy AI systems with an inclusive mindset that respects the rights of all people. It is important to identify and remediate any harmful bias within algorithms, training data, and applications to prevent negative legal or human rights impacts on individuals or groups.
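
One simple bias check along these lines is to compare a model's positive-outcome rate across demographic groups and flag large gaps; the 0.8 threshold below follows the common "four-fifths" rule of thumb. A minimal sketch, assuming binary predictions and a group label per record; real reviews combine multiple fairness metrics with legal and domain guidance.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Positive-outcome rate per demographic group."""
        positives, totals = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_flags(rates, threshold=0.8):
        """Groups whose rate falls below the threshold ratio of the best rate."""
        best = max(rates.values())
        return [g for g, rate in rates.items() if best and rate / best < threshold]

    rates = selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
    print(rates, disparate_impact_flags(rates))  # group "b" is flagged for review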

Accountability
Every stage of the AI lifecycle requires teams to account for privacy and security. Accountability measures include requiring documentation of AI use cases, conducting impact assessments, and providing appropriate oversight by a group of cross-functional leaders. An integral component of our Responsible AI Framework is keeping an open line of communication with employees, customers, and partners so they can provide feedback and raise concerns for review and action.

Privacy
Keep security, data protection, and privacy at the forefront of all AI builds. Take the necessary steps to ensure that personal data use is purpose-aligned, proportional, and fair. Where possible, incorporate privacy engineering practices when developing any product or service offering, and remain compliant with applicable international privacy laws and standards. Create a dedicated privacy team to embed privacy by design as a core component of your development methodologies.
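
Two privacy engineering practices that fit here are data minimization (retain only the fields the stated purpose requires) and pseudonymization of direct identifiers before data reaches an AI pipeline. The sketch below assumes hypothetical field names and an inline key; a production system would keep the key in a secrets manager and document the purpose of every retained field.

    import hashlib
    import hmac

    SECRET_KEY = b"rotate-me"  # hypothetical; keep real keys in a secrets manager
    ALLOWED_FIELDS = {"ticket_text", "product", "region"}  # purpose-aligned fields only

    def pseudonymize(value):
        """Replace a direct identifier with a keyed, non-reversible token."""
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    def minimize(record):
        """Drop fields the purpose does not require; tokenize the user identifier."""
        kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
        kept["user_token"] = pseudonymize(record["user_id"])
        return kept

    raw = {"user_id": "jane@example.com", "ticket_text": "VPN drops hourly",
           "product": "AnyConnect", "region": "EMEA", "home_address": "..."}
    print(minimize(raw))  # the raw email and address never reach the AI pipeline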

Security
Apply security controls that improve attack resiliency, data protection, privacy, threat modeling, monitoring, and third-party compliance. To ensure protection against security threats, frequently test the resilience of AI systems against cyberattacks. Share information with employees, customers, and partners about vulnerabilities and cyberattacks, and protect the privacy, integrity, and confidentiality of personal data.
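
Resilience testing can start with simple perturbation (fuzz) testing: feed slightly modified inputs to a model and measure how often its decision flips. A minimal sketch with a stand-in predict function; a real program would add adversarial-example tooling, red-team exercises, and dependency scanning on top of this.

    import random

    def predict(features):
        """Stand-in for a deployed model's scoring call (hypothetical)."""
        return 1 if sum(features) > 2.0 else 0

    def perturbation_test(features, trials=1000, epsilon=0.05):
        """Fraction of small random perturbations that flip the model's decision."""
        baseline = predict(features)
        flips = sum(
            predict([x + random.uniform(-epsilon, epsilon) for x in features]) != baseline
            for _ in range(trials)
        )
        return flips / trials

    # A decision sitting near the boundary is fragile under tiny input changes.
    flip_rate = perturbation_test([0.7, 0.7, 0.65])
    print(f"{flip_rate:.1%} of perturbed inputs changed the decision")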

Reliability
It’s considered best practice to perform AI impact assessments, reviewing AI-based solutions regularly for their safety and reliability. These assessments should determine whether adequate controls exist across the application lifecycle to maintain consistency of purpose and intent when operating, and should identify potential impacts on user safety. For higher-risk use cases, undertake additional validation and testing and, if necessary, add controls to prioritize reliability.
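
One concrete lifecycle control is drift monitoring: periodically compare live inputs against the data the model was validated on, and trigger a re-assessment when they diverge. The sketch below uses the population stability index (PSI), a common drift score; the bin count and the 0.2 alert threshold are conventional choices, not fixed rules.

    import math
    import random

    def psi(expected, actual, bins=10):
        """Population stability index between baseline and live input samples."""
        lo, hi = min(expected), max(expected)
        edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
        edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

        def share(data, a, b):
            count = sum(1 for x in data if a <= x < b)
            return max(count, 1) / len(data)  # floor at one item to avoid log(0)

        return sum(
            (share(actual, a, b) - share(expected, a, b))
            * math.log(share(actual, a, b) / share(expected, a, b))
            for a, b in zip(edges, edges[1:])
        )

    baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # validation-time inputs
    live = [random.gauss(0.5, 1.0) for _ in range(5000)]      # shifted production inputs
    print(f"PSI = {psi(baseline, live):.2f} (values above 0.2 typically trigger re-assessment)")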

What are the challenges of embracing AI without a governance framework?

AI differs from earlier technologies and requires a careful approach to ensure accuracy, privacy, and ethical use. AI insights can influence decisions and actions. If AI models train on data sets with inconsistent or incomplete data, the potential for bias and discrimination increases, so organizations need to invest in strict guidelines for data handling, storage, and processing, as well as data hygiene. With proper guidelines and implementation, organizations can reduce the risk of data breaches, cyberattacks, and unauthorized access, as well as the risk of creating or perpetuating bias.
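
A basic data-hygiene gate before training makes the incomplete-data risk concrete: measure per-field completeness and block training runs that fall below an agreed threshold. A minimal sketch; the field names and the 90 percent gate are hypothetical, and real pipelines would also check consistency and group representation.

    def completeness(records, required_fields):
        """Share of records with a usable value for each required field."""
        return {
            field: sum(1 for r in records if r.get(field) not in (None, "")) / len(records)
            for field in required_fields
        }

    records = [
        {"text": "login fails", "region": "EMEA", "label": "auth"},
        {"text": "slow VPN", "region": "", "label": "network"},
        {"text": "", "region": "APAC", "label": None},
    ]
    scores = completeness(records, ["text", "region", "label"])
    ready = all(v >= 0.9 for v in scores.values())  # hypothetical 90% gate
    print(scores, "train" if ready else "block and remediate")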