Gilad Yaron is the CEO of the consulting firm Data Protection Matters and a member of the advisory board of Korra.ai.
Introduction
On September 5, 2024, the first legally binding international treaty on artificial intelligence (AI) was signed by a group of parties including the U.S., the UK, the European Union, Israel, Norway, and Iceland.
Developed by the Council of Europe, this treaty aims to align AI development and deployment with human rights, democracy, and the rule of law.
The significance of this treaty is clear: while AI presents vast opportunities for organizations, it also brings considerable risks to governments, societies, businesses, and consumers alike.
For companies, particularly those in industrial and manufacturing sectors, this shines a spotlight on the need for strong AI risk management. It compels them to carefully evaluate the AI technologies they adopt and the potential risks they carry.
In these industries, the stakes are especially high. AI is essential for driving forward the Industry 4.0 revolution, but even a small misstep in navigating the complexities of AI can lead to major setbacks. So how can these risks be balanced?
Key elements of the treaty
The treaty, formally known as the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law” (CETS No. 225), lays out four key elements:
1. Human-Centric AI
AI systems must respect human rights and democratic values, focusing on privacy and non-discrimination.
2. Transparency and Accountability
The treaty mandates transparency in AI operations and requires legal mechanisms to address AI-related human rights violations.
3. Risk Management and Oversight
Countries are encouraged to establish frameworks to manage AI risks and monitor compliance.
4. Protection Against AI Misuse
Safeguards are established to prevent AI from undermining democratic institutions and processes, including judicial independence.
One of the key takeaways here is that the recommendations made to countries apply just as well to companies. The treaty was formulated through wide consultation, including with the public and private sectors, so it offers considerable value to organizations shaping their own AI risk management strategies.
Concerns with the treaty – and AI risk management in general
Despite its significance, concerns about enforceability remain.
Critics argue that provisions on national security and private-sector oversight may be too vague, limiting the treaty’s impact. At the same time, the treaty is deliberately technology-neutral, allowing it to adapt to future AI advancements, and it encourages international cooperation to harmonize AI standards globally.
It will enter into force once ratified by at least five signatories, including at least three Council of Europe member states.
How industrial and manufacturing companies should address this risk
It’s instructive to take the principles of the treaty and apply them to a manufacturing and industrial use case:
1. Human-centric AI
In a manufacturing and industrial context, AI systems should prioritize the well-being of employees, customers, and other stakeholders. AI tools deployed on the production floor or in decision-making processes must ensure that:
- Worker safety is protected at all times.
- AI-enhanced machinery and robotics prioritize human safety and ergonomics, creating a safer working environment.
2. Transparency and accountability
Companies must ensure that their AI systems are transparent in their design, functionality, and impact on operations and employees. This could involve:
- Providing clear documentation and explanation of how AI tools are being used in manufacturing processes, from quality control to employee management.
- Establishing mechanisms for workers or stakeholders to report and address any AI-related issues, such as algorithmic bias or unsafe AI behavior.
- Ensuring that AI system outcomes, especially in safety-critical tasks, are auditable and traceable, so responsibility can be assigned when errors or malfunctions occur (a minimal audit-trail sketch follows this list).
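To make the auditability point concrete, here is a minimal Python sketch of an audit trail that records one traceable entry per AI-assisted decision. It is an illustrative sketch under generic assumptions; the names (AuditRecord, log_decision, ai_audit_log.jsonl) are hypothetical and not taken from any particular product or standard.

```python
# Minimal sketch of an audit trail for AI-assisted decisions on the shop floor.
# Every name here is illustrative; adapt the fields to your governance policy.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    timestamp: str        # when the AI output was produced (UTC)
    model_version: str    # which model/configuration produced it
    task: str             # e.g. "visual_quality_check"
    input_digest: str     # hash of the input, so the exact case can be retraced
    output: str           # the AI's recommendation or classification
    operator: str         # the human accountable for acting on the output

def log_decision(task: str, model_version: str, raw_input: bytes,
                 output: str, operator: str,
                 path: str = "ai_audit_log.jsonl") -> AuditRecord:
    """Append one traceable record per AI-assisted decision."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        task=task,
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        operator=operator,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: record a defect classification made by a vision model.
log_decision(
    task="visual_quality_check",
    model_version="defect-classifier-v2.3",
    raw_input=b"<image bytes>",
    output="reject: surface crack detected",
    operator="inspector_017",
)
```

In a real deployment the log would live in an append-only store with access controls, but even this simple structure lets a company answer which model, which input, and which operator were involved when an error or malfunction is investigated.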
3. Risk management and oversight
Manufacturers should implement systems to assess and mitigate risks associated with AI, including:
- Regularly evaluating each AI system’s impact on product quality and operational safety.
- Setting up internal AI governance bodies or third-party oversight to ensure compliance with ethical standards and regulatory requirements.
- Monitoring AI tools continuously to detect and correct unintended behaviors, ensuring they remain aligned with company values and legal obligations (a minimal monitoring sketch follows this list).
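To illustrate continuous monitoring, the sketch below tracks a quality-control model’s recent reject rate and flags it for human review when it drifts beyond an agreed tolerance. The class name, baseline, and thresholds are assumptions chosen for the example, not prescriptions.

```python
# Minimal drift monitor for an AI quality-control model: compare the recent
# reject rate against an agreed baseline and escalate to humans on deviation.
import random
from collections import deque
from typing import Optional

class RejectRateMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline            # expected long-run reject rate
        self.tolerance = tolerance          # acceptable absolute deviation
        self.recent = deque(maxlen=window)  # rolling window of 0/1 outcomes

    def record(self, rejected: bool) -> None:
        self.recent.append(1 if rejected else 0)

    def check(self) -> Optional[str]:
        if len(self.recent) < self.recent.maxlen:
            return None  # not enough data yet to judge drift
        rate = sum(self.recent) / len(self.recent)
        if abs(rate - self.baseline) > self.tolerance:
            return (f"ALERT: reject rate {rate:.1%} deviates from baseline "
                    f"{self.baseline:.1%}; trigger human review")
        return None

# Example with simulated outcomes; in practice these booleans would come from
# the live model's accept/reject decisions.
monitor = RejectRateMonitor(baseline=0.02, tolerance=0.015)
for _ in range(2000):
    monitor.record(random.random() < 0.06)  # simulate an elevated reject rate
    alert = monitor.check()
    if alert:
        print(alert)
        break
```

The same pattern extends to other unintended behaviors (latency spikes, unusual confidence distributions, skewed outcomes for particular groups): the essential ingredients are a baseline, a tolerance, and a clear escalation path to a human.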
4. Protection against AI misuse
Safeguards should be put in place to ensure AI in manufacturing is not used for harmful or unethical purposes, such as:
- Avoiding the use of AI systems for excessive surveillance or controlling workers in ways that violate their autonomy or dignity.
- Preventing AI tools from being manipulated to produce unsafe products or tamper with production processes, which could harm consumers or the public.
- Establishing internal checks to prevent any unethical use of AI, such as compromising product safety standards or engaging in unfair competitive practices.
By applying these principles, a manufacturing company can ensure that its use of AI enhances efficiency and innovation without compromising ethics, employee rights, or public trust.
Introducing Korra
Korra stands out as the obvious solution for industrial and manufacturing companies seeking to align with the principles of ethical AI governance and AI risk management while embracing Industry 4.0 innovations. Here’s how Korra’s solution aligns with these principles:
Human-centric AI
Korra’s focus on privacy, user experience, and actionable insights aligns closely with the principle of human-centric AI. By prioritizing privacy (GDPR and CCPA compliance) and designing its platform to empower employees, Korra allows workers to access institutional knowledge in a way that supports their roles without intrusive data practices or biased AI-driven outcomes. The platform provides accurate, trustworthy information that helps employees make informed decisions, optimizing safety and job performance and ensuring human needs come first.
Transparency and accountability
Korra’s platform offers source citation and integrated viewers, ensuring that AI-generated insights are transparent and traceable. This meets the need for transparency in AI operations, enabling users to verify the origin of the information they are using. The platform’s design encourages accountability, as companies can easily audit the decision-making process based on AI outputs. By providing accurate and clear responses, the system helps mitigate any risks associated with opaque AI operations, and any potential issues can be tracked and addressed.
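As a generic illustration of how source citation supports this kind of transparency (it is not a description of Korra’s actual API), an AI-generated answer can be bundled with the sources it draws on, so a user or auditor can check provenance before acting on it:

```python
# Generic sketch: an answer object that carries its own citations, so every
# AI-generated insight remains traceable back to the documents it came from.
from dataclasses import dataclass, field

@dataclass
class SourceCitation:
    document: str    # e.g. "press_brake_maintenance_manual.pdf" (hypothetical)
    location: str    # page, section, or timestamp within the source
    excerpt: str     # the passage the answer relies on

@dataclass
class CitedAnswer:
    question: str
    answer: str
    citations: list[SourceCitation] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A transparency policy might refuse to surface uncited answers.
        return len(self.citations) > 0

answer = CitedAnswer(
    question="What is the torque spec for the clamp bolts on line 3?",
    answer="Tighten the clamp bolts to 45 Nm in a cross pattern.",
    citations=[SourceCitation(
        document="press_brake_maintenance_manual.pdf",
        location="section 4.2, p. 31",
        excerpt="Clamp bolts: 45 Nm, tightened in a cross pattern.",
    )],
)
assert answer.is_verifiable()
```

A rule as simple as is_verifiable() can enforce a policy that uncited answers are never shown in safety-critical workflows.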
Risk management and oversight
Korra supports comprehensive digitization of knowledge, allowing organizations to centralize and manage their information in a cohesive and controlled way. Because it is a closed-domain system, drawing only on the organization’s own content, risks related to misinformation or misinterpretation are substantially reduced.
Protection against AI misuse
Korra’s built-in privacy and security features, including ISO 27001 and SOC 2 compliance, along with its closed-domain design, help ensure that its AI cannot be used inappropriately. The platform’s emphasis on security prevents unauthorized access to or manipulation of AI outputs, keeping AI-driven decisions in manufacturing processes protected from misuse. By providing precise, cited insights, Korra guards against errors that could harm employee well-being or product quality. This is essential in preventing AI from undermining key operational safeguards or ethical standards within the company.
Why Korra is the best fit for leveraging the power of AI while managing AI risks
Korra’s platform seamlessly integrates the ethical principles required for AI governance into an easy-to-use system designed for manufacturing and industrial contexts. It digitizes organizational knowledge comprehensively, transforming it into actionable insights while maintaining transparency, accountability, and privacy.
Its focus on empowering employees and maintaining rigorous privacy standards ensures that AI enhances rather than disrupts ethical and operational processes. This positions Korra as the ideal solution for manufacturing and industrial companies looking to align with forward-thinking AI policies while driving efficiency and innovation in their operations.
Conclusion: AI risk management and you
The newly signed treaty is a significant step forward for AI governance and the principles that underpin it.
When it comes to practical implementation, Korra is the perfect solution for incorporating these principles while driving AI-powered value.
To learn more, reach out to the Korra team.