By Matej Zachar, CISO, Kontent.ai
Artificial intelligence (AI) is already transforming the world in unprecedented ways. From generating realistic content on-demand to improving decision-making and automating mundane tasks, AI has the potential to create immense value and benefit for society. However, AI also brings significant risks and challenges that need to be addressed by a range of stakeholders, including users, developers, regulators, as well as policymakers.
Emerging regulatory frameworks
As the use of AI in our daily lives becomes widespread, there is a growing need for a regulatory framework that ensures its ethical, safe, and trustworthy use. Several countries and regions have taken steps to develop and implement such a framework:
- Most recently the U.K. Government hosted a multilateral AI Safety Summit that saw 28 countries, including the U.K., U.S., China, Canada, Brazil, France, Japan, Singapore, and the United Arab Emirates. reach a landmark agreement to establish a shared understanding of the opportunities and risks posed by AI. The Bletchley Declaration, signed on November 1, 2023, reached consensus on the global need to safely manage the deployment of AI.
- In October 2023, U.S. President Joe Biden issued an Executive Order on harnessing the benefits and mitigating risks of AI, covering a range of AI-related algorithmic standards, privacy protections, equity and civil rights safeguards, support for workers displaced by AI, and collaboration with foreign partners. The Executive Order focuses on providing new rules for companies developing and using AI technologies, as well as providing guidelines for government agencies throughout the U.S. with major policy-making orders on using AI.
- The U.S. National Institute of Standards and Technology (NIST) also released an AI Risk Management Framework (AI RMF), which provides voluntary and flexible guidance for organizations to manage risks associated with AI systems. The AI RMF introduces the characteristics of trustworthy AI and AI risk management processes with functions to support the development of trustworthy AI systems.
- The European Union (EU) is preparing to adopt the AI Act, a comprehensive and horizontal legislation that will regulate the development, deployment, and use of AI systems in the EU. The AI Act will likely introduce a risk-based approach to the use of AI technology and may even ban certain use cases deemed discriminatory or intrusive. In June 2023, the European Parliament adopted its negotiating position on the Act, and the EU institutions will now work on a final compromise law by the end of the year. In the meantime, researchers from the University of Oxford have created a capAI framework that can be utilized for assessing conformance in line with the proposed rules of the EU AI Act.
- The rest of the world is also taking action to regulate AI systems. For example, the United Kingdom is preparing a regulatory approach to strive for innovation in AI but ensure principles of responsible use of the technology.We can expect the first implementations of relevant segments of the regulation in the coming months. Australia has also released a voluntary AI Ethics Framework, which provides a set of principles and practical guidance for the ethical design, development, and use of AI in Australia. Meanwhile, Canada has proposed an Artificial Intelligence and Data Act (AIDA), which brings responsibility to businesses with AI applications under their control.
- In addition to the regulatory efforts, there are also several initiatives from industry and academia to provide best practices and guidelines for AI security and privacy. For instance, the OWASP AI Security and Privacy Guide is a comprehensive resource that covers the security and privacy aspects of AI systems throughout their lifecycle, from design and development to deployment and operation. Similarly, the MITRE Atlas is a framework that helps organizations understand adversary techniques related to Machine Learning and Artificial Intelligence systems.
Key risks and challenges
AI systems, like all technologies, are vulnerable to various security attacks and issues that create risk. Most notably, this includes:
- Prompt injection: This is a type of attack where an adversary modifies or inserts a malicious prompt into an AI system, such as a chatbot, a voice assistant, or a generative model, to influence its output or behavior. For example, an attacker may inject a prompt that contains offensive or harmful language, or that instructs the AI system to perform an unauthorized or dangerous action.
- Data poisoning: This is a type of attack where an adversary alters or injects malicious data into the training or testing dataset of an AI system, to degrade its performance, accuracy, or reliability, or to induce a desired output or behavior. For example, an attacker may poison the data of a facial recognition system to cause false positives or negatives or to evade detection.
AI security and privacy, however, are not only technical issues. They are also social, ethical, and legal ones. There are various other concerns and expectations regarding the use and impact of AI systems, such as:
- Sensitive data exposure: AI systems often rely on large amounts of data to train and operate, which may contain sensitive or personal information, such as health records, biometric data, or financial transactions. If this data is not properly handled, it may be leaked, stolen, or misused by unauthorized parties, resulting in privacy breaches, identity theft, fraud, or discrimination.
- Issues with generative AI output: AI systems can generate realistic and convincing outputs, such as images, text, audio, or video, which may have various applications and implications. However, these outputs may also contain errors, biases, hallucinations, or hateful speech, which may harm the reputation, credibility, or safety of individuals or organizations, or influence the opinions, behaviors, or decisions of users.
- Malicious use of generative AI: AI systems can also be used for malicious purposes, such as creating fake or misleading content, impersonating or spoofing identities, manipulating or deceiving users, or launching tailored cyberattacks. For example, deepfakes are AI-generated videos that can swap or synthesize faces, voices, or expressions of people, which can be used for fraud, blackmail, defamation, or propaganda.
How Kontent.ai approaches responsible AI practices
It is important that vendors utilizing AI carefully consider how they are responding to these challenges to mitigate the risks and concerns outlined above. For our part, Kontent.ai, a headless CMS (Content Management System), is committed to ensuring AI security and privacy for our product and in our internal use of AI.
To this end, we have developed and follow the below “Responsible AI Principles” at Kontent.ai:
- We use AI with guaranteed customer data privacy and security.
- We provide a clear shared responsibility model over AI.
- We ensure AI governance based on industry best practices and compliance with respective laws and regulations.
- We evangelize the benefits of AI and its responsible use.
For our product, we utilize an AI governance approach that involves defining and implementing policies, processes, and standards for the development, deployment, and use of AI systems. Kontent.ai strives for compliance with proposed laws and regulations (e.g., through capAI), and follows best practices (e.g., NIST AI RMF). We also continue to study and test AI against various attack vectors, such as prompt injection and data poisoning, and implement defensive mechanisms.
For internal use of AI technologies, we have defined relevant internal policies and share best practices for the ethical, safe, and trustworthy use of AI systems. In addition, Kontent.ai has a Responsible AI Committee, which provides oversight and guidance for all the company’s activities around AI.
Conclusions
AI security and privacy are crucial and complex issues that require the collaboration and coordination of a multitude of stakeholders, such as users, developers, regulators, and policymakers. AI systems have the potential to bring great benefits and opportunities to society, but they also pose significant risks and challenges that need to be addressed and mitigated. Therefore, it is important for the tech industry to follow the best practices and guidelines that have been created by various initiatives and for the regulators to aim for similar and consistent requirements on AI systems worldwide. By doing so, we can ensure that AI is used in a responsible, ethical, and trustworthy manner.
The post Navigating through Responsible and Secure AI appeared first on Cybersecurity Tech Accord.