The EU Artificial Intelligence Act is the world’s first comprehensive legal framework for regulating Artificial Intelligence (AI) and limiting its risks. It came into force in August 2024 to ensure the safe, transparent, and ethical development and use of AI across the European Union. Any organization that sells products or services in the EU market must comply with these regulations.
The EU AI Act aims to preserve fairness and transparency in society by requiring companies that build AI systems to act responsibly. Its clear mandate is that AI systems must be safe and must respect fundamental human rights. In this respect, the regulation resembles the General Data Protection Regulation (GDPR), through which the EU set global standards for how data is collected, processed, managed, and used. Given the prevalence of AI in our day-to-day lives, it is important to understand how AI systems work and how they shape our lives.
On February 2, 2025, the European Union began enforcing Chapters I and II of the AI Act. It is high time for organizations to devote resources to developing AI literacy, so that everyone in the company is aware of the EU AI Act and its obligations.
The AI Act in a nutshell
The AI Act identifies four risk categories for AI systems:
- Unacceptable risk: AI systems that manipulate people’s behavior, perform social scoring, or carry out similar practices such as untargeted individual profiling. These systems can harm individuals and compromise social equality, and they are prohibited.
- High risk: AI systems used in sensitive areas such as law enforcement, critical infrastructure, or public safety. These systems face strict compliance requirements and must undergo a conformity assessment before they can be put into use.
- Limited risk: AI systems that pose a limited risk of manipulation or deceit, such as chatbots. These systems must comply with transparency obligations.
- Minimal risk: AI systems that pose minimal or no risk, such as spam filters and weather-forecasting systems.
Providers of Large Language Models are classified according to the systemic risk their models pose, assessed against criteria such as the number of model parameters, the compute used to train the model, and others. Providers of models with systemic risk carry additional obligations, and the EU AI Act provides Codes of Practice to ensure that Large Language Models are built ethically and serve humanity responsibly, without discrimination or bias.
The EU has set up an AI Office to govern the implementation of the EU AI Act. A European Artificial Intelligence Board with representatives from each member state will be established to oversee the implementation of the regulation and to facilitate the development of practices that vendors, providers, and other stakeholders can adhere to. An advisory forum, created in consultation with all stakeholders, will provide technical expertise to the board. An EU database will list high-risk AI systems, along with other data required to monitor and enforce the regulation. The regulation also applies to companies operating outside the EU that offer products and services to EU markets.
What it means for technical writing
Both high-risk and limited-risk AI systems have compliance requirements (Articles 8–17) to provide technical documentation that demonstrates compliance and gives EU authorities the information needed to assess it. Providers of general-purpose AI models must also supply technical documentation, including descriptions of training and testing processes and evaluation results. All providers must include “instructions for use” so that downstream deployers can comply with the EU AI Act.
The role of tech writers
Providers of AI systems must disclose the necessary information about how their AI systems work, including relevant details of the backend implementation. Technical writers will play an important role in documenting AI systems, which includes:
- Clear documentation on algorithms that the AI system uses
- Technical documentation on what data is being used and what data is captured as part of the respective input and output
- Technical documentation covering training data and evaluation frameworks, along with benchmarks, metrics, and so on
- Documentation covering the safety mechanisms in place for the AI system
- Ethical and safety documentation covering responsible AI practices and compliance with intellectual property regulations
- Documentation covering the essentials of cybersecurity
- Documentation covering how to use the system
Skills for technical writers
Technical writers must understand the basics of AI technology so they can gather information from data scientists, legal teams, machine learning engineers, and product managers, and help ensure compliance with the EU AI Act. Given the technical nature of AI, technical writers must translate technical concepts into plain language so that a general audience can understand what happens inside these AI systems, how they produce certain outputs, and how to use them properly.
Let’s look at some scenarios where technical writers play a pivotal role in providing documentation for AI systems:
Limited-risk systems: AI chatbots
- Technical writers need to have a deep understanding of text embeddings, Retrieval Augmented Generation (RAG) frameworks, and LLM providers’ APIs.
- Technical writers need to cover implementation and evaluation processes to test chatbots (before going live), best practices on prompt engineering, and so on.
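To document a RAG-based chatbot credibly, it helps to understand the retrieval and augmentation steps at the code level. The sketch below is a deliberately simplified illustration, not a production design: the document store, the bag-of-words "embedding", and all names are hypothetical stand-ins for a real vector database and embedding model, and the final LLM generation call is provider-specific and omitted.

```python
from collections import Counter
import math

# Toy document store. In production this would be a vector database
# populated with embeddings produced by a dedicated embedding model.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "Passwords can be reset from the account settings page.",
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Retrieval step: rank stored documents by similarity to the query.
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Augmentation step: retrieved context is prepended to the user's
    # question before the prompt is sent to the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A writer documenting such a system would describe each stage (embedding, retrieval, prompt construction, generation) along with the evaluation process used to test it before going live.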
High-risk systems: Biometric identification of people
- Tech writers need to understand how biometric systems, such as fingerprint processes, work.
- Tech writers need to know about classification algorithms and how AI systems are trained on data.
- Technical documentation needs to cover the accuracy of the AI models and describe their limitations.
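To document accuracy claims for a classification system, it helps to know how such a metric is computed. The following is a minimal, entirely synthetic sketch: the feature vectors and labels are invented for illustration, and the nearest-centroid classifier stands in for the far more complex models real biometric systems use.

```python
import math

# Synthetic labeled feature vectors. Real biometric systems derive
# features from fingerprint or face images, not hand-written tuples.
TRAIN = {
    "alice": [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)],
    "bob":   [(0.80, 0.90), (0.85, 0.95), (0.78, 0.92)],
}

def centroid(points):
    # Mean of a list of 2-D points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(sample):
    # Nearest-centroid classification: assign the label whose centroid
    # is closest to the sample in feature space.
    return min(CENTROIDS, key=lambda lbl: math.dist(sample, CENTROIDS[lbl]))

def accuracy(labeled_samples):
    # The kind of headline metric the technical documentation reports;
    # real documentation also covers error rates across demographic groups.
    correct = sum(classify(x) == y for x, y in labeled_samples)
    return correct / len(labeled_samples)
```

Understanding this loop (features in, prediction out, accuracy over a test set) is what lets a writer ask the right questions about test data, thresholds, and failure modes.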
Towards responsible AI
The EU AI Act is a first step toward ensuring that software and model providers practice responsible AI. Understanding the Act is indispensable for making sure AI system providers comply with their legal obligations and implement appropriate measures. Software providers must be transparent about the AI systems deployed in their products and services by publishing the necessary technical documentation for their AI features and functionalities. Beyond that, providers can publish AI literacy materials to help customers learn the fundamentals of the technology. Technical writers play an important role in ensuring that technical documentation for AI systems adheres to the EU AI Act.