The EU AI Act: 5 steps to take now

The EU AI Act in a nutshell

The final draft of the EU Artificial Intelligence Act (“EU AI Act”) was agreed in January 2024.  The Act is a landmark in global AI regulation.  

Its objective is to ensure the trustworthy and responsible use of AI systems across Europe.  AI systems used in Europe must be safe, transparent, traceable, non-discriminatory and environmentally friendly.  Their use must be overseen by people, rather than left to automation alone, to prevent harmful outcomes.

The EU AI Act focuses on the risks of using AI.  It applies a tiered compliance framework, distinguishing between uses of AI that create unacceptable risk, high risk, and low or minimal risk to fundamental human rights. Most of the compliance obligations fall on developers and vendors of AI systems which are classified as high risk. 

Does your business use AI?

The EU AI Act defines an AI system as, “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.  The key characteristic that differentiates AI systems from simpler, more traditional software systems is their capability to infer.

AI is already used extensively in the tourism industry to improve customer service. Chatbots cut down queues at passenger service desks. Sentiment analysis of interactions through social media and call centres allows travel brands to fine-tune communications with customers. Facial recognition technology enhances security protocols and enables paperless boarding.  According to a survey conducted in October 2023, 54% of senior business decision makers in the European travel sector estimate that AI has increased the profitability of their business by over 10%*.

As AI becomes more sophisticated and embedded into daily business processes, it’s important to understand how the EU AI Act will affect the technology you use. Now is the time to understand what customers and regulators will expect in terms of AI governance and assurance, and to know what questions to ask of AI systems vendors.

Prohibited practices

The EU AI Act identifies certain prohibited uses of AI systems, including categorising individuals based on behaviour, socio-economic status or personal characteristics, and employing real-time remote biometric identification systems in publicly accessible spaces. Travel companies must promptly cease using AI systems for these purposes in their EU operations.

High-risk and low-risk AI systems

The tourism industry is not called out as high-risk by the Act, but the use of AI systems for tasks such as personalised travel recommendations based on behaviour analysis, sentiment analysis in social media, or facial recognition for security will likely be classified as high-risk.  Before using high-risk systems, companies will need to conduct risk assessments and conformity assessments, and to make sure there is a “human in the loop”.

On the other hand, use of smart travel assistants, personalised incentives for loyalty scheme members, and solutions to mitigate disruptions will all be classified as low or limited risk under the EU AI Act.  Companies using AI in these ways will have to adhere to transparency standards, and will need to establish comprehensive measures and procedures to effectively address questions from users or authorities.

Enhanced Transparency and Explainability Standards for AI Systems

All companies using AI in Europe must meet the new transparency and explainability requirements.  Similar to the obligations set by the GDPR for processing personal data, organisations using AI systems are required to communicate to the public how the AI system is used, for what purpose, and how it makes decisions.  Where the risk associated with the use of AI is higher, the standards for fulfilling these requirements are stricter and more detailed explanations will be required.

AI, Privacy and Data Governance

The EU AI Act will have wide-ranging effects on data protection and data governance. Due diligence will need to be conducted on the datasets used to train AI systems, to mitigate potential risks of inaccuracy or bias. We expect to see a multidisciplinary approach, with technical, legal, and privacy specialists working together on AI risk assessments and addressing explainability requirements, particularly where AI systems may have made discriminatory or unfair decisions.

What will be the impact on the British travel industry?

Although the EU AI Act doesn’t apply directly to AI used in the UK, it will still carry substantial implications for British travel companies operating in Europe. To the extent that your company uses AI at all, identify where it is used and for what purposes. This will allow you to assess the application of the EU AI Act and any other relevant laws, such as the GDPR. 

Even if your company doesn't directly engage in high-risk or low-risk AI activities to deliver services, potential repercussions may arise if partners or suppliers do. Travel companies should check that these partners and suppliers comply with the EU AI Act, to mitigate potential disruption to the business and liability to customers.

And what’s the worst that could happen?

National competent authorities will have enforcement powers, with the capacity to impose significant fines depending on the level of noncompliance.  For use of prohibited AI systems, fines may be up to 7% of worldwide annual turnover, while noncompliance with requirements for high-risk AI systems will be subject to fines of up to 3% of turnover. The Act is expected to enter into force in Q2 2024, with different obligations taking effect in stages over the following 6 to 24 months.

5 Steps to take to get ready for the EU AI Act

Accessing the EU travel market will require British travel companies to have their processes and notices in order. Getting ready to comply with the Act's principles now could give your company a competitive edge and help build trust with customers.

Here are five steps you can take now to get ready for the EU AI Act coming into effect:

  • Conduct a preventive AI inventory: Identify all AI systems used in your EU operations and assess their risk level. This essential first step will allow you to understand the impact the forthcoming legislation will have on your organisation. When you know which of your activities the Act will apply to, you can develop mitigation strategies to allow the continued use or commercialisation of AI systems in the EU market.
  • Develop compliance plans and mitigation strategies: Create plans for meeting the new Act’s requirements. Depending on the type of AI you will use and for what purpose, you may need risk assessments, bias audits or technical measures and procedures to comply with AI transparency and explainability requirements.
  • Update or draft AI policies and procedures: Update internal and customer-facing policies to reflect the principles outlined in the Act, including transparency, fairness, explainability, and non-discrimination in automated decision-making. This will be essential in order to respond to challenges to decisions made by AI systems, or to queries from regulators.
  • Roll out employee training and embed AI awareness in the company: Raise awareness among staff about the Act and its implications for their roles. This is crucial to meet the human oversight requirements, and also serves as evidence that you are effectively addressing the risks associated with AI within your operations. 
  • Monitor updates and interpretations of the Act, along with potential UK AI regulations: Keep up to date to ensure you can adapt to evolving requirements, guidelines, and recommendations in this dynamic field.  The UK is taking a pro-innovation approach, for example, and intends to delegate AI assurance regarding risk management, transparency, bias, safety and robustness to existing industry regulators.   

Data Driven Legal is a boutique legal practice specialising in data protection advice and AI governance.  They have years of experience working for companies in the travel industry, including cruise lines, tour operators and transport service providers.  Find out more here.

* Public First were commissioned by Google to conduct an online, anonymous survey of 118 senior business decision makers in the travel and tourism sector from 2 – 11 October 2023.