17.09.2025

Get Ready for the EU’s New AI Regulations

The EU’s comprehensive AI regulations are now in effect, representing the most extensive AI law in the world. To help you navigate this new landscape, we’ve put together an expert panel covering what the regulations include, how to prepare, and what’s happening in the rest of the world.

Much like the GDPR before it, this regulation is complex, progressive, and far-reaching. Following significant interest from our community, we hosted a webinar on September 16th to dive into the EU AI Act from various angles. The event featured Ariel Yosefi, Head of the Technology Regulation Department at Herzog; Yulia Lerner Snir, Head of the Economic and Trade Mission to the European Union, Foreign Trade Administration at the Ministry of Economy; and Josef Gedalyahu, Head of the AI Policy & Regulation Center, Ministry of Innovation, Science and Technology.

We’ve summarized the key takeaways from the webinar below, and you can also check out the comprehensive document on the regulations prepared by HFN (link).

Two Global Approaches to Tech Legislation

While the European Union is taking a holistic approach to AI legislation, countries like the U.S., the UK, and Israel favor a sectoral approach.

The EU has defined different uses for AI and assigned risk levels accordingly. Certain uses, like behavioral manipulation or the creation of a social scoring system (similar to what is used in China), are completely banned. Other uses are categorized as high-risk and are subject to detailed regulation, while some are determined to require no regulatory intervention at all.

In contrast, the UK and U.S. have avoided comprehensive legislation. Instead, they evaluate AI use on an industry-by-industry basis, differentiating between risk levels and intervening only in specific cases. The U.S. has also chosen to avoid federal legislation, leaving decisions to individual states, which creates a complex situation for companies that operate across the country.

So far, Israel’s approach is more similar to that of the UK and U.S., prioritizing market needs and encouraging research and innovation to maintain its status as the “Startup Nation.” The national AI program views dialogue with the business sector as a central factor, and it works to remove barriers through initiatives like advanced technology implementation in government offices.

Principles of the EU AI Act

It’s important to understand that the EU’s regulations are designed with the knowledge that geographic borders do not stop technological expansion. The rules apply to any AI system that is provided or made available in Europe. The Act also defines responsibilities and obligations for every player in the supply chain and enforces compliance with heavy fines and penalties for violations.

The new regulation essentially covers an AI product through all its stages: development, market entry, and ongoing use. It addresses questions of internal organizational responsibility, risk identification and management, data management, and transparency. Every part of the process must be certified and documented for legal authorities and must also be transparent to users.

Legislators determined that some uses of AI are inherently high-risk. The first category includes products that are already covered by existing legislation and require a conformity mark; the new law dictates that any use of AI within such a product must be part of that product's conformity assessment process. The second category includes sectors where AI use poses a risk of violating fundamental rights, such as access to critical services, employment, education, law enforcement, the judicial system, population registries, and immigration. In all these cases, the law regulates the use of AI.

At the lowest risk level, where the concern is a lack of awareness (for example, AI-generated videos or bots used instead of human service providers), the legislation focuses on requirements for transparency and notification. For more complex risk levels, the law imposes a much more comprehensive set of demands. The EU has, of course, excluded AI use for military purposes, scientific research, or personal use from the new legislation.

What About Competition?

The global geopolitical situation also affects technology. The EU realized it might fall behind in the AI race dominated by the U.S. and China, so it’s also putting a strong emphasis on competition, innovation, and development. Naturally, there are political and business disagreements among the 27 member states, and concerns are already being raised that the new regulations—some of which are already in effect, with others coming into force in about a year—will limit Europe’s ability to compete in the world’s most dynamic field. It’s possible that in the near future, we will see attempts to ease the burden on tech companies through omnibus legislation.

For its part, Israel wants to see such easing measures that will help local tech companies export to and operate in the EU. Therefore, it encourages knowledge sharing and dialogue with the Israeli industry and also aims to give weight to alternative mechanisms that may reduce the impact of the AI Act. An example of this is self-regulation initiatives led by the business sector itself, which define responsible use and implementation of AI tools.

At DatA-IL, we’re committed to keeping you informed so we can plan relevant events, convey important topics to decision-makers, and create valuable connections.

You can find all the speakers’ contact information in the webinar recording here. We also invite you to answer our startup survey here.

You can watch the webinar here.
