INFORMATION TECHNOLOGY
The EU Regulation on Artificial Intelligence is coming on April 21st.
The EU is drawing the first boundaries of what artificial intelligence (AI) will look like on the Continent. In other words, it is setting out the rules needed to foster development that respects fundamental rights and serves collective interests. This emerges from the draft of the AI regulation, leaked in the past few hours, which the European Commission will present next week.

There are two key points. The first: certain AI technologies deemed too dangerous will be banned in Europe, namely those used for mass surveillance and those used to manipulate our behavior, decisions, and opinions to our detriment. The second: every company will have to assess whether its AI technology is "high-risk". If so, before deploying it, the company must submit it to an assessment of the impact the technology could have on society and on people's rights.

Europe also envisions heavy sanctions for companies that violate the bans: fines of up to 4 percent of global turnover. These mirror the penalties provided for by the GDPR, Europe's privacy legislation, which has so far served as a benchmark, in Europe and beyond, for protecting rights in the digital age. The concept of an "impact assessment" is also borrowed from the GDPR.