Assessing the technological and systemic risks associated with artificial intelligence

Anticipating issues to better frame innovation

As artificial intelligence takes on a growing role in our lives, the need to evaluate its impacts becomes crucial. INESIA has been entrusted with a primary mission: to identify and analyze the risks associated with AI technologies, particularly in sensitive areas where their effects can be critical for individuals, institutions, and society as a whole.

AI systems are now used to predict judicial decisions, assist in medical diagnostics, monitor behaviors for security purposes, and even automate financial transactions. While these applications open up unprecedented opportunities, they also pose major risks: algorithmic bias, lack of transparency, data manipulation, and failures that could have serious consequences.

A rigorous scientific analysis

The evaluation conducted by INESIA is based on a multidisciplinary approach, combining expertise in computer science, statistics, law, ethics, and cybersecurity. Each AI system undergoes rigorous testing aimed at:

  • Detecting discriminatory biases or disproportionate effects on certain populations (see the sketch after this list);
  • Identifying security vulnerabilities that could be exploited for malicious purposes;
  • Analyzing the robustness of algorithms in the face of adversarial scenarios or changing environments;
  • Assessing the quality of training data and its adequacy for the intended use.
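For illustration, the sketch below shows what two of these checks can look like in code. It is a minimal example under stated assumptions: the function names, the sklearn-style `model.predict` interface, the noise magnitude, and the toy data are all hypothetical, and a real audit would involve far more extensive protocols.

```python
import numpy as np

def audit_bias(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Demographic-parity gap: the largest difference in positive-outcome
    rates between any two groups (0.0 means identical rates)."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def audit_robustness(model, x: np.ndarray, epsilon: float = 0.01,
                     trials: int = 100, seed: int = 0) -> float:
    """Fraction of trials in which small random input perturbations
    (uniform noise of magnitude epsilon) change the model's predictions."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(x)  # assumes an sklearn-style predict()
    flipped = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if np.any(model.predict(x + noise) != baseline):
            flipped += 1
    return flipped / trials

# Example: binary decisions for two demographic groups (toy data).
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(audit_bias(preds, groups))  # rate 0.75 for A vs 0.25 for B -> gap 0.5
```

A large parity gap or a high flip rate does not by itself prove a system unsafe, but it flags where a deeper, context-aware review is needed.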

These audits can be initiated at the request of public authorities or companies, or as part of proactive monitoring of systems already in circulation.

Protecting fundamental rights and national security

The fundamental goal of these evaluations is twofold: to prevent violations of human rights and to guarantee France’s technological sovereignty in the face of increasingly complex technologies, often developed internationally.

By providing independent reports, INESIA helps regulators make informed decisions: whether a tool may be placed on the market, what risk level it should be assigned, or whether its use must be banned under certain conditions. The aim is to reconcile innovation with ethics while building trust in AI applications.

A forward-looking approach

Through this evaluation mission, INESIA positions itself as a preventive safeguard against technological abuses. Rather than reacting after the fact, the Institute aims to anticipate potential dangers ahead of the widespread deployment of AI systems.

This logic is fully aligned with the new European requirements for high-risk AI systems, set out in the EU AI Act, which mandate traceability, explainability, and systematic impact assessment.
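To make the traceability requirement concrete, the sketch below shows one minimal way an audited system might record each automated decision so it can be reconstructed later. The record fields and the `log_decision` helper are hypothetical illustrations, not structures prescribed by the European texts.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str      # model name and version that produced the decision
    input_hash: str    # fingerprint of the input, so the case can be replayed
    output: str        # decision actually returned
    explanation: str   # human-readable justification for the decision
    timestamp: str     # when the decision was made (UTC, ISO 8601)

def log_decision(model_id: str, raw_input: bytes, output: str,
                 explanation: str) -> str:
    """Serialize one decision as a JSON line for an append-only audit log."""
    record = DecisionRecord(
        model_id=model_id,
        input_hash=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        explanation=explanation,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

print(log_decision("credit-scorer-v2", b"applicant #1042 features",
                   "declined", "debt-to-income ratio above threshold"))
```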
