Faced with the growing complexity and volume of cyberattacks (driven in particular by the exponential growth of connected objects, drones and sensors, the rollout of the 5G network, increased mobility, and the growing interconnection of organizations), organizations can no longer rely on human resources alone to combat this major risk and must rethink their defense strategies.
To prepare for a cyberattack, industry players notably recommend the use of artificial intelligence techniques powered by machine learning, which make it possible to update security databases, identify trends, detect unusual activity, predict attacks, prevent the exploitation of vulnerabilities and correct them where necessary, combat fraud, and more: the applications are many. This technology is all the more essential because hackers themselves use it, for example to steal identities by realistically recreating human characteristics such as a face or even a voice, a practice better known as a "deepfake", which can be used in CEO fraud or to manipulate a country's democratic life (the risks of interference are expected to increase, for instance, in the run-up to the November 2020 presidential election in the United States).
It is therefore essential to implement appropriate measures in order to fight "on equal terms". In this respect, ANSSI committed in its Manifesto (in French only) of 22 January 2020 to supporting the structuring of the cybersecurity ecosystem, in particular by sharing its expertise, tools and data more widely for the benefit of smaller players. The CNIL, on the occasion of its participation in the International Cybersecurity Forum, also recalled that its action in 2020 will focus in particular on (i) stepping up its support for businesses by drawing lessons from the personal data breaches reported to it, (ii) increasing the visibility and readability of its enforcement policy on security (since 2017, two-thirds of its sanctions have concerned the security obligation set out in the GDPR, often in connection with failures on the part of processors, which again illustrates the trend of exploiting trust relationships), and (iii) strengthening its links with the cybersecurity ecosystem and addressing the GDPR compliance of certain IT security practices. Finally, to help organizations assess the level of risk of a personal data processing operation, ENISA recently published an online tool, available here.
While artificial intelligence (coupled with other appropriate technical and organizational measures) can serve as a tool to meet the GDPR's security requirements and ensure the protection of personal data, it can also be used as a means of processing and management, whether as part of a compliance program (data mapping) or as part of the organization's business activities, for example to predict and analyze the behavior of customers or citizens. The rules applicable to the processing of personal data are therefore intended to apply to artificial intelligence (notably Article 22 of the GDPR on automated individual decision-making, including profiling), which also raises ethical and moral issues.
We will have the opportunity to explore these issues during a morning session on cybersecurity, one of the hottest topics of 2020, organized by the Personal Data Department of Delsol Avocats next March!