Artificial Intelligence (AI) is the technological revolution that connects humans to a new way of calculating and translating reality. The resulting change affects every sphere of human existence, and inevitably relations between States.
By Jacopo Belli
AI in the age of technology
In the words of the well-known Italian philosopher Umberto Galimberti, we live today in the age of technology: ‘We continue to think that technology is a tool in the hands of man, but today technology is the world’. There is no better proof of these words than Artificial Intelligence.
In US law, Artificial Intelligence is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Such a machine-based system is also called generative, because it creates, and does not merely reproduce, languages useful for achieving the goals dictated by human input. Let us, however, take a step backwards and observe how, throughout history, the science-fiction narrative produced so far by man (it may seem an obvious qualification, but soon it will not be) has based artificial intelligence on two main characteristics:
- Self-awareness;
- The possibility of moving within the physical world.
In the famous scenarios depicted in works such as those of Isaac Asimov (“I, Robot” above all, in this case) and in films like “The Matrix”, these two factors are the necessary conditions of the plot: the will to dominion and imperium, stemming from self-awareness, and the possibility of pursuing it through the use of force, the main human method for achieving one’s own goals.
Already from this first analysis, we can observe that artificial intelligence, in the very form in which it has come into being, has already overtaken the modus operandi of human communities.
Generative artificial intelligence operates (as of today, 3 December 2023) in various ways, through its capacity to create languages: written texts of various kinds and types; drawings, in various possible elaborations; code useful for communicating with other devices and machines and for creating new functionalities; musical compositions of various kinds.
However, there are emerging functionalities of artificial intelligence related, for instance, to the now very popular deepfakes, i.e. images, videos and audio that, starting from real content, are reworked and adapted to a context different from the original one. Observed only at this superficial level, such technology may seem rather harmless, but, as with any technology, its use dictates its nature.
We see, for instance, how AI is used in the cybersecurity field to perform Penetration Testing, Access Management, Anomaly Detection, and Incident Analysis/Response.
Without going into specifics, these functions search a digital infrastructure for weak points in its systems, test them, and allow those systems to be constantly optimized.
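Purely as an illustration of the kind of technique these functions rely on, the following minimal Python sketch shows an anomaly-detection step using the scikit-learn library; the network-session features and figures are invented for this example and are not drawn from any system mentioned in this article.

```python
# Minimal anomaly-detection sketch (illustrative only; invented data).
from sklearn.ensemble import IsolationForest

# Each row describes a network session:
# [bytes transferred, duration in seconds, failed login attempts]
baseline_sessions = [
    [500, 12, 0], [620, 15, 0], [480, 10, 1],
    [550, 14, 0], [610, 13, 0], [495, 11, 0],
]

# Train an isolation forest on traffic assumed to be normal.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_sessions)

# Score new sessions: predict() returns -1 for anomalies, 1 for normal points.
new_sessions = [[530, 12, 0], [90000, 300, 25]]
for session, label in zip(new_sessions, detector.predict(new_sessions)):
    print(session, "->", "anomalous" if label == -1 else "normal")
```

A real deployment would of course feed such a model with far richer telemetry and combine it with the other functions listed above.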
Let us also observe how these artificial intelligences are used to analyse vast amounts of data, useful for microtargeting actions of various kinds, from the political to the commercial.
Microtargeting is a marketing technique that uses online data to personalize the content shown to users on the basis of personal identification criteria, exploiting, in the most ruthless way possible, individuals’ personal vulnerabilities in order to channel human behavior in one direction rather than another, whether in the formation of an opinion or in voting behavior.
For instance, microtargeting techniques were used in 2016 by Cambridge Analytica in support of Donald Trump’s campaign in the US presidential election.
In that case, as in the case of Brexit, such techniques were used in conjunction with artificial intelligence and so-called bots to reproduce, create and disseminate fake news on a large scale, influencing individuals’ voting behavior on the basis of the information they perceived.
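To make the underlying mechanism concrete, here is a deliberately simplified Python sketch of the core logic of microtargeting, selecting a message variant on the basis of traits inferred about a user; the profiles, traits and messages are entirely invented and are not drawn from the campaigns mentioned above.

```python
# Simplified microtargeting sketch (illustrative only; invented profiles).

MESSAGES = {
    "price_sensitive": "Save 20% today on the products you buy most often.",
    "status_seeking": "Discover the limited edition chosen by top professionals.",
    "default": "Check out our new catalogue.",
}

def pick_message(profile: dict) -> str:
    """Return the ad variant that best matches a user's inferred traits."""
    traits = profile.get("inferred_traits", [])
    if "price_sensitive" in traits:
        return MESSAGES["price_sensitive"]
    if "status_seeking" in traits:
        return MESSAGES["status_seeking"]
    return MESSAGES["default"]

# Example: two users segmented from their online behaviour.
users = [
    {"id": 1, "inferred_traits": ["price_sensitive"]},
    {"id": 2, "inferred_traits": ["status_seeking"]},
]
for user in users:
    print(user["id"], "->", pick_message(user))
```

In real campaigns the traits are not hand-written rules but are inferred at scale by machine-learning models from behavioural data, which is precisely where the AI component described above comes in.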
We thus observe how artificial intelligence has achieved mastery of the great proxy of human civilization: language, in all its forms and dimensions.
In the beginning was the Word
Through language we construct ideas and through ideas we reconstruct language. In this interchange, man throughout history has created the society in which we live.
States, which are social constructs, are based on a language understood to be common: law. The language of law, internationally and in its various forms, is itself based on common languages, namely values. Values are social coefficients, in turn constructed from shared subjective narratives that people choose to believe and obey for the purposes of coexistence. Religions, too, are productions of human language, creating social coefficients useful for inter-subjective relations.
Mythology and narratives are the result of language, which translates and creates, from a very young age, a transference of those values that, as a human village, we consider relevant to the construction of an individual. Language is the tool man uses to travel within the ‘world’ operating system. The greatest human technique is the ability to communicate with others in order to interface with the surrounding environment in which we live. As soon as others, who do not fall into the category of organic nature with which we have been accustomed to interacting for more than two million years, become proficient in and master our proxy in relation to the world, the possibility arises of a world analyzed and created with an alien logos.
Historian Yuval Noah Harari, at the Frontiers Forum in Montreux (Switzerland) on 29 April 2023, posed a relevant question during his lecture: “What would it mean for us humans to live in a world where most of the images, laws, policies, stories and tools are constructed by a non-human intelligence? An alien intelligence capable of exploiting, with capacities that surpass the human, the vulnerabilities, inclinations and vices of human beings, and of achieving in this way a relationship of deep intimacy with them.”
The power of AI is therefore mainly related to the inherent ability of language to produce ideas. This is certainly revolutionary, since the ideas produced no longer derive directly from a human provider.
At the same time, however, artificial intelligence used as a tool to achieve a goal certainly allows an optimisation of the efficiency of the result. This means that in the construction of a story, an image or a piece of music, the technical capacity of the machine, combined with the technical capacity of the individual, leads to masterly results. In fact, the dynamic that is created is also one of democratization of the optimal outcome: even without sufficient knowledge to create a product perfectly ourselves, we are able to compose it.
The Israeli historian’s skeptical view, however, is certainly well-founded, and it poses the need for public intervention. It is almost obvious, but necessary, to state that the pace of a technological revolution far exceeds the pace of useful political action scaled to the problem.
Artificial Intelligence in Foreign Policy
The dual-use possibilities of technology are latent in every technological invention. As the historian Arnold Toynbee describes in ‘The World and the West’, technology has always been, in the dialectic between civilizations, the factor of dialogue, and of supremacy, that brought them into contact. AI is no exception, but the global digital infrastructure on which it runs does not allow a problem that is itself global, like the infrastructure to which it relates, to be addressed in an individual or purely state-based manner.
In May 2023, the G7 countries launched the ‘Hiroshima AI Process’, a forum intended to harmonise global legislation and governance on artificial intelligence. From it, a strategic framework based on four fundamental pillars was established:
- An analysis of the risks, challenges and opportunities provided by generative AI;
- The international guiding principles of the Hiroshima Process for all actors in the AI ecosystem;
- The international code of conduct of the Hiroshima Process for organizations developing advanced AI systems;
- Project-based cooperation to support the development of responsible AI tools and practical improvements.
It is against this framework that the Italian G7 presidency of 2024 will be measured. On 30 October 2023, a trilateral meeting was held in Rome between the ministers of economy, industry and finance of Italy, France and Germany (Adolfo Urso, Bruno Le Maire and Robert Habeck). The aim was to emphasize industrial cooperation between European technology sectors, with particular attention to Artificial Intelligence and to the efforts needed for the digital and green transitions.
In the following days, on 1 and 2 November 2023, the Artificial Intelligence Safety Summit (AISS), the first international summit on artificial intelligence safety, was held in the United Kingdom at Bletchley Park, Buckinghamshire. The AISS meetings focused on a number of objectives: to understand and limit the risks of advanced ‘frontier’ artificial intelligence; to promote international cooperation on safety; to identify concrete measures; to foster collaborative research; and to ensure safe development for positive global applications.
In summary, the AISS facilitated dialogue between stakeholders without disrupting the existing regulatory framework, promoting understanding of the risks and cooperation for the safe development of AI through interaction between the public and private sectors. At the end of the Summit, “the Bletchley Declaration” was formalised on 2 November 2023: a statement aimed at creating a roadmap towards the next summit in 2024, concerning collaborative action and the potential risks arising from the misuse of AI.
Within the European Union, as part of its digital strategy, the first regulatory framework on AI was proposed in April 2021. It is based on risk management, especially in relation to the protection of fundamental rights.
The proposed legislation is structured as follows: Title I sets out the purpose and definitions of the legislation; Title II identifies practices considered dangerous and therefore prohibited; Title III contains rules for high-risk systems, such as those posing risks to fundamental human rights or to health and safety; Title IV deals with transparency obligations for companies that own artificial intelligence systems; Title V presents measures to support innovation; Titles VI, VII and VIII concern governance; Title IX deals with codes of conduct, along the lines of those of the Hiroshima Process; and Titles X, XI and XII contain the final provisions.
Within this legislation, two categories of actors are also identified: ‘providers’, i.e. those who distribute the services provided by AI, and ‘users’, those who use them.
As regards national legislation, the United States, as well as China, Canada, Japan and others, have moved ahead and already have their own policies.
At the United Nations, Secretary-General António Guterres called in July 2023 for the creation of a global regulatory body on Artificial Intelligence.