
How are technology and artificial intelligence used to change the societal paradigm?

Since the Dartmouth Summer Research Project on Artificial Intelligence, there have been ups and downs in the potential offered by AI and frontier technologies, known as the summers and winters of AI.


Throughout the years, computational scientists and researchers, followed by policymakers and legal experts, have come to a common understanding and agreement about the threats and opportunities of what Mr. Klaus SCHWAB called the Fourth Industrial Revolution.



What are the threats in a nutshell?


Systemic bias and inequalities created or reinforced; ultra-personalization to the point that you no longer have a choice in the content delivered to you (personalized apps, micro-targeting, etc.); disregard for human rights, including digital rights, e.g., increased triangulation hindering our right to privacy; increased interoperability blurring the boundaries of data exchange (with whom? for what?); increased influence of non-state actors; and a lack of transparency and accountability.


Again, in a nutshell: a giant black box that governs our lives, from our institutions to our relationships with one another.




There are sectors of the economy where artificial intelligence and frontier technologies are indisputably worth experimenting and investing in: the progress made in medicine, with improved diagnostics thanks to data (setting aside for today the inherent problem of accuracy due to biased or skewed data), prosthetics, fundamental research, and so on.


Another sector in which AI and frontier technologies have long been used, and are expanding in response to the climate crisis, is smart cities. How can we efficiently modernize our cities to make life easier for users of public services (mobility, e-government services, etc.) while maximizing investment in our cities under ever-tighter financial constraints?


At the international level, from humanitarian aid to conflicts, technology has also been a step forward, bringing more clarity to actors in the field and in headquarters.

Yet are all those innovations needed? Are they all developed for a better use of public space?


Let’s take an example: social control.


“Social control is described as a certain set of rules and standards in society that keep individuals bound to conventional standards as well as to the use of formalized mechanisms”.


Over the years, the concept of social control has been translated into the digital era through the scoring and rating of individuals, based on the collection of different data, leading to personalized metrics assessing whether one is compliant, presents a higher health risk for insurance companies, abides by the general rules, etc.


Those discussions mostly arose when an AI application, e.g., facial recognition, was used in a sector or environment where the proper assessments had not been conducted, or when the test of proportionality between the use of such technologies and our fundamental freedoms and human rights was not satisfied. Likewise, such social control was mostly associated with certain political regimes, illustrating its use by centralized regimes to control their populations and thus maintain their political order.


Looking more closely at the underlying issue from a systemic point of view, it was clear that it was only a question of time before those questions were raised in other countries, or at least before similar control was expanded there, presented under another objective.


The objectives of the SDGs can be a fantastic driver to align international and national strategies on common objectives (I will not enter here into the question of the accuracy of the SDGs, especially in the North/South context). Ironically, they can also have indirect negative impacts on other SDGs if we are not careful and intentional. Worse, they can lead to a profound societal shift in our perception of others and in our daily interactions within our communities and with one another.


What does that mean concretely?


Recently, a French city announced that it is experimenting with a new system (see here: Transports : la ville de Besançon teste un système antifraude dans ses bus (francetvinfo.fr)) to limit fraud in public transport, which corresponds to approximately 1 million euros of losses every year according to estimates.


How? When you enter the bus, you scan your transport card, which is automatically connected to a screen visible to all the other passengers on the bus. The system then compares the number of passengers on the bus with the number of cards validated. If everyone validated their card, the screen is all green. If not, a red sign appears on the screen, showing the number of passengers who did not validate their card.
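The logic described above is simple enough to sketch in a few lines. This is purely illustrative: the function name, inputs, and display strings are my own assumptions, not details of the actual Besançon system, which compares a passenger count against the number of validated cards and drives the on-board screen accordingly.

```python
def screen_state(passenger_count: int, validations: int) -> str:
    """Hypothetical on-board screen logic: compare passengers to validations.

    Returns the message shown to everyone on the bus for one segment.
    """
    unvalidated = passenger_count - validations
    if unvalidated <= 0:
        # Everyone scanned their card: the screen is all green.
        return "GREEN"
    # Otherwise a red sign appears, with the count of non-validators.
    return f"RED: {unvalidated} passenger(s) did not validate"


print(screen_state(12, 12))  # → GREEN
print(screen_state(12, 10))  # → RED: 2 passenger(s) did not validate
```

The point of the sketch is how little the mechanism needs: a single subtraction turns an anonymous crowd into a publicly displayed count of "non-compliant" people, which is exactly what makes the system a form of visible social scoring rather than a back-office fraud metric.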


I can already hear the argument that has always been presented over the last 10 years when it comes to such systems: I have nothing to hide.


Well, most of us do not. Yet just because I have nothing to hide, does that mean I want all my life and actions exposed? Isn't that the very concept of privacy?


Other important questions follow: just because I have nothing to hide, should I allow such social control? Do I want to be put in a position where I can judge and monitor others' actions? Do we need scoring to be visible?


I hear another argument: if it helps improve public mobility, and therefore encourages investment in public measures that support environmental action, then it is worth trying. Are we sure about that?


Virginie MARTINS de NOBREGA
