Artificial intelligence faces calls for stricter rules // Experts find the current regulation too general

The expanding use of artificial intelligence (AI) technologies requires stricter regulation at the state level, according to an expert survey conducted by the Council of Europe's Ad Hoc Committee on Artificial Intelligence. In Russia, a separate survey suggests little apprehension about AI: only a third of respondents fear losing their jobs to the technology, while two-thirds would like to use it to work less without losing income.

The Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) presented a report on a survey of government, business and academic representatives about the risks of AI use (260 experts from different countries were polled in total). The results are expected to inform the development of regulatory principles in this area, "based on the Council of Europe's standards in the field of human rights, democracy and the rule of law."

Experts see the greatest opportunities for AI in healthcare (diagnostics), environmental protection (climate change forecasting), education (expanding access to it) and finance (combating fraud). AI could also help counter discrimination by ensuring equal access to certain goods or services.

The risks of AI use are most pronounced in justice and law enforcement: experts ranked scoring for access to public services, analysis of employees' emotional engagement, and facial recognition as the most dangerous practices.

Notably, the majority of respondents favored stricter government regulation of AI. In their assessment, the current guidelines and ethical principles set out basic provisions but are not effective in practice. In particular, this concerns the "burden of proof" of an algorithm's safety (it should be borne by the organization deploying the AI), the introduction of a right to have an algorithm's decision reviewed by a human, and the requirement that program code be available for external audit. Half of the respondents opposed the use of facial recognition in public places.

"The respondents favored an explicit ban only on those AI systems that have been proven to violate human rights and freedoms. In other cases, it is proposed to introduce a legal framework for the use of AI systems. In some areas it may be more stringent, in others combined with self-regulation tools," says Andrey Neznamov, chairman of the special intergovernmental working group (CAHAI-COG) and managing director of the AI Regulation Center at Sberbank.

Note that in Russia the attitude toward AI technologies is rather positive, according to a survey by VTsIOM and ANO National Priorities. 48% of Russians trust AI (42% do not, and 10% found it difficult to answer). Only a third of respondents fear losing their jobs to the development of new technologies. Two-thirds of respondents (and 84% of those aged 18-24) are ready to use AI in order to work less while maintaining their income level, and half are ready to undergo training in this area. At the same time, few are ready to entrust decision-making to AI: only 3% would do so in medicine and 5% in education. Notably, 44% nevertheless said that the use of AI would make decisions in public administration fairer.

Tatiana Edovina
