Artificial Intelligence (RP2024)

(A.) Policy and legislation

(A.1)   Policy objectives

Although there is no generally accepted definition of Artificial intelligence (AI), in 2019, the Organisation for Economic Co-operation and Development (OECD) adopted the following definition of an AI system: ‘An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems), embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications), or a combination of both.

We use AI on a daily basis, e.g. to translate text, generate subtitles for videos or block email spam. Beyond making our lives easier, AI is helping us solve some of the world’s biggest challenges: from treating chronic diseases or reducing traffic accidents (and therefore fatality rates) to fighting climate change or anticipating cybersecurity threats. Like the steam engine or electricity in the past, AI is transforming our world, our society and our industries.

Since the 1950s, research on AI has encompassed a large variety of computing techniques and spread over many different application areas. Historically, the development of AI has alternated between periods of fast development, called ‘AI springs’, and periods of reduced funding and interest, called ‘AI winters’. Currently, AI is experiencing another spring, driven by three main factors: progress in algorithms and computing techniques, the huge amount of available data generated by advances in ICT and Internet of Things applications, and the affordability of high-performance processing power, even in low-cost personal devices. These factors have contributed to the rapid evolution of AI technologies such as large language models, which could have a strong impact on society.

The way we approach AI will shape our digital future. To enable European citizens, companies and governments to reap the benefits of AI, we need a solid European strategy and framework.

The EU strategy on AI was published on 25 April 2018 in the Commission Communication on Artificial Intelligence for Europe. One of the main elements of the strategy is an ambitious proposal to achieve a major boost in investment in AI-related research and innovation and to facilitate and accelerate the adoption of AI across the economy.

The target was to reach a total of €20 billion in AI-related investment, across both the public and the private sector, in the three years up to 2020. For the decade after, the goal is to reach the same amount as an annual average. This is of crucial importance if the EU is to compete on a global scale in AI development and uptake.

In December 2018, the Commission presented a Coordinated Plan on AI, prepared with Member States, to foster the development and use of AI. It represents a joint commitment reflecting the understanding that, by working together, Europe can maximise its potential to compete globally. The main aims set out in the plan are: to maximise the impact of investments at EU and national levels, to encourage synergies and cooperation across the EU, and to foster the exchange of best practices.

In February 2020, the Commission issued a White Paper on AI, which proposes an ecosystem of excellence and an ecosystem of trust for AI. The ecosystem of excellence in Europe refers to measures which support research, foster collaboration between Member States and increase investment in AI development and deployment. The ecosystem of trust is based on EU values and fundamental rights, and foresees robust requirements that would give citizens the confidence to embrace AI-based solutions, while encouraging businesses to develop them. The European approach to AI ‘aims to promote Europe’s innovation capacity in the area of AI, while supporting the development and uptake of ethical and trustworthy AI across the EU economy. AI should work for people and be a force for good in society.’

Following a public consultation, the objectives of the White Paper were translated into a key AI package adopted by the Commission on 21 April 2021. This package includes a proposal for the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally, as well as the 2021 review of the Coordinated Plan.

The proposal for a legal framework is aimed at laying down rules to ensure that AI systems used in the EU are safe and do not compromise fundamental rights. The key elements of the proposal are:

  • the definition of AI, which builds on the one elaborated by the OECD
  • rules for the definition of high-risk AI systems
  • compliance and enforcement mechanisms for the high-risk AI use cases
  • rules on the use of remote biometric identification
  • mandatory obligations for providers and users of high-risk AI systems
  • notification obligations for systems posing specific transparency risks

The proposal is complementary to, and applies in conjunction with, the existing EU acquis on data protection and fundamental rights.

The 2021 review of the Coordinated Plan on AI puts forward a concrete set of joint actions for the European Commission and Member States on how to create EU global leadership in trustworthy AI. The proposed key actions reflect the vision that, to succeed, the European Commission, Member States and private actors need to:

  • accelerate investments in AI technologies to drive resilient economic and social recovery facilitated by the uptake of new digital solutions;
  • act on AI strategies and programmes by implementing them fully and in a timely manner to ensure that the EU reaps the full benefits of first-mover advantages; and
  • align AI policy to remove fragmentation and address global challenges.

Standardisation activities are one of the action areas identified in the 2021 Coordinated Plan for joint action between the European Commission and Member States.

(A.2) EC perspective and progress report

The large increase in interest and activity around AI in recent years brings with it a need for a coherent set of AI standards. In response, ISO and IEC have created a standardisation committee on AI, ISO/IEC JTC 1/SC 42, which is the most active committee in the field of AI and big data. A CEN-CENELEC Focus Group on Artificial Intelligence (AI) was also established in December 2018 and published a roadmap for AI standardisation. Subsequently, CEN-CENELEC created a Joint Technical Committee, CEN-CENELEC JTC 21, which started its activities on 1 June 2021. The professional association IEEE is also very active in investigating and proposing new standards for AI, particularly in the field of ethics.

In addition, ETSI is active in the use of AI in ICT and coordinates work across a dozen technical bodies (see https://portal.etsi.org/TB-SiteMap/OCG/OCG-AI-Co-ordination) through the OCG AI (Operational Coordination Group for AI). A summary of current work on AI can be found in a dedicated white paper (https://www.etsi.org/images/files/ETSIWhitePapers/ETSI-WP52-ETSI-activities-in-the-field-of-AI-B.pdf). The OCG AI is also in continual discussion with CEN/CLC JTC 21. In October 2019, ETSI created the ISG on Securing Artificial Intelligence (ISG SAI), focusing on three key areas: using AI to enhance security, mitigating against attacks that leverage AI, and securing AI itself from attack. In September 2023, ISG SAI was converted into the new ETSI TC SAI, and work now continues only at the level of the TC.

Moreover, open source technologies are one of the drivers of innovation in the area of AI and provide important tools and technologies to facilitate the development and deployment of trustworthy AI.

The proposal for the new AI Regulation is set out as New Legislative Framework-type legislation. Hence, harmonised standards will be key to providing the detailed technical specifications through which economic operators can achieve compliance with the relevant legal requirements. Harmonised standards will thus be a key tool for the implementation of the legislation and will contribute to the specific objective of ensuring that AI systems are safe and trustworthy.

As a consequence, the European Commission intends to intensify the elaboration of standards in the area of AI to ensure that standards are available to operators in time, ahead of the application date of the future AI framework. In this respect, the Commission issued a first standardisation request in accordance with Regulation (EU) 1025/2012 in May 2023. In response, CEN-CENELEC JTC 21 developed a work programme listing the standards, both international and European, that can support the standardisation request.

(A.3) References

(B.) Requested actions

Action 1: SDOs should establish coordinated linkages with, and adequately consider European requirements and expectations from, initiatives (including policy initiatives) and organisations contributing to the discourse on AI standardisation. This includes in particular the contents of the EU proposal for an AI Regulation and of the standardisation request on AI issued by the European Commission in 2023, as well as the orientations set in the 2021 review of the Coordinated Plan.

Action 2: SDOs should further increase their coordination efforts around AI standardisation, both in Europe and internationally, in order to avoid overlap or unnecessary duplication of efforts and to aim at the highest quality, so as to avoid the creation and use of discriminatory algorithms and to ensure a trustworthy and safe deployment of this technology.

Action 3: ESOs should coordinate with the Commission and appropriately direct their activities to ensure that the objectives set in the standardisation request on AI issued in 2023 are fulfilled adequately and in a timely manner. This includes ensuring the active participation of representatives from SMEs and civil society organisations in their activities.

Action 4: Taking into account the cross-sectorial aspects of the proposed AI Regulation and the interactions between the AI Regulation and existing or future sectorial safety legislation (for example the proposed new EU Regulation on machinery products), ESOs shall devote specific attention to the elaboration of standards on the methodology of risk assessment of cyber-physical products powered by AI and on the testing framework.

Action 5: EC and ESOs should coordinate to promote mobilisation of stakeholders around AI standardisation activities.

Action 6: Taking into account the gap analysis in progress by the EC/JRC, the EC/JRC should coordinate with SDOs and other initiatives on a follow-up and on ways to address the identified gaps.

(C.) Activities and additional information

(C.1) Related standardisation activities
CEN & CENELEC

The CEN-CENELEC JTC 21 on Artificial Intelligence addresses AI standardisation in Europe, both through a bottom-up approach (similar to that of ISO/IEC JTC 1/SC 42) and through a top-down approach concentrating on a long-term plan for European standardisation and future AI regulation.

The JTC shall produce standardisation deliverables in the field of Artificial Intelligence (AI) and related use of data, as well as provide guidance to other technical committees concerned with Artificial Intelligence. The JTC shall also consider the adoption of relevant international standards and standards from other relevant organisations, like ISO/IEC JTC 1 and its subcommittees, such as SC 42 Artificial intelligence. Finally, the JTC shall produce standardisation deliverables to address European market and societal needs and to underpin primarily EU legislation, policies, principles, and values.

The JTC 21 has initiated the following activities:

  • mapping current European and international standardisation initiatives on AI;
  • identifying specific standardisation needs;
  • liaising with relevant TCs and organisations in order to identify synergies and, where possible, initiate joint work;
  • acting as the focal point for the CEN and CENELEC TCs;
  • encouraging further European participation in the ISO and IEC TCs;
  • establishing four working groups;
  • initiating several home-grown standardisation deliverables, e.g. on conformity assessment, natural language processing and AI-enhanced nudging;
  • identifying international standards for European adoption, e.g. ISO/IEC 23053 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) and ISO/IEC 22989 Artificial intelligence concepts and terminology;
  • developing a work programme in support of the standardisation request and creating an architecture of standards to illustrate the interplay between the different parts of the standardisation request.

Prior to the establishment of JTC 21, the CEN-CENELEC Focus Group on AI explored the possibilities for a dedicated CEN-CENELEC TC on AI. The Focus Group published two documents: a response to the EC White Paper on AI and the CEN-CENELEC Roadmap for AI standardisation. Both documents are available here.
After it completed its tasks, the Focus Group on AI was disbanded and its documents and assets were transferred to CEN-CENELEC JTC 21.

ETSI

A summary of ETSI work on AI can be found in a dedicated white paper (https://www.etsi.org/images/files/ETSIWhitePapers/ETSI-WP52-ETSI-activities-in-the-field-of-AI-B.pdf).

ETSI TC HF (Human Factors) is organising work on the topic of human oversight and transparency/explainability of AI solutions, including the accessibility of explanations to all segments of society (user-oriented explanations for persons with varying physical/mental capabilities). This work also includes requirements for (future) human-AI collaborative systems, for example in manufacturing processes.

The ETSI ISG on Experiential Networked Intelligence (ENI) is defining a Cognitive Network Management architecture. It uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions and business goals. Created in March 2017, ISG ENI’s outputs centre on network optimisation and the Cognitive Network Management architecture, highlighted at https://eniwiki.etsi.org/index.php?title=ISG_ENI_Activities. This is described further in a white paper (https://www.etsi.org/images/files/ETSIWhitePapers/etsi-wp44_ENI_Vision.pdf).

The ETSI ISG on Securing Artificial Intelligence (ISG SAI), created in October 2019, focused on three key areas: using AI to enhance security, mitigating against attacks that leverage AI, and securing AI itself from attack. ISG SAI collaborated closely with ENISA. Its outputs centre on several key topics; the following have been published or are in development to date, in part in response to Action 5 above:

  • Problem Statement
  • Mitigation Strategy
  • Data Supply Chain
  • Threat Ontology for AI, to align terminology
  • Security testing of AI
  • Role of hardware in security of AI
  • Explainability and transparency of AI processing
  • Privacy and security aspects of AI/ML systems
  • Traceability of AI models
  • Automated Manipulation of Multimedia Identity Representations
  • Collaborative Artificial Intelligence (also known as Generative AI)
  • Proofs of Concepts Framework.

The ETSI SAI work programme can be found at: https://portal.etsi.org/Portal_WI/form1.asp?tbid=877&SubTB=877

NOTE: In September 2023, the ETSI ISG SAI was converted into ETSI TC SAI.

ETSI has several other ISGs working in the domain of AI/ML (Machine Learning). They are all defining specifications of functionality for use across network technologies.

  • ISG ENI develops standards that use AI mechanisms to assist in the management and orchestration of the network.
  • ISG ENI is defining AI/ML functionality that can be used/reused throughout the network, cloud and end devices.
  • ISG ZSM is defining the AI/ML enablers in end-to-end service and network management.
  • ISG F5G on Fixed 5G is going to define the application of AI in the evolution towards ‘fibre to everything’ of the fixed network. 
  • ISG NFV on network functions virtualisation studies the application of AI/ML techniques to improve automation capabilities in NFV management and orchestration.
  • ISG CIM has published specifications for a data interchange format (ETSI CIM GS 009 V1.7.1 NGSI-LD API) and a flexible information model (ETSI CIM GS 006 V1.2.1) that support the exchange of information from e.g. knowledge graphs, including relationships between entities and the signing of information to guarantee its origin. The work is applicable to the exchange of data/metadata with AI solutions, including the storage of historical results for later (human) oversight and governance in the context of the AI Act. Additionally, it has published ETSI CIM GR 021, which describes property-graph-based approaches to machine learning, able to leverage additional information coming from the graph’s relationships, supported by NGSI-LD.
  • The ETSI TC MTS provides technologies, tools, and guidelines on conformance and interoperability testing and certification of protocols and other systems, including AI systems, that are under standardisation at various ETSI groups and committees.
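
To illustrate the kind of information NGSI-LD is designed to exchange, the sketch below builds a minimal entity with one Property and one Relationship attribute, following the entity structure defined in ETSI GS CIM 009. The entity type, URNs and attribute names here are hypothetical examples, not taken from the specification; a real deployment would send such a JSON-LD payload to an NGSI-LD context broker rather than just printing it.

```python
import json

# Minimal NGSI-LD entity sketch (hypothetical identifiers and attributes),
# following the Property/Relationship pattern of ETSI GS CIM 009.
entity = {
    "id": "urn:ngsi-ld:Sensor:demo-001",   # unique URN identifier
    "type": "Sensor",                      # entity type
    "temperature": {                       # a Property attribute with a value
        "type": "Property",
        "value": 21.5,
        "unitCode": "CEL",
        "observedAt": "2024-01-15T10:00:00Z",
    },
    "locatedIn": {                         # a Relationship attribute pointing
        "type": "Relationship",            # at another entity by URN
        "object": "urn:ngsi-ld:Building:demo-hq",
    },
    "@context": [                          # JSON-LD context for term expansion
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ],
}

# Serialise to the JSON-LD payload a context broker would receive.
payload = json.dumps(entity, indent=2)
print(payload)
```

The Relationship attribute is what lets brokers expose knowledge-graph-style links between entities, as described for ISG CIM above.
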
IEC

SEG 10 Ethics in Autonomous and Artificial intelligence Applications 

https://www.iec.ch/dyn/www/f?p=103:186:0::::FSP_ORG_ID,FSP_LANG_ID:22827,25

ISO/IEC JTC 1

SC 42 Artificial Intelligence is looking at the international standardisation of the entire AI ecosystem. With 20 published standards, 32 projects under development and 6 working groups, the programme of work has been growing rapidly and continues to grow in 2024.

An Ad Hoc Group has been created to ensure coordination with CEN-CENELEC JTC 21 on projects under the Vienna Agreement. The Vienna Agreement lays the foundation for parallel work between ISO and CEN.

The following is the list of published SC 42 standards:

ISO/IEC 20546:2019 Information technology — Big data — Overview and vocabulary

ISO/IEC TR 20547-1:2020 Information technology — Big data reference architecture — Part 1: Framework and application process

ISO/IEC TR 20547-2:2018 Information technology — Big data reference architecture — Part 2: Use cases and derived requirements

ISO/IEC 20547-3:2020 Information technology — Big data reference architecture — Part 3: Reference architecture

ISO/IEC TR 20547-5:2018 Information technology — Big data reference architecture — Part 5: Standards roadmap

ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making

ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence

ISO/IEC TR 24029-1:2021 Artificial Intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview

ISO/IEC TR 24030:2021 Information technology — Artificial intelligence (AI) — Use cases

ISO/IEC TR 24372:2021 Information technology — Artificial intelligence (AI) — Overview of computational approaches for AI systems

The following is the list of SC 42 projects under development:

WG 1 – Foundational AI standards

ISO/IEC DIS 22989 Artificial Intelligence Concepts and Terminology

ISO/IEC 23053 Framework for Artificial Intelligence Systems Using Machine Learning

ISO/IEC 42001 Artificial Intelligence – Management System

WG 2 – Big data ecosystem

ISO/IEC 24688 Information technology — Artificial Intelligence — Process management framework for Big data analytics

ISO/IEC 5259-1 Data quality for analytics and ML – Part 1: Overview, terminology, and examples

ISO/IEC 5259-2 Data quality for analytics and ML – Part 2: Data quality measures

ISO/IEC 5259-3 Data quality for analytics and ML – Part 3: Data quality management requirements and guidelines

ISO/IEC 5259-4 Data quality for analytics and ML – Part 4: Data quality process framework

ISO/IEC 5259-5 Data quality for analytics and ML – Part 5: Data quality governance 

ISO/IEC CD TR 5259-6 Data quality for analytics and ML – Part 6: Visualization framework for data quality

WG 3 – AI Trustworthiness

ISO/IEC 23894 Information technology — Artificial intelligence — Risk management

ISO/IEC 24368 Information technology — Artificial Intelligence (AI) — Overview of Ethical and Societal Concerns

ISO/IEC 25059 Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality Model for AI systems

ISO/IEC 8200 Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems

WG 4 – AI Use cases and applications

ISO/IEC 24030 Information technology — Artificial Intelligence (AI) — Use cases (2nd ed.)

ISO/IEC 5338 Information technology — Artificial Intelligence (AI) – AI system life cycle processes

ISO/IEC 5339 Information technology — Artificial Intelligence (AI) – Guidelines for AI applications

WG 5 – Computational approaches and computational characteristics of AI systems

ISO/IEC 4213 Information technology — Artificial Intelligence — Assessment of machine learning classification performance

ISO/IEC 5392 Information technology — Artificial intelligence — Reference architecture of knowledge engineering

ISO/IEC JTC 1/SC 40 & 42 JWG 1

ISO/IEC 38507 — Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations

In addition to the above projects under development, a number of ad hoc groups in the SC 42 WGs are studying topics that cross multiple areas such as:

  1. machine learning computing devices
  2. ontologies, knowledge engineering, and representation
  3. data quality governance framework
  4. testing of AI systems
  5. AI standards landscape and roadmap
  6. coordination with JTC 1 SC 27 on AI security and privacy proposed standards
  7. data quality visualization

In addition, SC 42 has developed over 30 active liaisons with ISO and IEC committees, SDOs and industry organisations to encourage collaboration and build out the industry ecosystem around AI and big data.

ISO/IEC JTC 1 SC 7 – Software and systems engineering

ISO/IEC 25012:2008 Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Data quality model

ISO/IEC TR 29119-11:2020 Software and systems engineering — Software testing — Part 11: Guidelines on the testing of AI-based systems.

IEEE

IEEE has a significant amount of activity in the fields of Autonomous and Intelligent Systems (A/IS), as well as in related vertical industry domains. IEEE standards and pre-standards address: the ethical and societal implications of artificial intelligence; foundational concepts, architecture and ontology; governance and management; data; trustworthiness; etc.

Ethical and Societal Implications of Artificial Intelligence

In 2016, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems started a project called “Ethically Aligned Design (EAD): A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems.” This work served as the foundation for many other organisations’ AI principles, including the above-referenced OECD ones, and there are now EAD for Business and EAD for Art editions. There are also an IEEE Trusted Data and Artificial Intelligence Systems (AIS) Playbook for Financial Services and a report, Addressing Ethical Dilemmas in AI: Listening to Engineers.

The IEEE 7000 Series addresses ethical considerations in a broad range of issues regarding autonomous and intelligent systems. 

  • IEEE 7000 Standard Model Process for Addressing Ethical Concerns During System Design 
  • IEEE 7001, Standard for Transparency of Autonomous Systems
  • IEEE 7002, Standard for Data Privacy Process
  • IEEE 7005 Standard for Transparent Employer Data Governance
  • IEEE 7007, Ontological Standard for Ethically Driven Robotics and Automation Systems
  • IEEE P7003, Standard for Algorithmic Bias Considerations
  • IEEE P7004, Standard for Child and Student Data Governance
  • IEEE P7004.1, Recommended Practices for Virtual Classroom Security, Privacy and Data Governance
  • IEEE P7008, Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
  • IEEE P7009, Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
  • IEEE P7011, Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
  • IEEE P7012, Standard for Machine Readable Personal Privacy Terms 
  • IEEE P7014, Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems
  • IEEE P7015, Standard for Data and Artificial Intelligence (AI) Literacy, Skills, and Readiness

Once approved, the ethical and governance standards are made available for free to support widespread AI literacy. They can be accessed here.

Foundational Concepts, Architecture, and Ontology

  • IEEE 1872 Standards Series for Robotics and Automation
  • IEEE 2755 Standards Series on Intelligent Process Automation
  • IEEE 3079.3, Framework for Evaluating the Quality of Digital Humans
  • IEEE 3652.1, Guide for Architectural Framework and Application of Federated Machine Learning
  • IEEE 11073-10101, IEEE/ISO/IEC International Standard—Health informatics-Device interoperability-Part 10101: Point-of-care medical device communication-Nomenclature
  • IEEE P2894, Guide for an Architectural Framework for Explainable Artificial Intelligence
  • IEEE P2896, Standard for Open Data: Open Data Ontology

Governance and Management

  • IEEE 1232 Standards Series for Artificial Intelligence Exchange and Service Tie to All Test Environments (AI-ESTATE)
  • IEEE 2089, Standard for Age Appropriate Digital Services Framework – Based on the 5Rights Principles for Children
  • IEEE 2830, Standard for Technical Framework and Requirements of Shared Machine Learning
  • IEEE 2841, Framework and Process for Deep Learning Evaluation
  • IEEE 2941, Standard for Artificial Intelligence (AI) Model Representation, Compression, Distribution, and Management
  • IEEE P2247.1, Standard for the Classification of Adaptive Instructional Systems
  • IEEE P2802, Standard for the Performance and Safety Evaluation of Artificial Intelligence Based Medical Device: Terminology
  • IEEE P2840, Standard for Responsible AI Licensing
  • IEEE P2863, Recommended Practice for Organizational Governance of Artificial Intelligence
  • IEEE P2937, Standard for Performance Benchmarking for AI Server Systems
  • IEEE P3119, Standard for the Procurement of Artificial Intelligence and Automated Decision Systems

IEEE standards and pre-standards covering Trustworthiness, namely security, quality, transparency, bias, and accuracy, include:

  • IEEE 2801, Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence
  • IEEE P2751, 3D Map Data Representation for Robotics and Automation
  • IEEE P3156, Standard for Requirements of Privacy-preserving Computation Integrated Platforms
  • IEEE P3157, Recommended Practice for Vulnerability Test for Machine Learning Models for Computer Vision Applications
  • IEEE P3181, Standard for Trusted Environment Based Cryptographic Computing
  • IEEE P3187, Guide for Framework for Trustworthy Federated Machine Learning
  • IEEE P3198, Standard for Evaluation Method of Machine Learning Fairness

Other aspects of ML and other AI techniques are addressed by the following projects:

  • IEEE 1855, Standard for Fuzzy Markup Language
  • IEEE 1873, Standard for Robot Map Data Representation for Navigation
  • IEEE 3079.3.1, Standard for Service Application Programming Interfaces (APIs) for Digital Human Authoring and Visualization 
  • IEEE 3129, Standard for Robustness Testing and Evaluation of Artificial Intelligence (AI)-based Image Recognition Service
  • IEEE 3333.1.3, Standard For The Deep Learning-Based Assessment Of Visual Experience Based On Human Factors
  • IEEE 12207.2, Systems and software engineering – Software life cycle processes—Part 2: Relation and mapping between ISO/IEC/IEEE 12207:2017 and ISO/IEC 12207:2008
  • IEEE P2874, Standard for Spatial Web Protocol, Architecture and Governance
  • IEEE P2976, Standard for XAI—eXplainable Artificial Intelligence—for Achieving Clarity and Interoperability of AI Systems Design
  • IEEE P2986, Recommended Practice for Privacy and Security for Federated Machine Learning
  • IEEE P2987, Recommended Practice for Principles for Design and Operation Addressing Technology-Facilitated Inter-personal Control
  • IEEE P3109, Standard for Arithmetic Formats for Machine Learning
  • IEEE P3110, Standard for Computer Vision (CV)—Algorithms, Application Programming Interfaces (API), and Technical Requirements for Deep Learning Framework
  • IEEE P3123, Standard for Artificial Intelligence and Machine Learning (AI/ML) Terminology and Data Formats
  • IEEE P3128, Recommended Practice for The Evaluation of Artificial Intelligence (AI) Dialogue System Capabilities
  • IEEE P3142, Recommended Practice on Distributed Training and Inference for Large-scale Deep Learning Models
  • IEEE P3152, Standard for the Description of the Natural or Artificial Character of Intelligent Communicators
  • IEEE P3168, Standard for Robustness Evaluation Test Methods for a Natural Language Processing Service that uses Machine Learning
  • Standards on Knowledge Graphs (IEEE 2807 Standards Series, IEEE P3154)
  • For more information, visit https://ieee-sa.imeetcentral.com/eurollingplan/.
IETF

The IETF Autonomic Networking Integrated Model and Approach Working Group will develop a system of autonomic functions that carry out the intentions of the network operator without the need for detailed low-level management of individual devices. This will be done by providing a secure closed-loop interaction mechanism whereby network elements cooperate directly to satisfy management intent. The working group will develop a control paradigm in which network processes coordinate their decisions and automatically translate them into local actions, based on various sources of information, including operator-supplied configuration and existing protocols such as routing protocols.

Autonomic networking refers to the self-managing characteristics (configuration, protection, healing and optimisation) of distributed network elements, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Autonomic networking, which often involves closed-loop control, is applicable to the complete network (functions) lifecycle (e.g. installation, commissioning, operating, etc.). An autonomic function that works in a distributed way across various network elements is a candidate for protocol design. Such functions should allow central guidance and reporting, and co-existence with non-autonomic methods of management. The general objective of this working group is to enable the progressive introduction of autonomic functions into operational networks, as well as reusable autonomic network infrastructure, in order to reduce operating expenses.
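
The closed-loop pattern described above can be sketched in miniature: observe network state, compare it against operator intent, and translate the result into local actions. The link names, the 80% utilisation threshold and the ‘reroute’ action below are purely illustrative assumptions, not part of any IETF specification; real autonomic functions are defined as protocols, not as this simplistic loop.

```python
# Toy closed-loop sketch (monitor -> analyse -> act). All names and
# thresholds are illustrative; the operator's intent here is simply
# "keep every link below 80% utilisation".

def monitor(link_utilisation: dict) -> list:
    """Return the links whose utilisation exceeds the 0.8 target."""
    return [link for link, util in link_utilisation.items() if util > 0.8]

def act(congested: list) -> dict:
    """Translate the intent violation into a local action per link."""
    return {link: "reroute-traffic" for link in congested}

def control_loop(link_utilisation: dict) -> dict:
    """One iteration of the closed loop: observe state, derive actions."""
    return act(monitor(link_utilisation))

# link-a and link-c exceed the threshold, so they get rerouted.
actions = control_loop({"link-a": 0.95, "link-b": 0.40, "link-c": 0.85})
print(actions)
```

In an autonomic network this decision logic would run distributed across the elements themselves, with only intent (the threshold) supplied centrally.
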

https://wiki.ietf.org/en/group/iab/Multi-Stake-Holder-Platform#h-319-artificial-intelligence

ITU

AI for Good is the leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by the ITU in partnership with 40 UN sister agencies.
More info: https://aiforgood.itu.int.

ITU-T SG11 is developing ITU-T Recommendations on implementing AI in signalling exchange, protocols and testing. ITU-T SG11 approved Recommendation ITU-T Q.5023 “Protocol for managing intelligent network slicing with AI-assisted analysis in IMT-2020 network”. Ongoing work includes a protocol for managing energy efficiency with AI-assisted analysis in IMT-2020 networks and beyond; signalling requirements and architecture to support AI-based vertical services in future networks, IMT-2020 and beyond; methods and metrics for monitoring ML/AI in future networks including IMT-2020; and data management interfaces for intelligent edge-computing-based smart agriculture services.

ITU-T Study Group 13 approved various ITU-T Recommendations covering AI-based networks as well as machine learning in future networks and IMT-2020, including use cases, architectural frameworks, quality of service assurance, service provisioning, data handling, learning models, network automation for resource and fault management, marketplace integration, cloud computing and quantum key distribution networks (e.g. Recommendations ITU-T Y.3170, Y.3172, Y.3173, Y.3174, Y.3175, Y.3176, Y.3177, Y.3178, Y.3179, Y.3180-Y.3184, Y.3531, Y.3654, Supplement 55 to the Y.3170-series and Supplement 70 to the Y.3800-series). More info: https://www.itu.int/en/ITU-T/focusgroups/ml5g/Pages

SG13 continues developing Recommendations on the above topics as well as on ML for big-data-driven networking, ML as a tool to better shape traffic, and man-like networking. Also, in the framework of 5G, SG13 studies ML and AI to enhance QoS assurance, network slicing, operation management of cloud services, integrated cross-domain network architecture, network automation, and the framework of user-oriented network service provisioning. It also maintains the AI standards roadmap, Supplement 72 to the Y.3000-series, which provides a matrix of different document types per vertical versus the related technologies supporting AI. For more info contact tsbsg13@itu.int.

ITU has been at the forefront of exploring how best to apply AI/ML in future networks, including 5G networks. To advance the use of AI/ML in the telecom industry, ITU launched the AI/ML in 5G Challenge in March 2020. The Challenge rallies like-minded students and professionals from around the globe to study the practical application of AI/ML in emerging and future networks. It also enhances the community driving standardization work for AI/ML, creating new opportunities for industry and academia to influence international standardization. The Challenge solutions can be accessed in several repositories on the Challenge GitHub: https://github.com/ITU-AI-ML-in-5G-Challenge.

Since its inception in 2020, the Challenge has grown to encompass other areas relevant to accelerating the achievement of the Sustainable Development Goals.

ITU-T Study Group 12 (performance, QoS and QoE) offers guidance for the development of machine learning based solutions for QoS/QoE prediction and network performance management in telecommunication scenarios (Recommendation ITU-T P.1402). ITU-T P.565 describes a framework for the creation and performance testing of machine learning based models for the assessment of transmission network impact on speech quality for mobile packet-switched voice services. ITU-T P.565.1 is the first standardized instantiation of the framework. ITU-T E.475 introduces a set of guidelines for intelligent network analytics and diagnostics. SG12 has developed and standardized several quality models leveraging machine learning techniques for the objective estimation of dimensions of QoS and QoE.
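The ML-based quality models mentioned above typically map network measurements (e.g. packet loss, jitter) to a quality score such as a mean opinion score (MOS). As a hedged, self-contained sketch, not the ITU-T P.1402 or P.565 procedure, a minimal one-feature model could be fitted with ordinary least squares; the training data and coefficients below are invented for illustration.

```python
# Minimal illustration of fitting a one-feature linear model that maps
# packet loss (%) to a mean opinion score (MOS). The data and the
# closed-form least-squares fit are illustrative only; standardized
# models (e.g. ITU-T P.565.1) are far more elaborate.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Invented training data: higher packet loss -> lower perceived quality.
loss = [0.0, 1.0, 2.0, 4.0]
mos = [4.4, 4.0, 3.6, 2.8]
a, b = fit_line(loss, mos)
predicted_mos = a * 3.0 + b  # predict quality at 3% packet loss
```

Real standardized models replace this toy regression with carefully validated features, training corpora and performance-testing procedures, which is precisely what the P.565 framework specifies.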

AI for Road Safety: The ITU, together with the UN Secretary-General’s Special Envoy for Road Safety and the Envoy on Technology, launched the initiative on AI for Road Safety, which is in line with the UN General Assembly Resolution (UN A/RES/74/299) on Improving global Road Safety, which highlights the role of innovative automotive and digital technologies. AI for Road Safety aims to leverage the use of AI for enhancing the safe system approach to road safety.

The new initiative supports the achievement of UN SDG target 3.6, to halve by 2030 the number of global deaths and injuries from road traffic accidents, and SDG target 11.2, to provide access to safe, affordable, accessible and sustainable transport systems for all by 2030. See:

https://aiforgood.itu.int/event/ai-for-road-safety

https://aiforgood.itu.int/about/ai-ml-pre-standardization/ai4roadsafety

ITU-T SG20 approved Recommendation ITU-T Y.4470 “Reference architecture of artificial intelligence service exposure for smart sustainable cities” that introduces AI service exposure (AISE) for smart sustainable cities (SSC), and provides the common characteristics and high-level requirements, reference architecture and relevant common capabilities of AISE, and agreed Supplement ITU-T Y.Suppl.63 “Unlocking Internet of things with artificial intelligence” that examines how artificial intelligence could step in to bolster the intent of urban stakeholders to deploy IoT technologies and eventually transition to smart cities. ITU-T SG20 is currently working on draft Recommendation ITU-T Y.RA-FML “Requirements and reference architecture of IoT and smart city & community service based on federated machine learning”, draft Recommendation ITU-T Y.CDML-arc “Reference architecture of collaborative decentralized machine learning for intelligent IoT services” and draft Recommendation ITU-T Y.SF-prediction “Service framework of prediction for intelligent IoT”.

More info: https://itu.int/go/tsg20

ITU also coordinates the United for Smart Sustainable Cities (U4SSC) initiative, a UN initiative that develops action plans, technical specifications, case studies and guidelines, and offers policy guidance for cities to become smarter and more sustainable. The U4SSC initiative is currently running a Thematic Group on “Artificial Intelligence in Cities”.

More info: https://u4ssc.itu.int/

ITU-T Study Group 5 develops international standards, guidelines, technical papers and assessment frameworks that support the sustainable use and deployment of ICTs and digital technologies, and evaluates the environmental performance, including effects on biodiversity, of digital technologies such as 5G, artificial intelligence (AI), smart manufacturing and automation. ITU-T SG5 approved Recommendation ITU-T L.1305 “Data centre infrastructure management system based on big data and artificial intelligence technology”. This standard contains technical specifications of a data centre infrastructure management (DCIM) system, covering principles, management objects, management system schemes, data collection function requirements, operational function requirements, energy-saving management, capacity management for information and communication technology (ICT) and facilities, other operational function requirements and intelligent control of systems to maximize green energy use. Other aspects, such as maintenance function requirements, early alarm and protection based on big data analysis, and intelligent control of systems to reduce maintenance costs, are also considered. Additionally, it has produced the following supplements: L Suppl. 48, “Data centre energy saving: Application of artificial intelligence technology in improving energy efficiency of telecommunication room and data centre infrastructure”, and L Suppl. 53, “Guidelines on the implementation of environmental efficiency criteria for artificial intelligence and other emerging technologies”.

More info: https://itu.int/go/tsg5

The Focus Group on Environmental Efficiency for Artificial Intelligence and other emerging technologies (FG-AI4EE) concluded in December 2022 and identified the standardization needs to develop a sustainable approach to AI and other emerging technologies. The FG-AI4EE developed 21 technical reports and specifications on requirements, assessment and measurement and implementation guidelines of AI and other emerging technologies.

More info: https://itu.int/go/fgai4ee

The ITU-T Focus Group on AI for Autonomous and Assisted Driving (FG-AI4AD) aims to define a minimal performance threshold for the AI systems responsible for the driving task in vehicles, so that an automated vehicle always operates on the road at least as safely as a competent and careful human driver. The Focus Group has completed the Technical Report “Automated driving safety data protocol – Ethical and legal considerations of continual monitoring” (https://www.itu.int/pub/T-FG-AI4AD-2021-02) and is finalizing three additional Technical Reports on the related protocol specification, practical demonstrators and the benefits of continual monitoring.

More info: https://itu.int/go/fgai4ad

The ITU-T Focus Group on Artificial Intelligence for Health (FG-AI4H), established by ITU in partnership with the WHO, is working towards establishing a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.
https://www.itu.int/en/ITU-T/focusgroups/ai4h/

The Focus Group on Artificial Intelligence for Natural Disaster Management (FG-AI4NDM) aims to identify best practices for leveraging AI to support data collection and modelling across spatiotemporal scales, and to provide effective communications in the event of natural disasters. The activities of this Focus Group are conducted in collaboration with the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP).

More info: https://itu.int/go/fgai4ndm

Established by ITU-T SG20, the ITU-T Focus Group on Artificial Intelligence (AI) and Internet of Things (IoT) for Digital Agriculture (FG-AI4A) explores emerging technologies, including AI and IoT, for data acquisition and handling, modelling based on a growing volume of agricultural and geospatial data, and communication in support of the optimization of agricultural production. The activities of this Focus Group are conducted in cooperation with the Food and Agriculture Organization of the United Nations (FAO).

More info: https://itu.int/go/fgai4a

ITU-R

AI in Radiocommunication Standards: ITU Radiocommunication Sector (ITU-R) Study Groups and forthcoming reports examine the use of AI in radiocommunications:

  • ITU-R Study Group 1 covers all aspects of spectrum management, including spectrum monitoring. Question 241/1 looks at “Methodologies for assessing or predicting spectrum availability”. 
  • ITU-R Study Group 6, dedicated to broadcasting services, is also studying AI and ML applications: 
    • Question ITU-R 144/6, “Use of AI for broadcasting”, considers the impact of AI technologies and how they can be deployed to increase efficiency in programme production, quality evaluation, programme assembly and broadcast emission. 
    • Recommendation ITU-R BS.1387, “Method for objective measurements of perceived audio quality”, was the first application of neural networks, now commonly referred to as AI (artificial intelligence), in the field of broadcasting.
    • Report ITU-R BT.2447, “AI systems for programme production and exchange”, discusses current applications and near-term initiatives. This Report is revised regularly to reflect the latest progress on AI applications across the broadcasting industry chain.
OASIS

RECITE (REasoning for Conversation and Information Technology Exchange) is a new OASIS Open Project dedicated to developing a standard for dialogue modelling in conversational agents. It aims to establish interoperability between software vendors.

oneM2M

oneM2M provides a standardized IoT data source for AI/ML applications. Furthermore, the oneM2M work item on “System enhancements to support AI capabilities” (WI-0105) aims to enable oneM2M to utilize Artificial Intelligence models and data management for AI services.
All oneM2M specifications are publicly accessible at Specifications (onem2m.org). See also the section on IoT in the Rolling Plan.

W3C

The Web Machine Learning Working Group develops the Web Neural Network API for enabling efficient machine learning inference in web browsers. The Ethical Principles for Web Machine Learning document discusses ethical issues associated with using machine learning and outlines considerations for web technologies that enable related use cases.

The GPU for the Web Working Group develops the WebGPU specification and its companion WebGPU Shading Language to give web applications access to computation capabilities offered by modern GPU cards, allowing them to run AI computations efficiently on the device.

The Web & Networks Interest Group explores solutions for web applications to leverage network capabilities in order to achieve better performance and resource allocation, both on the device and in the network. The group discusses machine learning acceleration scenarios and requirements in its Client-Edge-Cloud Coordination Use Cases and Requirements document.

(C.2) Other activities related to standardisation
The European AI Alliance

https://ec.europa.eu/digital-single-market/en/european-ai-alliance

The High-Level Group on Artificial Intelligence

https://ec.europa.eu/digital-single-market/high-level-group-artificial-intelligence

AI on Demand Platform

http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/topics/ict-26-2018-2020.html

H2020

R&D&I projects funded under topic ICT-26 of the H2020 ICT Work Programme 2018-20 can produce relevant input for standardisation.

http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/topics/ict-26-2018-2020.html

StandICT.eu

This EU funded project produced a standardisation landscape report for the technology area of AI.

This overview or landscape document is a static “snapshot” of a dynamically updated database compiled within StandICT.eu.

The database is inclusive (covering many different SDOs and organisations), reusable (available for liaison with other organisations), filterable (to choose a subset of documents and organisations appropriate to a particular use), and easily exportable (CSV, Word, ODT, mind-map).

https://www.standict.eu/landscape-analysis-report/landscape-artificial-intelligence-standards

(C.3) Additional information

European AI Alliance

The European AI Alliance is a forum set up by the European Commission to engage in a broad and open discussion of all aspects of artificial intelligence development and its impacts. Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions and other representatives of civil society bodies, is essential. The European AI Alliance forms a broad multi-stakeholder platform that complements and supports the work of the AI High-Level Expert Group, in particular in preparing draft AI ethics guidelines and in ensuring Europe’s competitiveness in the burgeoning field of artificial intelligence. The Alliance is open to all stakeholders, is managed by a secretariat, and is open for registration.

High-Level Expert Group on Artificial Intelligence (AI HLG)

The group has now concluded its work by publishing the following four deliverables:

Deliverable 1: Ethics Guidelines for Trustworthy AI
The document puts forward a human-centric approach to AI and lists seven key requirements that AI systems should meet in order to be trustworthy.

Deliverable 2: Policy and Investment Recommendations for Trustworthy AI
Building on its first deliverable, the HLEG put forward 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness, and inclusion. At the same time, the recommendations will empower, benefit, and protect European citizens.

Deliverable 3: Assessment List for Trustworthy AI (ALTAI)
A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements. This list is available as a prototype web-based tool and in PDF format.

Deliverable 4: Sectoral Considerations on the Policy and Investment Recommendations
The document explores the possible implementation of the HLEG recommendations, previously published, in three specific areas of application: Public Sector, Healthcare and Manufacturing & Internet of Things.

AI Watch

The Commission’s AI Watch prepared a report titled “National Strategies on Artificial Intelligence: A European Perspective”, presented during a joint webinar with the OECD. It monitors the AI national strategies of all EU countries, as well as Norway and Switzerland, and this year’s update focuses on the following areas of cooperation:

  • strengthening AI education and skills;
  • supporting research and innovation to drive AI developments into successful products and services, improving collaboration and networking;
  • creating a regulatory framework to address the ethics and trustworthiness of AI systems;
  • establishing a cutting-edge data ecosystem and ICT infrastructure.

CAHAI

In September 2019, the Committee of Ministers of the Council of Europe set up an Ad Hoc Committee on Artificial Intelligence (CAHAI). The Committee examined, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law. The committee, which brings together representatives from the member states, had an exchange of views with leading experts on the impact of AI applications on individuals and society, the existing soft-law instruments specifically dealing with AI, and the existing legally binding international frameworks applicable to AI. CAHAI finalized its work at the end of 2021 by adopting the final deliverable titled “Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law” (https://rm.coe.int/cahai-2021-09rev-elements/1680a6d90d). Based on the results of CAHAI’s work, the Council of Europe has established a new Committee on Artificial Intelligence (CAI) to follow up the work of CAHAI and elaborate an appropriate legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, and conducive to innovation. This framework may comprise a binding legal instrument of a transversal character, including notably general common principles, as well as additional binding or non-binding instruments to address challenges relating to the application of artificial intelligence in specific sectors.

CAHAI: https://www.coe.int/en/web/artificial-intelligence/cahai

CAI: https://www.coe.int/en/web/artificial-intelligence/cai

AI on Demand Platform

The European Commission has launched a call for proposals to fund a large €20 million project on Artificial Intelligence (AI) under the framework programme on R&D Horizon 2020. It aims to mobilise the AI community in Europe in order to combine efforts, to develop synergies among all the existing initiatives and to optimise Europe’s potential. The call was closed on 17th April 2018, and the received proposals have been evaluated. The awarded project started on 1st January 2019.

Under the next multi-annual budget, the Commission plans to increase its investment in AI further, mainly through two programmes: the research and innovation framework programme (Horizon Europe), and a new programme called Digital Europe.

UNESCO International research centre on Artificial Intelligence (IRCAI)

UNESCO has approved the establishment of IRCAI, which will be seated in Ljubljana (Slovenia). IRCAI aims to provide an open and transparent environment for AI research and debates on AI, providing expert support to stakeholders around the globe in drafting guidelines and action plans for AI. It will bring together stakeholders with a variety of know-how from around the world to address global challenges, support UNESCO in carrying out its studies and take part in major international AI projects. The centre will advise governments, organisations, legal persons and the public on systemic and strategic solutions for introducing AI in various fields.

AI studies

In addition to the previous initiatives, the Commission is planning to conduct several technical studies on AI. Among them, there will be one specifically targeted at identifying safety standardisation needs.

Standard sharing with other domains

AI is a vast scientific and technological domain that overlaps with other domains also discussed in this Rolling Plan, e.g. big data, e-health, and robotics and autonomous systems. Many of the standardisation activities in these domains will be beneficial for AI, and vice versa. For more details, please refer to section (C.1) “Related standardisation activities”.

Original url: https://joinup.ec.europa.eu/collection/rolling-plan-ict-standardisation/artificial-intelligence-rp2024
