WAIT: Workshop on Artificial Intelligence Trustworthiness

In recent years, Artificial Intelligence (AI) has become an integral part of many industries and of society as a whole. As the use of AI grows, however, so do concerns about its safety, security, and trustworthiness. This workshop brings together experts in AI, computer science, and related fields to discuss the latest research and developments in AI trustworthiness, with the goal of creating an environment for the exchange of ideas, collaboration, and the development of new solutions that ensure the safe, secure, and trustworthy use of AI.
By providing a platform for experts from different fields and backgrounds to share their latest results and to discuss the challenges and opportunities in ensuring the safe, secure, and trustworthy use of AI, the workshop can help shape the future of AI research and development. It can also contribute to guidelines, standards, and best practices for AI trustworthiness that industry and government can adopt to ensure the safe and responsible use of AI.
The workshop is distinctive in its focus on the safe, secure, and trustworthy use of AI: an emerging topic that grows in importance as AI becomes more widespread, and one that is still not covered in depth by other workshops and conferences in the field. Alongside the latest research results and the challenges and opportunities for future work, the programme also addresses the ethical and societal implications of AI trustworthiness, a novel and important aspect of the field.

Objectives:
• To provide a platform for experts to share their latest research and developments in AI trustworthiness.
• To identify and discuss the challenges and opportunities for ensuring the safe, secure, and trustworthy use of AI.
• To foster collaboration and networking among researchers and practitioners in the field of AI trustworthiness.
• To provide a forum for discussing ethical and societal implications of trustworthy intelligent systems.

Expected Outcomes:
1. A better understanding of the current state of the art in AI trustworthiness research and development.
2. Identification of research gaps and opportunities for future work in the field.
3. Increased collaboration and networking among researchers and practitioners in the field of AI trustworthiness.
4. A deeper understanding of the ethical and societal implications of AI trustworthiness.

Target Audience:
The workshop is intended for researchers, practitioners, and experts in AI, computer science, and related fields, as well as anyone interested in the safety, security, and trustworthiness of AI.

Submission Guidelines:
We invite the submission of papers presenting original, previously unpublished research that is not under review elsewhere. We accept short papers (6-11 pages) and full papers (12+ pages) in PDF format, prepared according to the Springer LNCS style. Although Springer offers both LaTeX style files and Word templates, we strongly encourage authors to use LaTeX, especially for texts containing several formulæ. Papers must be written in English.

We use a double-blind review scheme. Please anonymize your papers when submitting for initial review.
At least one author of every accepted paper must register for the conference and present the paper, preferably in person, or otherwise online.

The authors should use the EasyChair system to submit their papers: https://easychair.org/conferences/?conf=wait23

Selected papers will be published in the main conference proceedings in the Springer CCIS series, which is indexed by Scopus and Web of Science.

Programme:
The workshop will include keynote speeches and technical sessions. The technical sessions will feature presentations on the latest research and developments in fields related to AI trustworthiness, as well as discussions of the challenges and opportunities for ensuring the safe, secure, and trustworthy use of AI.

Scope:

We are interested in papers that address the following topics:
• Methods and principles for the integration of AI in critical products and services in a safe, reliable, and secure way
• Methods for analyzing datasets to detect labeling anomalies in order to counter attacks on machine learning
• ML models with certified robustness
• Model training methods that provide resistance to adversarial attacks
• Methods for detecting and countering attacks on AI components in intelligent systems
• Methods for explaining and improving the interpretability of ML models
• Research on the resistance of common models, including typical artificial neural network architectures, to attacks
• Techniques for building trusted machine learning frameworks and libraries
• Engineering of innovative industrial products and services integrating AI
• Large-scale deployment of industrial systems integrating AI
• Interaction design that builds user confidence in AI-based systems
• Ethical and societal implications of intelligent system trustworthiness

Submission Deadline:
The deadline for paper submissions is March 12, 2023.

Notification of Acceptance:
Authors will be notified of acceptance by April 7, 2023.

Final Paper Submission Deadline:
The deadline for final paper submission is April 24, 2023.

Workshop Venue:
The workshop is co-located with the 11th Conference on Artificial Intelligence and Natural Language (AINL 2023), which will be held on April 20-21, 2023, in Yerevan, Armenia.

ORGANIZING COMMITTEE
Denis Turdakov, PhD, ISP RAS
Ivan Oseledets, Dr. Sci., Prof. RAS, Skoltech
Alexander Gasnikov, Dr. Sci., MIPT
Natalia Loukachevitch, Dr. Sci., MSU
in cooperation with the AINL organizing committee.

Contact:
If you have any questions about the workshop or the submission process, please contact the workshop organizers at wait23@ispras.ru.

We look forward to receiving your submissions and welcoming you to the workshop.