“Research and development and policy-making should strive to ensure that the use of AI systems supports the general well-being, inclusion and sustainable development of society.”
Since the beginning of this year, Cybernetica has taken strategic steps to support Estonia in introducing safe artificial intelligence into the digital society. By the end of the first quarter, it completed a study on the cybersecurity of artificial intelligence (AI) for the Estonian Information System Authority (RIA), and it is also a partner in the new Estonian Centre of Excellence in Artificial Intelligence (EXAI).
Cybernetica has a broader artificial intelligence strategy for the coming years, supported by the establishment of EXAI in January with funding of 7,000,000 euros. Cybernetica, TalTech and the University of Tartu jointly conduct the research at EXAI, with the latter also serving as project coordinator. Liina Kamm, a senior researcher at Cybernetica participating in the project, noted that AI has great potential for addressing societal challenges. "With the help of artificial intelligence, it is possible to interpret complex data more effectively using large language models. Among EXAI’s tasks is applying various methodologies to advance AI capabilities in key sectors such as e-government, healthcare, business management and cybersecurity," added Kamm.
EXAI focuses mainly on four directions:
- Leveraging foundation models in building efficient and trustworthy analysis and prediction systems,
- Implementing control mechanisms and guardrails to ensure that advanced AI systems follow their specifications,
- Adapting and enhancing AI systems for improved performance in targeted application contexts,
- Achieving end-to-end security and privacy assurance of AI systems.
To mitigate security risks, Cybernetica recently completed a risk analysis for RIA that also addresses mitigation measures. The study gives companies an overview of the legal environment in the field of AI in 2024, including European Union initiatives and proposals. Part of it is a guide that helps companies think through the stages of implementation and the associated risks; the guide includes a worksheet for determining a system's distribution model and for assessing risks and relevant mitigation options.
“AI isn't just about asking a chatbot for advice on how to write code or generating cute puppy videos. Companies around the world are exploring ways to apply AI and machine learning solutions in industry and digital developments to make production and services more personal and quicker," said Lauri Tankler, head of research and development at the RIA Cybersecurity Centre. "At RIA, we strive to implement new technologies with as few unnecessary risks as possible to people's data, money and livelihood."
"The security of artificial intelligence systems means, on one hand, technical measures – for example, that a stranger or a competing company does not gain access to the data – but also that the implementer thinks about social effects, such as avoiding discrimination or the ecological footprint of AI computing power," explained one of the authors of the study, Cybernetica’s Chief Scientific Officer, Dan Bogdanov. "Research and development and policy-making should strive to ensure that the use of AI systems supports the general well-being, inclusion and sustainable development of society."
The study can be found here.