AI is a powerful amplifier of skills – and there are secure tools for embracing it

Dan Bogdanov

Chief Scientific Officer / Director of the Information Security Research Institute

“AI can be a teacher, but you still need to learn. But you won't get far if we're the only ones who know how to do this.”

Dan Bogdanov

In 2024, the smart choice in artificial intelligence was flexibility. New models were released every few weeks by AI labs around the world, and new tools from large technology providers made whole product startups obsolete six months after founding. It has certainly been a time of great technological progress! Still, the business and operating models behind many of these models remain elusive, but more on that later.

At Cybernetica, our goal is to build future-proof technologies for secure digital societies, so the security and privacy of AI systems have been front and center for us. And since we are set on keeping our customers’ data secure while providing them with the best value, we launched the secure AI tools initiative at Cybernetica. Our first goal was to give our employees AI tools that let them quickly explore the design space for a document or a piece of software before starting development. Our second was to provide tools that support specialists in their work by automating menial tasks.

We sought AI tools that run on Cybernetica premises, on hardware we have procured ourselves. We were able to find tools that didn’t require us to have our own data center of AI accelerators and our own nuclear plant to power them. For example, we found an AI tool (we won’t advertise which one) that helps our developers write code and that was trained only on permissively licensed code. This helped us avoid the risk that, due to developer error, we produce software parts of which fall under the GPL or a commercial license. This might sound overly careful, but the legal precedents have not yet been set in stone.

More importantly – we also set up an internal environment where we could test new models on a weekly basis, if need be. We experimented with many self-hosted large language models (including visual language models), studying their resource usage, user experience and deployment aspects. We also ran in-house hackathons to test how an AI tool supports people with different skill levels.

The main thing we learned was not novel – AI tools are a powerful amplifier for existing skills and experience. If you know your domain and know how to ask the right questions, AI helps you become a lot more efficient! If you are just beginning your journey, AI can be a teacher, but you still need to learn.

But you won't get far if we're the only ones who know how to do this. Cybernetica worked with the Information System Authority to create a study, “Risks and controls for artificial intelligence and machine learning systems”, accompanied by an easy-to-use risk management methodology. We also supported the Estonian Information and Communication Technology Association in launching a campaign to refine the methodology based on feedback from real-world companies.

In 2025, we foresee taking this methodology global, supporting Cybernetica’s customer governments and companies around the world.