Autonomy aspects of AI-powered sensor networks
Our perception of artificial intelligence (AI) has changed in recent years. As ever more powerful models are trained, our belief that computers can do almost anything grows. However, we need to understand the costs and infrastructure of running modern AI, especially when the devices that need it operate in the field, in the air, or on the water.
In surveillance systems, we deploy static sensors such as radars and cameras on borders, towers, and bodies of water. Traditionally, humans watched the radar and camera feeds to find anomalies and objects of interest. Today, we expect AI either to highlight the interesting objects on our screens or to detect them and react automatically.
There are two ways to make this work. The first option is to bring the sensor data to the AI: stream it to large data centers capable of running the latest models. However, this requires a reliable, high-bandwidth link between the sensor and the data center. That works well for static sensors, but less so for drones and other moving platforms, where neither bandwidth nor connectivity can be guaranteed.
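To get a feel for why the bandwidth requirement bites, here is a rough back-of-envelope comparison. All figures are illustrative assumptions on my part, not measurements:

```python
# Back-of-envelope link budget: streaming video vs. sending only detections.
# Every figure below is an illustrative assumption, not a measurement.

CAMERAS = 10                # assumed number of sensors sharing the link
VIDEO_MBPS = 5.0            # assumed encoded 1080p stream per camera, Mbit/s
DETECTIONS_PER_SEC = 2      # assumed detections per camera per second
BYTES_PER_DETECTION = 200   # assumed compact message: class, box, timestamp

video_total = CAMERAS * VIDEO_MBPS
metadata_total = CAMERAS * DETECTIONS_PER_SEC * BYTES_PER_DETECTION * 8 / 1e6

print(f"Streaming video to the data center: {video_total:.1f} Mbit/s")
print(f"Sending only detections from the edge: {metadata_total:.3f} Mbit/s")
# With these assumptions, detections need roughly 1,500x less bandwidth.
```

The exact numbers do not matter; the orders of magnitude do, and they explain why a flaky radio link rules out the data-center option for moving platforms.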
The second option is to bring the AI to the data. Now the models or algorithms must be made small enough to run on modest devices with limited power and bandwidth (a sketch of one such shrinking step follows below). This is also the only reliable way to run AI on autonomous vehicles such as drones.
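As a minimal sketch of that shrinking step, here is post-training dynamic quantization in PyTorch. The architecture and sizes are placeholders; real edge deployments would also consider pruning, distillation, or compilation for the target hardware:

```python
# Minimal sketch: shrink a model for an edge device via int8 quantization.
# The model below is a stand-in; any nn.Module with Linear layers works.
import io
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Linear(256, 16),   # e.g., 16 object classes
)
model.eval()

# Replace float32 Linear weights with int8 equivalents; activations stay float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    """Serialize the model to estimate its on-disk footprint."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"float32 model: {size_bytes(model)} bytes")
print(f"int8 model:    {size_bytes(quantized)} bytes")  # roughly 4x smaller
```

Quantization trades a little accuracy for a model that is about four times smaller and typically faster on CPU-only hardware, which is usually the right trade on a battery-powered sensor.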
Cybersecurity aspects of AI-powered sensor networks
I’m quite certain that someone famous has said the following about cybersecurity:
Cybersecurity is the art of trade-offs. Doesn’t matter if it’s cost, usability or availability. Picking the right technology means you get a mix, but not everything at the same time.
If nobody has said it before, then here you go, I just said it. You’re welcome!
The confidentiality of surveillance imagery at a remote location is unlikely to be the highest concern; anyone could potentially just go there and see what there is to see. But the confidentiality of the AI model you deploy there could be. If copied, the algorithm or model will serve another master just as well as it served you. If it was used to navigate a drone, you may soon find drones with similar capabilities near you.
Indistinguishability obfuscation, a technology for hiding the functionality of a piece of code (e.g., an AI model), is nowhere near practical. Self-destruction, a much less sophisticated option, can be far more practical (but offers no guarantees).
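A pragmatic middle ground between those extremes is keeping the model encrypted at rest and decrypting it only into memory, so that copying the device's storage alone does not yield the model. A minimal sketch, assuming the Python cryptography library; where the key lives (a TPM, a secure element, a remote server) is the hard part and is deliberately out of scope here:

```python
# Minimal sketch: model weights stay encrypted on disk and are decrypted
# only into memory at startup. Copying the storage medium yields ciphertext.
# Key management (TPM, secure element, remote attestation) is out of scope.
from cryptography.fernet import Fernet

def protect_model(weights: bytes, path: str) -> bytes:
    """Encrypt serialized weights to disk; return the key for safekeeping."""
    key = Fernet.generate_key()
    with open(path, "wb") as f:
        f.write(Fernet(key).encrypt(weights))
    return key

def load_model(path: str, key: bytes) -> bytes:
    """Decrypt weights into memory only; never write plaintext to disk."""
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# Usage sketch:
key = protect_model(b"...serialized model weights...", "model.bin.enc")
weights = load_model("model.bin.enc", key)
```

Note that this only raises the bar: an attacker who captures the running device together with its key material can still extract the model, which is exactly where self-destruction-style mitigations come in.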
How Cybernetica thinks about AI in cyberphysical systems
Decentralising and decoupling data and functionality is a technique that has proven itself many times over. Nearly all of Cybernetica's products use decentralisation to protect data or key assets. At the other end of the spectrum, military systems decouple targeting intelligence and disposable payloads as much as possible. The protocol that makes the parts work together with a minimal data footprint is the secret sauce.
Cybernetica’s speciality in building AI systems is precisely that: determining who is allowed to see the data and who is allowed to see the algorithm or AI model, and then engineering everything around those constraints. This is the expertise Cybernetica offers in projects like STORE.
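To make the minimal-data-footprint idea concrete, here is a hypothetical detection record an edge sensor might transmit instead of raw imagery. The field names and values are invented for illustration and do not describe any actual Cybernetica protocol:

```python
# Hypothetical wire format for a decoupled sensor: the edge node keeps the
# raw imagery and the model, and ships only a compact detection record.
# All field names and values are invented for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    sensor_id: str      # which sensor saw the object
    timestamp: float    # UNIX time of the observation
    label: str          # classifier output, e.g. "boat"
    confidence: float   # model confidence in [0, 1]
    lat: float          # estimated position of the object
    lon: float

msg = Detection("tower-07", 1700000000.0, "boat", 0.91, 59.437, 24.754)
wire = json.dumps(asdict(msg)).encode()
print(len(wire), "bytes on the wire instead of a full video frame")
```

The imagery never leaves the sensor, the model never leaves the device it was provisioned to, and only this small, well-defined message crosses the boundary between them.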