ASAI’s goal is to bring useful pragmatism into the AI debate, not just uncritical enthusiasm. Platforms such as Moltbook are not merely technological curiosities; they are clear warning signals. In this environment, AI agents no longer merely communicate: they are beginning to operate their own Bitcoin nodes and to exchange views on whether the tasks humans assign them are ethical or legal.
“The shift from static chatbots to autonomous agents living their own virtual lives fundamentally changes how systems operate. If agents start communicating in languages humans do not understand, we stop being the ones who set the boundaries. Our role in ASAI’s technology committee is to ensure that Slovak companies do not skip steps that may seem boring but are, from our perspective, critical—such as in-depth security audits—while also raising awareness of the potential risks that AI can bring,” says Matej Mihalech, Member of the Board and Technology Committee of ASAI.
The emergence of agent social networks also brings new types of attacks. Recent experiments have shown that so-called AI skill supply chains can be compromised: an attacker publishes a seemingly useful skill for an AI agent that, in the background, exfiltrates sensitive data, such as access credentials to corporate email systems.
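One common defence against this kind of skill supply-chain tampering is integrity pinning: an agent refuses to load any third-party skill whose code no longer matches a vetted checksum. The sketch below is a minimal, hypothetical illustration; the skill names, the `APPROVED_SKILLS` registry, and the example sources are all invented for this example and do not describe any specific platform.

```python
import hashlib

def sha256_hex(source: str) -> str:
    """Pin a skill by the SHA-256 digest of its source code."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# Hypothetical vetted skill and its pinned digest. In a real deployment,
# digests would come from an internal, audited skill registry.
VETTED_SOURCE = "def summarize(text):\n    return text[:100]\n"
APPROVED_SKILLS = {"summarize": sha256_hex(VETTED_SOURCE)}

def load_skill(name: str, source: str) -> bool:
    """Allow a skill only if its code still matches the vetted digest."""
    expected = APPROVED_SKILLS.get(name)
    return expected is not None and sha256_hex(source) == expected

# A tampered copy (e.g. with hidden exfiltration code appended) is rejected.
TAMPERED = VETTED_SOURCE + "import urllib.request  # exfiltration stub\n"

genuine_ok = load_skill("summarize", VETTED_SOURCE)   # True: digest matches
tampered_ok = load_skill("summarize", TAMPERED)       # False: digest differs
```

Checksum pinning does not vet what a skill does, only that it has not changed since review, which is why the security audits mentioned above remain necessary.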
“From a legal perspective, we are entering expected but still largely unexplored territory, where the line between a tool and an autonomous actor becomes blurred. If AI agents enter into transactions or communicate using encryption, we must ask a fundamental question: who bears legal responsibility for the damage such an agent may cause? It is therefore essential to define clear rules about what an agent may and may not do, what data it can access, how its actions are logged, and who is responsible for oversight. In practice, this means separating agents from production systems, applying the principle of least privilege, and having a clear procedure for stopping an agent if it starts behaving unpredictably. That is why we recommend three concrete steps for every organisation: first, clearly define which systems and data an AI agent can access; second, implement the principle of minimal or proportional privileges so the agent only has access to the data and tools it truly needs; and third, create proper documentation and an audit trail to demonstrate responsible behaviour in the event of an incident,” explains Róbert Gašparovič, Chairman of the Board of ASAI.
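The three steps Gašparovič recommends (explicit access scoping, least privilege, and an audit trail, plus a clear stop procedure) can be sketched as a gateway that sits between the agent and its tools. This is an illustrative skeleton only; the `AgentGateway` class, the tool names, and the log format are assumptions, not a description of any real product.

```python
from datetime import datetime, timezone

class AgentGateway:
    """Hypothetical gateway enforcing least privilege and an audit trail
    for tool calls made by an AI agent."""

    def __init__(self, allowed_tools):
        # Steps 1 and 2: an explicit, minimal list of permitted tools.
        self.allowed_tools = set(allowed_tools)
        # Step 3: every decision is logged for later accountability.
        self.audit_log = []
        # A clear procedure for halting an unpredictable agent.
        self.stopped = False

    def call(self, agent_id, tool, args):
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "tool": tool,
            "args": args,
        }
        if self.stopped or tool not in self.allowed_tools:
            entry["decision"] = "denied"
            self.audit_log.append(entry)
            return None
        entry["decision"] = "allowed"
        self.audit_log.append(entry)
        return f"executed {tool}"  # placeholder for the real tool dispatch

    def emergency_stop(self):
        """Kill switch: deny all further calls from the agent."""
        self.stopped = True

# Usage: the agent may read the calendar but not send email.
gw = AgentGateway(["read_calendar"])
allowed = gw.call("agent-1", "read_calendar", {"day": "today"})
denied = gw.call("agent-1", "send_email", {"to": "x@example.com"})
```

Keeping the gateway outside the agent's own code path is what makes the separation from production systems enforceable: the agent can request anything, but only scoped, logged calls get through.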