Set The Path
For Your AI.
Many talk about what their AI systems can already do today. We start earlier. For us, sovereign use of AI does not begin with the output, but with the guardrails.
This is exactly why we developed Onventis Onix. With our agent-based AI platform, we create a framework in which you can deploy AI in a targeted and traceable manner. That starts with the choice of the Large Language Model (LLM): what counts as a leading model today may no longer be leading tomorrow. That is why we do not commit to a single provider. With Onventis Onix, you decide whether to use a model from OpenAI, Anthropic or the European provider Mistral, for example. This openness gives you the freedom to balance technological, strategic and regulatory requirements against one another.
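The provider-agnostic approach described above can be pictured as a thin abstraction layer in front of the model. The following sketch is purely illustrative: the class names, the registry and the stand-in adapter are assumptions for this example, not Onventis Onix's actual API.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every provider adapter must satisfy."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoModel:
    """Stand-in adapter; a real one would call OpenAI, Anthropic or Mistral."""

    provider: str

    def complete(self, prompt: str) -> str:
        return f"[{self.provider}] response to: {prompt}"


# Registry keyed by a configuration value, so the provider can be swapped
# without touching any calling code -- the point of avoiding vendor lock-in.
REGISTRY: dict[str, ChatModel] = {
    "openai": EchoModel("openai"),
    "anthropic": EchoModel("anthropic"),
    "mistral": EchoModel("mistral"),
}


def get_model(name: str) -> ChatModel:
    """Resolve the configured provider to a concrete adapter."""
    return REGISTRY[name]
```

Because callers only depend on the `ChatModel` interface, switching from one provider to another is a configuration change, not a code change.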
Equally important is how your data is handled. We do not use your company data to fine-tune general-purpose AI models: we apply models to your data, but your data never flows back to train them. The use of AI thus remains focused on your specific use case, without your company data feeding the further development of a general model or being used in any other way. This controlled AI architecture is what sets Onventis Onix apart.
Then there is the question of transparency. We label functions and processes that are generated or controlled by artificial intelligence. This creates clarity in dealing with the technology and aligns with the European understanding of trustworthy AI. The EU places exactly this point at the center. Transparency obligations are intended to ensure that the use of AI remains recognizable and that users can determine when content or decisions are influenced by AI. We consistently adhere to this.
Important: the human stays in the process. For us, Human-in-the-Loop means that AI may support, structure, prepare and accelerate, but control over key decisions remains with you. In procurement, it is not just about efficiency; it is also about responsibility, traceability and regulatory compliance. This is how you arrive at an AI setup that is not only powerful but also fits your organization.
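The Human-in-the-Loop principle can be sketched as an explicit approval gate between the AI's draft and any action. All names and the purchase-order scenario below are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Suggestion:
    """An AI-prepared result awaiting human review."""

    summary: str
    approved: bool = False


def human_in_the_loop(
    prepare: Callable[[str], str],
    approve: Callable[[Suggestion], bool],
    task: str,
) -> Suggestion:
    # The AI may support, structure and prepare ...
    suggestion = Suggestion(summary=prepare(task))
    # ... but the decision itself is taken by a person.
    suggestion.approved = approve(suggestion)
    return suggestion


# Hypothetical usage: an auto-drafted purchase order that a buyer must confirm.
draft = human_in_the_loop(
    prepare=lambda t: f"Draft order for: {t}",
    approve=lambda s: False,  # the human rejects; nothing is executed
    task="500 units of part 4711",
)
```

The key design choice is that nothing downstream may act on a `Suggestion` whose `approved` flag a human has not set, which keeps final control and accountability with the user.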
FAQ
What does general LLM fine-tuning mean?
In classical LLM fine-tuning, existing, publicly available AI models are trained on additional data to adapt them for a specific purpose. This is exactly what does not happen at Onventis: sensitive data is neither used for training nor shared in any other form.
Why is a flexible AI architecture crucial?
A flexible AI architecture protects against vendor lock-in and keeps the AI strategy open for the best model – today and tomorrow. This way, companies remain technologically independent, future-proof and able to act, even when the market, regulatory requirements or their own needs change.
What does Human-in-the-Loop mean?
Human-in-the-Loop ensures that AI supports people but does not replace them. Final control and responsibility always remain with the user.

