Apertus: What you need to know about the Swiss AI model

The world of artificial intelligence is full of superlatives. Almost every week we hear about new, even more powerful models from large international tech companies. In the midst of this news, a development from Switzerland is attracting attention: the AI language model Apertus.
This news naturally raised a few questions for us. What can Apertus really do? How does it compare to other models? And above all: When is it a sensible option for a project?
The core of Apertus: transparency as a principle
Apertus was developed by leading Swiss research institutions (ETH Zurich, EPFL and CSCS). Its most important feature is not its raw performance but its commitment to complete openness. The name says it all: Apertus is Latin for "open".
But what exactly does that mean? Many well-known models, even in the open source sector, are only "open weight": you can download and use the trained model, but nothing more. Apertus goes a decisive step further: the training data, the methods and the code are also completely transparent and documented. So you not only know what the model can do, but also what it learned its knowledge from.
This focus on transparency and traceable origins is anything but a matter of course in the AI world. In addition, there is a clear focus on multilingualism, which also takes into account Swiss characteristics such as Romansh and Swiss German.
The matter of data protection
One of the most pressing issues when using AI is data security. The technical answer here is the option of operating an AI model on your own servers or in a Swiss data center.
We need to be clear here: this possibility is also offered by other strong open source models such as Google's Gemma models, models from DeepSeek or gpt-oss from OpenAI. So you don't have to rely on Apertus to operate a self-hosted AI solution that complies with data protection regulations.
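To make the self-hosting point concrete: most local serving stacks (for example vLLM or Ollama, named here purely as illustrative choices) expose an OpenAI-compatible HTTP API, so switching between self-hosted open-weight models is mostly a matter of changing the model name. A minimal sketch, assuming such an endpoint is running on your own infrastructure; the URL and model name are assumptions for illustration:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat completion payload for a self-hosted endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(base_url: str, model: str, prompt: str) -> str:
    """Send the request to a local endpoint; no data leaves your infrastructure.

    `base_url` (e.g. "http://localhost:8000") and the model name are
    hypothetical; any OpenAI-compatible server works the same way.
    """
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because nothing in this flow touches an external cloud, the same code serves Apertus, Gemma or any other self-hosted model; only the model string changes.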
The decisive difference with Apertus lies in the confidence in the model's origin, as a glance at the technical report shows. It contains a number of developer decisions that go far beyond the usual legal and ethical standards.
- Data protection with foresight: the "hindsight" approach
One detail stands out in particular and sets Apertus apart from almost all other open source models: the handling of training data. The developers have not only respected the opt-out requests of website operators (via robots.txt) for the future, but have also applied them retroactively to all data archives since 2013. Simply put: if a website operator decides today that their content should not be used for AI training, their data has been removed from the entire training process. This "respect with retroactive effect" considerably minimizes legal risks.
- Built-in forgetting: protection against copyright mishaps
Many companies worry that an AI might inadvertently output copyrighted or private data that it "memorized" during training. Apertus actively counteracts this: a special training method, called the "Goldfish Objective" in the report, was used to make it specifically difficult for the model to memorize text passages verbatim. The tests confirm that the risk of such unintentional reproduction of training data is significantly reduced. It is not just a hope, but a built-in design feature for greater security.
- The "Swiss AI Charter": clear values
For Apertus, the alignment with "Swiss values" is more than just a marketing slogan. The report describes the development of a "Swiss AI Charter", a set of rules based directly on Swiss constitutional values such as neutrality, consensus building, federalism and data protection. This charter was even validated in a survey of Swiss citizens with an overwhelming majority (97.3% approval on average). It serves as the model's guideline for dealing with difficult or controversial issues.
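The retroactive opt-out described in the first point can be pictured as a filter pass over the crawl archive: the robots.txt rules fetched today are applied to pages collected years ago. A minimal sketch of that idea using Python's standard robots.txt parser; the crawler name and the data layout are illustrative assumptions, not the project's actual pipeline:

```python
from urllib.robotparser import RobotFileParser

def build_parser(robots_txt: str) -> RobotFileParser:
    """Parse a robots.txt body into a rule checker."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

def retroactive_filter(archive, robots_by_host, agent="ai-training-crawler"):
    """Keep only archived pages whose host's *current* robots.txt allows crawling.

    `archive` is a list of (host, url) pairs collected at any point in the
    past; `robots_by_host` maps a host to the robots.txt text fetched today.
    Pages from hosts that opt out now are dropped, even if they were crawled
    long before the opt-out existed. The agent name is hypothetical.
    """
    parsers = {host: build_parser(txt) for host, txt in robots_by_host.items()}
    kept = []
    for host, url in archive:
        rp = parsers.get(host)
        if rp is None or rp.can_fetch(agent, url):
            kept.append((host, url))
    return kept
```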
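The "Goldfish Objective" from the second point works by excluding some tokens from the training loss in a deterministic, pseudo-random way: every pass over a passage drops the same tokens, so the model never trains on the complete text and cannot reproduce it verbatim. A framework-free sketch of the masking rule; the hash-based choice and the drop rate are illustrative, not the report's exact implementation:

```python
import hashlib

def goldfish_mask(token_ids, k=4, context=3):
    """Return a 0/1 mask over tokens; 0 means "exclude from the loss".

    Roughly one token in `k` is dropped. The choice hashes a small window
    of preceding tokens, so it is deterministic per passage: every epoch
    drops the *same* tokens, which blocks verbatim memorization while
    leaving overall learning intact.
    """
    mask = []
    for i in range(len(token_ids)):
        window = tuple(token_ids[max(0, i - context):i + 1])
        digest = hashlib.sha256(repr(window).encode("utf-8")).digest()
        mask.append(0 if digest[0] % k == 0 else 1)
    return mask

def masked_loss(per_token_losses, mask):
    """Average the training loss over kept tokens only."""
    kept = [loss for loss, m in zip(per_token_losses, mask) if m]
    return sum(kept) / len(kept) if kept else 0.0
```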
The choice for Apertus is therefore less a technical decision than a strategic one: you are relying on a model whose development process is geared towards local values such as transparency and legal compliance.
An honest assessment of performance: solid, but not at the top
How does Apertus fare in a direct performance comparison? The benchmarks from the technical report confirm our initial assessment and provide a clear picture:
- Compared to proprietary models such as the latest ones from OpenAI, Anthropic or Google, there is still a performance gap, especially for complex logical reasoning.
- Among open source models, too, Apertus currently sits in the solid midfield. Models of a similar size sometimes show higher performance in global benchmarks for mathematics or programming.
However, the report also shows specific strengths: in tests that probe cultural and regional knowledge, Apertus performs above average. This confirms that the focus on multilingual and European data is bearing fruit.
An important foundation
Apertus is not a "ChatGPT killer" from Switzerland. But it is a solid and trustworthy foundation on which to build. It proves that Switzerland has the know-how to develop its own, sovereign AI technology.
We see our role as finding the best solution for your specific needs. In projects where maximum performance or low operating costs are crucial, we may continue to recommend other models.
But where demands on data protection, transparency and legal security are high, Apertus is now a strong option for using AI without many of the previous concerns.
Haven't tried Apertus yourself yet? It is publicly accessible at https://publicai.co/ and ready for you to test.

Written by
Mirco Strässle
