Our Vision

We build more robust and transparent learning systems that minimize the well-known limitations of existing approaches, such as statistical biases, catastrophic forgetting and shallow learning.

Instead of "fixing" any of the existing approaches, we have developed a new AI formalism inspired by Model Theory, a mathematical discipline that combines Abstract Algebra with Mathematical Logic. We felt that a new formalism was needed to ensure transparency, control and safety from first principles. These characteristics allow for better cooperation between humans and machines, with the constraints and safeguards one may want to place on their interaction. We believe this formalism can be the basis for a more "humane" AI.

The company

We are a startup based in Madrid working in close collaboration with the Champalimaud Research Foundation in Lisbon. We are part of a European consortium of companies and research institutions investigating AML, including DFKI and INRIA, leading centers in artificial intelligence.

Our team has more than two decades of experience building successful machine-learning-based products, such as the DeNovoX peptide de novo sequencing software and the idTracker animal tracking software.

Algebraic Machine Learning

AML is a new machine learning approach that uses the mathematics of Model Theory to naturally embed data and formal knowledge into a discrete algebraic structure.

Its power comes from the combination of two mathematical results. The first is Birkhoff's insight that any algebra can be expressed in terms of a set of simpler, unique algebras, which we call atoms. The second is that, out of the many possible models, the freest atomized algebra is particularly adept at learning: for example, it is guaranteed to find a simple rule in the data if one exists and enough data is available. We have discovered that subsets of the freest atomized algebras are good generalization models. To obtain these models we use only set-theoretical operations, without optimization methods or even the notion of error minimization. This makes the mathematics very transparent, and many theorems can be derived about how learning takes place.
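To give a flavor of what "only set-theoretical operations" means, here is a deliberately simplified toy sketch, not the actual AML implementation: semilattice elements are represented by the sets of atoms below them (the atom names phi1, phi2, phi3 are hypothetical), the idempotent semilattice operation becomes set union, and the order relation is atom-set inclusion.

```python
# Toy illustration: describe each semilattice element by the set of
# atoms below it. Merging elements is then plain set union, and the
# induced order "x below y" holds exactly when x's atoms are a subset
# of y's atoms. No numeric optimization is involved anywhere.

def merge(x, y):
    """Idempotent semilattice operation: union of atom sets."""
    return x | y

def leq(x, y):
    """Order induced by the operation: x <= y iff merge(x, y) == y."""
    return merge(x, y) == y

# Hypothetical atoms phi1..phi3.
a = {"phi1"}             # element described by a single atom
b = {"phi1", "phi2"}     # element described by two atoms
c = merge(a, {"phi3"})   # {"phi1", "phi3"}

assert leq(a, b)          # a lies below b: its atoms are a subset of b's
assert not leq(b, c)      # b is not below c: phi2 is missing from c
assert merge(a, a) == a   # the operation is idempotent
```

The point of the sketch is only that order, merging and comparison all reduce to discrete set operations, which is what makes the resulting mathematics amenable to direct analysis.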

Unique Capabilities

No architecture to guess

Naturally combines data and formal prior knowledge

Works when training and test statistics differ

Transparent to mathematical analysis

Finds concepts in the data

Large-scale parallelization


Finite Atomized Semilattices

Martin-Maroto, F., & de Polavieja, G. (2021). Finite Atomized Semilattices. arXiv:2102.08050.

In this work we present a formal mathematical description of finite atomized semilattices, the algebraic construction used to define and embed models in Algebraic Machine Learning (AML). Concepts such as the full crossing operator and pinning terms, which play an important role in AML, are formalized.

Algebraic Machine Learning

Martin-Maroto, F., & de Polavieja, G. (2018). Algebraic Machine Learning. arXiv:1803.05252.

This is the foundational paper of Algebraic Machine Learning (AML), where the main concepts of the methodology are introduced. As an alternative to statistical learning, AML offers advantages in combining bottom-up (data) and top-down (pre-existing knowledge) information, and in large-scale parallelization.

In AML, learning and generalization are parameter-free, fully discrete and without function minimization. We introduce this method using a simple problem that is solved step by step. In addition, two more problems, hand-written character recognition (MNIST) and the Queens Completion problem, are explored as examples of supervised and unsupervised learning, respectively.

Software and Patents

Method for large-scale distributed machine learning using formal knowledge and training data

The method consists of independently calculating discrete algebraic models of the input data on one or many computing devices, and asynchronously sharing components of the algebraic models among the computing devices, without constraints on when or how many times the sharing happens. Each computing device improves its algebraic model whenever it receives new input data or components shared by other devices, thereby providing a solution to the scaling-up problem of machine learning systems.
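The sharing scheme above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: each device holds a set of discrete model components (here each local datum simply contributes one "atom" string), and absorbing a peer's components is a set union. Because union is commutative, associative and idempotent, the result does not depend on when or how many times sharing happens, which is what removes the synchronization constraints.

```python
# Hedged sketch of asynchronous model sharing between devices. Each
# device builds a set of discrete components from its local data and
# absorbs components shared by peers via set union.

class Device:
    def __init__(self, local_data):
        # Stand-in for "calculating an algebraic model of the input
        # data": here every datum contributes one component.
        self.components = {f"atom({d})" for d in local_data}

    def absorb(self, shared):
        """Improve the local model with components from a peer."""
        self.components |= shared

devices = [Device([1, 2]), Device([3]), Device([2, 4])]

# Share in an arbitrary pairwise order; no barrier or schedule needed.
for src in devices:
    for dst in devices:
        dst.absorb(src.components)

# Every device ends up with the same combined model.
assert all(dev.components == devices[0].components for dev in devices)
```

Since absorbing is idempotent, receiving the same components twice changes nothing, so devices can share as often or as rarely as bandwidth allows.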


Fernando Martin-Maroto


Gonzalo G. de Polavieja


Nabil Abderrahaman-Elena

Research Software Engineer

Antonio Ricciardo

Research Scientist

Enrique Naveros

Administration and Financial Officer

Board of Advisors

Paul Lukowicz

Scientific Director, Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI)

Luis Serrano

Senior manager and partner at The Global Influencer

Jaime Martín Losa

Founder and CEO at eProsima

Gerardo Pardo

CTO at Real-Time Innovations, Inc. and board member at the Object Management Group

Justo Montero

Currently CEO at Energinter, he has held top management positions in telecom operators, consultancy firms and technology vendors