We build more robust and transparent learning systems that minimize the well-known limitations of existing approaches, such as statistical biases, catastrophic forgetting and shallow learning.
Instead of "fixing" any of the existing approaches, we have developed a new AI formalism inspired by Model Theory, a mathematical discipline that combines Abstract Algebra with Mathematical Logic. We felt that a new formalism was needed if we wanted to ensure transparency, control and safety from first principles. These characteristics allow for better cooperation between humans and machines, with the constraints and safeguards one may want to impose on their interaction. We believe this formalism can be a basis for a more “humane” AI.
We are a startup based in Madrid working in close collaboration with the Champalimaud Research Foundation in Lisbon. We are part of a European consortium of companies and research institutions investigating AML, including DFKI and INRIA, leading centers in artificial intelligence.
AML is a new machine learning approach that uses the mathematics of Model Theory to naturally embed both data and formal knowledge into a discrete algebraic structure.
Its power comes from the combination of two mathematical results. The first is Birkhoff's insight that any algebra can be expressed in terms of a set of simpler algebras, which we call atoms. The second is that, out of the many possible models, the freest atomized algebra is particularly adept at learning. For example, it is guaranteed to find a simple rule in the data if one exists and enough data is available. We have discovered that subsets of the freest atomized algebras are good generalization models. To obtain these models, we use only set-theoretical operations, without optimization methods or even the idea of error minimization. This makes the mathematics very transparent, and many theorems can be derived about how learning takes place.
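As a toy illustration of why set-theoretical operations suffice here (this is a generic sketch of an atomized join-semilattice, not the AML implementation; the atom names are hypothetical): when each element is represented by the set of atoms below it, the semilattice join reduces to set union and the partial order reduces to set inclusion.

```python
# Toy sketch (not the AML library): elements of an atomized semilattice
# represented by their sets of atoms.

def join(x, y):
    """Idempotent, commutative, associative join via set union."""
    return x | y

def leq(x, y):
    """x <= y iff every atom of x is also an atom of y."""
    return x <= y

# Hypothetical atoms phi_1..phi_3 and two elements built from them.
a = {"phi_1", "phi_2"}
b = {"phi_2", "phi_3"}

print(join(a, b))          # {'phi_1', 'phi_2', 'phi_3'}
print(leq(a, join(a, b)))  # True: a is below the join of a and b
print(join(a, a) == a)     # True: the join is idempotent
```

Because every operation is a set operation, each step of such a computation can be inspected and reasoned about directly, which is the transparency the paragraph above refers to.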
Martin-Maroto, F., & de Polavieja, G. (2021). Finite Atomized Semilattices. arXiv:2102.08050.
In this work we present a formal mathematical description of finite atomized semilattices, the algebraic construction we use to define and embed models in Algebraic Machine Learning (AML). Among others, concepts such as the full crossing operator and pinning terms, which play an important role in AML, are formalised.
Martin-Maroto, F., & de Polavieja, G. (2018). Algebraic Machine Learning. arXiv:1803.05252.
This is the foundation of Algebraic Machine Learning (AML) and where the main concepts of the methodology are introduced. As an alternative to statistical learning, AML offers advantages in combining bottom-up (data) and top-down (pre-existing knowledge) information, and large-scale parallelization.
In AML, learning and generalization are parameter-free, fully discrete and without function minimization. We introduce this method using a simple problem that is solved step by step. In addition, two more problems, hand-written character recognition (MNIST) and the Queens Completion problem, are explored as examples of supervised and unsupervised learning, respectively.
The method consists of independently calculating discrete algebraic models of the input data on one or many computing devices, and of asynchronously sharing components of the algebraic models among the computing devices, without constraints on when or how many times the sharing needs to happen. Each computing device improves its algebraic model every time it receives new input data or shared components from other computing devices, thereby providing a solution to the scaling-up problem of machine learning systems.
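The scheme above can be sketched in a few lines (this is an illustrative simulation under our own assumptions, not the patented implementation; the `Device` class and its methods are hypothetical): each device keeps a discrete model, modeled here simply as a set of components, learns from its local data, and merges whatever components other devices share, in any order and at any time.

```python
# Minimal sketch (an assumption, not the patented implementation) of
# independent learning plus asynchronous, order-free model sharing.
import random

class Device:
    def __init__(self, name):
        self.name = name
        self.model = set()  # discrete algebraic model (toy: a set of components)

    def learn(self, data_batch):
        """Improve the local model from new input data."""
        self.model.update(data_batch)

    def receive(self, shared_components):
        """Merge components shared by another device, at any time."""
        self.model.update(shared_components)

devices = [Device(f"dev{i}") for i in range(3)]

# Each device learns from its own data independently...
devices[0].learn({"a1", "a2"})
devices[1].learn({"a2", "a3"})
devices[2].learn({"a4"})

# ...and sharing happens asynchronously, in any order, any number of times.
random.shuffle(devices)
for sender in devices:
    for receiver in devices:
        if receiver is not sender:
            receiver.receive(sender.model)

print(sorted(devices[0].model))  # every device converges toward the union
```

Because merging is a commutative, idempotent set union, the outcome does not depend on when or how often devices share, which is what removes the synchronization constraints mentioned above.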
Research Software Engineer
Administration and Financial Officer
Scientific Director, Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI)
Senior manager and partner at The Global Influencer
Founder and CEO at eProsima
Currently CEO at Energinter; has held top management positions in telecom operators, consultancy firms and technology vendors