

Thursday, October 19th, 2017
From 10h15 To 11h30
Centre de Recherche - Paris

One-shot Correction of Artificial Intelligence Systems Using Stochastic Separation Theorems

The problem of non-iterative, one-shot, and non-destructive correction of unavoidable mistakes arises in all real-world Artificial Intelligence (AI) applications. Its solution requires robust separation of the samples on which the system errs from the samples on which it works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants.
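The corrector idea can be sketched numerically. The following is a minimal illustration, not the speaker's actual construction: it assumes synthetic Gaussian inputs and builds a single linear discriminant (a Fisher-style functional with identity covariance) that fires on one known error sample while leaving the legitimate inputs untouched, so the base system needs no retraining.

```python
import numpy as np

rng = np.random.default_rng(1)

dim = 300
# "Legitimate" inputs the base AI system handles correctly (illustrative data),
# plus a single input on which it is known to make a mistake.
correct_inputs = rng.standard_normal((5000, dim))
error_input = rng.standard_normal(dim)

# One-shot corrector: a linear functional built from the error sample and
# the mean of the correct samples; no iterative training involved.
mean_correct = correct_inputs.mean(axis=0)
w = error_input - mean_correct
threshold = 0.5 * (w @ error_input + w @ mean_correct)

def needs_correction(x):
    """True if the corrector intercepts input x before the base system answers."""
    return w @ x > threshold

# In high dimension the corrector fires on the known error and
# almost never on the legitimate inputs.
false_positives = np.mean(correct_inputs @ w > threshold)
print(needs_correction(error_input), false_positives)
```

The non-destructive aspect is that the original system is wrapped, not modified: only inputs flagged by the discriminant are rerouted to the corrected response.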


Classical measure concentration theorems state that random points are concentrated in a thin layer near a surface (a sphere, an average or median level set of energy or another function, etc.). The stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are each linearly separable from the rest of the set, even for exponentially large random sets. These theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples in pattern recognition. We also demonstrate how AI systems can transfer knowledge and teach each other using this corrector technology.
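The separability claim is easy to probe empirically. This is a hedged sketch under assumed standard-Gaussian data (not the theorem's exact hypotheses): for each sample point x it tests whether the hyperplane {z : ⟨x, z⟩ = ⟨x, x⟩/2} separates x from all other points, and estimates how often this succeeds in low versus high dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_separable(n_points, dim, trials=10):
    """Fraction of random Gaussian points separable from the rest of the
    sample by the linear functional z -> <x, z> with threshold <x, x>/2."""
    hits = 0
    for _ in range(trials):
        pts = rng.standard_normal((n_points, dim))
        g = pts @ pts.T                 # Gram matrix of pairwise inner products
        thresh = 0.5 * np.diag(g).copy()  # per-point threshold <x, x>/2
        np.fill_diagonal(g, -np.inf)      # ignore each point's self-product
        # Point i is separated iff every other point lies below its threshold.
        hits += np.sum(g.max(axis=1) < thresh)
    return hits / (trials * n_points)

print(fraction_separable(500, 2))    # low dimension: separation rarely holds
print(fraction_separable(500, 200))  # high dimension: close to one
```

The contrast between the two printed fractions is the point of the stochastic separation theorems: the probability that every point is linearly separable from the others tends to one as the dimension grows, even for large samples.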


E-prints that partially present the topic and include some related material:





Prof. Alexander GORBAN
Chair in Applied Mathematics, University of Leicester, UK

Hosted by

Computational Systems Biology of Cancer Team

Invited by

Dr. Emmanuel BARILLOT
Director U900
