One-shot Correction of Artificial Intelligence Systems Using Stochastic Separation Theorems
The problem of non-iterative, one-shot, and non-destructive correction of unavoidable mistakes arises in all real-world Artificial Intelligence (AI) applications. Its solution requires robust separation of samples with errors from samples on which the system works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants.
Classical measure concentration theorems state that random points are concentrated in a thin layer near a surface (a sphere, an average or median level set of energy or another function, etc.). The stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are also each linearly separable from the rest of the set, even for exponentially large random sets. These theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples in pattern recognition. We also demonstrate how AI systems can transfer knowledge and teach each other using this corrector technology.
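The separation phenomenon described above can be checked numerically. The sketch below (an illustration, not the paper's exact construction; the dimension, sample size, and threshold are illustrative choices) samples an exponentially-large-looking random set uniformly from a high-dimensional ball, picks one point as the "error" sample, and tests whether the simple linear functional l(x) = <x, y> separates it from the rest:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper):
# dimension d, number of random points n, separation threshold theta.
rng = np.random.default_rng(0)
d, n, theta = 100, 10_000, 0.5

# Sample n points uniformly from the unit ball in R^d:
# normalize Gaussian vectors to the sphere, then scale radii by U^(1/d).
g = rng.standard_normal((n, d))
radii = rng.random(n) ** (1.0 / d)
X = g / np.linalg.norm(g, axis=1, keepdims=True) * radii[:, None]

# Treat one random point y as the mistake to be corrected,
# and the remaining points as the data the system handles properly.
y, rest = X[0], X[1:]

# Linear discriminant l(x) = <x, y> with threshold theta * ||y||^2:
# l(y) = ||y||^2 exceeds the threshold, while in high dimension the
# projections <x, y> of the other points concentrate near zero.
frac_separated = np.mean(rest @ y < theta * (y @ y))
print(f"fraction of the set separated from y: {frac_separated:.4f}")
```

In dimension 100 the fraction is essentially always 1.0: the projection of a uniform ball point onto a fixed direction has standard deviation of order 1/sqrt(d), so the threshold sits many standard deviations away. Rerunning with small d (e.g. d = 3) shows the separation failing, which is the contrast the stochastic separation theorems quantify.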
E-prints that partially present this topic and include some related material:
Chair in Applied Mathematics, University of Leicester, UK
Computational Systems Biology of Cancer Team