Neural networks based on topological data analysis (TDA) use tools such as persistent homology to learn topological signatures of data and stabilize training, but they may not be universal approximators or have stable inverses. Other architectures universally approximate data distributions on submanifolds, but only when the latter are given by a single chart, making them unable to learn maps that change topology. By exploiting the topological parallels between locally bi-Lipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we show that a novel network of the form T ∘ p ∘ E, where E is an injective network, p a fixed coordinate projection, and T a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in R^n. We emphasize the case when the target map changes topology. Further, we find that by constraining the projection p, multivalued inversions of our networks can be computed without sacrificing universality. As an application, we show that for finite groups, learning a group-invariant function with an unknown group action naturally reduces to learning local diffeomorphisms. Our theory permits us to recover orbits of the group action. Finally, our analysis informs the choice of topologically expressive starting spaces in generative problems.
Joint research with M. Puthawala, M. Lassas, I. Dokmanic and P. Pankka.
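The composition T ∘ p ∘ E described above can be illustrated with a toy sketch. All concrete choices here (dimensions, the specific form of each map, the weights) are illustrative assumptions, not the construction from the work itself: E is made injective by retaining its input alongside a nonlinear feature block, p simply keeps a fixed subset of coordinates, and T is an additive coupling layer, a standard example of a bijective network.

```python
import numpy as np

# Illustrative sketch (assumed toy instances, not the paper's construction) of
# a network of the form T ∘ p ∘ E.
# E: injective, dimension-raising map R^2 -> R^4; x |-> (x, relu(W x)) is
#    injective because x itself is retained in the output.
# p: fixed coordinate projection R^4 -> R^3 (keeps the first 3 coordinates).
# T: bijective map R^3 -> R^3; an additive coupling layer (y1, y2 + s(y1))
#    is invertible for any function s.

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))  # arbitrary fixed weights for the sketch

def E(x):
    # Injective: the first two output coordinates are x verbatim.
    return np.concatenate([x, np.maximum(W @ x, 0.0)])

def p(z):
    # Fixed coordinate projection: drop the last coordinate.
    return z[:3]

def T(y):
    # Additive coupling: invert by y2 = out2 - tanh(out1).
    y1, y2 = y[:1], y[1:]
    return np.concatenate([y1, y2 + np.tanh(y1)])

def network(x):
    return T(p(E(x)))

x = np.array([0.5, -1.2])
print(network(x).shape)  # the composition maps R^2 into R^3
```

The point of the sketch is only the shape of the factorization: E raises dimension injectively, p lowers it with a fixed projection (the component that creates multivaluedness of the inverse), and T is bijective, so any failure of global invertibility is confined to p.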