Accessing the Topological Properties of Neural Network Functions.


Date

2024-01-09

Authors

Masden, Marissa

Publisher

University of Oregon

Abstract

We provide a framework for analyzing the geometry and topology of the canonical polyhedral complex of ReLU neural networks, which naturally divides the input space into linear regions. Beginning with a category appropriate for analyzing neural network layer maps, we give a categorical definition of the canonical polyhedral complex. We then use our foundational results to produce a duality isomorphism between the cellular poset of the canonical polyhedral complex and a cubical set. This duality uses sign sequences, an algebraic tool from hyperplane arrangements and oriented matroid theory. Our theoretical results lead to algorithms for computing not only the canonical polyhedral complex itself but also topological invariants of its substructures, such as the decision boundary, as well as for evaluating the presence of PL critical points. Using these algorithms, we produce some of the first empirical measurements of the topology of the decision boundary of neural networks, both at initialization and during training. We observe that increasing the width of a neural network decreases the variability in its topological expression, while increasing depth increases that variability. A code repository containing Python and Sage code implementing some of the algorithms described herein is available in the included supplementary material.
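To make the sign-sequence idea concrete: a minimal Python sketch (not the author's repository code; the toy network, weights, and grid sampling are illustrative assumptions) that labels points of the input space by the sign vector of a small ReLU layer's preactivations. Distinct sign sequences correspond to distinct cells of the canonical polyhedral complex.

```python
# Hedged illustrative sketch, not the dissertation's implementation:
# label input points by the sign sequence of ReLU preactivations,
# in the spirit of the sign-sequence duality described in the abstract.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one hidden layer of 3 ReLU neurons on a 2-D input.
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)

def sign_sequence(x):
    """Sign vector (-1, 0, +1 per neuron) of the preactivations at x.
    Cells of the canonical polyhedral complex are distinguished by
    such sequences."""
    return tuple(np.sign(W @ x + b).astype(int))

# Sample a grid and collect the distinct sign sequences encountered,
# i.e. the linear regions (top-dimensional cells) the grid hits.
grid = [np.array(p)
        for p in itertools.product(np.linspace(-3, 3, 61), repeat=2)]
regions = {sign_sequence(x) for x in grid}
print(f"{len(regions)} distinct sign sequences found on the sample grid")
```

Grid sampling only detects cells the sample happens to intersect; the algorithms described in the abstract instead compute the complex exactly from the network's weights.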

Keywords

Applied Topology, Machine Learning, Neural Networks
