Sinha, Dev; Masden, Marissa
2024-01-09
https://hdl.handle.net/1794/29136
Title: Accessing the Topological Properties of Neural Network Functions.
Type: Electronic Thesis or Dissertation
Language: en-US
Rights: All Rights Reserved.
Subjects: Applied Topology; Machine Learning; Neural Networks

Abstract: We provide a framework for analyzing the geometry and topology of the canonical polyhedral complex of ReLU neural networks, which naturally divides the input space into linear regions. Beginning with a category appropriate for analyzing neural network layer maps, we give a categorical definition of the canonical polyhedral complex. We then use our foundational results to produce a duality isomorphism between the cellular poset of the canonical polyhedral complex and a cubical set. This duality uses sign sequences, an algebraic tool from hyperplane arrangements and oriented matroid theory. Our theoretical results lead to algorithms for computing not only the canonical polyhedral complex itself but also topological invariants of its substructures, such as the decision boundary, as well as for evaluating the presence of PL critical points. Using these algorithms, we produce some of the first empirical measurements of the topology of the decision boundary of neural networks, both at initialization and during training. We observe that increasing the width of a neural network decreases the variability observed in its topological expression, while increasing depth increases variability. A code repository containing Python and Sage code implementing some of the algorithms described herein is available in the included supplementary material.
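
The sign-sequence idea mentioned in the abstract can be sketched concretely: each point of the input space is labeled by the signs of every neuron's pre-activation, and points sharing a label lie in the same linear region of the network. The following is a minimal illustrative sketch, not the dissertation's own implementation; the toy architecture, weights, and function names are assumptions for demonstration only.

```python
import numpy as np

# Illustrative toy network (2 -> 3 -> 1) with random weights; NOT the
# dissertation's code, just a sketch of the sign-sequence labeling.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((1, 3)), rng.standard_normal(1)

def sign_sequence(x):
    """Signs (-1, 0, +1) of every pre-activation, layer by layer.

    Points with equal sign sequences lie in the same linear region
    of the ReLU network's canonical polyhedral complex.
    """
    z1 = W1 @ x + b1              # first-layer pre-activations
    a1 = np.maximum(z1, 0.0)      # ReLU
    z2 = W2 @ a1 + b2             # second-layer pre-activation
    return tuple(np.sign(np.concatenate([z1, z2])).astype(int))

# Sample input points and group them by sign sequence to count
# how many distinct linear regions the samples visit.
points = rng.uniform(-2.0, 2.0, size=(500, 2))
regions = {sign_sequence(p) for p in points}
print(f"distinct sign sequences among samples: {len(regions)}")
```

Enumerating these labels exhaustively, rather than by sampling, is essentially what the dissertation's duality with a cubical set makes tractable.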