Topological Augmentation of Latent Information Streams in Feed-Forward Neural Networks

Published: Oct. 2, 2020, 4:02 a.m.

Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.30.321679v1?rss=1

Authors: Shine, J. M., Li, M., Koyejo, O., Fulcher, B., Lizier, J. T.

Abstract: The algorithmic rules that govern deep neural networks are precisely specified; however, the principles that determine their performance remain poorly understood. Here, we use systems neuroscience and information-theoretic approaches to analyse a feed-forward neural network as it is trained to classify handwritten digits. By tracking the topology of the network as it learns, we identify three distinct phases of topological reconfiguration. Each phase brings the connections of the neural network into alignment with patterns of information contained in the input dataset, as well as in the preceding layers. Performing dimensionality reduction on the data reveals a process of low-dimensional category separation as a function of learning. Our results enable a systems-level understanding of how deep neural networks function, and provide evidence of how neural networks reorganize edge weights and activity patterns so as to most effectively exploit the information-theoretic content of the input data during edge-weight training.

Copyright belongs to the original authors. Visit the link for more info.
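As an illustration of the kind of analysis the abstract describes, the sketch below trains a small feed-forward classifier on handwritten digits and, after each epoch, projects the hidden-layer activations onto two principal components to track low-dimensional category separation over learning. This is not the authors' pipeline: the architecture, the dataset (scikit-learn's 8x8 digits standing in for MNIST), and the separation score are assumptions chosen for a self-contained example.

```python
# Minimal sketch (illustrative, not the authors' exact method): train a small
# feed-forward network on handwritten digits and watch category separation
# emerge in a 2-D PCA projection of the hidden activations as training proceeds.

import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# 8x8 handwritten digits from scikit-learn (a stand-in for MNIST).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

X_train_t = torch.tensor(X_train, dtype=torch.float32)
y_train_t = torch.tensor(y_train, dtype=torch.long)
X_test_t = torch.tensor(X_test, dtype=torch.float32)

class MLP(nn.Module):
    """Two-hidden-layer feed-forward classifier (layer sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                                    nn.Linear(32, 16), nn.ReLU())
        self.out = nn.Linear(16, 10)

    def forward(self, x):
        h = self.hidden(x)
        return self.out(h), h  # expose hidden activations for analysis

model = MLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def separation_score(h, labels):
    """Crude proxy for category separation: ratio of between-class scatter
    to mean within-class scatter in a 2-D PCA projection of activations."""
    z = PCA(n_components=2).fit_transform(h)
    centroids = np.stack([z[labels == c].mean(0) for c in np.unique(labels)])
    between = np.var(centroids, axis=0).sum()
    within = np.mean([np.var(z[labels == c], axis=0).sum()
                      for c in np.unique(labels)])
    return between / within

for epoch in range(30):
    model.train()
    opt.zero_grad()
    logits, _ = model(X_train_t)       # full-batch training for simplicity
    loss = loss_fn(logits, y_train_t)
    loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        test_logits, h = model(X_test_t)
    acc = (test_logits.argmax(1).numpy() == y_test).mean()
    sep = separation_score(h.numpy(), y_test)
    print(f"epoch {epoch:2d}  loss {loss.item():.3f}  "
          f"test acc {acc:.3f}  PCA separation {sep:.2f}")
```

Under these assumptions, the printed separation score typically rises alongside test accuracy, which is one simple way to visualise the "low-dimensional category separation as a function of learning" that the abstract reports; the paper's own topological and information-theoretic measures go well beyond this proxy.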