Abstract
Inspired by the prevalence of recurrent circuits in biological brains, we investigate the degree to which directionality is a helpful inductive bias for artificial neural networks. Defining directionality as topologically-ordered information flow between neurons, we formalise a perceptron layer with all-to-all connections (mathematically equivalent to a recurrent neural network) and demonstrate that directionality, a hallmark of modern feed-forward networks, can be induced rather than hard-wired by applying appropriate pruning techniques. Across different random seeds, our pruning schemes successfully induce greater topological ordering in information flow between neurons without compromising performance, suggesting that directionality is not a prerequisite for learning, but may be an advantageous inductive bias discoverable by gradient descent and sparsification.
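To make the setup described above concrete, below is a minimal sketch in PyTorch of an all-to-all layer whose single weight matrix is reused (weight-tied) across a few recurrent update steps, together with a simple global magnitude-pruning step. The class and function names, the number of unroll steps, and the magnitude-based pruning criterion are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only (assumed names and hyperparameters):
# an all-to-all layer applied as weight-tied recurrent computation,
# plus a simple global magnitude-pruning step.
import torch
import torch.nn as nn


class AllToAllLayer(nn.Module):
    """n_neurons fully connected to one another; the same n x n weight
    matrix is applied repeatedly, so computation is recurrent rather
    than feed-forward."""

    def __init__(self, n_neurons: int, n_steps: int = 3):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_neurons, n_neurons))
        self.b = nn.Parameter(torch.zeros(n_neurons))
        # Binary mask; pruned connections are held at zero in the forward pass.
        self.register_buffer("mask", torch.ones(n_neurons, n_neurons))
        self.n_steps = n_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for _ in range(self.n_steps):
            # Same (masked) weights reused at every recurrent step.
            h = torch.relu(h @ (self.W * self.mask).T + self.b)
        return h


def magnitude_prune(layer: AllToAllLayer, sparsity: float) -> None:
    """Prune connections so that roughly `sparsity` of all weights are zeroed,
    keeping previously pruned connections pruned."""
    with torch.no_grad():
        masked_abs = layer.W.abs() * layer.mask
        k = int(sparsity * masked_abs.numel())
        if k > 0:
            threshold = torch.kthvalue(masked_abs.flatten(), k).values
            layer.mask.copy_((masked_abs > threshold).float())


layer = AllToAllLayer(n_neurons=128)
out = layer(torch.randn(32, 128))      # weight-tied recurrent forward pass
magnitude_prune(layer, sparsity=0.5)   # drop half of the connections
```

In this reading of the abstract, repeatedly applying such a pruning step during training is what would be expected to push the surviving connectivity toward a more topologically ordered, feed-forward-like structure.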
Citation
Use the following BibTeX entry to cite this work:
```bibtex
@inproceedings{song2025pruning,
  title={Pruning Increases Orderedness in Weight-Tied Recurrent Computation},
  author={Song, Yiding},
  booktitle={Methods and Opportunities at Small Scale (MOSS) Workshop @ ICML 2025},
  year={2025}
}
```