https://arxiv.org/abs/1611.02252v1 Hierarchical compositional feature learning

https://arxiv.org/pdf/1505.05401.pdf

https://arxiv.org/abs/1605.06444 Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes

In artificial neural networks, learning from data is a computationally demanding task in which a large number of connection weights are iteratively tuned through stochastic-gradient-based heuristic processes over a cost-function. It is not well understood how learning occurs in these systems, in particular how they avoid getting trapped in configurations with poor computational performance. Here we study the difficult case of networks with discrete weights, where the optimization landscape is very rough even for simple architectures, and provide theoretical and numerical evidence of the existence of rare—but extremely dense and accessible—regions of configurations in the network weight space. We define a novel measure, which we call the robust ensemble (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions. We analytically compute the RE in some exactly solvable models, and also provide a general algorithmic scheme which is straightforward to implement: define a cost-function given by a sum of a finite number of replicas of the original cost-function, with a constraint centering the replicas around a driving assignment. To illustrate this, we derive several powerful new algorithms, ranging from Markov Chains to message passing to gradient descent processes, where the algorithms target the robust dense states, resulting in substantial improvements in performance. The weak dependence on the number of precision bits of the weights leads us to conjecture that very similar reasoning applies to more conventional neural networks. Analogous algorithmic schemes can also be applied to other optimization problems.
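
The replicated-cost construction described in the abstract is concrete enough to sketch. Below is a minimal NumPy illustration of the idea: several replicas of the weights are trained on the original cost while an elastic term pulls each replica toward a common center (taken here as the replica mean, with a slowly increasing coupling). The toy linear-regression loss, the coupling schedule, and all hyperparameters are illustrative choices, not the paper's exact algorithms.

```python
# Sketch of the "robust ensemble" replicated cost: y replicas of the weights
# are optimized on the original loss plus an elastic coupling to a center.
# Toy problem and schedule are illustrative only.
import numpy as np

def loss(w, X, t):
    """Original (per-replica) cost: mean squared error of a linear model."""
    return 0.5 * np.mean((X @ w - t) ** 2)

def grad_loss(w, X, t):
    return X.T @ (X @ w - t) / len(t)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # toy data
t = np.sign(X @ rng.normal(size=20))     # toy targets

y = 5                                    # number of replicas
W = rng.normal(size=(y, 20))             # replica weights
lr, gamma = 0.05, 0.1                    # learning rate, coupling strength

for step in range(500):
    center = W.mean(axis=0)              # "driving assignment" = replica mean
    for a in range(y):
        # gradient of the replicated cost for replica a:
        # original loss gradient + elastic pull toward the center
        g = grad_loss(W[a], X, t) + gamma * (W[a] - center)
        W[a] -= lr * g
    gamma *= 1.01                        # slowly increase coupling so replicas converge

print("final center loss:", loss(W.mean(axis=0), X, t))
```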

https://arxiv.org/pdf/cs/0212002v4.pdf

https://www.semanticscholar.org/search?year%5B%5D=2015&year%5B%5D=2015&facets%5BfieldOfStudy%5D%5B%5D=Computer%20Science&q=A%20Max-Sum%20algorithm%20for%20training%20discrete%20neural%20networks&sort=relevance&ae=false

https://arxiv.org/abs/1509.05753 Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

https://arxiv.org/pdf/1711.08141.pdf Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions

https://openreview.net/forum?id=B1IDRdeCW The High-Dimensional Geometry of Binary Neural Networks

https://arxiv.org/abs/1803.03004v1 Learning Effective Binary Visual Representations with Deep Networks

This paper proposes Approximately Binary Clamping (ABC), which is non-saturating, end-to-end trainable, converges quickly, and outputs true binary visual representations. ABC achieves accuracy comparable to its real-valued counterpart on ImageNet classification, and even generalizes better in object detection. On benchmark image retrieval datasets, ABC also outperforms existing hashing methods.
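
The abstract does not spell out the clamping function, so the PyTorch snippet below is only a hedged illustration of the general idea: an activation that keeps outputs close to {0, 1} but retains a small nonzero slope everywhere, so it is non-saturating and end-to-end trainable. The "leaky clamp" form and the 0.5 threshold are my own illustrative choices, not the paper's ABC definition.

```python
# Illustrative sketch only, NOT the paper's exact ABC function: a leaky clamp
# to [0, 1] whose gradient never vanishes, with hard binarization at test time.
import torch

class LeakyBinaryClamp(torch.nn.Module):
    """Leaky hard clamp to [0, 1]; a small slope outside the interval avoids saturation."""
    def __init__(self, slope: float = 0.01):
        super().__init__()
        self.slope = slope

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        clamped = torch.clamp(x, 0.0, 1.0)
        # identity inside [0, 1], small positive slope outside -> non-saturating
        return clamped + self.slope * (x - clamped)

    def binarize(self, x: torch.Tensor) -> torch.Tensor:
        # inference-time hard binarization of the (near-binary) activations
        return (self.forward(x) > 0.5).float()
```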

https://arxiv.org/abs/1803.07125v2

LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations.

We have built a convolution-free, end-to-end, bitwise LBPNet from scratch and verified its effectiveness on MNIST, SVHN, and CIFAR-10, with orders-of-magnitude speedup in testing (roughly a hundredfold) and model-size reduction (roughly a thousandfold) compared with the baseline and the binarized CNNs. The gains in both size and speed come from the convolution-free design, whose logic bitwise operations are learned directly from scratch.
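
For intuition about the "local binary comparisons" mentioned above, here is a minimal NumPy sketch of the classic, hand-crafted local binary pattern descriptor: each pixel is compared with its eight neighbours and the resulting comparison bits are packed into one byte. LBPNet learns its comparison patterns and combines them with random projections, so this fixed descriptor only illustrates the underlying bitwise operation, not the paper's network.

```python
# Classic (non-learned) local binary pattern: 8 neighbour comparisons per pixel,
# one bit each, packed into a uint8 code.
import numpy as np

def lbp_8neighbour(img: np.ndarray) -> np.ndarray:
    """Return the 8-bit LBP code of every interior pixel of a 2-D grayscale image."""
    c = img[1:-1, 1:-1]                          # centre pixels
    # offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        # one comparison produces one bit of the code
        code |= (neighbour >= c).astype(np.uint8) << np.uint8(bit)
    return code

img = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(np.uint8)
print(lbp_8neighbour(img))
```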

https://arxiv.org/abs/1711.06597v1 Deep Local Binary Patterns

https://arxiv.org/abs/1608.06049v2 Local Binary Convolutional Neural Networks