random_projections [2018/11/03 16:49] (current), admin

https://arxiv.org/abs/1801.10447v1 Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks

https://arxiv.org/abs/1712.07811v1 Multi-dimensional Graph Fourier Transform

https://www.biorxiv.org/content/biorxiv/early/2017/08/25/180471.full.pdf A neural algorithm for a fundamental computing problem

https://arxiv.org/abs/1802.09914v1 High-Dimensional Vector Semantics

In this paper we explore the "vector semantics" problem from the perspective of the "almost orthogonal" property of high-dimensional random vectors. We show that this intriguing property can be used to "memorize" random vectors by simply adding them together, and we provide an efficient probabilistic solution to the set membership problem. We also discuss several applications to word context vector embeddings, document sentence similarity, and spam filtering. In contrast to "expensive" machine learning methods, this method is very simple and does not even require a "learning" process, yet it exhibits similar properties.
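The mechanism the abstract describes can be sketched in a few lines: random bipolar vectors in high dimensions are nearly orthogonal, so a set can be "memorized" by summing its members, and membership tested by a dot product. This is a minimal illustration, not the paper's implementation; the vocabulary, dimensionality, and threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # in high dimensions, independent random vectors are nearly orthogonal

# Random bipolar (+1/-1) vectors for a small vocabulary of items.
items = {name: rng.choice([-1, 1], size=d) for name in ["cat", "dog", "car", "tree"]}

# "Memorize" a set by simply adding (bundling) its member vectors.
memory = items["cat"] + items["dog"] + items["car"]

def contains(memory, vec, threshold=0.3):
    """Probabilistic set-membership test via cosine similarity.

    A member contributes d to the dot product, while cross-terms with the
    other (almost orthogonal) members only add noise of order sqrt(d)."""
    sim = vec @ memory / (np.linalg.norm(vec) * np.linalg.norm(memory))
    return bool(sim > threshold)

print(contains(memory, items["cat"]))   # member: similarity around 1/sqrt(3)
print(contains(memory, items["tree"]))  # non-member: similarity near 0
```

As with a Bloom filter, the test is probabilistic: false positives become likely only when the number of bundled vectors grows large relative to the dimensionality.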

https://arxiv.org/abs/1706.00280v1 Integer Echo State Networks: Hyperdimensional Reservoir Computing

We propose an integer approximation of Echo State Networks (ESN) based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) contains only n-bit integers and replaces the recurrent matrix multiplication with an efficient cyclic shift operation. Such an architecture yields dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. The architecture also naturally supports using the trained reservoir for symbolic processing tasks such as analogy making and logical inference.

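The core update the abstract describes, a cyclic shift in place of the recurrent matrix multiply, integer addition of the encoded input, and clipping so small integers suffice, can be sketched as follows. This is a hedged sketch of the general idea, not the paper's exact update rule; the clipping bound `kappa`, the bipolar input encoding, and the state size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def intesn_step(state, input_vec, kappa=3):
    """One reservoir update in the intESN style: a cyclic shift replaces the
    O(d^2) recurrent matrix multiply, integer addition injects the encoded
    input, and clipping keeps every entry in [-kappa, kappa] so a few bits
    per element are enough."""
    recurrent = np.roll(state, 1)             # cyclic shift, O(d)
    new_state = recurrent + input_vec         # integer addition
    return np.clip(new_state, -kappa, kappa).astype(np.int8)

d = 1000
state = np.zeros(d, dtype=np.int8)
for _ in range(20):
    token = rng.choice([-1, 1], size=d).astype(np.int8)  # bipolar input encoding
    state = intesn_step(state, token)

print(str(state.dtype), int(state.min()), int(state.max()))
```

Because the state stays in int8 rather than float64, the reservoir's memory footprint shrinks by roughly 8x, which is the efficiency gain the abstract points to.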
https://arxiv.org/abs/1412.7026v2 Language Recognition using Random Indexing

https://arxiv.org/abs/1808.07172 Fisher Information and Natural Gradient Learning of Random Deep Networks

We obtain the inverse of the Fisher information matrix explicitly. This gives an explicit form of the natural gradient that avoids numerical matrix inversion, which drastically speeds up stochastic gradient learning.

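The benefit of a closed-form Fisher inverse can be seen even in a one-parameter toy problem. For the mean of a Gaussian, the Fisher information is known in closed form, F = 1/sigma^2, so the natural gradient F^{-1} * grad needs no numerical inversion at all. This toy example only illustrates that general point; the paper's explicit inverse is for random deep networks, not this model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the mean mu of N(mu, sigma^2) by ascending the log-likelihood.
sigma = 2.0
data = rng.normal(5.0, sigma, size=1000)

mu = 0.0
lr = 0.5
for _ in range(50):
    grad = np.mean(data - mu) / sigma**2  # gradient of the average log-likelihood
    natural_grad = sigma**2 * grad        # explicit F^{-1} = sigma^2, no inversion step
    mu += lr * natural_grad

print(mu)  # converges to the sample mean, near the true value 5.0
```

Note that the natural gradient step is invariant to how the parameter is scaled, which is why it converges at a rate independent of sigma here, while the plain gradient step would slow down as sigma grows.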
https://arxiv.org/abs/1412.6616v2 Outperforming Word2Vec on Analogy Tasks with Random Projections

https://arxiv.org/abs/1712.04323v2 Deep Echo State Network (DeepESN): A Brief Survey

https://arxiv.org/abs/1803.07125v2 Local Binary Pattern Networks

In this paper, we tackle the problem using a strategy different from the existing literature by proposing local binary pattern networks, or LBPNet, which is able to learn and perform binary operations in an end-to-end fashion. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations. These operations can be implemented efficiently on different platforms, including direct hardware implementation.
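The two ingredients named above, local binary comparisons and a random projection replacing convolution, can be sketched as follows. This is an illustration of the mechanism under simplifying assumptions (a fixed 3x3 neighborhood, comparison against the center pixel, a dense Gaussian projection), not the architecture from the paper, which learns sampling patterns end to end.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_binary_features(img, proj):
    """For each interior pixel, compare the 3x3 neighborhood against the
    center pixel (a local binary pattern), then map the resulting bit
    vector through a fixed random projection instead of a learned
    convolution kernel."""
    h, w = img.shape
    feats = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            bits = (patch >= img[i, j]).astype(np.float32).ravel()  # 9 binary comparisons
            feats.append(bits)
    feats = np.array(feats)  # (num_interior_pixels, 9)
    return feats @ proj      # random projection to the output channels

img = rng.random((8, 8)).astype(np.float32)
proj = rng.standard_normal((9, 4)).astype(np.float32)  # fixed, untrained projection
out = local_binary_features(img, proj)
print(out.shape)  # 36 interior pixels of an 8x8 image, 4 output channels
```

The appeal for hardware is visible even in this sketch: the comparisons are single-bit operations, and a fixed projection can be baked into the circuit, avoiding multiply-accumulate units for the convolution itself.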