  
This might explain the relatively poor practical performance of variational methods in deep learning. On the other hand, simple incremental encoding methods yield excellent compression values on deep networks, vindicating Solomonoff's approach.

https://arxiv.org/abs/1810.09274v1 From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference

https://arxiv.org/abs/1807.10251v2 Aggregated Learning: A Vector Quantization Approach to Learning with Neural Networks

https://arxiv.org/abs/1704.02681v1 Pyramid Vector Quantization for Deep Learning