We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture.
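The sketch below illustrates how the three NAO components fit together; it is not the authors' implementation. It assumes an architecture is described as a fixed-length sequence of discrete tokens, and the names and sizes (VOCAB_SIZE, SEQ_LEN, HIDDEN) are hypothetical placeholders.

<code python>
# Minimal sketch of NAO's three components (encoder, predictor, decoder),
# assuming architectures are fixed-length sequences of discrete tokens.
import torch
import torch.nn as nn

VOCAB_SIZE = 12   # hypothetical number of architecture tokens (ops, connections)
SEQ_LEN = 20      # hypothetical length of an architecture description
HIDDEN = 64       # size of the continuous representation

class Encoder(nn.Module):
    """Maps a token sequence describing an architecture to a continuous embedding."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
    def forward(self, tokens):                   # tokens: (batch, SEQ_LEN)
        h, _ = self.rnn(self.embed(tokens))
        return h.mean(dim=1)                     # (batch, HIDDEN) continuous representation

class Predictor(nn.Module):
    """Predicts the accuracy of an architecture from its continuous representation."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))
    def forward(self, z):
        return self.mlp(z).squeeze(-1)           # (batch,) predicted accuracy

class Decoder(nn.Module):
    """Maps a continuous representation back to architecture tokens."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)
    def forward(self, z):
        # Feed the representation at every step; a real decoder would be autoregressive.
        h, _ = self.rnn(z.unsqueeze(1).repeat(1, SEQ_LEN, 1))
        return self.out(h)                       # (batch, SEQ_LEN, VOCAB_SIZE) token logits

# Usage: embed an architecture, predict its accuracy, decode back to tokens.
tokens = torch.randint(0, VOCAB_SIZE, (4, SEQ_LEN))
z = Encoder()(tokens)
accuracy = Predictor()(z)
reconstruction_logits = Decoder()(z)
</code>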
https://arxiv.org/abs/1808.03867 Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction