However, these have been easy to choose, and we do not expect that a lot of tedious fine-tuning is required in the general case.

https://arxiv.org/abs/1710.05941 Swish: a Self-Gated Activation Function
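
Swish is defined as f(x) = x · sigmoid(βx). A minimal NumPy sketch (treating β as a fixed constant here, though the paper also considers learning it):

<code python>
import numpy as np

def swish(x, beta=1.0):
    # Swish: f(x) = x * sigmoid(beta * x).
    # With beta = 1 this is the SiLU; as beta grows it approaches ReLU,
    # and at beta = 0 it degenerates to the linear function x / 2.
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

# Example: smooth and non-monotonic just below zero
print(swish(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
</code>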

https://arxiv.org/pdf/1712.01897.pdf Online Learning with Gated Linear Networks

Rather than relying on non-linear transfer functions, our method gains representational power by the use of data conditioning. We state under general conditions a learnable capacity theorem that shows this approach can in principle learn any bounded Borel-measurable function on a compact subset of Euclidean space; the result is stronger than many universality results for connectionist architectures because we provide both the model and the learning procedure for which convergence is guaranteed.
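
As a rough illustration of the data-conditioning idea, here is a minimal NumPy sketch of a single gated linear neuron using halfspace gating and geometric mixing in log-odds space, as described in the paper; the class name, hyperparameters, and toy usage below are illustrative, not the paper's reference implementation.

<code python>
import numpy as np

rng = np.random.default_rng(0)

def halfspace_context(z, hyperplanes):
    # Which side of each fixed random hyperplane does z fall on?
    # The resulting bit pattern indexes one of 2^k weight vectors:
    # this is the data-conditioning step, not a nonlinearity.
    bits = (hyperplanes @ z > 0).astype(int)
    return int(bits @ (2 ** np.arange(len(bits))))

class GatedLinearNeuron:
    # One neuron: the context selects a weight vector, and given that
    # context the prediction is linear in the log-odds of the inputs,
    # so online logistic-loss updates solve a convex problem.
    def __init__(self, n_inputs, side_dim, n_hyperplanes=4, lr=0.1):
        self.H = rng.standard_normal((n_hyperplanes, side_dim))
        self.W = np.full((2 ** n_hyperplanes, n_inputs), 1.0 / n_inputs)
        self.lr = lr

    def predict(self, p, z):
        # p: probabilities from the layer below; z: raw side information.
        c = halfspace_context(z, self.H)
        pc = np.clip(p, 1e-6, 1 - 1e-6)
        logits = np.log(pc) - np.log1p(-pc)   # logit(p)
        pred = 1.0 / (1.0 + np.exp(-self.W[c] @ logits))
        return pred, c, logits

    def update(self, p, z, target):
        # Gradient step on the logistic loss for the active context only.
        pred, c, logits = self.predict(p, z)
        self.W[c] -= self.lr * (pred - target) * logits

# Toy usage: mix two base probabilities, gated on 3-d side information.
neuron = GatedLinearNeuron(n_inputs=2, side_dim=3)
p = np.array([0.7, 0.4])
z = np.array([0.5, -1.0, 2.0])
neuron.update(p, z, target=1.0)
</code>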

https://arxiv.org/abs/1811.05381v1 Sorting out Lipschitz function approximation

Training neural networks subject to a Lipschitz constraint is useful for generalization bounds, provable adversarial robustness, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation function is 1-Lipschitz. The challenge is to do this while maintaining the expressive power. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.
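
A minimal NumPy sketch of the GroupSort activation (group size 2, the MaxMin special case, is just one choice): it sorts consecutive groups of pre-activations, which is a permutation of its input and therefore preserves the gradient norm during backpropagation.

<code python>
import numpy as np

def group_sort(x, group_size=2):
    # Sort entries within each group of group_size consecutive units.
    # A sort is a (data-dependent) permutation, so the Jacobian is a
    # permutation matrix and the gradient norm is preserved exactly.
    assert x.size % group_size == 0
    groups = x.reshape(-1, group_size)
    return np.sort(groups, axis=1).reshape(x.shape)

# Example: groups (3, -1) and (0, 2) become (-1, 3) and (0, 2)
print(group_sort(np.array([3.0, -1.0, 0.0, 2.0])))
</code>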