Style Transfer

https://arxiv.org/abs/1508.06576v2 A Neural Algorithm of Artistic Style

http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf Image Style Transfer Using Convolutional Neural Networks

https://www.microsoft.com/en-us/research/publication/persona-based-neural-conversation-model A Persona-Based Neural Conversation Model

We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges.
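
A minimal, hypothetical sketch of the speaker-model idea (not the paper's exact architecture): each speaker gets a learned embedding vector that is concatenated to the decoder input at every step, so the decoder can condition its phrasing on who is speaking. All names and sizes below are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: condition an LSTM decoder on a learned speaker embedding.
# Sizes and wiring are illustrative, not the paper's exact configuration.
class PersonaDecoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=256, persona_dim=64,
                 hidden_dim=512, num_speakers=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.speaker_emb = nn.Embedding(num_speakers, persona_dim)  # one vector per speaker
        # the speaker vector is appended to every decoder input step
        self.lstm = nn.LSTM(emb_dim + persona_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, speaker_ids, state=None):
        # tokens: (batch, seq_len); speaker_ids: (batch,)
        w = self.word_emb(tokens)
        s = self.speaker_emb(speaker_ids).unsqueeze(1).expand(-1, tokens.size(1), -1)
        h, state = self.lstm(torch.cat([w, s], dim=-1), state)
        return self.out(h), state

logits, _ = PersonaDecoder()(torch.randint(0, 10000, (2, 7)), torch.tensor([3, 42]))
```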

http://arxiv.org/pdf/1603.05631.pdf Generative Image Modeling using Style and Structure Adversarial Networks

In this paper, we factorize the image generation process and propose the Style and Structure Generative Adversarial Network (S^2-GAN): a Structure-GAN first generates the underlying 3D structure of the scene as a surface-normal map, and a Style-GAN then renders a natural image conditioned on that map.

https://research.googleblog.com/2016/10/supercharging-style-transfer.html

We introduce a simple method to allow a single deep convolutional style transfer network to learn multiple styles at the same time.
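
The mechanism described in the accompanying paper (A Learned Representation For Artistic Style, linked below) is conditional instance normalization: all convolutional weights are shared across styles, and each style owns only a per-channel scale and shift. A minimal sketch, with illustrative shapes and initialization:

```python
import torch
import torch.nn as nn

# Sketch of conditional instance normalization: the network body is shared,
# and each style contributes only its own per-channel scale (gamma) and shift (beta).
class ConditionalInstanceNorm2d(nn.Module):
    def __init__(self, num_channels, num_styles):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.gamma = nn.Embedding(num_styles, num_channels)  # per-style scale
        self.beta = nn.Embedding(num_styles, num_channels)   # per-style shift
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, style_id):
        # x: (batch, channels, height, width); style_id: (batch,) long tensor
        g = self.gamma(style_id).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(style_id).unsqueeze(-1).unsqueeze(-1)
        return g * self.norm(x) + b

cin = ConditionalInstanceNorm2d(num_channels=64, num_styles=32)
y = cin(torch.randn(2, 64, 16, 16), torch.tensor([0, 7]))
```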

https://www.youtube.com/watch?v=WHmp26bh0tI

https://arxiv.org/abs/1603.03417 Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

https://arxiv.org/abs/1603.08155 Perceptual Losses for Real-Time Style Transfer and Super-Resolution

https://arxiv.org/abs/1610.07629v1 A Learned Representation For Artistic Style

https://arxiv.org/pdf/1606.04474v1.pdf Learning to learn by gradient descent by gradient descent

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
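
A hedged, minimal sketch of the coordinate-wise learned-optimizer idea: a small LSTM reads each parameter's gradient and proposes its update. Training the optimizer itself (by unrolling it over many optimizee steps) is omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

# Minimal sketch of a learned, coordinate-wise optimizer: an LSTM maps each
# parameter's gradient to a proposed update. This is only the inference side;
# the meta-training loop that fits the LSTM is not shown.
class LearnedOptimizer(nn.Module):
    def __init__(self, hidden_dim=20):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden_dim)     # one coordinate at a time
        self.to_update = nn.Linear(hidden_dim, 1)

    def forward(self, grads, state=None):
        # grads: (num_coordinates, 1) -- flattened gradients treated as a batch
        h, c = self.lstm(grads, state)
        return self.to_update(h), (h, c)           # proposed update per coordinate

# Usage sketch: apply the learned update instead of a hand-designed rule.
params = torch.randn(5, requires_grad=True)
loss = (params ** 2).sum()
loss.backward()
opt = LearnedOptimizer()
update, state = opt(params.grad.detach().view(-1, 1))
with torch.no_grad():
    params += update.view_as(params)
```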

https://medium.com/@lherrera/how-to-fake-it-as-an-artist-with-docker-aws-and-deep-learning-6d42f4acd890#.k3bz147z7

https://arxiv.org/abs/1701.01036 Demystifying Neural Style Transfer

Neural style transfer has recently demonstrated very exciting results and has drawn wide attention in both academia and industry. Despite the impressive results, the principle behind neural style transfer, especially why Gram matrices can represent style, remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second-order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style image and the generated image. To further support this standpoint, we experiment with several other distribution alignment methods and achieve appealing results. We believe this interpretation connects these two important research fields and could inform future research.
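
The claimed equivalence is easy to check numerically: for feature maps Fx, Fy of shape (C, N) (channels x spatial positions), the Gram style loss ||Fx Fx^T - Fy Fy^T||_F^2 equals N^2 times the (biased) squared MMD between the column feature vectors under the kernel k(a, b) = (a . b)^2. A small sanity check with arbitrary sizes:

```python
import torch

# Numerical check: squared Frobenius distance between Gram matrices equals the
# (biased) squared MMD with kernel k(a, b) = (a . b)^2, up to the 1/N^2 factor.
C, N = 16, 100                       # channels, spatial positions (arbitrary)
Fx = torch.randn(C, N)               # features of the generated image (one layer)
Fy = torch.randn(C, N)               # features of the style image

gram_loss = ((Fx @ Fx.t() - Fy @ Fy.t()) ** 2).sum()

# MMD^2 over the N column vectors with a second-order polynomial kernel
k = lambda A, B: (A.t() @ B) ** 2    # entry (i, j) = (a_i . b_j)^2
mmd2 = (k(Fx, Fx).sum() - 2 * k(Fx, Fy).sum() + k(Fy, Fy).sum()) / N**2

print(torch.allclose(gram_loss / N**2, mmd2))  # True
```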

https://arxiv.org/abs/1701.08893 Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses

This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates the instability issues of earlier neural texture synthesis and style transfer methods. We first give a mathematical explanation of the source of instabilities in many previous approaches. We then address these instabilities by using histogram losses to synthesize textures that better match the statistics of the exemplar. We also show how to integrate localized style losses into our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint-by-numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and greater stability during optimization.
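
A simplified sketch of the histogram-matching idea (not the paper's exact loss): remap each channel of the synthesized features so its empirical distribution matches the exemplar's, here via sorting/quantile matching, and penalize the distance to that remapped target.

```python
import torch

# Simplified histogram-style loss: per-channel quantile matching via sorting,
# then L2 to the matched target. Shapes and the exact formulation are illustrative.
def histogram_loss(feat_syn, feat_ref):
    # feat_syn, feat_ref: (channels, num_positions), assumed the same size here
    syn_sorted, syn_idx = feat_syn.sort(dim=1)
    ref_sorted, _ = feat_ref.sort(dim=1)
    # place the reference's sorted values back at the synthesized ordering
    matched = torch.empty_like(feat_syn)
    matched.scatter_(1, syn_idx, ref_sorted)
    return ((feat_syn - matched.detach()) ** 2).mean()

syn = torch.randn(64, 1024, requires_grad=True)
loss = histogram_loss(syn, torch.randn(64, 1024))
loss.backward()
```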

https://arxiv.org/pdf/1703.06868v1.pdf Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization

Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.

Instance normalization performs style normalization by normalizing feature statistics. Motivated by this interpretation, we present a simple extension named adaptive instance normalization (AdaIN) that can adapt to arbitrary styles. Thanks to the AdaIN layer, our style transfer network achieves arbitrary style transfer in real time for the first time. https://github.com/xunhuang1995/AdaIN-style
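
The AdaIN operation itself is compact: normalize the content features with their own per-channel mean and standard deviation, then rescale and shift them with the style features' statistics. A minimal sketch, with statistics taken over spatial positions per sample and channel:

```python
import torch

# AdaIN: align the per-channel mean/std of the content features to those of the style features.
def adain(content, style, eps=1e-5):
    # content, style: (batch, channels, height, width)
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

stylized = adain(torch.randn(1, 512, 32, 32), torch.randn(1, 512, 32, 32))
```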

https://arxiv.org/pdf/1703.06953.pdf Multi-style Generative Network for Real-time Transfer

We introduce a Multi-style Generative Network (MSG-Net) with a novel Inspiration Layer, which retains the functionality of optimization-based approaches and has the fast speed of feed-forward networks. The proposed Inspiration Layer explicitly matches the feature statistics with the target styles at run time, which dramatically improves the versatility of existing generative networks, so that multiple styles can be realized within one network.

https://arxiv.org/pdf/1703.07511v1.pdf Deep Photo Style Transfer

http://arxiv.org/pdf/1703.09210v2.pdf StyleBank: An Explicit Representation for Neural Image Style Transfer

https://arxiv.org/pdf/1705.01088.pdf Visual Attribute Transfer through Deep Image Analogy

https://arxiv.org/abs/1702.06762v2 Style Transfer Generative Adversarial Networks: Learning to Play Chess Differently

The idea of style transfer has largely only been explored in image-based tasks, which we attribute in part to the specific nature of loss functions used for style transfer. We propose a general formulation of style transfer as an extension of generative adversarial networks, by using a discriminator to regularize a generator with an otherwise separate loss function. We apply our approach to the task of learning to play chess in the style of a specific player, and present empirical evidence for the viability of our approach.
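
A hedged sketch of that general formulation with toy stand-in networks: the generator keeps its separate task loss, and a discriminator trained to recognize the target style contributes a regularizing adversarial term. All modules, sizes, and the weight lam below are illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Linear(8, 8)       # stand-in for the move/policy generator
discriminator = nn.Linear(8, 1)   # stand-in for the style discriminator

def generator_loss(x, target, lam=0.1):
    out = generator(x)
    task = F.mse_loss(out, target)                      # separate, task-specific loss
    adv = F.binary_cross_entropy_with_logits(           # "looks like the target style"
        discriminator(out), torch.ones(x.size(0), 1))
    return task + lam * adv

loss = generator_loss(torch.randn(4, 8), torch.randn(4, 8))
loss.backward()
```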

https://arxiv.org/abs/1706.02861v2 Assigning personality/identity to a chatting machine for coherent conversation generation

We design a model consisting of three modules: a profile detector to decide whether a post should be responded to using the profile, and which profile key should be addressed; a bidirectional decoder to generate responses forward and backward starting from a selected profile value; and a position detector that predicts the word position from which decoding should start given a selected profile value. We show that general conversation data from social media can be used to generate profile-coherent responses.

https://arxiv.org/abs/1711.00889 Structured Generative Adversarial Networks

https://github.com/VinceMarron/style_transfer Style Transfer as Optimal Transport

https://nlp.stanford.edu/pubs/li2018transfer.pdf Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer

In this paper, we propose simpler methods motivated by the observation that text attributes are often marked by distinctive phrases (e.g., “too small”). Our strongest method extracts content words by deleting phrases associated with the sentence’s original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output.
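
A simplified sketch of the "delete" step, with assumed smoothing and threshold values: score tokens by how strongly they associate with the source attribute and strip the high-salience ones, leaving the content words.

```python
from collections import Counter

# Illustrative salience scoring: smoothed relative frequency of a token under the
# source attribute vs. the target attribute. Smoothing and threshold are assumptions.
def salience(token, src_counts, other_counts, smooth=1.0):
    return (src_counts[token] + smooth) / (src_counts[token] + other_counts[token] + 2 * smooth)

def delete_markers(tokens, src_counts, other_counts, threshold=0.6):
    # keep only tokens that are not strong markers of the source attribute
    return [t for t in tokens if salience(t, src_counts, other_counts) < threshold]

negative = Counter("the portions were too small and the service was slow".split())
positive = Counter("the portions were generous and the service was friendly".split())
print(delete_markers("the fish was too small".split(), negative, positive))
# -> ['the', 'fish', 'was']  ("too" and "small" are negative-attribute markers)
```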

https://arxiv.org/abs/1808.10122v1 Learning Neural Templates for Text Generation

Encoder-decoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semi-Markov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable.

https://web.cs.hacettepe.edu.tr/~karacan/projects/attribute_hallucination/# Manipulating Attributes of Natural Scenes via Hallucination

https://arxiv.org/abs/1810.01175 Line Drawings from 3D Models

https://hal.inria.fr/hal-01802131v2/document Unsupervised Learning of Artistic Styles with Archetypal Style Analysis

https://compvis.github.io/adaptive-style-transfer/ A Style-Aware Content Loss for Real-time HD Style Transfer