https://mlatgt.blog/2018/02/08/syntax-directed-variational-autoencoder-for-structured-data/ Syntax-Directed Variational Autoencoder for Structured Data

https://github.com/PonyGE/PonyGE2 PonyGE2: grammatical evolution and variants in Python
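
PonyGE2's own documentation defines its real interface; the following is only a rough, hypothetical sketch of the genotype-to-phenotype mapping at the heart of grammatical evolution, with a toy grammar and parameters invented for illustration:

<code python>
import random

# Toy BNF-style grammar: each non-terminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<var>": [["x"], ["y"]],
}

def map_genotype(genome, start="<expr>", max_wraps=2):
    """Consume integer codons left to right, picking production
    (codon mod number_of_productions) for the leftmost non-terminal,
    until only terminals remain."""
    derivation = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in derivation):
        if i == len(genome):              # out of codons: wrap around
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None               # mapping failed; invalid individual
        idx = next(k for k, s in enumerate(derivation) if s in GRAMMAR)
        choices = GRAMMAR[derivation[idx]]
        derivation[idx:idx + 1] = choices[genome[i] % len(choices)]
        i += 1
    return " ".join(derivation)

genome = [random.randrange(100) for _ in range(10)]
print(map_genotype(genome))   # e.g. "x * y", or None if mapping failed
</code>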

https://arxiv.org/abs/1804.06516v2 Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization

To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator (such as lighting, pose, and object textures) are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest.

We have demonstrated that domain randomization (DR) is an effective technique to bridge the reality gap. Using synthetic DR data alone, we have trained a neural network to accomplish complex tasks like object detection with performance comparable to more labor-intensive (and therefore more expensive) datasets. By randomly perturbing the synthetic images during training, DR intentionally abandons photorealism to force the network to learn to focus on the relevant features. With fine-tuning on real images, we have shown that DR both outperforms more photorealistic datasets and improves upon results obtained using real data alone.
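
The paper implements this at the level of a rendering pipeline; below is only a minimal, hypothetical sketch of the idea in plain NumPy, treating an image as an H x W x 3 float array in [0, 1], with perturbation ranges invented for the example:

<code python>
import numpy as np

rng = np.random.default_rng(0)

def randomize_domain(image):
    """Apply non-realistic random perturbations to a synthetic RGB image
    (H x W x 3 float array in [0, 1]); ranges are invented for the sketch."""
    out = image.astype(np.float64)
    out *= rng.uniform(0.5, 1.5)                   # random global lighting
    out += rng.uniform(-0.2, 0.2, size=3)          # per-channel color shift
    mix = np.eye(3) + rng.normal(0.0, 0.15, (3, 3))
    out = out @ mix.T                              # unrealistic channel mixing
    out += rng.normal(0.0, 0.05, out.shape)        # noise as crude "texture"
    return np.clip(out, 0.0, 1.0)

# Each training step sees a freshly randomized copy of a synthetic image,
# so the network cannot latch onto simulator-specific appearance.
synthetic_image = rng.random((64, 64, 3))
augmented = randomize_domain(synthetic_image)
</code>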

https://arxiv.org/pdf/1805.10561v1.pdf Adversarial Constraint Learning for Structured Prediction

Learning requires a black-box simulator of structured outputs, which generates valid labels but need not model their corresponding inputs or the input-label relationship. At training time, we constrain the model to produce outputs that cannot be distinguished from simulated labels by adversarial training. Providing our framework with a small number of labeled inputs gives rise to a new semi-supervised structured prediction model; we evaluate this model on multiple tasks (tracking, pose estimation, and time series prediction) and find that it achieves high accuracy with only a small number of labeled inputs. In some cases, no labels are required at all.
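
A toy version of this training loop can be sketched as follows; this is not the paper's code, the dimensions are invented, and simulate_labels is a stand-in for the black-box simulator. The discriminator learns to tell simulated labels from model outputs, and the model is trained to fool it:

<code python>
import torch
import torch.nn as nn

X_DIM, Y_DIM, BATCH = 16, 8, 32   # invented toy dimensions

model = nn.Sequential(nn.Linear(X_DIM, 64), nn.ReLU(), nn.Linear(64, Y_DIM))
disc = nn.Sequential(nn.Linear(Y_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt_m = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def simulate_labels(n):
    """Stand-in for the black-box simulator: emits valid structured labels
    (here, monotone sequences) without modeling any input."""
    return torch.sort(torch.randn(n, Y_DIM), dim=1).values

for step in range(1000):
    x = torch.randn(BATCH, X_DIM)     # unlabeled inputs
    y_sim = simulate_labels(BATCH)    # simulator-drawn valid labels

    # Discriminator: separate simulated labels from model outputs.
    d_loss = (bce(disc(y_sim), torch.ones(BATCH, 1))
              + bce(disc(model(x).detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Model: produce outputs the discriminator cannot tell apart from
    # simulated labels. A supervised loss on a few labeled pairs would
    # turn this into the semi-supervised variant.
    g_loss = bce(disc(model(x)), torch.ones(BATCH, 1))
    opt_m.zero_grad()
    g_loss.backward()
    opt_m.step()
</code>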

https://arxiv.org/abs/1809.01219v1 Graph-based Deep-Tree Recursive Neural Network (DTRNN) for Text Classification

The deep-tree generation (DTG) method can generate a richer and more accurate representation for nodes (or vertices) in graphs. It adds flexibility in exploring the vertex neighborhood information to better reflect the second-order proximity and homophily equivalence in a graph.
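
The paper specifies the DTG procedure precisely; the following is only a rough, hypothetical sketch of the underlying idea of unrolling a vertex's neighborhood into a tree by breadth-first traversal, so that vertices with shared neighborhoods end up with similar subtrees:

<code python>
from collections import deque

def deep_tree(adj, root, max_depth=2):
    """Rough sketch: unroll the neighborhood of `root` into a tree by
    breadth-first traversal. A graph vertex may reappear on different
    branches, so each tree node gets a unique (vertex, id) key."""
    root_key = (root, 0)
    tree = {root_key: []}
    queue = deque([(root_key, root, 0)])
    next_id = 1
    while queue:
        key, vertex, depth = queue.popleft()
        if depth == max_depth:
            continue
        for nbr in adj.get(vertex, []):
            child_key = (nbr, next_id)
            next_id += 1
            tree[key].append(child_key)
            tree[child_key] = []
            queue.append((child_key, nbr, depth + 1))
    return tree

# Toy graph: vertices sharing neighbors end up with similar subtrees,
# which is the second-order-proximity intuition.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(deep_tree(graph, "a"))
</code>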

https://arxiv.org/abs/1811.11264v1 Synthesizing Tabular Data using Generative Adversarial Networks

Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from it. This paper presents Tabular GAN (TGAN), a generative adversarial network that can generate tabular data such as medical or educational records. Using the power of deep neural networks, TGAN generates high-quality and fully synthetic tables while simultaneously generating discrete and continuous variables. When we evaluate our model on three datasets, we find that TGAN outperforms conventional statistical generative models both in capturing the correlation between columns and in scaling up to large datasets.
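
TGAN's actual architecture differs (the paper uses a recurrent generator with its own preprocessing of continuous columns); the sketch below shows only the mixed-type output idea, with invented dimensions: continuous columns through a tanh head, and a categorical column through a Gumbel-softmax head so the discrete choice stays differentiable for adversarial training:

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedTypeGenerator(nn.Module):
    """Sketch of a tabular-GAN generator head (dimensions invented):
    one noise vector decodes into continuous columns plus a one-hot
    categorical column."""
    def __init__(self, z_dim=32, n_continuous=3, n_categories=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU())
        self.cont_head = nn.Linear(128, n_continuous)
        self.cat_head = nn.Linear(128, n_categories)

    def forward(self, z, tau=0.5):
        h = self.body(z)
        cont = torch.tanh(self.cont_head(h))            # continuous columns
        # Gumbel-softmax keeps the discrete column differentiable,
        # so discriminator gradients can reach the generator.
        cat = F.gumbel_softmax(self.cat_head(h), tau=tau)
        return torch.cat([cont, cat], dim=1)

gen = MixedTypeGenerator()
rows = gen(torch.randn(5, 32))    # 5 synthetic rows: 3 continuous + 4 one-hot
print(rows.shape)                 # torch.Size([5, 7])
</code>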

https://arxiv.org/abs/1806.03384 Data Synthesis based on Generative Adversarial Networks