https://arxiv.org/pdf/1703.04361.pdf Toward a Formal Model of Cognitive Synergy

Cognitive synergy has been posited as a key feature of real-world general intelligence, and has been used explicitly in the design of the OpenCog cognitive architecture. Here category theory and related concepts are used to give a formalization of the cognitive synergy concept.

http://www.cl.cam.ac.uk/research/rainbow/projects/shape2vec/ https://github.com/ftasse/Shape2Vec

A neural network is trained to generate shape descriptors that lie close to a vector representation of the shape class, given a vector space of words. The method extends easily to range scans, hand-drawn sketches and images, making cross-modal retrieval possible without the need to design different methods for each query type. We show that sketch-based shape retrieval using semantic-based descriptors outperforms the state-of-the-art by large margins, and that mesh-based retrieval generates results of higher relevance to the query than current deep shape descriptors do.
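A minimal sketch of the training idea, under assumptions of my own (the encoder layers, feature sizes, and MSE objective below are illustrative stand-ins, not the paper's architecture): a descriptor network is regressed toward the word vector of the shape's class, so retrieval across modalities reduces to nearest-neighbour search in the shared word-vector space.

```python
import torch
import torch.nn as nn

WORD_DIM = 300  # e.g. dimensionality of pretrained word vectors

class ShapeEncoder(nn.Module):
    def __init__(self, in_dim=1024, word_dim=WORD_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, word_dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = ShapeEncoder()
shapes = torch.randn(8, 1024)          # stand-in shape features
class_vecs = torch.randn(8, WORD_DIM)  # word vectors of the class labels

# pull each shape descriptor toward its class label's word vector
loss = nn.functional.mse_loss(encoder(shapes), class_vecs)
loss.backward()
```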

http://sloanreview.mit.edu/article/harnessing-the-secret-structure-of-innovation/

https://arxiv.org/pdf/1704.00717v1.pdf It Takes Two to Tango: Towards Theory of AI’s Mind

In this work, we argue that for human-AI teams to be effective, humans must also develop a theory of AI’s mind – get to know its strengths, weaknesses, beliefs, and quirks. We instantiate these ideas within the domain of Visual Question Answering (VQA). We find that using just a few examples (50), lay people can be trained to better predict responses and oncoming failures of a complex VQA model. Surprisingly, we find that having access to the model’s internal states – its confidence in its top-k predictions, explicit or implicit attention maps which highlight regions in the image (and words in the question) the model is looking at (and listening to) while answering a question about an image – does not help people better predict its behavior.

https://arxiv.org/pdf/1704.04517v1.pdf SHAPEWORLD: A new test methodology for multimodal language understanding

We introduce a novel framework for evaluating multimodal deep learning models with respect to their language understanding and generalization abilities. In this approach, artificial data is automatically generated according to the experimenter’s specifications. The content of the data, both during training and evaluation, can be controlled in detail, which enables tasks to be created that require true generalization abilities, in particular the combination of previously introduced concepts in novel ways. We demonstrate the potential of our methodology by evaluating various visual question answering models on four different tasks, and show how our framework gives us detailed insights into their capabilities and limitations. By open-sourcing our framework, we hope to stimulate progress in the field of multimodal language understanding.

https://arxiv.org/abs/1704.06340 Identifying First-person Camera Wearers in Third-person Videos

In this paper, we propose a new semi-Siamese Convolutional Neural Network architecture to address the novel challenge of identifying first-person camera wearers in third-person videos. We formulate the problem as learning a joint embedding space for first- and third-person videos that considers both spatial- and motion-domain cues. A new triplet loss function is designed to minimize the distance between correct first- and third-person matches while maximizing the distance between incorrect ones. This end-to-end approach performs significantly better than several baselines, in part by learning first- and third-person features optimized for matching jointly with the distance measure itself.
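A hedged sketch of the triplet objective described above (the two branch networks and feature sizes are placeholders, not the paper's semi-Siamese CNN): matching first-/third-person pairs are pulled together and mismatched pairs pushed apart by at least a margin.

```python
import torch
import torch.nn as nn

embed_first = nn.Linear(512, 128)   # first-person branch (stand-in)
embed_third = nn.Linear(512, 128)   # third-person branch (stand-in)

anchor   = embed_first(torch.randn(16, 512))  # first-person clip features
positive = embed_third(torch.randn(16, 512))  # correct third-person match
negative = embed_third(torch.randn(16, 512))  # incorrect match

# standard triplet margin loss over the joint embedding space
loss = nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)
loss.backward()
```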

https://arxiv.org/pdf/1705.01088v1.pdf Visual Attribute Transfer through Deep Image Analogy

https://arxiv.org/pdf/1705.04416.pdf Evaluating vector-space models of analogy

We evaluate the parallelogram model of analogy as applied to modern word embeddings, providing a detailed analysis of the extent to which this approach captures human relational similarity judgments in a large benchmark dataset. We find that some semantic relationships are better captured than others. We then provide evidence for deeper limitations of the parallelogram model based on the intrinsic geometric constraints of vector spaces, paralleling classic results for first-order similarity.
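The parallelogram model under evaluation has a standard word-vector form: to solve a : b :: c : ?, offset c by the relation vector b - a and return the nearest remaining word. The toy vectors below are invented for illustration; the paper evaluates trained embeddings against human judgments.

```python
import numpy as np

vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "man":   np.array([0.1, 0.2, 0.1]),
    "woman": np.array([0.1, 0.2, 0.9]),
}

def analogy(a, b, c):
    # parallelogram prediction: d = b - a + c
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # return the remaining word closest to the predicted point
    return max(candidates, key=lambda w: cos(vocab[w], target))

print(analogy("man", "king", "woman"))  # -> "queen" for these toy vectors
```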

https://arxiv.org/pdf/1608.01403v1.pdf Words, Concepts, and the Geometry of Analogy

https://arxiv.org/pdf/1705.10732v1.pdf Deep manifold-to-manifold transforming network for action recognition

https://arxiv.org/pdf/1705.10762v1.pdf Generative Models of Visually Grounded Imagination

https://arxiv.org/pdf/1611.06345v4.pdf Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification

When an image contains many patterns and structures, however, the performance of existing CNN-based restoration methods is still inferior. To address this issue, we propose a novel feature-space deep residual learning algorithm that outperforms existing residual learning. The main idea originates from the observation that the performance of a learning algorithm can be improved if the input and/or label manifolds can be made topologically simpler by an analytic mapping to a feature space.

https://arxiv.org/pdf/1706.05137.pdf One Model To Learn Them All

To allow training on input data of widely different sizes and dimensions, such as images, sound waves and text, we need sub-networks to convert inputs into a joint representation space. We call these sub-networks modality nets as they are specific to each modality (images, speech, text) and define transformations between these external domains and a unified representation.
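A minimal sketch of the modality-net idea, with toy sizes and layers of my own choosing (not the MultiModel's actual sub-networks): each modality has its own small net whose only job is to map into a shared D-dimensional space that one central body can consume.

```python
import torch
import torch.nn as nn

D = 512  # dimensionality of the unified representation space

image_net = nn.Sequential(          # images -> joint space
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, D),
)
text_net = nn.Sequential(           # token ids -> joint space
    nn.Embedding(10000, D),
)

img_repr  = image_net(torch.randn(2, 3, 64, 64))       # shape (2, D)
text_repr = text_net(torch.randint(0, 10000, (2, 7)))  # shape (2, 7, D)
# both now live in the same D-dim space, so a single shared body
# can process either modality downstream
```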

https://arxiv.org/abs/1706.01983 Deep Learning: Generalization Requires Deep Compositional Feature Space Design

Generalization error defines the discriminability and the representation power of a deep model. In this work, we claim that feature space design using deep compositional functions plays a significant role in generalization, along with explicit and implicit regularization. We establish these claims with several image classification experiments. We show that the information loss due to convolution and max pooling can be marginalized by the compositional design, improving generalization performance. We also show that learning-rate decay acts as an implicit regularizer in deep model training.

https://arxiv.org/abs/1701.00464v1 Conceptual Spaces for Cognitive Architectures: A Lingua Franca for Different Levels of Representation

In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors (1997) in defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one.

http://www.hyadatalab.com/papers/analogy-kdd17.pdf

https://arxiv.org/abs/1707.03389v2 SCAN: Learning Abstract Hierarchical Compositional Visual Concepts

This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such concepts in the visual domain. We first use the previously published beta-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association. Our approach requires very few pairings between symbols and images and makes no assumptions about the choice of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of compositional visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to invent and learn novel visual concepts through recombination of the few learnt concepts.

https://arxiv.org/abs/1705.08142 https://github.com/sebastianruder/sluice-networks Sluice networks: Learning what to share between loosely related tasks

To overcome this, we introduce Sluice Networks, a general framework for multi-task learning where trainable parameters control the amount of sharing – including which parts of the models to share. Our framework generalizes previous proposals by enabling hard or soft sharing of all combinations of subspaces, layers, and skip connections. We perform experiments on three task pairs from natural language processing, and across seven different domains, using data from OntoNotes 5.0, and achieve up to 15% average error reductions over common approaches to multi-task learning. We analyze when the architecture is particularly helpful, as well as its ability to fit noise. We show that a) label entropy is predictive of gains in sluice networks, confirming findings for hard parameter sharing, and b) while sluice networks easily fit noise, they are robust across domains in practice.
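A hedged sketch of the core mechanism, reduced to two tasks and one layer (the sizes and the single alpha matrix are illustrative; the paper also shares subspaces and skip connections): trainable alpha weights decide how much each task's next layer reads from its own stream versus the other task's stream.

```python
import torch
import torch.nn as nn

class SluiceLayer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.layer_a = nn.Linear(dim, dim)   # task A's layer
        self.layer_b = nn.Linear(dim, dim)   # task B's layer
        # alpha[i, j]: how much of stream j flows into task i's input
        self.alpha = nn.Parameter(torch.eye(2))

    def forward(self, h_a, h_b):
        mix_a = self.alpha[0, 0] * h_a + self.alpha[0, 1] * h_b
        mix_b = self.alpha[1, 0] * h_a + self.alpha[1, 1] * h_b
        return self.layer_a(mix_a), self.layer_b(mix_b)

h_a, h_b = torch.randn(4, 128), torch.randn(4, 128)
out_a, out_b = SluiceLayer()(h_a, h_b)  # alphas train with everything else
```

Initializing alpha to the identity starts the two streams fully separate; training then learns how much to share.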

https://arxiv.org/abs/1611.01891 Joint Multimodal Learning with Deep Generative Models

https://arxiv.org/pdf/1505.07909v1.pdf Solving Verbal Comprehension Questions in IQ Test by Knowledge-Powered Word Embedding

https://arxiv.org/pdf/1702.04638.pdf A Spacetime Approach to Generalized Cognitive Reasoning in Multi-scale Learning

https://arxiv.org/pdf/1708.00571.pdf Tropical hyperelliptic curves in the plane

https://arxiv.org/abs/1703.05908v2 Learning Robust Visual-Semantic Embeddings

https://arxiv.org/abs/1708.06734v1 Representation Learning by Learning to Count

https://arxiv.org/pdf/1708.05263v3.pdf The Size of a Hyperball in a Conceptual Space

The cognitive framework of conceptual spaces [3] provides geometric means for representing knowledge. A conceptual space is a high-dimensional space whose dimensions are partitioned into so-called domains. Within each domain, the Euclidean metric is used to compute distances. Distances in the overall space are computed by applying the Manhattan metric to the intra-domain distances. Instances are represented as points in this space and concepts are represented by regions.
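This combined metric translates directly into code; the domains and points below are invented for illustration:

```python
import numpy as np

# a toy conceptual space with two domains: color (3 dims), taste (2 dims)
domains = {"color": slice(0, 3), "taste": slice(3, 5)}

def conceptual_distance(x, y):
    # Euclidean within each domain, Manhattan (sum) across domains
    return sum(np.linalg.norm(x[s] - y[s]) for s in domains.values())

apple  = np.array([0.9, 0.1, 0.1, 0.8, 0.2])
cherry = np.array([0.8, 0.0, 0.1, 0.9, 0.1])
print(conceptual_distance(apple, cherry))
```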

https://arxiv.org/abs/1711.03417 A Further Analysis of The Role of Heterogeneity in Coevolutionary Spatial Games

Surprisingly, results show that the heterogeneity of link weights (states) on its own does not always promote cooperation; rather, cooperation is favoured by the increase in the number of overlapping states and not by the heterogeneity itself.

https://arxiv.org/abs/1706.05137 One Model To Learn Them All

Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit greatly from joint training with other tasks, while performance on large tasks degrades only slightly, if at all.

https://arxiv.org/pdf/1711.07611.pdf Event Representations with Tensor-based Compositions

https://arxiv.org/pdf/1711.10402v1.pdf An Adversarial Neuro-Tensorial Approach For Learning Disentangled Representations

In this paper, we propose the first unsupervised deep learning method for disentangling multiple latent factors of variation in face images captured in-the-wild. To this end, we propose a deep latent variable model, where the multiplicative interactions of multiple latent factors of variation are explicitly modelled by means of multilinear (tensor) structure. We demonstrate that the proposed approach indeed learns disentangled representations of facial expressions and pose, which can be used in various applications, including face editing, as well as 3D face reconstruction and classification of facial expression, identity and pose.

We demonstrate the power of our methodology in expression and pose transfer, as well as discovering powerful features for pose and expression classification.

https://arxiv.org/pdf/0709.0303.pdf Navigability of complex networks

https://openreview.net/forum?id=BJRZzFlRb Compressing Word Embeddings via Deep Compositional Code Learning

Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifice in performance. For this purpose, we propose to construct the embeddings from a small number of basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt a multi-codebook quantization approach instead of a binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to learn the discrete codes directly in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate reaches 98% in a sentiment analysis task and 94% to 99% in machine translation tasks without performance loss. In both tasks, the proposed method can further improve model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
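A sketch of how a learned code reconstructs an embedding at lookup time (the sizes are illustrative, and the end-to-end Gumbel-softmax learning of the codes is not shown): with M codebooks of K basis vectors each, a word's code selects one vector per codebook and the embedding is their sum.

```python
import numpy as np

M, K, D = 4, 16, 300                  # codebooks, codes per book, embed dim
codebooks = np.random.randn(M, K, D)  # trained basis vectors (stand-ins)

def decode(code):
    # code is a length-M tuple such as (3, 2, 1, 8); pick one basis
    # vector per codebook and sum them into the word embedding
    return sum(codebooks[m, c] for m, c in enumerate(code))

embedding = decode((3, 2, 1, 8))      # reconstructed embedding, shape (D,)
```

Storage per word then drops from D floats to M small integers plus the shared codebooks.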

https://arxiv.org/pdf/1802.00273v1.pdf Emerging Language Spaces Learned From Massively Multilingual Corpora

https://arxiv.org/abs/1803.00385 MAGAN: Aligning Biological Manifolds

We present a new GAN called the Manifold-Aligning GAN (MAGAN) that aligns two manifolds such that related points in each measurement space are aligned together. We demonstrate applications of MAGAN in single-cell biology, integrating two different measurement types. In our demonstrated examples, cells from the same tissue are measured with both genomic (single-cell RNA-sequencing) and proteomic (mass cytometry) technologies. We show that MAGAN successfully aligns them such that known correlations between measured markers are improved compared to other recently proposed models.

https://arxiv.org/pdf/1802.10151v1.pdf Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data

https://arxiv.org/abs/1803.08495 Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings

We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail. http://text2shape.stanford.edu/

https://arxiv.org/pdf/1804.00104v1.pdf Joint-VAE: Learning Disentangled Joint Continuous and Discrete Representations

We have proposed Joint-VAE, a framework for learning disentangled continuous and discrete representations in an unsupervised manner. The framework retains the advantages of VAEs, such as stable training and large sample diversity, while being able to model complex continuous and discrete generative factors jointly. We have shown that Joint-VAE disentangles factors of variation on several datasets while producing realistic samples. In addition, the inference network can be used to infer unlabeled quantities on test data and to edit and manipulate images.

https://arxiv.org/pdf/1804.00410v1.pdf SyncGAN: Synchronize the Latent Space of Cross-modal Generative Adversarial Networks

Instead of learning transfers between different modalities, we aim to learn a synchronous latent space representing the cross-modal common concept. A novel network component named the synchronizer is proposed in this work to judge whether paired data is synchronous/corresponding or not, which constrains the latent space of the generators in the GANs. Our GAN model, named SyncGAN, can successfully generate synchronous data (e.g., a pair of image and sound) from identical random noise. To transform data from one modality to another, we recover the latent code by inverting the mappings of a generator and use it to generate data of the other modality. In addition, the proposed model can achieve semi-supervised learning, which makes it more flexible for practical applications.

Cross-domain GANs adopt special mechanisms such as cycle-consistency and weight-sharing to extract the common structure of cross-domain data automatically. However, this common structure does not exist between most cross-modal data due to the heterogeneity gap. Therefore, the model needs paired information to relate the different structures between data of various modalities that express the same concept.
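A minimal sketch of the synchronizer component described above, with toy feature sizes and layers of my own choosing (not the paper's network): a small discriminator scores whether a cross-modal pair expresses the same concept, and this score is what constrains the generators' shared latent space.

```python
import torch
import torch.nn as nn

class Synchronizer(nn.Module):
    def __init__(self, dim_a=256, dim_b=128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim_a + dim_b, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # P(pair is synchronous)
        )

    def forward(self, a, b):
        # concatenate the two modality features and score the pair
        return self.score(torch.cat([a, b], dim=-1))

sync = Synchronizer()
img_feat, snd_feat = torch.randn(8, 256), torch.randn(8, 128)
p_sync = sync(img_feat, snd_feat)  # used to constrain the generators' latents
```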

https://arxiv.org/abs/1703.04368v1 Symbol Grounding via Chaining of Morphisms

https://arxiv.org/abs/1805.04174 Joint Embedding of Words and Labels for Text Classification

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences. We propose to view text classification as a label-word joint embedding problem: each label is embedded in the same space with the word vectors. We introduce an attention framework that measures the compatibility of embeddings between text sequences and labels. The attention is learned on a training set of labeled samples to ensure that, given a text sequence, the relevant words are weighted higher than the irrelevant ones. Our method maintains the interpretability of word embeddings, and enjoys a built-in ability to leverage alternative sources of information, in addition to input text sequences. Extensive results on several large text datasets show that the proposed framework outperforms state-of-the-art methods by a large margin, in terms of both accuracy and speed.
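A hedged sketch of the compatibility-attention idea under my own toy dimensions (not the paper's exact architecture): labels live in the word-embedding space, cosine compatibility between words and labels is turned into per-word attention, and the text is represented as the attended average of its word vectors.

```python
import torch
import torch.nn.functional as F

words  = F.normalize(torch.randn(7, 300), dim=-1)   # one sequence, 7 words
labels = F.normalize(torch.randn(4, 300), dim=-1)   # 4 class embeddings

compat = words @ labels.t()               # (7, 4) cosine compatibilities
# each word's relevance is its best match over labels, softmax-normalized
attn = F.softmax(compat.max(dim=1).values, dim=0)
text_repr = attn @ words                  # (300,) attended text vector
```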

https://arxiv.org/abs/1805.08720 Adversarial Training of Word2Vec for Basket Completion

In recent years, the Word2Vec model trained with the Negative Sampling loss function has shown state-of-the-art results in a number of machine learning tasks, including language modeling tasks, such as word analogy and word similarity, and in recommendation tasks, through Prod2Vec, an extension that applies to modeling user shopping activity and user preferences. Several methods that aim to improve upon the standard Negative Sampling loss have been proposed. In our paper we pursue more sophisticated Negative Sampling, by leveraging ideas from the field of Generative Adversarial Networks (GANs), and propose Adversarial Negative Sampling. We build upon the recent progress made in stabilizing the training objective of GANs in the discrete data setting, and introduce a new GAN-Word2Vec model. We evaluate our model on the task of basket completion, and show significant improvements in performance over Word2Vec trained using standard loss functions, including Noise Contrastive Estimation and Negative Sampling.