http://spatiallearning.org/index.php/initiatives/initiative-2-understand-tools/tool-sketching

http://www.qrg.northwestern.edu/projects/onr-sm/al-case-based_index.html

http://www.hyadatalab.com/papers/analogy-kdd17.pdf Accelerating Innovation Through Analogy Mining

https://arxiv.org/abs/1608.01403 Words, Concepts, and the Geometry of Analogy

This paper presents a geometric approach to the problem of modelling the relationship between words and concepts, focusing in particular on analogical phenomena in language and cognition. Grounded in recent theories regarding geometric conceptual spaces, we begin with an analysis of existing static distributional semantic models and move on to an exploration of a dynamic approach to using high dimensional spaces of word meaning to project subspaces where analogies can potentially be solved in an online, contextualised way. The crucial element of this analysis is the positioning of statistics in a geometric environment replete with opportunities for interpretation.
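
As a concrete reference point for the static distributional baseline the paper analyses, here is a minimal sketch of the standard vector-offset (parallelogram) analogy solver. The embedding table is made up for illustration; a real model would load trained vectors (e.g. word2vec).

```python
import numpy as np

# Toy embedding table; real models would load trained vectors.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
}

def solve_analogy(a, b, c, emb):
    """Solve a : b :: c : ? with the parallelogram heuristic b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, v in emb.items():
        if word in (a, b, c):          # exclude the query words themselves
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(solve_analogy("man", "king", "woman", emb))  # -> "queen"
```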

http://www.jfsowa.com/pubs/analog.htm

http://www.jfsowa.com/peirce/remark.pdf

https://pdfs.semanticscholar.org/23b6/96300a80c0479d571dadc20e50846e80b82c.pdf Deep Visual Analogy-Making

https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices

http://www.qrg.northwestern.edu/papers/Files/IJCAI2016-LiangEtAl.pdf Learning Paraphrase Identification with Structural Alignment

http://www.qrg.northwestern.edu/software/sme4/index.html

Alignment of two sentence graphs combines local and structural information to support similarity estimation. To improve alignment, we introduced structural constraints inspired by a cognitive theory of similarity and analogy. Usually only similarity labels are given in training data and the true alignments are unknown, so we address the learning problem using two approaches: alignment as feature extraction and alignment as latent variable. Our approach is evaluated on the paraphrase identification task and achieves results competitive with the state-of-the-art.
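
A hedged toy sketch of the general idea (not the paper's SME-derived model): greedily build a one-to-one node alignment from local similarity, then reward aligned edges, echoing the one-to-one and parallel-connectivity constraints of structure-mapping theory. The graphs and similarity numbers below are invented.

```python
import itertools

def align(nodes_a, nodes_b, edges_a, edges_b, local_sim, structure_weight=0.5):
    """Greedy one-to-one alignment of two graphs.

    Score = summed local node similarity + a bonus for each aligned edge.
    A toy stand-in for learned alignment; local_sim must cover all pairs.
    """
    mapping = {}
    candidates = sorted(itertools.product(nodes_a, nodes_b),
                        key=lambda p: local_sim[p], reverse=True)
    for a, b in candidates:
        if a in mapping or b in mapping.values():
            continue                    # enforce one-to-one mapping
        mapping[a] = b
    score = sum(local_sim[(a, b)] for a, b in mapping.items())
    for (a1, a2) in edges_a:            # structural bonus: aligned edges
        if (mapping.get(a1), mapping.get(a2)) in edges_b:
            score += structure_weight
    return mapping, score

nodes_a = ["buy", "car"]; nodes_b = ["purchase", "vehicle"]
edges_a = {("buy", "car")}; edges_b = {("purchase", "vehicle")}
local_sim = {("buy", "purchase"): 0.9, ("buy", "vehicle"): 0.1,
             ("car", "purchase"): 0.1, ("car", "vehicle"): 0.8}
print(align(nodes_a, nodes_b, edges_a, edges_b, local_sim))
# ({'buy': 'purchase', 'car': 'vehicle'}, 2.2)
```

The alignment score can then serve as a feature for a paraphrase classifier (the "alignment as feature extraction" view), or the mapping can be treated as a latent variable during training.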

https://arxiv.org/pdf/1510.08973v1.pdf VISALOGY: Answering Visual Analogy Questions

https://arxiv.org/abs/1412.6616v2 Outperforming Word2Vec on Analogy Tasks with Random Projections

http://dilab.gatech.edu/publications/Kunda%20McGreggor%20Goel%202011%20AAAI.pdf Two Visual Strategies for Solving the Raven’s Progressive Matrices Intelligence Test

https://kbwang.bitbucket.io/papers/RPM.pdf

http://dilab.gatech.edu/publications/cogsci04.pdf Visual Analogy: Reexamining Analogy as a Constraint Satisfaction Problem

http://reasoninglab.psych.ucla.edu/KH%20pdfs/Holyoak_Thagard[1].1989.pdf Analogical Mapping by Constraint Satisfaction
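
Holyoak and Thagard's ACME treats each possible correspondence as a unit in a relaxation network: consistent hypotheses excite each other, rival hypotheses inhibit each other, and activation settles on a coherent mapping. A minimal sketch of that dynamic on the classic solar-system/atom analogy; the weights and similarity bias are made-up numbers, not ACME's actual parameters.

```python
import itertools

source = ["sun", "planet"]
target = ["nucleus", "electron"]

units = list(itertools.product(source, target))   # mapping hypotheses
act = {u: 0.01 for u in units}

def weight(u, v):
    if u == v:
        return 0.0
    if u[0] == v[0] or u[1] == v[1]:
        return -0.2    # rivals: the same element mapped twice inhibit each other
    return 0.1         # structurally consistent pairings support each other

# small external bias toward one hypothesis, a stand-in for ACME's
# semantic/pragmatic constraints (the number is invented)
bias = {("sun", "nucleus"): 0.1}

for _ in range(50):    # synchronous relaxation until activations settle
    net = {u: bias.get(u, 0.0) + sum(weight(u, v) * act[v] for v in units)
           for u in units}
    act = {u: min(1.0, max(-1.0, 0.9 * act[u] + net[u])) for u in units}

print(max(act, key=act.get))   # -> ('sun', 'nucleus'); planet->electron co-wins
```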

http://cogsci.uwaterloo.ca/Articles/Pages/Analog.Mind.html

https://pdfs.semanticscholar.org/8544/31e54a521b0241903707a8d27ef1c917858a.pdf Constraints on Analogical Mapping: A Comparison of Three Models

http://proceedings.mlr.press/v80/santoro18a/santoro18a.pdf Measuring abstract reasoning in neural networks

http://www.foundalis.com/res/bps/bpidx.htm

https://www.youtube.com/watch?v=n8m7lFQ3njk Analogy as the Core of Cognition

https://arxiv.org/abs/1809.01498 Skip-gram word embeddings in hyperbolic space

Embeddings of tree-like graphs in hyperbolic space were recently shown to surpass their Euclidean counterparts in performance by a large margin. Inspired by these results, we present an algorithm for learning word embeddings in hyperbolic space from free text. An objective function based on the hyperbolic distance is derived and included in the skip-gram architecture from word2vec. The hyperbolic word embeddings are then evaluated on word similarity and analogy benchmarks. The results demonstrate the potential of hyperbolic word embeddings, particularly in low dimensions, though without clear superiority over their Euclidean counterparts. We further discuss problems in the formulation of the analogy task resulting from the curvature of hyperbolic space.

https://github.com/lateral/minkowski
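
For intuition, the sketch below computes distance in the Poincaré ball, one common model of hyperbolic space; the repository name (minkowski) suggests the authors work in the hyperboloid model, so take this as illustrative rather than their implementation.

```python
import numpy as np

def poincare_dist(u, v):
    """Distance in the Poincare ball model; u, v have Euclidean norm < 1."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

# distances blow up near the boundary, which is what lets trees embed cheaply
print(poincare_dist(np.array([0.0, 0.0]), np.array([0.5, 0.0])))  # ~1.10
print(poincare_dist(np.array([0.4, 0.0]), np.array([0.9, 0.0])))  # ~2.08
```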

https://arxiv.org/abs/1809.03956v1 Abstraction Learning

We propose a partition structure that contains pre-allocated abstraction neurons; we formulate abstraction learning as a constrained optimization problem that integrates abstraction properties; and we develop a network evolution algorithm to solve this problem. This complete framework is named ONE (Optimization via Network Evolution). In our experiments on MNIST, ONE shows elementary human-like intelligence, including low energy consumption, knowledge sharing, and lifelong learning.
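
ONE's partition structure, constraints, and evolution operators are specific to the paper; the sketch below only illustrates the generic pattern of solving a constrained objective by evolutionary search, with a scalar toy problem standing in for a network.

```python
import random

def evolve(init, mutate, objective, constraint_penalty,
           population=20, generations=100):
    """Generic evolution loop: keep the fittest half, refill by mutation.
    Fitness = objective + penalty for constraint violations."""
    fitness = lambda n: objective(n) + constraint_penalty(n)
    pop = [mutate(init) for _ in range(population)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:population // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=fitness)

# toy usage: minimize (x - 3)^2 subject to x <= 2 (penalized when violated)
best = evolve(0.0,
              mutate=lambda x: x + random.gauss(0, 0.5),
              objective=lambda x: (x - 3) ** 2,
              constraint_penalty=lambda x: 100.0 * max(0.0, x - 2))
print(best)   # -> close to 2.0
```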

https://nlp.stanford.edu/pubs/lamm2018analogies.pdf

https://openreview.net/pdf?id=SylLYsCcFm Learning to Make Analogies by Contrasting Abstract Relational Structure

Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures.
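
In the paper, contrasting happens largely through how candidate answer sets are constructed; as one minimal way to realize the idea in a loss, the sketch below scores candidate relation embeddings against a source relation with a softmax contrast. All embeddings are placeholders.

```python
import numpy as np

def contrast_loss(rel_source, rel_candidates, correct_idx):
    """Softmax contrast over candidates: the completion whose relational
    embedding best matches the source relation should beat the distractors."""
    scores = rel_candidates @ rel_source
    scores = scores - scores.max()                  # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return -np.log(probs[correct_idx])

# toy relation embeddings: candidate 0 instantiates the source relation
source = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1],    # same abstract relation, new domain
                       [0.1, 0.9],    # distractor: different relation
                       [0.0, 1.0]])   # distractor
print(contrast_loss(source, candidates, correct_idx=0))  # loss shrinks as
                                                         # the match sharpens
```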

https://arxiv.org/abs/1705.04416v2 Evaluating vector-space models of analogy

We find that some semantic relationships are better captured than others. We then provide evidence for deeper limitations of the parallelogram model based on the intrinsic geometric constraints of vector spaces, paralleling classic results for first-order similarity.

https://arxiv.org/abs/1810.05315v1 Learning to Reason

Constructive/intuitionistic proofs should be of particular interest to computer scientists thanks to the well-known Curry-Howard correspondence (Howard, 1980), which tells us that every terminating program corresponds to a proof in intuitionistic logic and vice versa. This work explores using Q-learning (Watkins, 1989) to inform proof search in a specific non-classical logic called Core Logic (Tennant, 2017).
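
A minimal sketch of tabular Q-learning applied to proof search, assuming states are proof obligations (e.g. sequents) and actions are inference rules; the environment dynamics and rewards are placeholders, not the paper's actual setup.

```python
import random
from collections import defaultdict

# Q-values over (state, action) pairs; states would be proof obligations
# and actions inference rules in a proof-search MDP.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose(state, actions):
    if random.random() < epsilon:
        return random.choice(actions)                      # explore
    return max(actions, key=lambda a: Q[(state, a)])       # exploit

def update(state, action, reward, next_state, next_actions):
    """Standard Q-learning (Watkins, 1989) temporal-difference update."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
```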

https://arxiv.org/abs/1811.04784v1 Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations

In this work we explore the generalization characteristics of unsupervised representation learning by leveraging disentangled VAEs to learn a useful latent space on a set of relational reasoning problems derived from Raven Progressive Matrices. We show that the latent representations, learned by unsupervised training using the right objective function, significantly outperform the same architectures trained with purely supervised learning, especially when it comes to generalization.
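
The paper builds on disentangled VAE variants; as one standard instance, the β-VAE objective adds a β-weighted KL term to pressure the encoder toward disentangled latents. A minimal numpy sketch, assuming Gaussian encoder outputs (mu, logvar); the inputs below are made-up numbers.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).
    beta > 1 encourages disentangled latent dimensions."""
    recon = np.sum((x - x_recon) ** 2)     # Gaussian likelihood up to scale
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))  # closed form
    return recon + beta * kl

x = np.array([0.2, 0.8]); x_recon = np.array([0.25, 0.7])
mu = np.array([0.1, -0.2]); logvar = np.array([-0.1, 0.05])
print(beta_vae_loss(x, x_recon, mu, logvar))   # ~0.12 on these toy values
```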
