Crowdsourcing

https://arxiv.org/abs/1611.02145v1 Crowdsourcing in Computer Vision

In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that this data is of high quality while annotation effort is minimized.

https://arxiv.org/abs/1308.2410v1 Collective Mind: cleaning up the research and experimentation mess in computer engineering using crowdsourcing, big data and machine learning

We are a group of researchers working with the community on a new methodology, infrastructure, and repository to enable collaborative and reproducible research and experimentation in computer engineering, as a side effect of our projects combining performance/energy/size auto-tuning with run-time adaptation, crowdsourcing, big data, and predictive analytics.

https://arxiv.org/abs/1612.02707v1 Human powered multiple imputation

In this paper we set out to answer an important question: "Can humans perform reasonably well at filling in missing data, given information about the dataset?"
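
A minimal sketch of how several human-provided guesses for one missing value could be pooled. It uses Rubin's standard multiple-imputation rules; the worker estimates, their variances, and the "age" example are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: pool several human-provided imputations for one missing
# numeric cell using Rubin's rules (mean of estimates, plus a total variance
# combining within- and between-imputation uncertainty).
import numpy as np

def pool_imputations(estimates, within_variances):
    """estimates: per-worker point estimates for the missing value.
    within_variances: per-worker uncertainty (e.g. self-reported or modeled)."""
    estimates = np.asarray(estimates, dtype=float)
    within_variances = np.asarray(within_variances, dtype=float)
    m = len(estimates)
    pooled_mean = estimates.mean()
    within = within_variances.mean()                     # average within-imputation variance
    between = estimates.var(ddof=1)                      # variance across workers
    total_variance = within + (1.0 + 1.0 / m) * between  # Rubin's total variance
    return pooled_mean, total_variance

# Example: five workers guess a missing "age" value (made-up numbers).
mean, var = pool_imputations([34, 36, 30, 33, 35], [4.0, 4.0, 9.0, 4.0, 4.0])
print(f"pooled estimate = {mean:.1f}, total variance = {var:.2f}")
```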

https://arxiv.org/abs/1508.00299v1 When Crowdsourcing Meets Mobile Sensing: A Social Network Perspective

This article investigates the structure of mobile sensing schemes and introduces crowdsourcing methods for mobile sensing. Inspired by social networks, one can establish trust among participating agents to leverage the wisdom of crowds for mobile sensing.

https://arxiv.org/abs/1508.06044v1 Visualizing NLP annotations for Crowdsourcing

Visualizing NLP annotations is useful for collecting training data for statistical NLP approaches. Existing toolkits either provide limited visual aid or introduce comprehensive operators to realize sophisticated linguistic rules; workers must be well trained to use them, so their audience can hardly scale to large numbers of non-expert crowdsourced workers. In this paper, we present CROWDANNO, a visualization toolkit that allows crowdsourced workers to annotate two general categories of NLP problems: clustering and parsing. Workers can finish the tasks with simplified operators in an interactive interface and fix errors conveniently. User studies show our toolkit is very friendly to NLP non-experts and allows them to produce high-quality labels for several sophisticated problems. We release our source code and toolkit to spur future research. https://github.com/slxu/CrowdLabeling

https://arxiv.org/abs/1609.09748v1 Characterization of experts in crowdsourcing platforms

We address the problem of identifying experts among the participants (workers), that is, those who tend to answer the questions correctly.
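
A hedged sketch of one common proxy for expertise when no gold labels are available: score each worker's agreement with the per-question majority vote. This is a simple baseline, not necessarily the paper's method, and the worker/question data below is invented:

```python
# Simple baseline (assumption, not the paper's algorithm): rank workers by how
# often they agree with the per-question majority vote.
from collections import Counter

answers = {
    "w1": {"q1": "A", "q2": "B", "q3": "A"},
    "w2": {"q1": "A", "q2": "B", "q3": "B"},
    "w3": {"q1": "C", "q2": "A", "q3": "B"},
}

# Majority label per question.
majority = {}
questions = {q for a in answers.values() for q in a}
for q in questions:
    votes = Counter(a[q] for a in answers.values() if q in a)
    majority[q] = votes.most_common(1)[0][0]

# Agreement rate per worker; consistently high agreement suggests an expert.
for worker, a in answers.items():
    agree = sum(a[q] == majority[q] for q in a) / len(a)
    print(f"{worker}: agreement with majority = {agree:.2f}")
```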

https://arxiv.org/abs/1607.07429v2 Much Ado About Time: Exhaustive Annotation of Temporal Data

Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives.
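
A small simulation of the cost-benefit argument above: majority voting over several cheap-yet-imperfect binary answers versus a single answer. The worker accuracy and redundancy levels are made-up parameters for illustration, not the paper's numbers:

```python
# Illustrative sketch: redundant cheap answers combined by majority vote can
# beat a single answer of the same quality. Parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 2, size=1000)          # ground-truth binary answers

def simulate(accuracy, n_workers):
    """Majority vote over n_workers noisy copies of each binary answer."""
    noisy = np.where(rng.random((n_workers, true_labels.size)) < accuracy,
                     true_labels, 1 - true_labels)
    votes = noisy.sum(axis=0)
    consensus = (votes * 2 > n_workers).astype(int)   # simple majority (ties -> 0)
    return (consensus == true_labels).mean()

for n in (1, 3, 5):
    acc = simulate(accuracy=0.8, n_workers=n)
    print(f"{n} cheap annotations per question -> consensus accuracy {acc:.3f}")
```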

https://arxiv.org/abs/1611.06301v1 Inferring Restaurant Styles by Mining Crowd Sourced Photos from User-Review Websites

We present a novel approach to inferring restaurant types or styles (ambiance, dish styles, suitability for different occasions) from user-uploaded photos on user-review websites. To that end, we first collect a novel restaurant photo dataset associating user-contributed photos with restaurant styles from TripAdvisor. We then propose a deep multi-instance multi-label learning (MIML) framework to deal with the unique problem setting of the restaurant style classification task. We employ a two-step bootstrap strategy to train a multi-label convolutional neural network (CNN). The multi-label CNN is then used to compute confidence scores of restaurant styles for all the images associated with a restaurant. The computed confidence scores are further used to train a final binary classifier for each restaurant style tag.
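
A rough sketch of the aggregation and per-tag classification step described above, with the multi-label CNN stubbed out by random per-image confidence scores. The max-pooling aggregation, dataset sizes, and labels are assumptions for illustration, not the paper's exact pipeline:

```python
# Sketch: pool per-image style confidences to a restaurant-level feature, then
# fit one binary classifier per style tag. The CNN is replaced by random scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_styles = 6

def restaurant_feature(image_scores):
    """image_scores: (n_images, n_styles) CNN confidences for one restaurant.
    Max-pooling is one simple multi-instance aggregation choice (assumption)."""
    return image_scores.max(axis=0)

# Fake data: 200 restaurants, each with a variable number of photos.
features, labels = [], []
for _ in range(200):
    n_images = rng.integers(3, 20)
    scores = rng.random((n_images, n_styles))         # stand-in for CNN outputs
    features.append(restaurant_feature(scores))
    labels.append(rng.integers(0, 2, size=n_styles))  # stand-in style tags
X, Y = np.vstack(features), np.vstack(labels)

# One independent binary classifier per style tag.
classifiers = [LogisticRegression().fit(X, Y[:, s]) for s in range(n_styles)]
print([round(clf.predict_proba(X[:1])[0, 1], 3) for clf in classifiers])
```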

https://www.oreilly.com/ideas/using-ai-to-build-a-comprehensive-database-of-knowledge Using AI to build a comprehensive database of knowledge

https://arxiv.org/abs/1703.08774v1 Who Said What: Modeling Individual Labelers Improves Classification

To make use of the information about which labeler produced each label, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and to take advantage of the unique strengths of individual experts at classifying certain types of data.
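
A hedged sketch of the simplest version of this idea: given per-expert predicted probabilities on a labeled validation set, learn softmax-parameterized averaging weights that favor the more reliable experts. The synthetic experts, their reliabilities, and the plain gradient-descent fit are assumptions; the paper also learns sample-specific weights, which this sketch omits:

```python
# Learn global averaging weights over expert predictions by minimizing the
# log loss of the weighted-average probability (simplified illustration).
import numpy as np

rng = np.random.default_rng(2)
n_experts, n_samples = 4, 500
y = rng.integers(0, 2, size=n_samples)

# Synthetic expert predictions with different reliabilities (assumption).
reliability = np.array([0.9, 0.75, 0.6, 0.55])
preds = np.clip(y[None, :] * reliability[:, None]
                + (1 - y[None, :]) * (1 - reliability[:, None])
                + 0.05 * rng.standard_normal((n_experts, n_samples)), 1e-3, 1 - 1e-3)

theta = np.zeros(n_experts)                               # unconstrained weight logits
for _ in range(500):
    w = np.exp(theta) / np.exp(theta).sum()               # softmax -> averaging weights
    p = w @ preds                                         # weighted-average probability
    grad_p = -(y / p - (1 - y) / (1 - p)) / n_samples     # d(log loss)/dp
    grad_w = preds @ grad_p
    grad_theta = w * (grad_w - w @ grad_w)                # chain rule through softmax
    theta -= 1.0 * grad_theta

print("learned weights:", np.round(np.exp(theta) / np.exp(theta).sum(), 3))
```

The most reliable synthetic expert should end up with the largest weight, which is the behavior the excerpt describes.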

https://arxiv.org/abs/1710.01691v3 Context Embedding Networks

Low-dimensional embeddings that capture the main variations of interest in collections of data are important for many applications. One way to construct these embeddings is to acquire estimates of similarity from the crowd. However, similarity is a multi-dimensional concept that varies from individual to individual. Existing models for learning embeddings from the crowd typically make simplifying assumptions, such as that all individuals estimate similarity using the same criteria, that the list of criteria is known in advance, or that crowd workers are not influenced by the data they see. To overcome these limitations we introduce Context Embedding Networks (CENs). In addition to learning interpretable embeddings from images, CENs also model worker biases for different attributes along with the visual context, i.e., the visual attributes highlighted by a set of images. Experiments on two noisy crowd-annotated datasets show that modeling both worker bias and visual context results in more interpretable embeddings compared to existing approaches.
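
Not the CEN model itself, but a minimal baseline sketch of the underlying task: turning crowd triplet judgements ("item a is more similar to b than to c") into a low-dimensional embedding with a margin-based hinge loss. Worker bias and visual context, which CENs model explicitly, are ignored, and the triplets below are randomly generated placeholders:

```python
# Baseline sketch (not CEN): stochastic gradient descent on a triplet hinge
# loss so that judged-similar pairs end up closer than judged-dissimilar ones.
import numpy as np

rng = np.random.default_rng(3)
n_items, dim, margin, lr = 30, 2, 1.0, 0.05
X = 0.01 * rng.standard_normal((n_items, dim))      # embedding being learned

# Fake crowd triplets (a, b, c): a judged more similar to b than to c.
triplets = [tuple(rng.choice(n_items, size=3, replace=False)) for _ in range(2000)]

for epoch in range(20):
    for a, b, c in triplets:
        d_ab = X[a] - X[b]
        d_ac = X[a] - X[c]
        # Hinge constraint: want ||a-b||^2 + margin <= ||a-c||^2.
        if d_ab @ d_ab + margin > d_ac @ d_ac:
            X[a] -= lr * 2 * (d_ab - d_ac)
            X[b] -= lr * 2 * (-d_ab)
            X[c] -= lr * 2 * d_ac

print("embedding shape:", X.shape)
```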

https://arxiv.org/abs/1805.03818 Training Classifiers with Natural Language Explanations

See also: https://blog.acolyer.org/2018/08/24/training-classifiers-with-natural-language-explanations/