Design Patterns for Self-Driving Automation

Introduction

These are the first steps toward curating a set of design patterns for the field of self-driving automation.

I am hoping that this becomes a collaborative endeavor. The development of Design Patterns has historically been intertwined with the use of a wiki. In fact, the very first wiki, which pre-dates Wikipedia, was invented for the sole purpose of documenting Design Patterns.

The WikiWikiWeb is the first ever wiki, or user-editable website. It was launched on 25 March 1995 by its inventor, programmer Ward Cunningham, to accompany the Portland Pattern Repository website discussing software design patterns.

The point being that Design Pattern development has always been a collaborative endeavor.

Self-Driving Automation is an entirely new field, usually described under the banner of Self-Driving Cars. Knowledge in this space is in its infancy, and what better opportunity to start building a Design Pattern repository than in an emerging field? It is not only emerging but also complex, involving the integration of many different technologies and the real-time orchestration of those integrations. I hope over the next several months to capture this knowledge in a form that is digestible by future practitioners.

The motivation for using the word “Automation” rather than “Cars” is that I am seeking a more general application of this technology. A glimpse of this idea of an automation that employs Deep Learning, Vision, Sensor Fusion, and a host of other technologies can be found in Amazon Go. Amazon Go isn't a car; it is a self-service retail store!

https://www.wired.com/2016/12/amazon-go-grocery-store/

As for how its “Just Walk Out Shopping” experience works, Amazon seems emphatically not to want to share details. It steeps its description of how the system works in buzzwords: computer vision, sensor fusion, and deep learning. It uses sensors throughout the store and artificial intelligence to tell which direction customers are looking, even in a crowd, and can identify partially blocked labels. Beyond that, details are hazy.

The concepts found in self-driving cars are, I believe, also transferable to many other fields that involve complex sensory environments and require real-time decision making.

Deep Learning

Vision

Sensor Fusion

Localization

Control

Path Planning

Orchestration
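
To make the "real-time orchestration" mentioned in the introduction a little more concrete, here is a minimal, hypothetical sketch of how the areas listed above might be wired together in a sense-fuse-localize-plan-act loop. Every name in it (Measurement, read_sensors, fuse, localize, plan_path, compute_control) is an illustrative assumption, not part of any particular framework, and each stage is deliberately simplified (a naive average stands in for sensor fusion, a proportional term stands in for a real controller).

    import time
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Measurement:
        """A single sensor reading with a timestamp (illustrative only)."""
        source: str
        timestamp: float
        value: float


    def read_sensors(t: float) -> List[Measurement]:
        # Stand-in for camera / lidar / radar drivers; here we fake two readings.
        return [
            Measurement("camera", t, 10.0),
            Measurement("lidar", t, 10.2),
        ]


    def fuse(measurements: List[Measurement]) -> float:
        # Sensor Fusion: a naive average stands in for e.g. a Kalman filter.
        return sum(m.value for m in measurements) / len(measurements)


    def localize(fused_estimate: float) -> float:
        # Localization: place the fused estimate in a map frame (identity here).
        return fused_estimate


    def plan_path(pose: float, goal: float) -> float:
        # Path Planning: the "path" is simply the offset toward the goal.
        return goal - pose


    def compute_control(path_error: float) -> float:
        # Control: a proportional controller stands in for a real controller.
        return 0.5 * path_error


    def run(cycles: int = 5, goal: float = 15.0) -> None:
        # Orchestration: a fixed-rate sense -> fuse -> localize -> plan -> act loop.
        for _ in range(cycles):
            now = time.time()
            measurements = read_sensors(now)
            estimate = fuse(measurements)
            pose = localize(estimate)
            error = plan_path(pose, goal)
            command = compute_control(error)
            print(f"pose={pose:.2f} error={error:.2f} command={command:.2f}")
            time.sleep(0.1)  # pretend this is a 10 Hz control loop


    if __name__ == "__main__":
        run()

Even this toy loop hints at why orchestration deserves its own set of patterns: each stage depends on the freshness of the one before it, the loop must hold a fixed cadence, and a real system has to cope with sensors that arrive at different rates and occasionally fail.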

Resources

https://github.com/OSSDC/awesome-autonomous-vehicles

https://arxiv.org/pdf/1612.03653v1.pdf Learning to Drive using Inverse Reinforcement Learning and Deep Q-Networks

https://arxiv.org/pdf/1704.05519.pdf Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

I am seeking your expertise. Sign up at: https://www.linkedin.com/groups/8584076