Despite machine learning’s numerous successes, applying machine learning to a new problem usually means spending a long time hand-designing the input representation for that specific problem. This is true for applications in vision, audio, text/NLP, and other areas. To address this, researchers have recently developed “unsupervised feature learning” and “deep learning” algorithms that can automatically learn feature representations from unlabeled data, thus bypassing much of this time-consuming engineering. Building on ideas such as sparse coding and deep belief networks, these algorithms can exploit large amounts of unlabeled data (which is cheap and easy to obtain) to learn a good feature representation. These methods have also surpassed the previous state-of-the-art on a number of problems in vision, audio, and text. In this talk, I describe some of the key ideas behind unsupervised feature learning and deep learning, describe a few algorithms, and present several case studies.
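To make the sparse-coding idea mentioned above concrete, here is a minimal illustrative sketch (not the speaker's implementation): a dictionary D is learned from unlabeled data so that each input x is approximated by a sparse code a, minimizing ||x − D·a||² + λ·||a||₁. All names and hyperparameters here are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # 200 unlabeled 16-dim "patches" (synthetic)
n_atoms, lam, lr = 32, 0.1, 0.01        # hypothetical hyperparameters
D = rng.normal(size=(16, n_atoms))
D /= np.linalg.norm(D, axis=0)          # keep dictionary atoms unit-norm

def sparse_codes(X, D, lam, n_iter=50):
    """ISTA: iterative shrinkage-thresholding for the L1-penalized codes."""
    A = np.zeros((X.shape[0], D.shape[1]))
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        grad = (A @ D.T - X) @ D        # gradient of the reconstruction term
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
    return A

for _ in range(20):                     # alternate: infer codes, then update dictionary
    A = sparse_codes(X, D, lam)
    D += lr * (X - A @ D.T).T @ A       # gradient step on reconstruction error
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)

recon_err = np.mean((X - sparse_codes(X, D, lam) @ D.T) ** 2)
```

The learned codes `A` can then serve as the feature representation fed to a standard supervised learner, which is the sense in which the feature engineering is automated.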

The Bay Area Vision Meeting (BAVM) is an informal gathering (without printed proceedings) of academic and industry researchers interested in computer vision and related areas. The goal is to build community among vision researchers in the San Francisco Bay Area; however, visitors and travelers from afar are also encouraged to attend and present. New research, previews of work to be shown at upcoming vision conferences, reviews of not-well-publicized work, and descriptions of “work in progress” are all welcome.
