Over one thousand minds met in Vancouver last month to define future trends in smart information processing at the Neural Information Processing Systems Conference (NIPS). The single-track program featured tutorials from leading scientists, late-night poster and demo sessions, and best paper presentations, followed by workshops held a snowball’s throw from the slopes of Whistler Mountain.
Googlers published 9 papers and co-led 3 workshops (see full list below). We were also a major sponsor, providing travel assistance and best paper awards for students.
I was personally impressed by the growing maturity and widening application scope of Graphical Model methods (e.g., topic detection in text) and Sparse Model learning (e.g., determining which features describe complex data). These methods are ripe for creating the next generation of amazing apps for organizing the world’s information, thanks to a lot of inspiring research from around the globe. The workshops were also fascinating, drilling down into areas such as the application of learning algorithms on “Cores, Clusters and Clouds.”
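To give a flavor of what sparse model learning does, here is a minimal sketch of my own (not taken from any of the papers listed below): a Lasso regression solved with iterative soft-thresholding (ISTA) in plain NumPy. The L1 penalty drives most coefficients to exactly zero, so the surviving coefficients name the features that actually describe the data. The function name `lasso_ista` and the parameter `lam` are just labels for this illustration.

```python
import numpy as np

# Sparse model learning in miniature: fit a Lasso by iterative
# soft-thresholding (ISTA). Most coefficients are driven to exactly
# zero; the nonzero ones identify the informative features.

rng = np.random.default_rng(0)
n_samples, n_features = 100, 20
X = rng.standard_normal((n_samples, n_features))
true_w = np.zeros(n_features)
true_w[[2, 7, 11]] = [1.5, -2.0, 3.0]   # only three features matter
y = X @ true_w + 0.01 * rng.standard_normal(n_samples)

def lasso_ista(X, y, lam=0.2, n_iter=1000):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 via ISTA."""
    n, d = X.shape
    w = np.zeros(d)
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y)) / n            # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # shrink
    return w

w = lasso_ista(X, y)
print(np.flatnonzero(np.abs(w) > 1e-3))   # indices of the selected features
```

With enough samples and a suitable penalty, the recovered support matches the three planted features, which is exactly the "which features describe the data" question mentioned above.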
Software Engineer
Gmail

Google’s papers, talks and workshops at NIPS 2010:
Label Embedding Trees for Large Multi-Class Tasks by Samy Bengio, Jason Weston
Learning Bounds for Importance Weighting by Corinna Cortes, Yishay Mansour, Mehryar Mohri
Online Learning in the Manifold of Low-Rank Matrices by Gal Chechik, Daphna Weinshall, Uri Shalit
Deterministic Single-Pass Algorithm for LDA by Issei Sato, Kenichi Kurihara and Hiroshi Nakagawa
Distributed Dual Averaging In Networks by John Duchi, Alekh Agarwal, Martin Wainwright
Coarse-to-Fine Learning and Inference by Ben Taskar, David Weiss, Benjamin Sapp and Slav Petrov
Coarse-to-Fine Decoding for Parsing and Machine Translation by Slav Petrov
Low-rank Methods for Large-scale Machine Learning by Arthur Gretton, Michael Mahoney, Mehryar Mohri and Ameet Talwalkar
Online Learning in the Manifold of Low-Rank Matrices by Uri Shalit, Daphna Weinshall, Gal Chechik
Learning on Cores, Clusters, and Clouds by John Duchi, Ofer Dekel, John Langford, Lawrence Cayton and Alekh Agarwal
Distributed MAP Inference for Undirected Graphical Models by Sameer Singh, Amar Subramanya, Fernando Pereira and Andrew McCallum
MapReduce/Bigtable for Distributed Optimization by Keith Hall, Scott Gilpin and Gideon Mann