ICML 2009 Invited Speakers

Corinna Cortes, Google Research, NY.

Can learning kernels help performance?

Kernel methods combined with large-margin learning algorithms such as SVMs have been used successfully to tackle a variety of learning tasks since their introduction in the early 90s. However, in the standard framework of these methods, the choice of an appropriate kernel is left to the user, and a poor selection may lead to sub-optimal performance. Instead, sample points can be used to select a kernel function suitable for the task out of a family of kernels fixed by the user. While this is an appealing idea supported by some recent theoretical guarantees, in experiments it has proven surprisingly difficult to consistently and significantly outperform simple fixed combination schemes of kernels. This talk will survey different methods and algorithms for learning kernels and will present novel results suggesting that significant performance improvements can be obtained with a large number of kernels. (Includes joint work with Mehryar Mohri and Afshin Rostamizadeh.)
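The toy sketch below is not from the talk; it only illustrates the contrast between a simple fixed combination of base kernels and a data-dependent one. It assumes a small user-fixed family of Gaussian kernels and, as one possible data-dependent scheme, weights them by centered kernel alignment with the labels; all function names and data are invented for the example.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def center(K):
    """Center a kernel matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K, y):
    """Alignment between a centered kernel and the label kernel y y^T."""
    Kc = center(K)
    Yc = center(np.outer(y, y))
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc) + 1e-12)

# Toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))

# A user-fixed family of base kernels (different bandwidths).
gammas = [0.01, 0.1, 1.0, 10.0]
kernels = [rbf_kernel(X, g) for g in gammas]

# Baseline: simple uniform (fixed) combination of the base kernels.
K_uniform = sum(kernels) / len(kernels)

# Data-dependent combination: weight each base kernel by its alignment with the labels.
weights = np.array([max(alignment(K, y), 0.0) for K in kernels])
weights = weights / (weights.sum() + 1e-12)
K_learned = sum(w * K for w, K in zip(weights, kernels))

print("alignment of uniform combination:", round(alignment(K_uniform, y), 3))
print("alignment of learned combination:", round(alignment(K_learned, y), 3))
```

In practice the learned weights would be plugged into an SVM in place of a single user-chosen kernel; the talk's point is that beating the uniform baseline consistently has proven harder than this simple picture suggests.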

Corinna Cortes is the Head of Google Research, NY, where she works on a broad range of theoretical and applied large-scale machine learning problems. Prior to Google, Corinna spent more than ten years at AT&T Labs-Research, formerly AT&T Bell Labs, where she held a distinguished research position. Corinna's research is well known in particular for her contributions to the theoretical foundations of support vector machines (SVMs) and her work on data mining in very large data sets, for which she was awarded the AT&T Science and Technology Medal in 2000. Corinna received her MS degree in Physics from the Niels Bohr Institute in Copenhagen and joined AT&T Bell Labs as a researcher in 1989. She received her Ph.D. in computer science from the University of Rochester in 1993. Corinna is also a competitive runner, placing third in the More Marathon in New York City in 2005, and a mother of two.


Emmanuel Dupoux, Ecole Normale Superieure, Ecole des Hautes Etudes en Sciences Sociales, Centre National de la Recherche Scientifique

How do infants bootstrap into spoken language? Models and challenges

Human infants spontaneously and effortlessly learn the language(s) spoken in their environments, despite the extraordinary complexity of the task. Here, I will present an overview of the early phases of language acquisition and focus on one area where a modeling approach is currently being conducted using tools of signal processing and automatic speech recognition: the unsupervised acquisition of phonetic categories. During their first year of life, infants construct a detailed representation of the phonemes of their native language and lose the ability to distinguish nonnative phonemic contrasts. Unsupervised statistical clustering is not sufficient; it does not converge on the inventory of phonemes, but rather on contextual allophonic units or subunits. I present an information-theoretic algorithm that groups together allophonic variants based on three sources of information that can be acquired independently: the statistical distribution of their contexts, the phonetic plausibility of the grouping, and the existence of lexical minimal pairs. This algorithm is tested on several natural speech corpora. We find that these three sources of information are probably not language specific. What is presumably unique to language is the way in which they are combined to optimize the emergence of linguistic categories.
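As a toy illustration of the first of these cues only (not the algorithm presented in the talk), the sketch below measures how complementary the following-context distributions of two candidate segments are. Allophonic variants such as an invented [d]/[dj] pair tend to occur in disjoint contexts and therefore show high divergence, flagging them as merge candidates; phonetic plausibility and lexical minimal pairs would then be needed to accept or reject the grouping. The corpus and segment names are made up.

```python
from collections import Counter
import math

def context_distribution(corpus, segment):
    """Distribution over the symbol that follows each occurrence of `segment`."""
    counts = Counter(nxt for seg, nxt in corpus if seg == segment)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in a if a[k] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy corpus of (segment, following-context) pairs: [d] and [dj] are in
# complementary distribution (back vs. front vowels), while [t] occurs everywhere.
corpus = [("d", "a"), ("d", "o"), ("d", "u"), ("dj", "i"), ("dj", "e"),
          ("t", "a"), ("t", "i"), ("t", "o"), ("t", "e"), ("t", "u")]

segments = ["d", "dj", "t"]
dists = {s: context_distribution(corpus, s) for s in segments}

# Pairs with near-complementary context distributions (high divergence) are
# candidate allophones of a single phoneme; a full system would also check
# phonetic plausibility and the absence of lexical minimal pairs.
for i, a in enumerate(segments):
    for b in segments[i + 1:]:
        print(a, b, round(js_divergence(dists[a], dists[b]), 3))
```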

Emmanuel Dupoux is the director of the Laboratoire de Sciences Cognitives et Psycholinguistique in Paris. He conducts research on the early phases of language and social acquisition in human infants, using a mix of behavioral and brain-imaging techniques as well as computational modeling. He teaches at the Ecole des Hautes Etudes en Sciences Sociales, where he has set up an interdisciplinary graduate program in Cognitive Science.


Yoav Freund, University of California, San Diego

Linear Separation, Drifting games and boosting

One significant problem with AdaBoost is its poor performance on distributions where the error of the optimal classifier is substantial. Long and Servedio (ICML 2008) have shown that any boosting algorithm based on optimizing a convex loss function can be defeated by random label noise.

In this talk I will present a new boosting algorithm, called RobustBoost, which can learn an almost-optimal classifier even when the optimal classifier has substantial error. The derivation and analysis of the algorithm are based on a mathematical framework for learning called "Drifting games".

I will sketch the main ideas of the drifting games framework, explain how it can be used to minimize a non-convex loss function, and show experimental results demonstrating the practical utility of the algorithm.
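The snippet below illustrates the convexity issue rather than RobustBoost itself: under AdaBoost's exponential loss, a single badly mislabeled example can dominate the total loss (and hence the weight the booster assigns to it), whereas a bounded non-convex potential caps the influence of any single point. The sigmoid loss here is chosen only for simplicity and is not the potential used by RobustBoost.

```python
import numpy as np

def exp_loss(margins):
    """Convex potential used by AdaBoost: grows without bound as margins go negative."""
    return np.exp(-margins)

def sigmoid_loss(margins):
    """A bounded, non-convex potential: a noisy point can cost at most 1."""
    return 1.0 / (1.0 + np.exp(margins))

# Margins y * f(x) of a reasonable classifier: most examples are well classified,
# one label-noise example sits far on the wrong side of the boundary.
margins = np.array([2.0, 1.5, 1.8, 2.2, 1.1, -6.0])

for name, loss in [("exponential", exp_loss), ("bounded non-convex", sigmoid_loss)]:
    values = loss(margins)
    print(f"{name:>20}: total loss = {values.sum():8.2f}, "
          f"share from the noisy point = {values[-1] / values.sum():.2%}")
```

Under the exponential loss the single noisy example accounts for essentially all of the objective, so a convex-potential booster keeps concentrating on it; the bounded loss limits how much any one mislabeled point can matter.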

Yoav Freund is a professor of Computer Science and Engineering at the University of California, San Diego. His work is in the areas of machine learning, computational statistics, information theory, and their applications. He is best known for his joint work with Robert Schapire on the AdaBoost algorithm, for which they were awarded the 2003 Gödel Prize and the 2004 Kanellakis Prize. Freund was elected a fellow of AAAI in 2008 and is included in the Thomson list of most highly cited scientists (ISIHighlyCited.com).