# Documents 90C06 | records found: 12


## Learning with differentiable perturbed optimizers Berthet, Quentin | CIRM H

Post-edited

Research talks;Computer Science;Control Theory and Optimization

Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g. sorting, picking closest neighbors, finding shortest paths or optimal matchings). Although these discrete decisions are easily computed in a forward manner, they cannot be used to modify model parameters using first-order optimization techniques because they break the back-propagation of computational graphs. In order to expand the scope of learning problems that can be solved in an end-to-end fashion, we propose a systematic method to transform a block that outputs an optimal discrete decision into a differentiable operation. Our approach relies on stochastic perturbations of these parameters, and can be used readily within existing solvers without the need for ad hoc regularization or smoothing. These perturbed optimizers yield solutions that are differentiable and never locally constant. The amount of smoothness can be tuned via the chosen noise amplitude, whose impact we analyze. The derivatives of these perturbed solvers can be evaluated efficiently. We also show how this framework can be connected to a family of losses developed in structured prediction, and describe how these can be used in unsupervised and supervised learning, with theoretical guarantees.
We demonstrate the performance of our approach on several machine learning tasks in experiments on synthetic and real data.
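The core idea of the abstract can be sketched in a few lines: replace a hard argmax, which is piecewise constant in its input, by the Monte Carlo average of argmax over noisy copies of the input. This is an illustrative sketch only, not the authors' implementation; the function name, the Gaussian noise, and the sample count are choices made for this example.

```python
import numpy as np

def perturbed_argmax(theta, sigma=0.5, n_samples=1000, rng=None):
    """Monte Carlo estimate of E[one_hot(argmax(theta + sigma * Z))], Z ~ N(0, I).

    The plain argmax of theta is locally constant, so its derivative is zero
    almost everywhere; the perturbed average varies smoothly with theta.
    """
    rng = np.random.default_rng(rng)
    d = theta.shape[0]
    z = rng.standard_normal((n_samples, d))       # i.i.d. Gaussian perturbations
    winners = np.argmax(theta + sigma * z, axis=1)  # discrete decision per sample
    onehot = np.eye(d)[winners]                   # encode each decision as one-hot
    return onehot.mean(axis=0)                    # smooth surrogate of the argmax

theta = np.array([1.0, 1.1, 0.5])
soft = perturbed_argmax(theta, rng=0)  # probabilities favoring index 1
```

The noise scale `sigma` plays the role of the "noise amplitude" mentioned in the abstract: larger values give a smoother but more biased surrogate, smaller values approach the hard argmax.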


## Parallel processing of discrete problems: IMA program on mathematics in high-performance computing at the University of Minnesota, May 12-16 Pardalos, Panos M. | Springer 1999

Conference proceedings

ISBN 978-0-387-98664-7

The IMA volumes in mathematics and its applications, 0106

Location: Conference proceedings, 1st floor (MINN)

analyse d


## Nonlinear programming 2: proceedings of the Special Interest Group on Mathematical Programming symposium conducted by the Computer Sciences Department, University of Wisconsin-Madison, April 15-17, 1974 Mangasarian, O. L. ; Meyer, R. R. ; Robinson, S. M. | Academic Press 1975

Conference proceedings

ISBN 978-0-12-468650-2

Location: Conference proceedings, 1st floor (MADI)

convergent algorithm # minimization algorithm # least-squares algorithm # ideal penalty function # nonlinear inequality # constrained minimization # inexact minimization # unconstrained minimization # penalty method # method of multipliers # reduced gradient method # quasi-Newton method # linearly constrained optimization # optimization at corners # disjunctive programming # nonlinear programming # superlinear convergence # Chvátal-Gomory theorem # enumeration # nonlinear equation


## Large-scale optimization with applications. Part III: molecular structure and optimization. Proceedings of the Workshop on ..., July 10-28 Biegler, Lorenz T. ; Coleman, Thomas F. ; Conn, Andrew R. ; Santosa, Fadil N. | Springer 1997

Conference proceedings

- 201 p.
ISBN 978-0-387-98288-5

The IMA volumes in mathematics and its applications, 0094

Location: Conference proceedings, 1st floor (MINN)

optimization # large-scale problem # operations research # programming # mathematics # molecular structure # application to biology # minimization problem # protein # parallel computing # Lennard-Jones function # global minimization


## Large-scale optimization with applications. Part II: optimal design and control. Proceedings of the Workshop on ..., July 10-28, 1995 Biegler, Lorenz T. ; Coleman, Thomas F. ; Conn, Andrew R. ; Santosa, Fadil N. | Springer 1997

Conference proceedings

- 324 p.
ISBN 978-0-387-98287-8

The IMA volumes in mathematics and its applications, 0093

Location: Conference proceedings, 1st floor (MINN)

optimization # large-scale problem # large-scale optimization # operations research # programming # mathematics # application # experimental design # control # command # nonlinear programming # sequential quadratic programming # SQP # molecular modeling # application to chemical processes # aerospace applications # optimal strategy # PDE # differential-algebraic equation


## Fast solution of discretized optimization problems: workshop held at the Weierstrass Institute for Applied Analysis and Stochastics, May 8-12 Hoffmann, Karl-Heinz ; Hoppe, Ronald H. W. ; Schulz, Volker | Birkhäuser Verlag 2001

Conference proceedings

- 283 p.
ISBN 978-3-7643-6599-8

I.S.N.M., 0138

Location: Conference proceedings, 1st floor (BERL)

partial differential equation # PDE # problem # successive approximation method # mathematical programming algorithm # inverse problem # solution of equations # discretization of equations # large-scale problem # quadratic problem # sequential quadratic programming-type method


## Large-scale machine learning and convex optimization 2/2 Bach, Francis | CIRM H

Multi angle

Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
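The single-pass regime described in the abstract can be illustrated with averaged stochastic gradient descent on a least-squares problem: each observation is touched once, and the Polyak-Ruppert average of the iterates is returned. This is a toy sketch, not code from the talk; the problem size, the constant step size, and the averaging scheme are assumptions of this example.

```python
import numpy as np

# Synthetic least-squares data: n observations of dimension p.
rng = np.random.default_rng(0)
p, n = 5, 5000
w_true = rng.standard_normal(p)
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Constant step size gamma = 1 / (R^2 sqrt(n)), with R^2 = E||x||^2 = p,
# a standard choice for averaged SGD on smooth convex problems.
gamma = 1.0 / (p * np.sqrt(n))

w = np.zeros(p)       # current iterate
w_avg = np.zeros(p)   # running Polyak-Ruppert average
for t in range(n):
    x_t, y_t = X[t], y[t]
    grad = (x_t @ w - y_t) * x_t   # gradient of 0.5 * (x.w - y)^2 at w
    w -= gamma * grad              # one stochastic gradient step per observation
    w_avg += (w - w_avg) / (t + 1) # incremental average of the iterates

err = np.linalg.norm(w_avg - w_true)
```

Each observation is used exactly once, so the total cost is one gradient evaluation per data point, the regime in which the $O(1/\sqrt{n})$ and $O(1/n)$ rates quoted above apply.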


## Large-scale machine learning and convex optimization 1/2 Bach, Francis | CIRM H

Multi angle

Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.


## Topics in semidefinite and interior-point methods Pardalos, Panos M. ; Wolkowicz, Henry | American Mathematical Society 1998

Book

- 250 p.
ISBN 978-0-8218-0825-2

Fields Institute communications, 0018

Location: Collection, 1st floor

algorithm # theoretical computer science # computation method # large-scale problem # convex programming # discrete integer programming # linear programming # mathematical programming # nonlinear programming # operations research # information theory


## Lancelot: a Fortran package for large-scale nonlinear optimization (release A) Conn, A. R. ; Gould, N. I. M. ; Toint, Ph. L. | Springer 1992

Book

- 330 p.
ISBN 978-3-540-55470-7

Springer series in computational mathematics, 0017

Location: Books, ground floor (CONN)

numerical analysis # computer science # software # nonlinear optimization # FORTRAN # programming language # Lancelot # algorithm # large-scale problem # mathematical programming # application


## Décomposition-coordination en optimisation déterministe et stochastique Carpentier, Pierre ; Cohen, Guy | Springer;Société de Mathématiques Appliquées et Industrielles 2017

Book

- xvi; 333 p.
ISBN 978-3-662-55427-2

Mathématiques & applications, 0081

Location: Collection, 1st floor

stochastic process # mathematical optimization # decomposition


## Complex graphs and networks Chung, Fan ; Lu, Linyuan | American Mathematical Society 2006

Book

- 264 p.
ISBN 978-0-8218-3657-6

CBMS regional conference series in mathematics, 0107

Location: Collection, 1st floor

graph theory # combinatorics # information network # complex graph # large-scale problem # large-scale network # random graph # graph model
