# Documents 62L20 | records found: 25


## Mathematical learning models: theory and algorithms. Proceedings of a conference held at the Physikzentrum in Bad Honnef near Bonn, May 3-7, 1982 Herkenrath, Ulrich ; Kalin, Dieter ; Vogel, Walter | Springer-Verlag 1983

Conference proceedings

ISBN 978-0-387-90913-4

Lecture notes in statistics , 0020

Location: Conference proceedings, 1st floor (BAD)

Markov chain # adaptive optimal control # mathematical learning model # optimization problem # bandit problem # Wiener process # sequential decision process # stochastic approximation procedure # sequential clinical trials # theory and algorithms # Kersting theorem


## Statistical multiple integration: proceedings of a joint summer research conference on... held at Humboldt State University, June 17-23 Flournoy, Nancy ; Tsutakawa, Robert K. | American Mathematical Society 1990

Conference proceedings

- 276 p.
ISBN 978-0-8218-5122-7

Contemporary mathematics , 0115

Location: Collection, 1st floor

multivariate analysis # decidability # numerical integration # statistical integration # decision statistics # decision theory


## Optimal vector quantization: from signal processing to clustering and numerical probability Pagès, Gilles | CIRM H

Multi angle

Research schools;Computer Science;Probability and Statistics

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of unsupervised classification methods known as "clustering". In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementation of such clustering/quantization methods relies more or less on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of those procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
More formally, if $\mu$ is a probability distribution on the Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*} -\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$ where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dotsb, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structural process.
We will (briefly) explore this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
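The randomized Lloyd procedure mentioned in the abstract can be sketched on the empirical measure of a dataset. The following is a minimal illustration (not code from the talk); the function name `lloyd` and the initialization-by-sampling choice are assumptions for the sketch. It alternates the Voronoi assignment and the fixed-point (barycenter) step, and returns the prototypes together with the weights $\mu(C(x_i^*))$.

```python
import numpy as np

def lloyd(data, N, n_iter=50, seed=0):
    """Randomized Lloyd (k-means) procedure on the empirical measure of
    `data`: alternately assign points to their nearest prototype (Voronoi
    cells) and move each prototype to its cell's barycenter."""
    rng = np.random.default_rng(seed)
    # Initialize the N prototypes by sampling distinct points of the dataset.
    x = data[rng.choice(len(data), size=N, replace=False)].astype(float)
    cell = np.zeros(len(data), dtype=int)
    for _ in range(n_iter):
        # Nearest-prototype (Voronoi) assignment of every data point.
        d2 = ((data[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
        cell = d2.argmin(axis=1)
        # Fixed-point step: recenter each non-empty cell at its mean.
        for i in range(N):
            pts = data[cell == i]
            if len(pts):
                x[i] = pts.mean(axis=0)
    # Weights: empirical mass of each Voronoi cell.
    weights = np.bincount(cell, minlength=N) / len(data)
    return x, weights
```

On a dataset made of two well-separated clusters, the two prototypes settle near the cluster centers and the weights recover the cluster proportions.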


## Large-scale machine learning and convex optimization 2/2 Bach, Francis | CIRM H

Multi angle

Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
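As a concrete illustration of the single-pass online setting described in the abstract, here is a minimal sketch (not the speaker's code) of stochastic gradient descent with iterate averaging on a least-squares problem; the function name, the constant step size, and the data model are illustrative assumptions.

```python
import numpy as np

def averaged_sgd(features, targets, step=0.1):
    """Single-pass SGD on the least-squares objective with iterate
    (Polyak-Ruppert) averaging: each observation is used exactly once,
    and the running average of the iterates is returned."""
    n, p = features.shape
    theta = np.zeros(p)        # current iterate
    theta_bar = np.zeros(p)    # running average of the iterates
    for k in range(n):
        xk, yk = features[k], targets[k]
        grad = (xk @ theta - yk) * xk          # stochastic gradient of 0.5*(x.theta - y)^2
        theta -= step * grad                   # constant-step online update
        theta_bar += (theta - theta_bar) / (k + 1)  # online averaging
    return theta_bar
```

On simulated linear-regression data, the averaged iterate recovers the true parameter vector after a single pass, while the raw iterate keeps fluctuating around it.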


## Large-scale machine learning and convex optimization 1/2 Bach, Francis | CIRM H

Multi angle

Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.


## CVaR hedging using quantization based stochastic approximation algorithm Pagès, Gilles | CIRM

Multi angle

Research talks;Analysis and its Applications

We investigate a method based on risk minimization to hedge an observable but non-tradable source of risk on financial or energy markets. The optimal portfolio strategy is obtained by dynamically minimizing the Conditional Value-at-Risk (CVaR) using three main tools: a stochastic approximation algorithm, optimal quantization, and variance reduction techniques (importance sampling (IS) and linear control variate (LCV)), as the quantities of interest are naturally related to rare events. We illustrate our approach by considering several portfolios in connection with energy markets.

Keywords: VaR, CVaR, stochastic approximation, Robbins-Monro algorithm, quantization
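The core stochastic-approximation ingredient can be sketched without the quantization and variance-reduction layers: a Robbins-Monro recursion on the Rockafellar-Uryasev representation $\mathrm{CVaR}_\alpha(L) = \min_\xi \{\xi + E[(L-\xi)^+]/(1-\alpha)\}$, whose minimizer is the VaR. The function name, step schedule, and initialization below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def var_cvar_rm(sampler, alpha=0.95, n_iter=200_000, step=1.0, seed=0):
    """Robbins-Monro recursion for (VaR, CVaR) at level alpha.
    `sampler(rng)` draws one loss sample. The xi-recursion follows the
    stochastic gradient of xi + E[(L - xi)^+]/(1 - alpha); the CVaR is
    estimated by a running average of one-sample plug-in values."""
    rng = np.random.default_rng(seed)
    xi, cvar = 0.0, 0.0
    for k in range(n_iter):
        L = sampler(rng)
        gamma = step / (k + 1) ** 0.75                     # decreasing step
        xi -= gamma * (1.0 - (L >= xi) / (1.0 - alpha))    # VaR recursion
        est = xi + max(L - xi, 0.0) / (1.0 - alpha)        # one-sample CVaR value
        cvar += (est - cvar) / (k + 1)                     # running average
    return xi, cvar
```

For a standard normal loss at level 0.95, the recursion approaches the known values VaR $\approx 1.645$ and CVaR $= \varphi(1.645)/0.05 \approx 2.06$.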


## Conjugate duality and the exponential Fourier spectrum Britton, Wray | Springer-Verlag 1983

Book

- 226 p.
ISBN 978-0-387-90826-7

Lecture notes in statistics , 0018

Location: Missing; Books, ground floor (BRIT)


## Recent advances in statistics: papers in honor of Herman Chernoff on his sixtieth birthday Rizvi, M. Haseeb ; Rustagi, Jagdish S. ; Siegmund, D. | Academic Press 1983

Book

- 611 p.
ISBN 978-0-12-589320-6

Location: Books, ground floor (RIZV)

sequential analysis # malignant effusion # spline function # mixture # optimization # pharmacokinetics # design # statistics # nonparametric statistics # large sample theory # control theory


## Approximation theorems of mathematical statistics Serfling, Robert J. | John Wiley And Sons 1985

Book

ISBN 978-0-471-02403-3

Wiley series in probability and mathematical statistics

Location: Missing

approximation # asymptotic approximation # estimation # nonparametric inference # parametric inference # statistics


## Malliavin calculus for processes with jumps Bichteler, Klaus ; Gravereaux, Jean-Bernard ; Jacod, Jean | Gordon and Breach Sciences Publishers 1987

Book

- 161 p.
ISBN 978-2-88124-185-7

Stochastics monographs , 0002

Location: Books, ground floor (BICH)

functional analysis # stochastic analysis # calculus of variations # stochastic equation # Malliavin operator


## Adaptive algorithms and stochastic approximations Benveniste, Albert ; Metivier, Michel ; Priouret, Pierre | Springer-Verlag 1990

Book

- 365 p.
ISBN 978-3-540-52894-4

Applications of mathematics , 0022

Location: Books, ground floor (BENV)

algorithm # approximation # stochastic


## Stochastic approximations and nonlinear regression Albert, Arthur E. ; Gardner, Leland A. | The M.I.T. Press 1967

Book

- 204 p.

Research monograph , 0042

Location: Books, ground floor (ALBE)

stochastic approximation # efficiency # vector parameter # scalar parameter # rate of convergence # regression # distribution theory


## Adaptive, learning and pattern recognition systems: theory and applications Fu, K. S. ; Mendel, J. M. | Academic Press 1970

Book

- 444 p.

Mathematics in science and engineering , 0066

Location: Monographs, ground floor (Adap)

stochastic approximation # computer science # artificial intelligence # sequential method # pattern recognition # speech recognition # statistics # learning and adaptive systems


## Sequential methods in pattern recognition and machine learning Fu, K. S. | Academic Press 1968

Book

- 227 p.

Mathematics in science and engineering , 0052

Location: Books, ground floor (FU)

machine learning # stochastic approximation # risk function # sequential method # dynamic programming # sequential pattern recognition system


## Encyclopaedia of mathematics. Vol. 9, Sto-Zyg: stochastic approximation - Zygmund class of functions Bityutskov, V. I. ; Gamkrelidze, R. V. ; Prokhorov, Yu. V. ; Vinogradov, I. M. | Kluwer Academic Publishers 1993

Book

- 536 p.
ISBN 978-1-55608-008-1

Encyclopaedia of mathematics , 0009

Location: Mathematics encyclopaedia, ground floor

stochastic approximation # Zygmund class of functions # encyclopaedia


## Stochastic approximation Wasan, M. T. | Cambridge University Press 1969

Book

- 202 p.
ISBN 978-0-521-07368-4

Cambridge tracts in mathematics and mathematical physics , 0058

Location: Collection, 1st floor; Reserve

stochastic approximation # Kiefer-Wolfowitz method # Newton-Raphson method # random process # adaptive control process


## Stochastic approximation and recursive estimation Nevel'son, M. B. ; Has'Minskii, R. Z. | American Mathematical Society 1973

Book

ISBN 978-0-8218-1597-7

Translations of mathematical monographs , 0047

Location: Collection, 1st floor

convergence of stochastic approximation procedures # recursive estimation # asymptotic normality # control parameter # probability and martingales # discrete-time Markov process # Robbins-Monro procedure # discrete or continuous time # stochastic equation

MSC: 62L20


## L'ordre dans le chaos : vers une approche déterministe de la turbulence Berge, Pierre ; Pomeau, Yves ; Vidal, Christian ; Ruelle, David | Hermann 1992

Book

- 352 p.
ISBN 978-2-7056-5980-6

Collection enseignement des sciences , 0033

Location: Books, ground floor (BERG)

bifurcation # chaos # order # dynamical system # turbulence


## Stochastic models in reliability Aven, Terje ; Jensen, Uwe | Springer 1999

Book

- 270 p.
ISBN 978-0-387-98633-3

Applications of mathematics , 0041

Location: Books, ground floor (AVEN)

stochastic learning # adaptive control # stochastic system model # stochastic model # stochastic gradient method # numerical method # stochastic process # stability # stochastic systems and control # reliability theory # numerical processing # ordinary differential equation # stochastic equation


## Stochastic approximation and its application Chen, Han-Fu | Kluwer Academic Publishers 2002

Book

- 357 p.
ISBN 978-1-4020-0806-1

Nonconvex optimization and its applications

Location: Books, ground floor (CHEN)

statistics # stochastic approximation # stochastic programming # signal theory # algorithm # numerical analysis # TS method # trajectory # subsequence # convergence analysis # ODE method # regression # global optimization # asymptotic property
