Documents 62L20 | records found: 25



ISBN 978-0-387-90913-4

Lecture notes in statistics, 0020

Location: Colloquium, 1st floor (BAD)

Markov chain # adaptive optimal control # mathematical learning model # optimization problem # bandit problem # Wiener process # sequential decision process # stochastic approximation procedure # sequential clinical trials # theory and algorithms # Kersting's theorem

62-06 ; 62L20 ; 62L99


- 276 p.
ISBN 978-0-8218-5122-7

Contemporary mathematics, 0115

Location: Collection, 1st floor

multivariate analysis # decidability # numerical integration # statistical integration # decision statistics # decision theory

60E30 ; 62E20 ; 62H10 ; 62L20 ; 65C05


Research schools;Computer Science;Probability and Statistics

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as "clustering". In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely more or less on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
More formally, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N}\ \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace \xi\in\mathbb{R}^d : |x_{i}^{*}-\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*})$, $i = 1, \dots, N$, of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
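To make the two procedures mentioned in the abstract concrete, here is a minimal NumPy sketch (not taken from the talk) of the randomized Lloyd's procedure and of a CLVQ-style single pass over samples drawn from $\mu$; the sample size, the step-size schedule and the 2-d Gaussian test distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd(samples, N, n_iter=50):
    """Randomized Lloyd's procedure (k-means style fixed-point search)."""
    # initialize the codebook with N distinct samples
    x = samples[rng.choice(len(samples), size=N, replace=False)].copy()
    for _ in range(n_iter):
        # nearest-prototype (Voronoi) assignment of every sample
        idx = np.argmin(((samples[:, None, :] - x[None, :, :]) ** 2).sum(-1), axis=1)
        for i in range(N):
            cell = samples[idx == i]
            if len(cell):
                x[i] = cell.mean(axis=0)       # fixed-point step: cell centroid
    # final Voronoi assignment and empirical cell weights
    idx = np.argmin(((samples[:, None, :] - x[None, :, :]) ** 2).sum(-1), axis=1)
    weights = np.bincount(idx, minlength=N) / len(samples)
    return x, weights

def clvq(samples, N, gamma0=0.5):
    """CLVQ: one stochastic gradient step of the distortion potential per sample."""
    x = samples[rng.choice(len(samples), size=N, replace=False)].copy()
    for k, xi in enumerate(samples, start=1):
        gamma = gamma0 / k                      # decreasing step size (assumption)
        i = np.argmin(((x - xi) ** 2).sum(-1))  # winning (closest) prototype
        x[i] += gamma * (xi - x[i])             # move the winner towards the sample
    return x

# toy empirical measure: n samples from a 2-d Gaussian (illustrative only)
samples = rng.normal(size=(5000, 2))
prototypes, weights = lloyd(samples, N=10)
print(prototypes, weights)
```

The weights returned by lloyd are the empirical masses of the Voronoi cells, i.e. the companion distribution of the quantizer discussed above.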

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05


Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rate of these algorithms is $O(1/\sqrt{n})$ for general convex functions and reaches $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
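As a rough illustration of the single-pass stochastic gradient descent discussed in this abstract (not code from the tutorial), the following NumPy sketch runs SGD with a $1/\sqrt{k}$ step size and Polyak-Ruppert averaging on a streaming least-squares problem; the data model, noise level and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 100_000
theta_star = rng.normal(size=d)           # unknown regression parameter (made up)

theta = np.zeros(d)                       # current SGD iterate
theta_bar = np.zeros(d)                   # Polyak-Ruppert average of the iterates
R2 = d                                    # ~ E||x||^2 for standard Gaussian features

for k in range(1, n + 1):
    # one fresh observation: a single pass over the data stream ("large n")
    x = rng.normal(size=d)
    y = x @ theta_star + 0.1 * rng.normal()
    grad = (x @ theta - y) * x            # stochastic gradient of the squared loss
    theta -= grad / (R2 * np.sqrt(k))     # robust step size ~ 1/sqrt(k)
    theta_bar += (theta - theta_bar) / k  # running average of the iterates

print(np.linalg.norm(theta - theta_star),      # last iterate
      np.linalg.norm(theta_bar - theta_star))  # averaged iterate (typically better)
```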

62L20 ; 68T05 ; 90C06 ; 90C25


Research talks;Computer Science;Control Theory and Optimization;Probability and Statistics

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rate of these algorithms is $O(1/\sqrt{n})$ for general convex functions and reaches $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
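The "combination of batch and online algorithms" achieving a linear convergence rate on strongly convex finite sums is in the spirit of incremental gradient methods with memory (e.g. SAG/SAGA); the sketch below is a minimal SAGA-style loop on a ridge-regularized least-squares problem, with all data and constants made up, and is not claimed to be the exact algorithm of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, lam = 1000, 10, 1e-2
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

def grad_i(theta, i):
    # gradient of the i-th ridge-regularized squared loss
    return (X[i] @ theta - y[i]) * X[i] + lam * theta

theta = np.zeros(d)
table = np.array([grad_i(theta, i) for i in range(n)])  # stored per-sample gradients
table_mean = table.mean(axis=0)
gamma = 1.0 / (3 * (np.max((X ** 2).sum(axis=1)) + lam))  # conservative step size

for _ in range(20 * n):                                   # a few effective passes
    j = rng.integers(n)
    g_new = grad_i(theta, j)
    theta -= gamma * (g_new - table[j] + table_mean)      # SAGA direction
    table_mean += (g_new - table[j]) / n                  # keep the stored mean in sync
    table[j] = g_new

# distance to the generating parameter (small but nonzero: noise + ridge bias)
print(np.linalg.norm(theta - theta_star))
```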

62L20 ; 68T05 ; 90C06 ; 90C25


Research talks;Analysis and its Applications

We investigate a method based on risk minimization to hedge an observable but non-tradable source of risk in financial or energy markets. The optimal portfolio strategy is obtained by dynamically minimizing the Conditional Value-at-Risk (CVaR) using three main tools: a stochastic approximation algorithm, optimal quantization, and variance reduction techniques (importance sampling (IS) and linear control variable (LCV)), as the quantities of interest are naturally related to rare events. We illustrate our approach by considering several portfolios in connection with energy markets.

Keywords: VaR, CVaR, Stochastic Approximation, Robbins-Monro algorithm, Quantization
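As a small illustration of the stochastic approximation ingredient, here is a hedged NumPy sketch of a Robbins-Monro recursion for the VaR based on the representation $\mathrm{CVaR}_\alpha(L)=\min_v\{v+\mathbb{E}[(L-v)^+]/(1-\alpha)\}$, together with a running estimate of the CVaR; the Gaussian loss, step sizes and sample size are arbitrary, and the importance sampling, control variate and quantization layers of the talk are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.95            # confidence level of the VaR / CVaR
n = 200_000             # number of simulated losses

var = 0.0               # Robbins-Monro iterate for the VaR
cvar = 0.0              # companion running estimate of the CVaR

for k in range(1, n + 1):
    loss = rng.standard_normal()            # placeholder portfolio loss L_k
    gamma = k ** -0.75                      # Robbins-Monro step size
    # stochastic (sub)gradient of v + E[(L - v)^+]/(1 - alpha) at v = var
    grad = 1.0 - (loss > var) / (1.0 - alpha)
    var -= gamma * grad
    # plain running average of v + (L - v)^+/(1 - alpha) estimates the CVaR
    cvar += (var + max(loss - var, 0.0) / (1.0 - alpha) - cvar) / k

# for a standard Gaussian loss, VaR_0.95 ~ 1.645 and CVaR_0.95 ~ 2.06
print(var, cvar)
```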

91G70 ; 91B30 ; 62L20


- 226 p.
ISBN 978-0-387-90826-7

Lecture notes in statistics, 0018

Location: Missing; Book, ground floor (BRIT)

62L20 ; 62L99 ; 62Q05 ; 65U05


- 611 p.
ISBN 978-0-12-589320-6

Location: Book, ground floor (RIZV)

sequential analysis # malignant effusion # spline function # mixture # optimization # pharmacokinetics # design # statistics # nonparametric statistics # large-sample theory # control theory

62-06 ; 62L15 ; 62L20 ; 62Lxx ; 92C45



ISBN 978-0-471-02403-3

Wiley series in probability and mathematical statistics

Location: Missing

approximation # asymptotic approximation # estimation # nonparametric inference # parametric inference # statistics

60D05 ; 62Fxx ; 62G05 ; 62G20 ; 62L20


- 161 p.
ISBN 978-2-88124-185-7

Stochastics monographs, 0002

Location: Book, ground floor (BICH)

functional analysis # stochastic analysis # calculus of variations # stochastic equation # Malliavin operator

35B65 ; 60Hxx ; 62L20


- 365 p.
ISBN 978-3-540-52894-4

Applications of mathematics, 0022

Location: Book, ground floor (BENV)

algorithm # approximation # stochastics

62-XX ; 62L20 ; 93-XX ; 93C40 ; 93E12


- 204 p.

Research monograph, 0042

Location: Book, ground floor (ALBE)

stochastic approximation # efficiency # vector parameter # scalar parameter # rate of convergence # regression # distribution theory

62Bxx ; 62Exx ; 62H10 ; 62Jxx ; 62L20


- 444 p.

Mathematics in science and engineering, 0066

Location: Monograph, ground floor (Adap)

stochastic approximation # computer science # artificial intelligence # sequential method # pattern recognition # speech recognition # statistics # learning and adaptive systems

62L20 ; 68-XX ; 68T05 ; 68T10 ; 68Txx


- 227 p.

Mathematics in science and engineering, 0052

Location: Book, ground floor (FU)

machine learning # stochastic approximation # risk function # sequential method # dynamic programming # sequential pattern recognition system

62L20 ; 62Lxx ; 68T05 ; 68T10 ; 90C39


- 536 p.
ISBN 978-1-55608-008-1

Encyclopaedia of mathematics, 0009

Location: Mathematics encyclopaedia, ground floor

stochastic approximation # Zygmund class of functions # encyclopaedia

41A30 ; 62L20


- 202 p.
ISBN 978-0-521-07368-4

Cambridge tracts in mathematics and mathematical physics, 0058

Location: Collection, 1st floor; Reserve

stochastic approximation # Kiefer-Wolfowitz method # Newton-Raphson method # random process # adaptive control process

62L20 ; 62N10



ISBN 978-0-8218-1597-7

Translations of mathematical monographs, 0047

Location: Collection, 1st floor

convergence of stochastic approximation procedures # recursive estimation # asymptotic normality # control parameter # probability and martingales # discrete-time Markov process # Robbins-Monro procedure # discrete or continuous time # stochastic equation

62L20


- 352 p.
ISBN 978-2-7056-5980-6

Collection enseignement des sciences, 0033

Location: Book, ground floor (BERG)

bifurcation # chaos # order # dynamical system # turbulence

31B35 ; 58Fxx ; 60J35 ; 62L20


- 270 p.
ISBN 978-0-387-98633-3

Applications of mathematics, 0041

Location: Book, ground floor (AVEN)

stochastic learning # adaptive control # stochastic system model # stochastic model # stochastic gradient method # numerical method # stochastic process # stability # stochastic systems and control # reliability theory # digital processing # ordinary differential equation # stochastic equation

62L20 ; 65C05 ; 90C15 ; 93-02 ; 93E10 ; 93E23 ; 93E35


- 357 p.
ISBN 978-1-4020-0806-1

Nonconvex optimization and its applications

Location: Book, ground floor (CHEN)

statistics # stochastic approximation # stochastic programming # signal theory # algorithm # numerical analysis # TS method # trajectory # subsequence # convergence analysis # ODE method # regression # global optimization # asymptotic property

62-02 ; 62L20 ; 90C15 ; 93B40
