Documents 93E25 | records found: 6


Location: Colloque 1er étage (PARI)

68Dxx ; 68Q99 ; 68Qxx ; 93E25


Research schools; Computer Science; Probability and Statistics

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as “clustering”. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
In recent years, quantization has encountered renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely more or less on two procedures (and their countless variants): Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or “nuées dynamiques”), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
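As a concrete illustration of the first procedure, here is a minimal NumPy sketch of the CLVQ recursion (the function and parameter names, the step schedule and the Gaussian example are our own illustrative choices, not taken from the abstract): at each step one draws a sample from the target distribution, selects the winning prototype and moves it toward the sample with a decreasing step, which is a stochastic gradient step on the distortion.

```python
import numpy as np

def clvq(sampler, N, n_iter=100_000, seed=0):
    """Competitive Learning Vector Quantization: stochastic gradient
    descent on the distortion D(x) = E[min_i |x_i - xi|^2].

    `sampler(rng, size)` is assumed to draw i.i.d. samples from the
    target distribution mu, returned as a (size, d) array.
    """
    rng = np.random.default_rng(seed)
    x = sampler(rng, N)                   # initialize the N prototypes at random samples of mu
    for k in range(1, n_iter + 1):
        xi = sampler(rng, 1)[0]           # draw xi ~ mu
        # competitive phase: find the winning (nearest) prototype
        i_star = np.argmin(np.sum((x - xi) ** 2, axis=1))
        # learning phase: move the winner toward xi with a decreasing step
        gamma = 1.0 / (100.0 + k)         # illustrative Robbins-Monro-type schedule
        x[i_star] += gamma * (xi - x[i_star])
    return x

# Example: quantize the standard Gaussian on R^2 at level N = 20.
normal_2d = lambda rng, size: rng.standard_normal((size, 2))
codebook = clvq(normal_2d, N=20)
```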
More formally, if $\mu$ is a probability distribution on the Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*} -\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.
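Given a candidate quantizer, both the distortion and the weights $\mu(C(x_{i}^{*}))$ can be estimated by Monte Carlo from samples of $\mu$. The sketch below is one way to do it; the names and the lowest-index tie-breaking rule are our own assumptions (any measurable tie-break yields a Borel partition of the cells).

```python
import numpy as np

def distortion_and_weights(x, xi):
    """Monte Carlo estimates of the distortion and of the Voronoi
    weights mu(C(x_i)) for a quantizer x of shape (N, d), from samples
    xi of shape (n, d) drawn from mu."""
    # squared distances |x_i - xi_k|^2 for every sample/prototype pair, shape (n, N)
    d2 = np.sum((xi[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    nearest = np.argmin(d2, axis=1)       # Voronoi cell of each sample (ties -> lowest index)
    distortion = d2[np.arange(len(xi)), nearest].mean()
    weights = np.bincount(nearest, minlength=len(x)) / len(xi)
    return distortion, weights
```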

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*})$, $i = 1, \dots, N$, of the optimal quantizer.
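For completeness, here is a batch sketch of the (randomized) Lloyd's procedure on such an empirical measure (initialization, iteration count and the empty-cell rule are illustrative choices of ours): each iteration computes the Voronoi partition of the dataset and replaces every prototype by the conditional mean of its cell, i.e. one fixed-point step.

```python
import numpy as np

def lloyd(data, N, n_iter=50, seed=0):
    """Randomized Lloyd's procedure (batch k-means) on the empirical
    measure of `data`, an array of shape (n, d)."""
    rng = np.random.default_rng(seed)
    # random initialization at N distinct data points
    x = data[rng.choice(len(data), size=N, replace=False)].astype(float)
    for _ in range(n_iter):
        d2 = np.sum((data[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        cells = np.argmin(d2, axis=1)     # Voronoi assignment of each point
        for i in range(N):
            mask = cells == i
            if mask.any():                # an empty cell keeps its prototype
                x[i] = data[mask].mean(axis=0)
    # final assignment = the N-cluster classification of the dataset
    d2 = np.sum((data[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return x, np.argmin(d2, axis=1)
```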
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will (briefly) explore this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05


- 529 p.
ISBN 978-0-262-62058-1

The MIT Press series in signal processing, 0004

Location: Ouvrage RdC (LJUN)

optimal control # control # estimation # system identification # stochastic system # systems theory # digital processing # invariant distribution # signal

93E10 ; 93E12 ; 93E25 ; 94A12 ; 94Axx



ISBN 978-3-540-60699-4

Mathématiques & applications, 0023

Location: Collection 1er étage

strongly perturbed algorithm # stochastic algorithm # stable Markov chain # attractive target # convergence of decreasing-step algorithms # Lyapunov criterion for almost sure stabilization # stable diffusion # Gibbs law # Markovian model # Lyapunov method for the ODE # ODE method # SDE method # simulated annealing on a finite space # vector simulated annealing # tightness of nonlinear models # rate of convergence in law # large deviations rate

60F05 ; 60F10 ; 60H10 ; 62I20 ; 93E25


- 474 p.
ISBN 978-0-387-00894-3

Applications of mathematics, 0035

Location: Ouvrage RdC (KUSH)

stochastic approximation # recursive function # algorithm # learning # noise # queueing # adaptive control # martingale # weak convergence # asynchronous algorithm # Robbins-Monro algorithm # Kiefer-Wolfowitz algorithm

62L20 ; 93E10 ; 93E25 ; 93E35 ; 65C05 ; 93-02 ; 90C15


- xix, 281 p.
ISBN 978-0-8176-4534-2

Systems and Control : Foundations and Applications

Location: Ouvrage RdC (KUSH)

stochastic system # stochastic optimal control # stochastic delay equation

34K28 ; 34K35 ; 34K50 ; 60F17 ; 60H20 ; 65C20 ; 65Q05 ; 90C39 ; 93E20 ; 93E25
