References of "Meyer, Gilles"
Fixed-rank matrix factorizations and Riemannian low-rank optimization
Mishra, Bamdev; Meyer, Gilles; Bonnabel, Silvère et al.

E-print/Working paper (2012)

Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian matrix manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss the relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix.
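
To make the setting concrete, here is a minimal sketch (not the paper's Riemannian algorithms) of linear regression with a fixed-rank non-symmetric parameter: the matrix is kept in factored form W = G H^T and plain gradient descent is run on the factors, which preserves the rank bound by construction. All names (G, H, step) and the synthetic data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d1, d2, r, n = 30, 20, 3, 500
W_true = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
X = rng.standard_normal((n, d1, d2))               # matrix-valued inputs
y = np.einsum('nij,ij->n', X, W_true)              # y = <W_true, X>

G = rng.standard_normal((d1, r))                   # W = G @ H.T has rank <= r
H = rng.standard_normal((d2, r))
step = 1e-3
for _ in range(1000):
    resid = np.einsum('nij,ij->n', X, G @ H.T) - y
    grad_W = np.einsum('n,nij->ij', resid, X) / n  # Euclidean gradient in W
    G, H = G - step * grad_W @ H, H - step * grad_W.T @ G
print('relative error:', np.linalg.norm(G @ H.T - W_true) / np.linalg.norm(W_true))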

Low-rank optimization with trace norm penalty
Mishra, Bamdev; Meyer, Gilles; Bach, Francis et al.

E-print/Working paper (2012)

The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of the duality gap numerically tractable. The search space is nonlinear but is equipped with a particular Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges super-linearly to the global solution while maintaining a complexity that is linear in the number of rows of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters, we propose a predictor-corrector approach on the quotient manifold that outperforms the naive warm-restart approach. The performance of the proposed algorithm is illustrated on problems of low-rank matrix completion and multivariate linear regression.
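
The abstract's key ingredients are a fixed-rank stage and a rank-one update triggered by a dual test on the gradient. As a simplified stand-in (a proximal-gradient method with singular value soft-thresholding, not the paper's trust-region scheme), the sketch below exercises the same stationarity test: at a solution, the top singular value of the smooth gradient cannot exceed the regularization weight lam. Problem sizes, lam and the tolerance are assumptions.

import numpy as np

def svt(Y, tau):
    # proximal map of tau*||.||_*: soft-threshold the singular values
    u, s, vt = np.linalg.svd(Y, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

rng = np.random.default_rng(1)
m, d, k, lam = 200, 40, 30, 5.0
A = rng.standard_normal((m, d)) / np.sqrt(m)
B = A @ rng.standard_normal((d, 3)) @ rng.standard_normal((3, k))

W = np.zeros((d, k))
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the smooth part
for it in range(500):
    grad = A.T @ (A @ W - B)                       # gradient of 0.5*||A W - B||_F^2
    if np.linalg.svd(grad, compute_uv=False)[0] <= lam * (1 + 1e-3):
        break                                      # sigma_max(grad) <= lam: stationarity test
    W = svt(W - grad / L, lam / L)                 # proximal gradient step
print('rank of solution:', np.linalg.matrix_rank(W), 'after', it + 1, 'iterations')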

Peer Reviewed
Low-rank optimization for distance matrix completion
Mishra, Bamdev; Meyer, Gilles; Sepulchre, Rodolphe

in Proceedings of the 50th IEEE Conference on Decision and Control (2011, December)

This paper addresses the problem of low-rank distance matrix completion. This problem amounts to recovering the missing entries of a distance matrix when the dimension of the data embedding space is possibly unknown but small compared to the number of data points considered. The focus is on high-dimensional problems. We recast the considered problem as an optimization problem over the set of low-rank positive semidefinite matrices and propose two efficient algorithms for low-rank distance matrix completion. In addition, we propose a strategy to determine the dimension of the embedding space. The resulting algorithms scale to high-dimensional problems and monotonically converge to a global solution of the problem. Finally, numerical experiments illustrate the good performance of the proposed algorithms on benchmarks.
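
A hedged sketch of the factorization idea: parameterize the embedding points directly as Y (n x p), so the implied squared-distance matrix has the desired low-rank structure by construction, and run gradient descent on the squared error over observed entries. The mask, step size and synthetic data are assumptions; the paper's trust-region machinery and rank-selection strategy are omitted.

import numpy as np

rng = np.random.default_rng(2)
n, p, step = 50, 3, 1e-4
Y_true = rng.standard_normal((n, p))
sq = (Y_true ** 2).sum(1)
D = sq[:, None] + sq[None, :] - 2 * Y_true @ Y_true.T   # true squared distances
mask = rng.random((n, n)) < 0.5
mask = np.triu(mask, 1); mask = mask | mask.T            # symmetric observations, no diagonal

Y = rng.standard_normal((n, p))                          # optimize point coordinates directly
for _ in range(5000):
    sq = (Y ** 2).sum(1)
    Dhat = sq[:, None] + sq[None, :] - 2 * Y @ Y.T
    R = np.where(mask, Dhat - D, 0.0)                    # residual on observed entries only
    Y -= step * 4 * (np.diag(R.sum(1)) - R) @ Y          # gradient of 0.5*sum_obs residual^2
err = np.where(~mask, Dhat - D, 0.0)
print('error on unobserved entries:', np.linalg.norm(err) / np.linalg.norm(D))

Since the recovered distances are invariant to rigid motions of the points, the error is measured on the distance entries rather than on Y itself.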

Geometric optimization algorithms for linear regression on fixed-rank matrices
Meyer, Gilles

Doctoral thesis (2011)

Nowadays, large and rapidly evolving data sets are commonly encountered in many modern applications. Efficiently mining and exploiting these data sets generally results in the extraction of valuable information and therefore appears as an important challenge in various domains, including network security, computer vision, internet search engines, bioinformatics, marketing systems, online advertisement and social networks, to name a few. The rapid development of these modern computer science applications sustains an ever-increasing demand for efficient machine learning algorithms that can cope with large-scale problems, characterized by a large number of samples and a large number of variables. The research reported in the present thesis is devoted to the design of efficient machine learning algorithms for large-scale problems. Specifically, we adopt a geometric optimization viewpoint to address the problem of linear regression in nonlinear and high-dimensional matrix search spaces. Our purpose is to efficiently exploit the geometric structure of the search space in the design of scalable linear regression algorithms. Our search space of main interest is the set of low-rank matrices. Learning a low-rank matrix is a typical approach to cope with high-dimensional problems. The low-rank constraint is expected to force the learning algorithm to capture a limited number of dominant factors that mostly influence the sought solution. We consider both the learning of a fixed-rank symmetric positive semidefinite matrix and of a fixed-rank non-symmetric matrix. A first contribution of the thesis is to show that many modern machine learning problems can be formulated as linear regression problems on the set of fixed-rank matrices. For example, the learning of a low-rank distance, low-rank matrix completion and the learning on data pairs are cast into the considered linear regression framework. For these problems, the low-rank constraint is either part of the original problem formulation or a sound approximation that significantly reduces the original problem size, resulting in a dramatic decrease in computational complexity. Our main contribution is the development of novel efficient algorithms for learning a linear regression model parameterized by a fixed-rank matrix. The resulting algorithms preserve the underlying geometric structure of the problem, scale to high-dimensional problems, enjoy local convergence properties and confer a geometric basis to recent contributions on learning fixed-rank matrices. We thereby show that the considered geometric optimization framework offers a solid and versatile basis for the design of rank-constrained machine learning algorithms. The efficiency of the proposed algorithms is illustrated on several machine learning applications. Numerical experiments suggest that the proposed algorithms compete favorably with the state-of-the-art in terms of achieved performance and required computational time.
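
As one standard way to see the casting of matrix completion into this framework (an illustrative sketch, not text from the thesis): each observed entry is a regression sample y = <W, X> whose input X = e_i e_j^T selects that entry.

import numpy as np

d1, d2, i, j = 4, 5, 2, 3
rng = np.random.default_rng(3)
W = rng.standard_normal((d1, d2))            # the (low-rank) parameter to learn
X = np.zeros((d1, d2)); X[i, j] = 1.0        # regression input e_i e_j^T
y = np.sum(W * X)                            # <W, X> picks out the entry W[i, j]
assert np.isclose(y, W[i, j])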

Peer Reviewed
Regression on fixed-rank positive semidefinite matrices: a Riemannian approach
Meyer, Gilles; Bonnabel, Silvère; Sepulchre, Rodolphe

in Journal of Machine Learning Research (2011), 12(Feb), 593-625

The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks.
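
A hedged sketch of the distance-learning application: the Mahalanobis matrix is kept in factored form W = G G^T, so every gradient step on the factor preserves positive semidefiniteness and the rank bound. This flat parameterization is a simplified stand-in for the paper's Riemannian updates; names, step size and the synthetic supervision are assumptions.

import numpy as np

rng = np.random.default_rng(4)
d, r, step = 20, 4, 5e-4
G = rng.standard_normal((d, r)) / np.sqrt(d)                  # W = G @ G.T is PSD, rank <= r
pairs = rng.standard_normal((5000, 2, d))
targets = ((pairs[:, 0, :r] - pairs[:, 1, :r]) ** 2).sum(1)   # synthetic target distances

for x, z, t in zip(pairs[:, 0], pairs[:, 1], targets):
    v = x - z
    err = (v @ G) @ (v @ G) - t                # predicted distance v^T W v minus target
    G -= step * 2 * err * np.outer(v, v @ G)   # gradient step on the factor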

Peer Reviewed
Linear regression under fixed-rank constraints: a Riemannian approach
Meyer, Gilles; Bonnabel, Silvère; Sepulchre, Rodolphe

in Proceedings of the 28th International Conference on Machine Learning (2011)

In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties and confer a geometric basis to recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state-of-the-art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms.
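
A hedged sketch of a manifold-style line search in this setting: step along the negative gradient, retract back to the rank-r set by truncated SVD, and backtrack until sufficient decrease. The truncated-SVD retraction is a standard device for fixed-rank sets, though the paper works with cheaper factorization-based geometries; sizes and constants are assumptions.

import numpy as np

def retract(W, r):
    # truncated SVD: the closest rank-r matrix (a standard retraction)
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    return u[:, :r] @ np.diag(s[:r]) @ vt[:r]

rng = np.random.default_rng(5)
d1, d2, r, n = 30, 20, 2, 400
W_true = retract(rng.standard_normal((d1, d2)), r)
X = rng.standard_normal((n, d1, d2))
y = np.einsum('nij,ij->n', X, W_true)
f = lambda M: 0.5 * np.mean((np.einsum('nij,ij->n', X, M) - y) ** 2)

W = retract(rng.standard_normal((d1, d2)) * 0.1, r)
for _ in range(50):
    resid = np.einsum('nij,ij->n', X, W) - y
    grad = np.einsum('n,nij->ij', resid, X) / n
    fW, t = f(W), 1.0                        # backtracking (Armijo) line search
    while t > 1e-10 and f(retract(W - t * grad, r)) > fW - 1e-4 * t * (grad ** 2).sum():
        t *= 0.5
    W = retract(W - t * grad, r)
print('relative error:', np.linalg.norm(W - W_true) / np.linalg.norm(W_true))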

Peer Reviewed
Rank-constrained linear regression: a Riemannian approach
Meyer, Gilles; Bonnabel, Silvère; Sepulchre, Rodolphe

Poster (2010, December)

Peer Reviewed
Adaptive filtering for estimation of a low-rank positive semidefinite matrix
Bonnabel, Silvère; Meyer, Gilles; Sepulchre, Rodolphe

in Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (2010, July)

In this paper, we adopt a geometric viewpoint to tackle the problem of estimating a linear model whose parameter is a fixed-rank positive semidefinite matrix. We consider two gradient descent flows associated with two distinct Riemannian quotient geometries that underlie this set of matrices. The resulting algorithms are nonlinear and can be viewed as generalizations of Least Mean Squares that intrinsically constrain the parameter within the manifold search space. Such algorithms designed for low-rank matrices find applications in high-dimensional distance learning problems for classification or clustering.
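
A hedged sketch of the Least-Mean-Squares flavor: for each sample, the model predicts y = x^T W x with W = G G^T, and G takes a stochastic gradient step on the instantaneous squared error, so the estimate remains positive semidefinite with rank at most r by construction. This corresponds to a flat factorization rather than either of the paper's two quotient geometries; names and the step size mu are assumptions.

import numpy as np

rng = np.random.default_rng(6)
d, r, mu = 10, 2, 1e-4
G_true = rng.standard_normal((d, r))
G = rng.standard_normal((d, r)) * 0.5
for _ in range(30000):
    x = rng.standard_normal(d)
    y = (x @ G_true) @ (x @ G_true)            # observed output x^T W_true x
    err = (x @ G) @ (x @ G) - y                # instantaneous error, as in LMS
    G -= mu * 2 * err * np.outer(x, x @ G)     # stochastic gradient step on the factor
W = G @ G.T
print('relative error:', np.linalg.norm(W - G_true @ G_true.T)
      / np.linalg.norm(G_true @ G_true.T))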

From subspace learning to distance learning: a geometrical optimization approach
Meyer, Gilles; Journée, Michel; Bonnabel, Silvère et al.

in Proceedings of the 2009 IEEE Workshop on Statistical Signal Processing (SSP2009) (2009)

In this paper, we adopt a differential-geometry viewpoint to tackle the problem of learning a distance online. As this problem can be cast as the estimation of a fixed-rank positive semidefinite (PSD) matrix, we develop algorithms that exploit the rich geometric structure of the set of fixed-rank PSD matrices. We propose a method which separately updates the subspace of the matrix and its projection onto that subspace. A proper weighting of the two iterations makes it possible to continuously interpolate between the problem of learning a subspace and learning a distance when the subspace is fixed.
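
A hedged sketch of the split update the abstract describes: W = U B U^T with U orthonormal (the subspace) and B a small PSD matrix (the distance within the subspace). U takes a gradient step followed by QR re-orthonormalization, while B is updated through a factor L with B = L L^T. All names and the synthetic supervision are assumptions.

import numpy as np

rng = np.random.default_rng(7)
d, r, eta_U, eta_B = 15, 3, 1e-4, 1e-3
U, _ = np.linalg.qr(rng.standard_normal((d, r)))        # orthonormal subspace basis
L = np.eye(r)                                           # B = L @ L.T stays PSD
for _ in range(20000):
    x, z = rng.standard_normal(d), rng.standard_normal(d)
    target = ((x[:2] - z[:2]) ** 2).sum()               # synthetic supervision
    v = x - z
    w = U.T @ v                                         # project the difference on the subspace
    err = (w @ L) @ (w @ L) - target                    # predicted distance minus target
    U -= eta_U * 2 * err * np.outer(v, L @ (L.T @ w))   # gradient step on the subspace
    Q, R = np.linalg.qr(U)
    U = Q * np.sign(np.diag(R))                         # re-orthonormalize, keeping signs stable
    L -= eta_B * 2 * err * np.outer(w, w @ L)           # gradient step on the low-dim factor

The two step sizes play the role of the interpolating weights: eta_B = 0 reduces the iteration to subspace learning, while eta_U = 0 learns a distance on a fixed subspace.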

Peer Reviewed
Amino acids for the measurement of protein synthesis in vivo by PET
Vaalburg, W.; Coenen, H. H.; Crouzel, C. et al.

in International Journal of Radiation Applications and Instrumentation. Part B: Nuclear Medicine and Biology (1992), 19(2), 227-237

In principle, PET in combination with amino acids labelled with positron-emitting radionuclides and kinetic metabolic models can quantify local protein synthesis rates (PSR) in tissue in vivo. These PET measurements have clinical potential in, for example, oncology, neurology and psychiatry. An optimal positron-emitting amino acid for the measurement of PSR has a high protein incorporation, can easily be prepared by automated equipment and has minimal non-protein radioactive metabolites. Presently L-[methyl-11C]methionine, L-[1-11C]leucine, L-[1-11C]tyrosine, L-[1-11C]phenylalanine, L-[1-11C]methionine and L-[2-18F]fluorotyrosine are under evaluation in normal volunteers and/or in patients. Several other amino acids have been suggested. No comparison of the clinical usefulness of the different amino acids in man is yet available. Because of the longer half-life of 18F compared to 11C, [18F]fluoro amino acids may have advantages over [11C]amino acids for the investigation of tissue with relatively slow protein synthesis, such as brain, and for application in institutions with an off-site but nearby cyclotron. The half-life of [13N]amino acids is considered too short for flexible clinical application. As yet no metabolic compartmental model has been investigated for [13N]amino acids. For routine application, reliable preparation of the radiopharmaceutical is essential. Of all the amino acids under evaluation, a reliable, high-yield, easy-to-automate production procedure is available for L-[methyl-11C]methionine only. It is, however, unlikely that this tracer can accurately measure PSR because of its non-protein metabolism. For the other amino acids, the main problems in production are associated with complex multistep syntheses and/or low radiochemical yields, complex purification methods and the need to isolate the L-enantiomer. The kinetic metabolic models under investigation consist of 4 or 5 compartments, depending on the necessity to compensate for labelled metabolites. The metabolic profile of the amino acids is mainly extracted from animal experiments. Because of the number and amount of labelled metabolites in plasma, carboxyl-labelled [11C]amino acids are preferred to amino acids with carbon-11 in another position. As yet no recommendation can be given on the optimal labelled amino acid(s) for PSR measurement in vivo, nor on the methods to prepare the amino acids reported for this purpose.
