Results 1-20 of 98 for Piater, Justus.

Full Text
Peer Reviewed
Can computer vision problems benefit from structured hierarchical classification?
Hoyoux, Thomas ULg; Rodríguez-Sánchez, Antonio J.; Piater, Justus H.

in Machine Vision & Applications (2016)

Peer Reviewed
Can computer vision problems benefit from structured hierarchical classification?
Hoyoux, Thomas ULg; Rodríguez-Sánchez, Antonio J.; Piater, Justus H. et al

in Computer Analysis of Images and Patterns (2015)

Full Text
Probabilistic Models of Visual Appearance For Object Identity, Class, and Pose Inference
Teney, Damien ULg

Doctoral thesis (2013)

The topic of object recognition is a central challenge of computer vision. In addition to being studied as a scientific problem in its own right, it also has many direct practical applications. We specifically consider robotic applications involving the manipulation and grasping of everyday objects, in the typical situations that would be encountered by personal service robots. Visual object recognition, in the large sense, is then paramount to provide a robot with the sensing capabilities needed for scene understanding, the localization of objects of interest, and the planning of actions such as grasping those objects. This thesis presents a number of methods that tackle the related tasks of object detection, localization, recognition, and pose estimation in 2D images, of both specific objects and object categories. We aim to provide techniques that are as generally applicable as possible, by considering these different tasks as different facets of the same problem, and by not focusing on a specific type of image information or image features.

We first address the use of 3D models of objects for continuous pose estimation. We represent an object by a constellation of points, corresponding to potentially observable features, which serve to define a continuous probability distribution of such features in 3D. This distribution can be projected onto the image plane, and the task of pose estimation is then to maximize its “match” with the test image. Applied to edge segments as observable features, the method is capable of localizing and estimating the pose of non-textured objects, while the probabilistic formulation offers an elegant way of dealing with uncertainty in the definition of the models, which can be learned from observations rather than being supplied as hand-made CAD models. We also propose a method, framed in a similar probabilistic formulation, to obtain or reconstruct such 3D models from multiple calibrated views of the object of interest.

A larger part of this thesis then focuses on exemplar-based recognition methods, which use 2D example images directly for training, without any explicit 3D information. The appearance of objects is again represented by probability distributions of observable features, defined nonparametrically through kernel density estimation, with image features from multiple training examples serving as supporting particles. The task of object localization is cast as the cross-correlation of the feature distributions of the model and of the test image, which we solve efficiently with a voting-based algorithm. We then propose several techniques to perform continuous pose estimation, yielding a precision well beyond a mere classification among the discrete, trained viewpoints. One of the proposed methods in this regard is a generative model of appearance, capable of interpolating the appearance of learned objects (or object categories), which then allows optimizing explicitly for the pose of the object in the test image. Our model of appearance, initially defined in general terms, is applied to edge segments and intensity gradients as image features. We are particularly interested in gradients extracted at a coarse scale and defined densely across images, as they can effectively represent shape by capturing the shading on smooth, non-textured surfaces. This allows handling cases, common in robotic applications, of objects with primitive shapes, little texture, and few discriminative details, which are challenging to recognize with most existing methods. The proposed contributions, which all integrate seamlessly into the same coherent framework, proved successful on a number of tasks and datasets. Most interestingly, on the well-studied tasks of localization in clutter and pose estimation, we obtain performance well above baseline methods, often on par with or superior to state-of-the-art methods individually designed for each of those specific tasks, whereas the proposed framework applies in the same form to a wide range of problems.
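
As an informal illustration of the exemplar-based part of this abstract (not the implementation described in the thesis), the sketch below builds a nonparametric appearance model by kernel density estimation over training features and localizes an object in a test image with a simple Hough-style voting scheme; the function names, the feature representation, and the bandwidth are assumptions made for the example.

```python
# Toy sketch (illustrative assumptions only): a kernel-density appearance model over
# training image features, and a voting accumulator that approximates the
# cross-correlation step for locating the object centre in a test image.
import numpy as np

def kde_score(feature, particles, bandwidth=5.0):
    """Nonparametric density of a test feature under the model, with training
    features acting as supporting particles of a Gaussian kernel density estimate."""
    d2 = np.sum((particles - feature) ** 2, axis=1)
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum() / len(particles)

def vote_for_centre(test_positions, model_offsets, image_shape):
    """Each test feature votes for candidate object centres through the model's
    feature-to-centre offsets; the accumulator maximum is the detected location."""
    acc = np.zeros(image_shape)
    for p in test_positions:           # p: (row, col) of an edge/gradient feature in the test image
        for off in model_offsets:      # off: offset from a training feature to the object centre
            r, c = np.round(p + off).astype(int)
            if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
                acc[r, c] += 1.0
    return np.unravel_index(np.argmax(acc), acc.shape)
```

In this toy version the continuous cross-correlation of feature distributions is approximated by a discretized vote accumulator; the thesis formulates the corresponding step over continuous densities.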

Peer Reviewed
Enhancing gloss-based corpora with facial features using active appearance models
Schmidt, Christoph; Koller, Oscar; Ney, Hermann et al

in Proceedings of the Third International Symposium on Sign Language Translation and Avatar Technology (2013, October 19)

Peer Reviewed
Inertially-safe Grasping of Novel Objects
Rietzler, Alexander; Detry, Renaud ULg; Piater, Justus

Conference (2013)

Peer Reviewed
Using viseme recognition to improve a sign language translation system
Schmidt, Christoph; Koller, Oscar; Ney, Hermann et al

in IWSLT 2013 Proceedings (2013)

Peer Reviewed
Unsupervised Learning Of Predictive Parts For Cross-object Grasp Transfer
Detry, Renaud ULg; Piater, Justus

in IEEE/RSJ International Conference on Intelligent Robots and Systems (2013)

Peer Reviewed
Generalizing Grasps Across Partly Similar Objects
Detry, Renaud ULg; Ek, Carl Henrik; Madry, Marianna et al

in IEEE International Conference on Robotics and Automation (2012)

Peer Reviewed
RWTH-PHOENIX-Weather: a large vocabulary sign language recognition and translation corpus.
Forster, Jens; Schmidt, Christoph; Hoyoux, Thomas ULg et al

in LREC 2012 Proceedings (2012)

Full Text
Real-time Simultaneous Modelling and Tracking of Articulated Objects
Declercq, Arnaud ULg

Doctoral thesis (2012)

In terms of capability, there is still a huge gap between the human visual system and existing computer vision algorithms. To achieve results of sufficient quality, these algorithms are generally extremely specialised in the task they have been designed for. All the knowledge available during their implementation is used to bias the output result and/or facilitate the initialisation of the system. This leads to increased robustness but lower reusability of the code. In most cases, it also severely limits the freedom of the user by constraining them to a limited set of possible interactions.

In this thesis, we propose to go in the opposite direction by developing a general framework capable of both tracking and learning objects as complex as articulated objects. Robustness is achieved by using one task to assist the other. The method should be completely unsupervised, with no prior knowledge about the appearance or shape of the objects encountered (although we focus on rigid and articulated objects). With this framework, we hope to provide directions for a more difficult and distant goal: that of completely eliminating the time-consuming prior design of object models in computer vision applications. This long-term goal would reduce the time and cost of implementing computer vision applications. It would also provide greater freedom in the range of objects that can be used by the program.

Our research focuses on three main aspects of this framework. The first is to create an object description that is effective on a wide variety of complex objects and able to assist object tracking while it is being learnt. The second is to provide tracking and learning methods that can be executed simultaneously in real time; this is particularly challenging for tracking when a large number of features is involved. Finally, our most challenging task, and the core of this thesis, is to design robust tracking and learning solutions able to assist each other without creating counter-productive bias when one of them fails.
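
Purely as an illustration of the mutual-assistance idea described in this abstract (not the method developed in the thesis), the sketch below alternates tracking with an online model update and gates the update by a per-frame tracking confidence, so that a failed track does not corrupt the learned model; the class, the template representation, and the confidence heuristic are all assumptions made for the example.

```python
# Toy sketch (illustrative assumptions only): a "track while learning" loop in which
# a running appearance template tracks the object, and confident tracks are blended
# back into the template.
import numpy as np

class OnlineAppearanceModel:
    def __init__(self, learning_rate=0.1):
        self.template = None        # current appearance estimate (a grayscale patch)
        self.lr = learning_rate

    def track(self, frame, prev_pos, search=10):
        """Search a window around the previous position for the best template match.
        Returns the new position and a crude confidence in [0, 1]."""
        if self.template is None:
            return prev_pos, 0.0
        h, w = self.template.shape
        best_score, best_pos = -np.inf, prev_pos
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = prev_pos[0] + dr, prev_pos[1] + dc
                patch = frame[r:r + h, c:c + w]
                if r < 0 or c < 0 or patch.shape != self.template.shape:
                    continue
                score = -np.mean((patch - self.template) ** 2)   # negative SSD
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos, float(np.exp(best_score))               # crude confidence heuristic

    def update(self, frame, pos, size=(24, 24), confidence=1.0):
        """Blend the observed patch into the template, weighted by confidence,
        so that unreliable tracks barely change the learned model."""
        patch = frame[pos[0]:pos[0] + size[0], pos[1]:pos[1] + size[1]].astype(float)
        if self.template is None and patch.shape == size:
            self.template = patch
        elif self.template is not None and patch.shape == self.template.shape:
            w = self.lr * confidence
            self.template = (1.0 - w) * self.template + w * patch
```

A per-frame loop would then call track() with the current model and feed the confidence-weighted result back into update(), so that each task continually refines the other.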

Full Text
Peer Reviewed
Grasp Generalization Via Predictive Parts
Detry, Renaud ULg; Piater, Justus ULg

Conference (2011)

Full Text
Peer Reviewed
What a successful grasp tells about the success chances of grasps in its vicinity
Bodenhagen, Leon; Detry, Renaud ULg; Piater, Justus ULg et al

in ICDL-EpiRob (2011)

Full Text
Peer Reviewed
Learning Grasp Affordance Densities
Detry, Renaud ULg; Kraft, D.; Kroemer, O. et al

in Paladyn. Journal of Behavioral Robotics (2011), 2(1), 1-17

Full Text
Peer Reviewed
Learning Visual Representations for Perception-Action Systems
Piater, Justus ULg; Jodogne, Sébastien ULg; Detry, Renaud ULg et al

in International Journal of Robotics Research (2011), 30(3), 294-307

Full Text
Peer Reviewed
Continuous Surface-point Distributions for 3D Object Pose Estimation and Recognition
Detry, Renaud ULg; Piater, Justus ULg

in Asian Conference on Computer Vision (2010)

Full Text
Peer Reviewed
Learning Probabilistic Discriminative Models of Grasp Affordances under Limited Supervision
Erkan, Ayse; Kroemer, Oliver; Detry, Renaud ULg et al

in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010)

Full Text
Peer Reviewed
Combining Active Learning and Reactive Control for Robot Grasping
Kroemer, Oliver; Detry, Renaud ULg; Piater, Justus ULg et al

in Robotics and Autonomous Systems (2010)

Full Text
Peer Reviewed
Development of Object and Grasping Knowledge by Robot Exploration
Kraft, Dirk; Detry, Renaud ULg; Pugeault, Nicolas et al

in IEEE Transactions on Autonomous Mental Development (2010), 2(4), 368-383

Peer Reviewed
Refining Grasp Affordance Models by Experience
Detry, Renaud ULg; Kraft, Dirk; Buch, Anders Glent et al

in IEEE International Conference on Robotics and Automation (2010)

Peer Reviewed
Video analysis for continuous sign language recognition
Piater, Justus ULg; Hoyoux, Thomas ULg; Du, Wei ULg

in 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies (2010)
