References of "Detry, Renaud"
Learning the Tactile Signatures of Prototypical Object Parts for Robust Part-based Grasping of Novel Objects
Hyttinen, Emil; Kragic, Danica; Detry, Renaud

in IEEE International Conference on Robotics and Automation (2015)

A vision-based autonomous inter-row weeder
Krishna Moorthy Parvathi, Sruthi Moorthy; Detry, Renaud; Boigelot, Bernard et al.

Conference (2014, March 05)

Autonomous robotic weed destruction plays a significant role in crop production, as it automates manual weed destruction, one of the few remaining unmechanised and laborious tasks in agriculture. Robotic technology also contributes to long-term sustainability, with both economic and environmental benefits, by minimising the current dependency on chemicals. The aim of this study is to design a small, low-cost, versatile robot that destroys the weeds lying between crop rows by navigating the field autonomously, using a minimum of a priori information about the field. For the robot to navigate autonomously, the necessary and sufficient information can be supplied by a machine vision system. One important issue in applying machine vision is to develop a system that recognises the crop rows accurately and robustly, and that tolerates problems such as crops at varying growth stages, poor illumination, missing crops, and high weed pressure. Aiming at accurate and robust real-time guidance of the robot through the field, image processing algorithms such as Otsu's threshold method and the Hough transform will be explored for the two main processes, namely image segmentation and crop row detection, respectively. To cope with the large variabilities encountered in agriculture, such as changing weather conditions, stochastic data fusion and machine learning algorithms will be used to combine data from heterogeneous sensors. Besides crop row detection, the other major challenges foreseen are mapping the unknown geometry of the field, high-level planning of efficient and complete coverage of the field, controlling the low-level operations of the robot, and ensuring safe operation. Specialised sensors such as GPS will be considered for building a map of the field, enabling Simultaneous Localisation and Mapping (SLAM) in real time on a mobile platform. The generated map will be exploited together with the sensory information from crop row detection to plan and execute the guidance of the robot autonomously through the field, thereby enabling weed elimination.
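
To make the segmentation and row-detection steps above concrete, here is a minimal sketch in Python with OpenCV: Otsu thresholding to separate vegetation from soil, followed by a Hough transform to recover the row lines. The excess-green index, the Canny edge step, and all parameter values are illustrative assumptions, not details taken from this work.

```python
# Minimal sketch of vision-based crop-row detection: Otsu thresholding for
# image segmentation, then a Hough transform for crop row detection.
# The excess-green index and all parameters below are assumptions chosen
# for illustration; the actual pipeline of the paper may differ.
import cv2
import numpy as np

def detect_crop_rows(bgr_image):
    # Emphasise vegetation with the excess-green index 2G - R - B.
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = cv2.normalize(2 * g - r - b, None, 0, 255, cv2.NORM_MINMAX)
    exg = exg.astype(np.uint8)

    # Otsu's method picks a global threshold separating plants from soil.
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Edge map, then the Hough transform finds the dominant straight lines,
    # which in a field image correspond to the crop rows.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=150)
    if lines is None:
        return []
    return [(float(rho), float(theta)) for rho, theta in lines[:, 0]]
```

In a complete weeder, the detected (rho, theta) lines would then be clustered and tracked across frames to produce a stable steering signal for the row-following controller.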

Learning dextrous grasps that generalise to novel objects by combining hand and contact models
Kopicki, Marek; Detry, Renaud; Schmidt, Florian et al.

in IEEE International Conference on Robotics and Automation (2014)

Representations for Cross-task, Cross-object Grasp Transfer
Hjelm, Martin; Detry, Renaud; Ek, Carl Henrik et al.

in IEEE International Conference on Robotics and Automation (2014)

Inertially-safe Grasping of Novel Objects
Rietzler, Alexander; Detry, Renaud; Piater, Justus

Conference (2013)

Generalizing Task Parameters Through Modularization
Detry, Renaud; Hjelm, Martin; Ek, Carl Henrik et al.

Conference (2013)

Unsupervised Learning of Predictive Parts for Cross-object Grasp Transfer
Detry, Renaud; Piater, Justus

in IEEE/RSJ International Conference on Intelligent Robots and Systems (2013)

Sparse Summarization of Robotic Grasp Data
Hjelm, Martin; Ek, Carl Henrik; Detry, Renaud et al.

in IEEE International Conference on Robotics and Automation (2013)

Learning a Dictionary of Prototypical Grasp-predicting Parts from Grasping Experience
Detry, Renaud; Ek, Carl Henrik; Madry, Marianna et al.

in IEEE International Conference on Robotics and Automation (2013)

Generalizing Grasps Across Partly Similar Objects
Detry, Renaud; Ek, Carl Henrik; Madry, Marianna et al.

in IEEE International Conference on Robotics and Automation (2012)

Compressing Grasping Experience into a Dictionary of Prototypical Grasp-predicting Parts
Detry, Renaud; Ek, Carl Henrik; Madry, Marianna et al.

Conference (2012)

Improving Generalization for 3D Object Categorization with Global Structure Histograms
Madry, Marianna; Ek, Carl Henrik; Detry, Renaud et al.

in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012)

Grasp Stability from Vision and Touch
Bekiroglu, Yasemin; Detry, Renaud; Kragic, Danica

Conference (2012)

Learning Grasp Affordance Densities
Detry, Renaud; Kraft, D.; Kroemer, O. et al.

in Paladyn, Journal of Behavioral Robotics (2011), 2(1), 1-17

Grasp Generalization via Predictive Parts
Detry, Renaud; Piater, Justus

Conference (2011)

Learning Visual Representations for Perception-Action Systems
Piater, Justus; Jodogne, Sébastien; Detry, Renaud et al.

in International Journal of Robotics Research (2011), 30(3), 294-307

What a successful grasp tells about the success chances of grasps in its vicinity
Bodenhagen, Leon; Detry, Renaud; Piater, Justus et al.

in ICDL-EpiRob (2011)

Joint Observation of Object Pose and Tactile Imprints for Online Grasp Stability Assessment
Bekiroglu, Yasemin; Detry, Renaud; Kragic, Danica

Conference (2011)

Learning Tactile Characterizations of Object- and Pose-specific Grasps
Bekiroglu, Yasemin; Detry, Renaud; Kragic, Danica

in IEEE/RSJ International Conference on Intelligent Robots and Systems (2011)

Learning of Multi-Dimensional, Multi-Modal Features for Robotic Grasping
Detry, Renaud

Doctoral thesis (2010)

While robots are extensively used in factories, our industry has not yet been able to prepare them for working in human environments, for instance in houses or in human-operated factories. The main obstacle to these applications lies in the degree of uncertainty inherent in the environments humans are used to working in, and in the difficulty of programming robots to cope with it. In robot-oriented environments, robots can expect to find specific tools and objects in specific places; in a human environment, obstacles may force one to find a new way of holding a tool, and new objects appear continuously and need to be dealt with. As it proves difficult to build into robots the knowledge necessary for coping with uncertain environments, the robotics community is turning to the development of agents that acquire this knowledge progressively and that adapt to unexpected events.

This thesis studies the problem of vision-based robotic grasping in uncertain environments. We aim to create an autonomous agent that develops grasping skills from experience, by interacting with objects and with other agents. To this end, we present a 3D object model for autonomous, visuomotor interaction. The model represents grasping strategies along with visual features that predict their applicability, giving a robot the ability to compute grasp parameters from visual observations. The agent acquires models interactively by manipulating objects, possibly imitating a teacher; with time, it becomes increasingly efficient at inferring grasps from visual evidence. This behavior relies on (1) a grasp model representing relative object-gripper configurations and their feasibility, and (2) a model of visual object structure, which aligns the grasp model to arbitrary object poses (3D positions and orientations).

The visual model represents object edges or object faces in 3D by probabilistically encoding the spatial distribution of small segments of object edges, or the distribution of small patches of object surface. A model is learned from a few segmented 3D scans or stereo images of an object, and Monte Carlo simulation provides robust estimates of the object's 3D position and orientation in cluttered scenes. The grasp model represents the likelihood of success of relative object-gripper configurations. Initial models are acquired from visual cues or by observing a teacher; models are then refined autonomously by "playing" with objects and observing the effects of exploratory grasps. After the robot has learned a few object models, learning becomes a combination of cross-object generalization and interactive experience: grasping strategies are generalized across objects that share similar visual substructures, then adapted to new objects through autonomous exploration. The applicability of our model is supported by numerous examples of pose estimates in cluttered scenes, and by a robot platform that shows increasing grasping capabilities as it explores its environment.
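
As a concrete illustration of the grasp model described above, the following is a toy sketch of a grasp density: the object-relative poses of previously successful grasps are kernel-smoothed into a continuous distribution from which new grasp candidates can be sampled. For brevity it keeps only the 3D gripper position with an isotropic Gaussian kernel; the thesis works with full 6-DOF poses (position and orientation), which this sketch does not attempt. The class name and bandwidth value are hypothetical.

```python
# Toy sketch of a grasp density: a kernel density estimate over the 3D
# positions of previously successful grasps, expressed in the object frame.
# The position-only simplification and the bandwidth are assumptions for
# illustration; the thesis models full 6-DOF gripper poses.
import numpy as np

class GraspDensity:
    def __init__(self, successful_positions, bandwidth=0.02):
        self.points = np.asarray(successful_positions, dtype=float)  # (N, 3), metres
        self.h = bandwidth

    def pdf(self, x):
        # Mixture of isotropic Gaussians centred on the observed grasps.
        d2 = np.sum((self.points - np.asarray(x)) ** 2, axis=1)
        kernels = np.exp(-0.5 * d2 / self.h ** 2)
        norm = (2.0 * np.pi) ** 1.5 * self.h ** 3
        return kernels.mean() / norm

    def sample(self, rng=None):
        # Draw a kernel centre at random, then perturb it with the kernel:
        # an exact sample from the mixture, usable as a new grasp candidate.
        rng = np.random.default_rng() if rng is None else rng
        centre = self.points[rng.integers(len(self.points))]
        return centre + rng.normal(scale=self.h, size=3)
```

Autonomous refinement then amounts to appending the positions of new successful exploratory grasps and re-smoothing, mirroring the learning loop described in the abstract.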
