Deep Reinforcement Learning for Mobile Manipulation Planning Control

Director of Studies: Dr Mario Gianni

Second Supervisor: Dr Martin Stoelen

Project description

Current research in autonomous mobile manipulation addresses the planning and control of the robot arm and of the mobile platform in a decoupled manner. Unlike mobile vehicles and fixed-base industrial manipulators, mobile manipulator robots have the key advantage of offering an effectively unlimited operational workspace to the on-board arm, which allows greater flexibility in manipulation tasks. Unfortunately, current approaches, being based on decoupled planning and control, significantly reduce this additional flexibility, thereby degrading the performance of the platform.

Joint planning and control of a robotic arm mounted on a mobile platform has several advantages. It can prevent near-singular configurations of the arm during task execution. It improves the precision of tool manipulation and of the handling of potentially dangerous materials. Compliant motion control increases perception and recognition accuracy in active vision. It can compensate for sensor failures in localization, mapping and soil interaction, and, finally, it can also increase the stability of the platform during navigation.

The proposed research aims to advance current science and technology in this field by investigating solutions for compliant mobile manipulation planning and control that involve learning. Based on kinematic and dynamic modelling of the platform, the project first investigates dimensionality reduction techniques to detect task-relevant variables and constraints. It builds on the strong theory behind learning from demonstration, learning from failures, policy search methods and reinforcement learning techniques to develop a baseline approach for planning and control. It further extends these approaches with the design and development of a novel neural network architecture in the framework of deep reinforcement learning, to achieve adaptation in planning and execution. This project also aims to apply such advanced methods in practice, in real scenarios and in collaboration with several stakeholders such as firefighters, industries and nuclear robotics research centres.
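To give a flavour of the kind of reinforcement-learning baseline the project would start from, the sketch below applies tabular Q-learning to a toy, hypothetical version of the joint planning problem: the base position and one arm joint are each coarsely discretised into 5 cells, and the agent learns to drive the joint (base, arm) configuration from (0, 0) to a goal configuration, moving one cell per step. All task details (grid size, goal, rewards, hyperparameters) are illustrative assumptions, not the project's actual platform or method; the deep RL extension would replace the Q table with a neural network.

```python
import numpy as np

# Toy illustration (all details are assumptions): tabular Q-learning on a
# coarse discretisation of a joint base/arm configuration space.
rng = np.random.default_rng(0)

N = 5                                         # cells per dimension
GOAL = (4, 4)                                 # target (base, arm) configuration
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # base -/+ one cell, arm -/+ one cell

def step(state, action):
    """Deterministic transition: apply the move, clipped to the grid."""
    b = min(max(state[0] + MOVES[action][0], 0), N - 1)
    a = min(max(state[1] + MOVES[action][1], 0), N - 1)
    nxt = (b, a)
    # -1 per step encourages short joint base+arm trajectories; 0 at the goal.
    return nxt, (0.0 if nxt == GOAL else -1.0), nxt == GOAL

Q = np.zeros((N, N, len(MOVES)))              # action-value table Q[base, arm, action]
alpha, gamma, eps = 0.5, 0.95, 0.2            # learning rate, discount, exploration

for _ in range(2000):                         # epsilon-greedy Q-learning episodes
    s = (0, 0)
    for _ in range(100):
        a = int(rng.integers(len(MOVES))) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        target = r if done else r + gamma * np.max(Q[nxt])
        Q[s][a] += alpha * (target - Q[s][a])
        s = nxt
        if done:
            break

# Greedy rollout with the learned values: the shortest joint path needs 8 moves.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
```

Because the base and the arm are treated as one state, the learned policy can trade base motion against arm motion, which is exactly the coupling that decoupled planners give up; the deep RL work in the project targets the same coupling in continuous, high-dimensional configuration spaces.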

Eligibility

Applicants should have a minimum of a first-class or upper-second-class bachelor's degree. Applications from candidates with a relevant master's qualification are welcomed.

Funding

The studentship is supported for three years and includes full home/EU tuition fees plus a stipend of £14,553 per annum. The studentship will only fund those applicants who are eligible for home/EU fees with relevant qualifications. Applicants required to cover overseas fees will have to cover the difference between home/EU and overseas tuition fee rates (approximately £10,350 per annum). General information about applying for a research degree at the University is available at: https://www.plymouth.ac.uk/student-life/your-studies/research-degrees/applicants-and-enquirers.

You can apply via the online application form: go to https://www.plymouth.ac.uk/study/postgraduate and select ‘Apply’.

Please mark it FAO Mrs Carole Watson and clearly state that you are applying for a PhD studentship within the School of Computing, Electronics and Mathematics.

For more information on the admissions process contact Carole Watson.

Closing date for applications: 12 noon, 6 April 2018.

Shortlisted candidates will be invited for interview in April. We regret that we may not be able to respond to all applications. Applicants who have not received an offer of a place by 4 May 2018 should assume that their application has been unsuccessful on this occasion.