To meet the demands for reliable, safe, long-term operation, robots’ capacity to perceive their environments will need to evolve dramatically. Current approaches to sensing and computer vision fail in challenging weather, murky water, or low light. Recent developments in optics, machine learning, and computational imaging point the way forward, offering better performance and new senses like single-photon detection, imaging around corners, and the ability to follow a moving pulse of light through a scene. However, most of these developments are arising outside robotics, leaving unaddressed the unique characteristics of robotic perception:
- robots are embedded in their environments,
- experience a continuum of states, and
- can generally move and interact with objects to perceive better.
These unique characteristics open up opportunities for sensors to query the environment dynamically and to use manipulation as part of the perception process. Timing is also critical: conventional vision algorithms are chiefly concerned with throughput, or with runtime not at all, whereas in robotic applications latency is a key factor because safe operation requires timely decisions. The issue is exacerbated by limited platform power and mass, which restrict the computation available on board.
Project 1: Low-level sensor reconfiguration as part of the planning process
Expert-level control of sensors for next-generation robotic perception
Robotic planning and control of low-level sensing is presently very limited and does not approach the level of sophistication seen in human camera operators. Jointly planning a trajectory and camera settings such as exposure time, aperture and focus promises far better sensing. Considering more sophisticated reconfigurable sensors like active plenoptic imaging devices and foveated LiDAR increases both the challenge and the opportunity.
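To make the trajectory/exposure coupling concrete, the illustrative Python sketch below (all parameter values are assumptions, not from any specific platform) picks the longest exposure that keeps motion blur within a one-pixel budget for a candidate platform speed, then reports the resulting shot-noise-limited SNR; faster motion forces shorter exposures and hence noisier images, which is exactly the kind of trade-off a joint planner can reason about.

```python
import math

def max_exposure_for_blur(speed_mps, depth_m, focal_px, blur_budget_px=1.0):
    """Longest exposure (s) that keeps motion blur under blur_budget_px.

    For a camera translating at speed_mps past a scene at range depth_m,
    a point near the optical axis moves across the image at roughly
    focal_px * speed_mps / depth_m pixels per second.
    """
    pixels_per_second = focal_px * speed_mps / depth_m
    return blur_budget_px / pixels_per_second

def shot_noise_snr(exposure_s, photon_rate_per_s=5e4):
    """Shot-noise-limited SNR: square root of the expected photon count."""
    return math.sqrt(photon_rate_per_s * exposure_s)

# Illustrative numbers: 600 px focal length, scene at 2 m range.
for speed_mps in (0.2, 1.0, 5.0):
    t_exp = max_exposure_for_blur(speed_mps, depth_m=2.0, focal_px=600.0)
    print(f"{speed_mps:3.1f} m/s -> exposure {1e3 * t_exp:6.2f} ms, "
          f"SNR ~ {shot_noise_snr(t_exp):5.1f}")
```

The same trade-off couples to aperture and focus: opening the aperture admits more light but narrows depth of field, which is why these settings belong inside the planner rather than in a fixed auto-exposure loop.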
Intentional control of reconfigurable sensing can increase signal quality and reduce distractors by measuring more of what is relevant in context. The sensing aspects of this project will focus on developing models and interfaces that will allow better modular design and two-way information flow between planning, control and sensing.
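One way to make that two-way information flow concrete is a narrow contract in which the planner can ask a sensor to predict the utility of a candidate configuration before committing to it, and the sensor can report what it actually did. The Python sketch below is purely hypothetical (the class and method names are illustrative, not an existing API), intended only to show the shape of such an interface.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class SensorConfig:
    """A candidate low-level configuration for a reconfigurable camera."""
    exposure_s: float
    f_number: float
    focus_distance_m: float

class ReconfigurableSensor(Protocol):
    """Hypothetical contract between planner/controller and sensor."""

    def predict_utility(self, config: SensorConfig,
                        platform_speed_mps: float,
                        target_range_m: float) -> float:
        """Planner -> sensor: expected information value (e.g. blur, SNR,
        depth-of-field coverage) of this configuration along a candidate
        trajectory segment."""
        ...

    def apply(self, config: SensorConfig) -> None:
        """Commit the chosen configuration for the next capture."""
        ...

    def report_state(self) -> SensorConfig:
        """Sensor -> planner: the configuration actually in effect, so the
        planner can react to saturation, focus hunting and similar events."""
        ...
```

With an interface like this, a planner can score (pose, configuration) pairs jointly instead of treating the camera as a fixed black box, and the same contract extends naturally to devices such as foveated LiDAR.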
Project 2: Manipulating to see better: sensing for underwater inspection with manipulation / defouling
How can we interact with objects to gather more useful information?
This need arises in the underwater domain in the form of surface fouling, and in the air in the form of corrosion or vegetation obscuring surfaces that require inspection. There is a strong coupling between what the manipulator does and what the robot sees. In a sense, the camera and the manipulator are part of one interface between the robot and the world. In this project we will explore:
- Designing payloads so that sensing and manipulation can work together to better achieve high-level tasks like mapping and change detection.
- Designing manipulation and sensing behaviours that allow these modes to work together (a minimal perceive-defoul-reperceive loop is sketched after this list).
- Interfacing with the planning-focused aspects of this work; see the projects in Planning and Control.
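As a minimal illustration of that coupling, the hypothetical sketch below alternates imaging and defouling until a patch images clearly; the camera and arm objects and the clarity metric are placeholders for the payload-specific behaviours this project would actually design.

```python
import numpy as np

def estimate_clarity(image):
    """Crude clarity proxy: normalised image contrast (placeholder metric)."""
    return float(np.std(image) / (np.mean(image) + 1e-6))

def inspect_surface(patches, camera, arm, clarity_threshold=0.6, max_passes=3):
    """Alternate sensing and manipulation until each patch images clearly.

    `camera.capture(patch)` and `arm.defoul(patch)` stand in for the
    behaviours and payload designs to be developed in this project.
    """
    surface_map = {}
    for patch in patches:
        image, clarity = None, 0.0
        for _ in range(max_passes):
            image = camera.capture(patch)
            clarity = estimate_clarity(image)
            if clarity >= clarity_threshold:
                break                 # good enough: move on
            arm.defoul(patch)         # manipulate to see better, then re-image
        surface_map[patch] = (image, clarity)
    return surface_map
```

Even this toy loop makes the design questions explicit: when is it worth the time and risk of manipulating, and how should the clarity measure feed back into mapping and change detection?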
Project 3: Computational imaging for up-close imaging
Designing new visual sensors and imaging pipelines for up-close operations
Moving from mapping to intervention means getting up close to infrastructure, which raises challenges because common visual sensing systems have chiefly been designed for use at larger stand-off distances. This project will employ the tools of computational imaging to design visual sensing systems for effective wide-field-of-view, up-close imaging in support of manipulation and intervention tasks.
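One concrete reason up-close imaging is hard is that depth of field collapses as the working distance shrinks. The sketch below evaluates the standard thin-lens approximation DOF ~ 2*N*c*u^2/f^2 (valid well inside the hyperfocal distance) for assumed, illustrative lens parameters; at arm's length the in-focus band is only centimetres deep.

```python
def depth_of_field_m(subject_distance_m, f_number, focal_length_mm,
                     circle_of_confusion_um=3.0):
    """Approximate depth of field in metres: DOF ~ 2*N*c*u^2 / f^2.

    Valid when the subject is much closer than the hyperfocal distance.
    The parameter values used below are illustrative, not from a real camera.
    """
    f = focal_length_mm * 1e-3
    c = circle_of_confusion_um * 1e-6
    return 2.0 * f_number * c * subject_distance_m ** 2 / f ** 2

# An 8 mm lens at f/2.8, imaging at close range.
for u in (0.15, 0.3, 0.6, 1.0):  # metres
    print(f"{u:4.2f} m -> DOF ~ {100 * depth_of_field_m(u, 2.8, 8.0):5.1f} cm")
```

Wide fields of view at these distances also bring strong distortion, vignetting and uneven illumination, which is where computational correction and purpose-designed optics become attractive.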
Project 4: Computational imaging for seeing through turbidity & particulate
Designing new visual sensors and imaging pipelines for working in murky water and particulate
Seeing well underwater is a key challenge for several of our partners. Backscatter and attenuation limit contrast, while suspended particulate distracts from important visual content. While this project is intended to be predominantly concerned with seeing underwater, it could be expanded to include similar in-air issues such as fog, dust, rain or snow.
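A common starting point is the simplified underwater image-formation model, in which the direct signal decays exponentially with range while backscattered veiling light grows to replace it. The sketch below simulates that model and its naive inversion when the water properties and range are known; every constant is illustrative, and in practice these quantities are unknown and wavelength dependent, which is what makes the problem hard.

```python
import numpy as np

def degrade(clear, range_m, beta=0.4, veiling_light=0.7):
    """Simplified underwater image formation:
    observed = clear * exp(-beta * d) + veiling_light * (1 - exp(-beta * d)).
    beta (attenuation, 1/m) and the veiling light level are illustrative only."""
    t = np.exp(-beta * range_m)                  # transmission along each ray
    return clear * t + veiling_light * (1.0 - t)

def naive_restore(observed, range_m, beta=0.4, veiling_light=0.7):
    """Invert the model when range and water properties are known exactly."""
    t = np.exp(-beta * range_m)
    return (observed - veiling_light * (1.0 - t)) / np.maximum(t, 1e-3)

clear = np.random.rand(4, 4)                     # stand-in for a clear scene
range_m = np.full_like(clear, 5.0)               # 5 m of water on every ray
murky = degrade(clear, range_m)
print(np.allclose(naive_restore(murky, range_m), clear))  # True
```

Suspended particulate adds a further, spatially sparse clutter term on top of this model, which is one reason purpose-designed sensing (for example, structured or gated illumination) is worth exploring alongside post-hoc restoration.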
Project 5: Vision-based control and active perception for multiple sensors & manipulators
Controlling more than one manipulator at a time poses new challenges that are best overcome by tightly integrating vision and control. This project will address key challenges in underwater intervention by developing visual approaches to dextrous manipulation for ROV and AUV systems, with a focus on scene reconstruction, semantic understanding and vision-enabled intervention. Such visual methods are not only necessary for fully autonomous manipulation in subsea environments; they can also support more effective piloted operations and enable safer, more reliable and faster task execution.
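One standard building block for this kind of work (not specific to this project) is image-based visual servoing, in which errors measured directly in the image are mapped to a camera or end-effector velocity through the interaction matrix. The sketch below implements the classic law v = -lambda * pinv(L) * e for point features.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a point feature at normalised image
    coordinates (x, y) and depth Z (the classic IBVS formulation)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity twist v = -gain * pinv(L) @ e for stacked point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Illustrative use: drive four observed points towards their desired locations.
current = [(0.12, 0.05), (-0.10, 0.06), (0.11, -0.08), (-0.09, -0.07)]
target = [(0.10, 0.10), (-0.10, 0.10), (0.10, -0.10), (-0.10, -0.10)]
print(ibvs_velocity(current, target, depths=[2.0] * 4))  # 6-DoF velocity twist
```

Extending this to multiple arms and a floating base means stacking and prioritising such tasks while the vehicle, the water and the scene all move, which is where tighter coupling of perception and control becomes essential.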
Project 6: Sensor auto-calibration and integration using implicit representations and unsupervised learning
Automatically learning to use sensors for modularity and maintainability
Developing and maintaining modular and flexible platforms is at the heart of many of our partners’ businesses. This work will develop the tools needed for sensors to be swapped in and out of a system, with the robotic platform exercising the new sensor and autonomously learning to interpret and calibrate it, with no supervision or manual intervention.
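As a very rough sketch of the machinery involved (PyTorch is used for illustration; nothing here is a committed design), the snippet below jointly optimises an implicit scene model and the unknown extrinsics of a newly attached range sensor by requiring that its returns land on the implicit surface. In a real system the scene model would also be constrained by the platform's existing, already-calibrated sensors so that the new sensor's pose is identifiable; here random points stand in for real data.

```python
import torch

def rotation_from_axis_angle(w):
    """Rodrigues' formula, differentiable with respect to the axis-angle vector w."""
    theta = torch.sqrt((w * w).sum() + 1e-12)
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K \
        + (1.0 - torch.cos(theta)) * (K @ K)

# Implicit scene representation: a small MLP mapping 3D points to signed distance.
scene = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

# Unknown extrinsics of the newly swapped-in sensor (axis-angle + translation).
axis_angle = torch.zeros(3, requires_grad=True)
translation = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam(list(scene.parameters()) + [axis_angle, translation], lr=1e-3)

def calibration_step(sensor_points):
    """One unsupervised step: points measured by the new sensor, mapped through
    the current extrinsic guess, should lie on the implicit surface (SDF ~ 0).
    In practice the scene would also be supervised by already-calibrated sensors."""
    opt.zero_grad()
    R = rotation_from_axis_angle(axis_angle)
    world_points = sensor_points @ R.T + translation
    loss = scene(world_points).abs().mean()
    loss.backward()
    opt.step()
    return loss.item()

for _ in range(10):                      # random stand-in data, for illustration
    calibration_step(torch.randn(256, 3))
```

The appeal of this kind of formulation for modularity is that a shared scene representation, rather than a sensor-specific pipeline, absorbs each new device; the open questions are around identifiability, robustness and how little supervision such a platform can really get away with.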