The Canadian Space Agency (CSA) is seeking solutions that will improve the autonomy of future space robotic systems through the use of a vision system based on Artificial Intelligence (AI) techniques to detect potentially hazardous obstacles in real time and verify clearance margins in dynamic and uncertain environments. Solution submissions close on October 20, 2022 at 14:00. Please refer to the tender notice for this challenge on Buy and Sell.
Collision-free autonomous path execution in a dynamic and uncertain environment is an important issue that CSA seeks to resolve in order to increase robotic system autonomy for future space missions. Robotic systems operate on space infrastructure or in other environments where collisions can be catastrophic. Future space systems will need greater autonomy in order to be less reliant on communications with the ground and to alleviate crew workload; consequently, they will need to rely on multiple layers of intelligent software for supervision and safety assurance.
Cameras, both built into the robot and mounted to the infrastructure, provide a relatively low-mass, low-power, and low-volume solution for sensing the robotic workspace. Canadarm2, for example, is equipped with on-board cameras in its end-effectors and along its booms to allow human operators to guide and monitor its operations. Future missions such as the cislunar Gateway call for autonomous robotic operations during periods when video cannot be downlinked to the ground for oversight.
This challenge seeks to explore the feasibility of an AI-based vision system that assists human ground controllers in monitoring clearances, or removes the need for such monitoring altogether. For CSA, an efficient robot supervisory system translates into improved safety and operational efficiency for the mission. CSA believes that a computer vision system based on machine learning will be able to detect obstacles and monitor proximity to structures and obstacles while handling payloads. CSA can provide Mobile Servicing System and International Space Station video and telemetry data during Phases 1 and 2.
Desired outcomes and considerations
Essential (mandatory) outcomes
The solution must:
- Operate in real-time, in a space environment, under all solar lighting conditions.
- Use 2D camera data (still images and video streams) from existing cameras built into a space manipulator or mounted to its infrastructure and not assume any additional sensor functionality.
- Have sufficient resolution and accuracy to detect the presence of slender objects (~2 cm wide) such as cable harnesses and EVA tethers.
- Have a false positive rate of 2% or lower and a false negative rate of 1% or lower.
- Assume that the cameras may be mounted on the manipulator’s booms or end-effector (i.e. they may not be stationary during operation).
- Work with a data bandwidth (video stream from the camera to processor) limited to 512 kbps and an update rate of at least 2 Hz.
- Provide a 3D virtual world display of the manipulator and any obstacles detected.
- Provide obstacle detection along the entire length of a space robotic arm and around any attached or manipulated payloads.
- Be able to operate on a typical space platform, such as: a 2-4 core, ARM-based FPGA system-on-chip running VxWorks or Linux; a few GB of RAM; a solid-state drive; a power budget of approximately 10 W; and Ethernet connectivity capable of transfer rates of at least 100 Mbps.
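The numeric limits in the mandatory outcomes (512 kbps camera-to-processor link, at least 2 Hz update rate, 2% false positive and 1% false negative rates) imply simple budget and metric checks. The sketch below is purely illustrative, assuming the standard definitions of false positive/negative rates; it is not part of the challenge statement or any CSA interface:

```python
# Illustrative checks restating the mandatory numeric requirements.
# The constants below come from the challenge text; nothing here is a CSA API.

LINK_BPS = 512_000   # camera-to-processor bandwidth limit (512 kbps)
UPDATE_HZ = 2        # minimum detection update rate (Hz)

def max_frame_bytes(link_bps: int = LINK_BPS, rate_hz: float = UPDATE_HZ) -> float:
    """Largest compressed frame that fits the link at the required rate."""
    return link_bps / rate_hz / 8  # bits per frame -> bytes

def false_positive_rate(fp: int, tn: int) -> float:
    """FP / (FP + TN); the requirement is <= 0.02."""
    return fp / (fp + tn)

def false_negative_rate(fn: int, tp: int) -> float:
    """FN / (FN + TP); the requirement is <= 0.01."""
    return fn / (fn + tp)

print(max_frame_bytes())  # 32000.0 -> roughly 32 kB of compressed data per frame
```

The per-frame figure shows how tight the link budget is: at 2 Hz, each frame must compress to about 32 kB, which constrains image resolution and codec choice for any candidate vision system.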
The solution should:
- Have robust performance with respect to distorted and blurred images streamed from the onboard cameras due to the oscillations caused by movements of the robotic arm.
- Incur a minimal increase in the power, mass, volume, and computation (CPU and memory) budgets allocated to a space robot’s avionics concept.
- Investigate the feasibility of adding path-to-flight computing hardware to the on-board avionics if the computation power of the space robot’s avionics is found to be insufficient.
- Be able to detect features inside a given clearance volume or keep-out zone, for example a cylinder 2.4 m deep and 2 m in diameter.
- Investigate the use of multiple cameras to cover the entire volume of the keep-out zone.
- Use a vision algorithm requiring a minimum number of independent camera views (determining this minimum is expected to be part of the study).
- Be applicable to a 7 degree of freedom, offset joint robot similar to the Special Purpose Dexterous Manipulator on the International Space Station.
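The example keep-out zone above (a cylinder with a depth of 2.4 m and a diameter of 2 m) admits a simple membership test. The sketch below assumes, for illustration only, a cylinder whose axis runs along +z from the origin; in practice the zone geometry and reference frames would come from the mission design:

```python
import math

# Hypothetical keep-out-zone test for the example cylinder in the challenge
# text: depth 2.4 m along the axis, diameter 2 m (radius 1 m). The choice of
# axis and frame here is an assumption for illustration.

DEPTH = 2.4    # m, extent along the cylinder axis (+z from the origin)
RADIUS = 1.0   # m, half of the 2 m diameter

def inside_keep_out(x: float, y: float, z: float,
                    depth: float = DEPTH, radius: float = RADIUS) -> bool:
    """True if point (x, y, z), in metres, lies inside the cylinder whose
    axis runs from z = 0 to z = depth and whose radius is `radius`."""
    return 0.0 <= z <= depth and math.hypot(x, y) <= radius

print(inside_keep_out(0.5, 0.5, 1.0))  # True: 0.71 m from the axis, within depth
print(inside_keep_out(0.9, 0.9, 1.0))  # False: 1.27 m from the axis exceeds 1 m
```

A detected obstacle point falling inside this volume would flag a clearance violation; covering the full volume with camera views is exactly the multi-camera question the desired outcomes above ask bidders to investigate.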