How a computer vision based ATTOL system helps aircraft in landing & takeoff

ATTOL (Autonomous Taxi, Take-Off and Landing)

The goal is to use vision-based onboard camera sensors to detect marked runway features and provide guidance to the aircraft for aligning with the runway centreline and holding it.

Existing Methods:

Pilots rely mostly on their eyes, their skills, or ATC (Air Traffic Control) for landing.

ILS (Instrument Landing System) installed at some airports.

Autoland is mostly used as an assist; full autonomy is not available yet.

ILS can be used for autonomous landing, but this is very rare and is effectively the airport flying the plane by wire.

A team of German researchers from TUM (Technische Universität München) demonstrated a 100% autonomous landing in 2019 in a single-passenger aircraft carrying only the pilot. They used visible-light and infrared cameras on the nose of the aircraft to detect the airstrip, together with a computer vision processor trained to recognise and characterise a runway. The cameras recognised the runway from a great distance and the system guided the aircraft through the landing approach. This system is still in its initial testing phases.

ILS (Instrument Landing System):

·   The ILS system uses multiple radio antenna arrays installed near the runway to provide position guidance.

·   The localizer signal allows the aircraft to determine its lateral deviation from the centreline of the airstrip.

·   The glideslope signal helps the aircraft determine its vertical deviation from the glideslope. (The glideslope is the proper path of descent for an aircraft preparing to land.)
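As a rough illustration of what these two signals encode, the minimal Python sketch below computes a lateral (localizer-style) and a vertical (glideslope-style) deviation angle from the aircraft's position relative to the runway threshold. The function name, the 3° nominal glideslope and the sample numbers are illustrative assumptions, not part of any real ILS implementation.

```python
import math

def deviations(along_track_m, cross_track_m, height_m, nominal_glideslope_deg=3.0):
    """Illustrative localizer/glideslope-style deviation angles, in degrees."""
    # Lateral deviation: angle between the line to the threshold and the extended centreline.
    localizer_dev = math.degrees(math.atan2(cross_track_m, along_track_m))
    # Vertical deviation: actual descent-path angle minus the nominal glideslope (~3 degrees).
    actual_path_deg = math.degrees(math.atan2(height_m, along_track_m))
    glideslope_dev = actual_path_deg - nominal_glideslope_deg
    return localizer_dev, glideslope_dev

# Aircraft 5 km out, 40 m right of the centreline, 260 m above the threshold.
print(deviations(along_track_m=5000.0, cross_track_m=40.0, height_m=260.0))
```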


Img: ILS and its different components

Pros:

1. Can help in low visibility landings

Cons:

1. Very expensive

2. Installed at very few airports, and even among those, most do not have the newer versions.

Why do we need a new method?

·   Existing GNSS (Global Navigation Satellite System) sensors are usually less accurate at lower altitudes due to ground reflections.

·   ILS Systems are very expensive and installed at very few airports

Proposed solution (ATTOL):

·   An aircraft will be equipped with a suite of sensors, actuators and computers so that computer vision and machine learning can be used for autonomy.

·   ATTOL will help teach an aircraft to see and navigate the runway just like a pilot.


Img: Airbus’ world’s first autonomous takeoff

·   Using CNNs (Convolutional Neural Networks):

Problem: Training requires a lot of data covering all cases — many landings over a variety of runways, weather and lighting conditions. Manual annotation is also not possible, since humans cannot annotate distance, localizer or glideslope values from an image.

Solution: Geometric Projection method to automate annotation using aircraft telemetry data.

How:

i.          Commercial aircraft continuously determine their own position and attitude, from which the camera's position and orientation in world coordinates are computed. The 3D world coordinates of runways and markings, extracted from satellite imagery, are then geometrically projected into the 2D pixel coordinates where they should appear in the image (a minimal sketch of this projection follows this list). Runway distance, localizer and glideslope values can then be computed from the position of the aircraft.

ii.          Photorealistic synthetic data is generated using flight simulators. To mimic real cameras, the simulators are configured to match sensor characteristics such as dynamic range and saturation, and exposure control is tuned via post-processing shaders. In this way, all scenarios and approaches can be covered in simulation, even those that would be life-threatening in the real world.
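Below is a minimal sketch of the geometric-projection idea from step i, assuming a simple pinhole camera model. The intrinsics, camera pose and runway-corner coordinates are made-up illustrative values, not the actual annotation pipeline.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length and principal point, in pixels).
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])

def project_points(points_world, R_world_to_cam, cam_pos_world):
    """Project 3D world points (N x 3) into 2D pixel coordinates (N x 2).

    The camera pose (R_world_to_cam, cam_pos_world) is derived from the aircraft's
    telemetry (position and attitude), so the pixel locations of runway features
    can be generated automatically as training labels.
    """
    points_cam = (points_world - cam_pos_world) @ R_world_to_cam.T
    uvw = points_cam @ K.T                 # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]        # divide by depth

# Runway corners in a local world frame (metres): x along the runway, y right, z up.
# In practice these would come from surveyed data or satellite imagery.
runway_corners = np.array([[   0.0, -22.5, 0.0],
                           [   0.0,  22.5, 0.0],
                           [3000.0, -22.5, 0.0],
                           [3000.0,  22.5, 0.0]])

# Camera 1 km before the threshold, 100 m up, boresight aligned with the runway:
# camera x = world y (right), camera y = -world z (down), camera z = world x (forward).
R_world_to_cam = np.array([[0.0, 1.0,  0.0],
                           [0.0, 0.0, -1.0],
                           [1.0, 0.0,  0.0]])
cam_pos_world = np.array([-1000.0, 0.0, 100.0])

print(project_points(runway_corners, R_world_to_cam, cam_pos_world))
```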

Problem: We need detection (classification) to find landing strips and regression to estimate distance, localizer and glideslope values.

Solution: Use multi-task learning to implement both in a single network. When different tasks rely on a similar underlying understanding of the data, sharing feature-extraction layers can help the network generalize better, since features extracted in the shared layers have to support multiple output tasks.

How: Create a heavily modified version of SSD (Single Shot MultiBox Detector) in which a number of shared layers extract features about the runway environment, followed by multiple task-specific heads that output bounding boxes, bounding-box classes, distance to the runway, localizer deviation and glideslope deviation.
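The PyTorch-style sketch below shows one plausible way such a multi-task network could be laid out: a shared backbone feeding SSD-style detection heads plus a regression head. Layer sizes, head names and anchor/class counts are invented for illustration; this is not the actual Airbus/Wayfinder architecture.

```python
import torch
import torch.nn as nn

class MultiTaskRunwayNet(nn.Module):
    """Shared feature extractor + task-specific heads (illustrative sizes)."""

    def __init__(self, num_anchors=6, num_classes=3):
        super().__init__()
        # Shared layers: features useful for both detection and regression.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # SSD-style detection heads: per-anchor box offsets and class scores.
        self.box_head = nn.Conv2d(128, num_anchors * 4, 3, padding=1)
        self.cls_head = nn.Conv2d(128, num_anchors * num_classes, 3, padding=1)
        # Regression head: distance to runway, localizer deviation, glideslope deviation.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regression_head = nn.Linear(128, 3)

    def forward(self, images):
        feats = self.backbone(images)
        boxes = self.box_head(feats)
        classes = self.cls_head(feats)
        pooled = self.pool(feats).flatten(1)
        dist, loc_dev, gs_dev = self.regression_head(pooled).unbind(dim=1)
        return boxes, classes, dist, loc_dev, gs_dev

# Example: one 3-channel 480x270 image through the network.
model = MultiTaskRunwayNet()
outputs = model(torch.randn(1, 3, 270, 480))
```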

Problem: The current system can only process one frame at a time

Solution: Recurrent neural-network techniques may allow the system to leverage temporal information across a sequence of frames rather than single frames.
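As a hedged illustration of that temporal idea, the sketch below runs per-frame feature vectors through a GRU so the regression outputs are informed by previous frames. This is an assumption about one possible approach, not the project's implementation; dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TemporalRegressionHead(nn.Module):
    """Illustrative: a GRU over per-frame feature vectors smooths the
    distance / localizer / glideslope estimates across a video sequence."""

    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 3)   # distance, localizer dev, glideslope dev

    def forward(self, frame_features):
        # frame_features: (batch, time, feature_dim), e.g. from the shared backbone.
        seq, _ = self.gru(frame_features)
        return self.out(seq)                  # one estimate per frame, informed by the past

estimates = TemporalRegressionHead()(torch.randn(2, 16, 128))  # shape (2, 16, 3)
```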

·   This system can detect a runway from several miles away and calculate the distance to it. Localizer and glideslope deviation estimates are more challenging: even an error of 0.1 degree matters a great deal.


Img: Wayfinder’s Computer Vision Software detecting airstrip

Even after being trained only on synthetic images, the network performed well on real images. Collecting more data can improve the network further and help close the gap between synthetic and real data. Injecting side-channel information such as airport layouts, runway dimensions or non-standard glideslopes can also improve the network's performance.

Also, vision-based sensors are a good option for runway track alignment and ground taxi. They can also serve as redundant sensors along with existing GNSS sensors during ground operations.

Problem: Estimating the 3D position of the aircraft with respect to a threshold / touchdown point on a runway.

Proposed Solutions:

1. Using feature detection on the runway lines and appropriate scaling using the depth perception of a single camera; see the sketch after this list. (Depth perception is the ability to see in 3D and estimate distance.)

2. Computing depth from the defocus of the camera. (Defocus means improper focus; the objective is to estimate the 3D surface of a scene from a set of images taken with different parameters, such as the focal setting or the axial position of the image plane.)

3. Using calibrated stereo. (Two or more lenses/cameras whose parameters are found or adjusted; the rotation and translation between the cameras are determined, and correspondences are found between the left and right image planes.)
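For the first option, a minimal OpenCV sketch is shown below: given a calibrated camera, the known runway-corner positions and their detected pixel locations, solvePnP recovers the camera (and hence aircraft) position relative to the runway threshold. All numbers, and the detection step that would supply the pixel coordinates, are illustrative assumptions.

```python
import numpy as np
import cv2

# Known runway geometry in metres, in a runway-fixed frame with origin at the threshold
# (x along the runway, y right, z up): two threshold corners and two corners 900 m ahead.
object_points = np.array([[  0.0, -22.5, 0.0],
                          [  0.0,  22.5, 0.0],
                          [900.0, -22.5, 0.0],
                          [900.0,  22.5, 0.0]], dtype=np.float64)

# Corresponding pixel locations detected in the camera image (illustrative values).
image_points = np.array([[928.5, 680.0],
                         [991.5, 680.0],
                         [943.4, 613.7],
                         [976.6, 613.7]], dtype=np.float64)

# Calibrated camera intrinsics and (here) zero lens distortion.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    # Camera position in the runway frame: invert the rigid transform.
    camera_position = (-R.T @ tvec).ravel()
    print("Estimated camera position relative to the threshold (m):", camera_position)
```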


Img: Wayfinder Team doing camera calibration before test flight

Indoor setup required to solve this problem: a scaled-down model of a runway. Pictures are taken while simulating the landing of an aircraft; these are used to meet the objectives and compared with real data. Pictures are required from at least 30 known camera positions, and the error band allowed for the solution is ±10%.
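A small sketch of how the ±10% criterion could be checked once estimated and ground-truth camera positions are available; the array names are hypothetical.

```python
import numpy as np

def fraction_within_band(estimated_positions, true_positions, band=0.10):
    """Fraction of estimates whose range error to the touchdown point is within ±band."""
    est = np.asarray(estimated_positions, dtype=float)
    true = np.asarray(true_positions, dtype=float)
    est_range = np.linalg.norm(est, axis=1)    # distance to the touchdown point (origin)
    true_range = np.linalg.norm(true, axis=1)
    return float(np.mean(np.abs(est_range - true_range) <= band * true_range))

# e.g. estimated_positions and true_positions are (30, 3) arrays, one row per
# known camera position around the scaled-down runway model:
# print(fraction_within_band(estimated_positions, true_positions))
```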


Img: Detection of Runway and Touch Zone using Aerial Image

More Information

  • In a world where annual air travel volume is expected to double by 2036 and the number of pilots cannot scale quickly enough to meet that demand, adaptable, autonomous systems may provide an alternative solution.
  • Advanced computer vision and machine learning, applied to data from sensors and cameras, are rapidly enabling major advances in the development of autonomous technologies and aircraft.
  • The aim is to develop scalable, certifiable autonomy systems to enable self-piloted aircraft applications. These systems are intended to be usable across everything from small urban air vehicles such as air taxis to large commercial airplanes, for which teams are already investigating single-pilot operations.
  • The ATTOL project was initiated by Airbus in June 2018, in collaboration with Project Wayfinder, with the ambition to revolutionize autonomous flight. The team is building scalable, certifiable autonomy systems to power self-piloted aircraft applications from small urban air vehicles to large commercial planes. The main objective was to explore how autonomous technologies, including the use of machine learning algorithms and automated tools for data labeling (annotation), processing and model generation, could help pilots focus less on aircraft operations and more on strategic decision-making and mission management. Hundreds of thousands of data points were used to train these algorithms so the system can understand how to react to every event it could encounter. It can also detect potential obstacles, including birds and drones (also known as a sense-and-avoid system). The team is trying to create a reference architecture, including hardware and software systems, that, through a data-driven development process, enables the aircraft to perceive and react to its environment. Airbus aims to build scalable, certifiable autonomy systems to power the next generation of aircraft.
  • After conducting extensive tests for around two years, Airbus concluded the ATTOL project with fully autonomous flight tests. Airbus was able to achieve autonomous taxiing, take-off and landing of a commercial aircraft through fully automatic vision-based flight tests using on-board image recognition technology. In total, over 500 test flights were conducted. Approximately 450 of those flights were dedicated to gathering raw video data to support and fine-tune the algorithms, while a series of six test flights, each including five take-offs and landings per run, was used to test the autonomous flight capabilities.
  • The rapid development and demonstration of ATTOL’s capabilities was made possible by a cross-divisional, cross-functional global team comprising Airbus engineering and technology teams.
  • The test flights were done with the A350-1000 aircraft at Toulouse Blagnac airport. The test crew comprised two pilots, two flight test engineers and a test flight engineer.
  • Airbus achieved autonomy in take-offs first (around January 2020), then worked on autonomous landings, and autonomy on the taxiway was achieved during the final rounds of testing (around June 2020). Taxiing was left for last because it was very difficult for the ATTOL system to differentiate between the runway and the taxiway. Runways are mostly straight paths, whereas taxiing involves rules that the camera-based system needs to follow, and training the system to do that proved to be a difficult task.


Img: Airport Runway, Taxiway & Aprons

To achieve the task, an advanced computer vision system was used with monitoring from the pilots. This was done with the help of onboard cameras, radars and lidars (light-based detection and ranging sensors). The information from these sources was sent to onboard computers, which processed it to steer the aircraft and control thrust. Several cameras were installed in different parts of the plane, including the tail, nose and landing gear.


Img: Experimental sensor configuration on demonstrator vehicle including cameras, radar, and lidar.

For autonomous systems the technology is divided between software and hardware:

  1. Software enables the aircraft to perceive the environment around it and decide how best to proceed. At the core of the software are perception methods based on both machine learning algorithms and traditional computer vision. The best techniques developed for other applications, such as image recognition and self-driving cars, are used and extended to meet the requirements of autonomous flight. Decision-making software is also developed to safely navigate the world sensed around the aircraft.
  2. Hardware includes the suite of sensors and powerful computers required to feed and run the software. The rigorous requirements on hardware are met to ensure safe and reliable operation in the air.
  • The development of autonomous aircraft will be an iterative process. Starting with an application of the autonomous system in a limited environment, the system will be trained to fully understand that environment, and then expand the environmental envelope over time. The iterations will include a cycle of massive data collection followed by system testing, subsequent development, and then verification of system safety. Once each new iteration has been verified, it will be deployed to start the next cycle of data collection and continuous improvement of the system performance. Safety is also a major concern. Everything being worked on must pass rigorous testing to ensure what is created is not only revolutionary but certifiable and secure.
  • Rather than relying on an Instrument Landing System (ILS), the existing ground-equipment technology currently used by in-service passenger aircraft at those airports around the world where the technology is present, this automatic take-off was enabled by image recognition technology installed directly on the aircraft.
  • One of Airbus Test Pilot Captain Yann Beaufils said “The aircraft performed as expected during these milestone tests. While completing alignment on the runway, waiting for clearance from air traffic control, we engaged the auto-pilot. We moved the throttle levers to the take-off setting and we monitored the aircraft.  It started to move and accelerate automatically maintaining the runway centre line, at the exact rotation speed as entered in the system. The nose of the aircraft began to lift up automatically to take the expected take-off pitch value and a few seconds later we were airborne”.
  • This technology can help airlines increase efficiency and safety, reduce costs, reduce pilot workload, improve traffic management, address the shortage of pilots and enhance operations in the future.
  • The technology is still under testing. The main aim is not to remove the pilot or co-pilot from the cockpit but to assist them better during landings, take-offs and taxiing, and perhaps at some point in the future enable the aircraft to do all of this by itself without any additional assistance, if required. Autonomous technologies will support pilots, enabling them to focus less on aircraft operation and more on strategic decision-making and mission management. The pilot will remain responsible for managing the aircraft and making decisions during critical times.


Img: First Data Collection Test Flight by Wayfinder

After the successful testing of ATTOL, the Wayfinder team is now working on collecting data from real flights to improve the machine learning software and the data collection pipeline.

The three aims they are working on right now:

  1. Developing the hardware platform, i.e. a data collection platform to collect camera imagery of runway approaches from an aircraft at various airports.
  2. Collecting data from all that imagery and feeding it to the machine learning algorithms being developed by the Wayfinder team to create autonomous take-off and landing capabilities.
  3. Running the algorithms on the aircraft and demonstrating that they work in real time.


Img: Onboard Computer for Data Collection on Wayfinder test aircraft

The team is continuously working towards making progress in autonomous aviation revolution by:

  1. Architecting dedicated Machine Learning models
  2. Creating realistic simulations
  3. Developing safety critical software and hardware
  4. Advancing flights, testing systems with a flight test aircraft
  5. Scaling data collection capabilities and advancing algorithms


Img: Flight test engineer system interface

Scenarios this technology may help in:

  1. If one or both pilots suffer a health-related emergency, faint, or are fatally injured mid-flight, this system may help land the aircraft automatically and safely.
  2. In military operations in or near enemy territory that involve risk, this and improved technology could allow the aircraft to complete a mission without carrying a pilot or risking a human life.