Indoor Totally-Autonomous UAV Plane
Researchers at MIT have cracked one of the most difficult challenges in autonomous aircraft sensing and AI since self-piloting aircraft first appeared. They have devised a method by which fast-flying UAV planes can sense their environment and react swiftly enough to fly indoors, dodging both static and moving obstacles as effortlessly as any human pilot could.
Whilst there may not at first glance be much call for indoor high-speed aircraft, the critical point is this: if the planes can react quickly and accurately enough to cope with the chaotic spaces inside a building, they are capable of dealing with anything the outside world throws at them.
Even more importantly, these UAVs do not rely on outside input to guide their movements: there is no ground-side supercomputer calculating their possible flight paths. Every sensor and processor needed is on board the aircraft, built into the fuselage itself. This means they can be truly autonomous, relying entirely on themselves to guide their flight.
Given the recent problems with GPS hacking, a further upside is that this gives the vehicles a means to interact with their environment intelligently: even if their GPS is hacked, they will not fly into a building when told to, because they can actually see it, recognise it as an obstacle, and avoid it.
A Plane is a Major Advance
Normally, a helicopter is the vehicle of choice for UAV research. Because a helicopter can hover, move sideways, and take off and land in minimal space, it is by far the easier option for development. Planes, on the other hand, cannot stop moving forwards. Worse, they have a stall speed: if they fly too slowly, they simply drop out of the air.
The Association for Unmanned Vehicle Systems International (AUVSI) has been posing challenges to stimulate work on UAV control systems for decades, and for these reasons the many labs that take on the challenges almost always choose a helicopter. However, it cannot be denied that a plane is much more practical for many real-world applications, both military and civilian. If we are to replace most human pilots with self-piloting aircraft, developing control systems that work on planes, not helicopters, is a must.
The last two challenges AUVSI posed dealt with navigating difficult terrain without relying on GPS or any exterior data. In these conditions the helicopter's hover-and-think ability really comes into its own, so it is all the more surprising (and spectacular) to see the challenge completed by a fixed-wing aircraft instead.
This work has been in the pipeline for a while. At the 2011 International Conference on Robotics and Automation (ICRA), a team of researchers from MIT's Robust Robotics Group described an algorithm for calculating a plane's trajectory. Then in 2012, at the same conference, they presented an algorithm for determining its state: its location, physical orientation, velocity, and acceleration.
This new advance builds on both sets of work to complete actual flight tests in a multi-storey car park and in a gymnasium. In both locations, the UAV running the state-estimation algorithm successfully threaded its way between all the obstacles. In the car park it had to avoid pillars and light fixtures, sometimes no more than three metres apart. It also had to handle the split-level layout of the different floors, zipping from one to another without colliding with anything and making sure it picked the right one depending on whether it needed to descend or climb.
In the gymnasium, it had to dodge partitioning curtains and benches. At one point the plane even guided itself, at full speed, through a corridor little more than two metres wide between a long curtain and the wall. If any part of the aircraft had so much as grazed either, the flight would have been over.
The computing time is perhaps the greatest challenge the aircraft faces. It has a laser range finder in its nose cone, and a set of three gyroscopes and three accelerometers inside, so it can tell which way up it is, how fast it is flying, and how far away the next obstacle is. From this data, it has to work out the safest path to travel.
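To get a feel for what those inertial sensors give the computer to work with, here is a hypothetical, much-simplified 2-D dead-reckoning step in Python: integrating a yaw-rate (gyro) reading and a forward-acceleration (accelerometer) reading to update a position estimate. This is only a sketch of the idea; the real aircraft estimates its full 3-D state with the filtering algorithms described later in the article, and all names here are invented for illustration.

```python
import math

def dead_reckon(state, gyro_z, accel_fwd, dt):
    """Propagate a simplified 2-D state (x, y, heading, speed) one time
    step from a gyro and an accelerometer reading. Hypothetical sketch:
    the actual system fuses three gyros and three accelerometers in 3-D."""
    x, y, heading, speed = state
    heading += gyro_z * dt            # integrate yaw rate into heading
    speed += accel_fwd * dt           # integrate acceleration into speed
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y, heading, speed)

# Example: a steady 0.1 rad/s turn at constant speed for ten seconds
state = (0.0, 0.0, 0.0, 10.0)
for _ in range(100):                  # 100 steps of 0.1 s
    state = dead_reckon(state, gyro_z=0.1, accel_fwd=0.0, dt=0.1)
```

Pure integration like this drifts over time, which is exactly why the measurement-update filters discussed further down are needed.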
In the video (below), the plane travels in wide arcs a lot rather than sticking to a straight flight path. This is deliberate. By swinging in an arc, the range finder can map out the obstacles to either side of where the plane is heading next, gradually building up an internal map of where everything is. The more sweeps, the more detailed the map. Flying straight would also risk having a wing clipped by an obstacle that does not extend far enough inwards to trigger the range finder in the nose, but is not clear of the aircraft's edges either.
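The sweep-and-map idea can be pictured as a toy occupancy grid: each range-finder return is projected from the plane's current pose into world coordinates, and the corresponding grid cell is marked as occupied. This is a minimal, hypothetical sketch (the function names and the set-based grid are invented here); the real system fuses many sweeps probabilistically rather than marking cells outright.

```python
import math

def update_grid(grid, pose, ranges, angles, cell=0.5):
    """Mark occupied cells in a 2-D occupancy grid from one laser sweep.
    `pose` is (x, y, heading); `ranges`/`angles` describe the beam returns."""
    x, y, heading = pose
    for r, a in zip(ranges, angles):
        ox = x + r * math.cos(heading + a)   # beam endpoint, world frame
        oy = y + r * math.sin(heading + a)
        grid.add((int(ox // cell), int(oy // cell)))  # occupied cell index
    return grid

grid = set()
# Two sweeps from different poses see the same pillar at world (5, 0),
# so it lands in the same cell both times.
update_grid(grid, (0.0, 0.0, 0.0), [5.0], [0.0])
update_grid(grid, (5.0, 5.0, -math.pi / 2), [5.0], [0.0])
```

Because both beams resolve to the same cell, repeated sweeps reinforce the map rather than duplicating obstacles, which is what makes the arcing flight pattern pay off.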
In a commercial setting, it would be rather bad if an airliner clipped the side of a skyscraper with its port wing because it didn't register that the building was there. With a slow sweep first, the computer will pick up every object in the desired flight path.
However, it still has to crunch all this data in real time. There is no possibility of dropping into a hover while the on-board computer works out the numbers. As such, increasing the time the computer has to make its calculations is very desirable. This is why the team built their own UAV from scratch rather than relying on a commercial model. The resulting plane has unusually short, broad wings, which allow it to fly at relatively low speeds and make tight turns, while still affording it the cargo capacity to carry the electronics that run the algorithms.
Because the problem of autonomous plane navigation in confined spaces is so difficult, and because it's such a new area of research, the MIT team is initially giving its plane a leg up by providing it with an accurate digital map of its environment. The plane then only has to work out where exactly in that map it is at any given moment. That is no easy task when you consider the different orientations and speeds involved. Ultimately the accuracy of the digital map will be reduced, and the plane will have to place the landmarks accurately itself, much as a human pilot visually identifies landmarks on approach rather than relying on on-board maps for fine detail.
Adam Bry, a graduate student in the Department of Aeronautics and Astronautics, is currently leading the development of the plane. He and Abraham Bachrach, a graduate student in electrical engineering and computer science, working with their advisor Nicholas Roy, have dealt with this exact-position problem by combining two different state-estimation algorithms.
One, called a particle filter, is very accurate but time-consuming; the other, called a Kalman filter, is accurate only under certain limiting assumptions, but it's very efficient. Algorithmically, the trick was to use the particle filter only for those variables that required it, and then translate the results back into the language of the Kalman filter.
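In a toy one-dimensional setting, "translating the particle filter's results back into the language of the Kalman filter" amounts to summarising a weighted particle set as a Gaussian (a mean and a variance), which the cheap Kalman measurement update can then consume. The sketch below is a hedged illustration of that division of labour, not the team's actual algorithm; every name in it is invented for the example.

```python
import random

def particle_to_gaussian(particles, weights):
    """Summarise a weighted particle set as (mean, variance), i.e.
    translate it into the language of the Kalman filter."""
    total = sum(weights)
    mean = sum(w * p for p, w in zip(particles, weights)) / total
    var = sum(w * (p - mean) ** 2 for p, w in zip(particles, weights)) / total
    return mean, var

def kalman_update(mean, var, z, r):
    """Standard scalar Kalman measurement update: very cheap, but it
    assumes Gaussian noise and a linear measurement model."""
    k = var / (var + r)                         # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# Hypothetical toy pipeline: particles handle the awkward variable,
# then the result is condensed into a Gaussian for the efficient filter.
random.seed(0)
particles = [random.gauss(2.0, 1.0) for _ in range(1000)]
weights = [1.0] * len(particles)                # uniform weights for simplicity
mean, var = particle_to_gaussian(particles, weights)
mean, var = kalman_update(mean, var, z=2.5, r=0.5)  # fuse one range measurement
```

The payoff is that the expensive particle representation is only ever used where the limiting Kalman assumptions break down; everywhere else the filter stays cheap.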
To plot the plane's trajectory, Bry and Roy adapted extremely efficient motion-planning algorithms developed in AeroAstro professor Emilio Frazzoli's Aerospace Robotics and Embedded Systems (ARES) Laboratory. The ARES algorithms, however, are designed to work with more reliable state information than a plane in flight can provide, so Bry and Roy had to add an extra variable describing the probability that a state estimate was reliable, which made the geometry of the problem more complicated.
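One crude way to picture folding estimate reliability into a planner is to inflate the required clearance from obstacles by a multiple of the position uncertainty, so that a shaky estimate rules out tight paths a confident one would allow. The sketch below is a hypothetical simplification, not the actual ARES or MIT algorithm.

```python
import math

def edge_is_safe(p, q, obstacles, sigma, n_sigma=3.0, steps=20):
    """Reject a planned edge if any sampled point along it comes within
    n_sigma * sigma of an obstacle. `obstacles` are circles (x, y, radius);
    `sigma` is the position estimate's standard deviation (hypothetical
    scalar stand-in for a full covariance)."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])             # sample along the edge
        y = p[1] + t * (q[1] - p[1])
        for (ox, oy, r) in obstacles:
            if math.hypot(x - ox, y - oy) < r + n_sigma * sigma:
                return False                     # too close, given uncertainty
    return True

pillars = [(5.0, 0.0, 0.5)]
# A confident estimate squeezes past the pillar; a shaky one must not.
safe_when_certain = edge_is_safe((0, -2), (10, -2), pillars, sigma=0.1)
safe_when_uncertain = edge_is_safe((0, -2), (10, -2), pillars, sigma=1.0)
```

The same straight edge is accepted with a tight estimate and rejected with a loose one, which is the behaviour the extra reliability variable is there to produce.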
Paul Newman, a professor of information engineering at the University of Oxford and leader of Oxford's Mobile Robotics Group, says that because autonomous plane navigation in confined spaces is such a new research area, the MIT team's work is as valuable for the questions it raises as for the answers it provides. "Looking beyond the obvious excellence in systems," Newman says, "the work raises interesting questions which cannot be easily bypassed."
But the answers are interesting, too. "Navigation of lightweight, dynamic vehicles against rough prior 3-D structural maps is hard, important, timely and, I believe, will find exploitation in many, many fields," Newman says. "Not many groups can pull it all together on a single platform."
The MIT researchers' next step will be to develop algorithms that can build a map of the plane's environment on the fly. Roy says that adding visual information to the rangefinder's measurements and the inertial data could make the problem more tractable. "There are definitely significant challenges to be solved," Bry says. "But I think that it's certainly possible."