Written by Patrick Sherman
Advanced Flight Technologies
Column
As seen in the November 2018 issue of Model Aviation.
“It’s about the data, not the drone,” is a sentiment heard frequently among serious commercial operators these days, but with the Roswell Flight Test Crew, it’s an idea that runs counter to our approach to flying.
We proudly consider ourselves to be “silk scarf” pilots, in direct control of the machine at all times. Those newfangled systems such as GPS position-hold and optical collision avoidance? We don’t need ’em! When we started flying nine years ago, our flight controllers didn’t even have accelerometers. How’s that for “attitude” mode?
In certain applications, direct control is still relevant—and even necessary. In tactical operations, such as monitoring a structure fire or a foot pursuit, the pilot needs to be very much in the loop and able to immediately respond to changing conditions on the ground. Furthermore, any truly competent pilot should be able to control his or her aircraft without GPS position-hold and other advanced features, so that he or she is prepared in case of a system failure.
There are, however, an increasing number of applications for drones that require the use of autonomous functions, including aerial surveying and mapmaking. It's a different way to fly, but it still requires the same care and attention as any other drone operation. This was brought home to us when the Roswell Flight Test Crew recently received a call for help from the city of Camas, Washington, and nothing was on fire.

Lacamas Lake, in southwest Washington state, is a popular spot for summer recreation. It’s so popular that the local city government called in the author to provide aerial photography to help with planning for improved parking.
The Mission
Lacamas Lake, in southwest Washington state, is approximately 20 miles from our home base in Portland, Oregon. It's a popular location for summertime recreation and is becoming a little too popular for the community infrastructure that supports it. During peak season, the parking lots that dot its shores fill up early, and visitors end up parking along nearby roads, causing congestion and putting people in danger as they walk along the roadway to reach the lake. Before the city can address the problem, it must first document it and develop solutions, such as expanding the parking lots. Having learned about the potential of drones, the city contacted us to see whether we could produce aerial imagery to support the analysis.

There was a problem, though. The city didn't want a couple of artistic shots from oblique angles, which is our specialty as silk scarf pilots. Instead, it wanted a straight-down, Google Earth-type view of multiple locations around the lake, at a resolution measured in centimeters per pixel rather than the meters per pixel typical of satellite imagery. No small commercial drone has, or is ever likely to have, a camera with enough resolution to capture that kind of image in a single frame.

Even if the technology existed, there would still be the problem of perspective distortion. When a 3D scene is rendered in a 2D image, parallel lines appear to converge. Imagine a drone hovering directly above a tall tree, with two other tall trees visible at the edges of the camera's field of view. The tree in the middle will be rendered perfectly, but the trunks of the two trees at the edges of the frame will look as though they are leaning outward, such that if they continued into the earth, they would meet somewhere hundreds of feet underground. It's the same illusion that makes railroad tracks appear to converge at the horizon even though they are, in fact, parallel. Successfully completing this mission would require a new approach and some new tools.
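To put the resolution requirement in perspective, here is a minimal sketch of the ground sample distance (GSD) math: how many centimeters of ground each pixel covers at a given altitude. The camera figures below (a 1/2.3-inch sensor, roughly 4.7 mm focal length, 4,000-pixel-wide images) are assumptions typical of a small multirotor, not the specs of the aircraft we flew, but they illustrate why a single frame can't deliver centimeter resolution across an entire site.

```python
# Minimal ground sample distance (GSD) sketch. The camera numbers are
# assumptions typical of a small multirotor, not the specs of the
# aircraft flown on this mission.

SENSOR_WIDTH_MM = 6.17   # assumed physical sensor width (1/2.3 in. class)
FOCAL_LENGTH_MM = 4.7    # assumed lens focal length
IMAGE_WIDTH_PX = 4000    # assumed image width in pixels

def frame_footprint_m(altitude_m: float) -> float:
    """Ground width covered by one nadir frame (similar triangles through the lens)."""
    return altitude_m * SENSOR_WIDTH_MM / FOCAL_LENGTH_MM

def gsd_cm_per_px(altitude_m: float) -> float:
    """Centimeters of ground represented by each pixel at the given altitude."""
    return frame_footprint_m(altitude_m) / IMAGE_WIDTH_PX * 100.0

for alt in (30, 60, 120):  # meters above ground level
    print(f"{alt:>3} m AGL: {gsd_cm_per_px(alt):.1f} cm/px, "
          f"one frame spans about {frame_footprint_m(alt):.0f} m of ground")
```

With these assumed numbers, reaching roughly 1 cm per pixel means flying around 100 feet up, where a single frame spans only about 40 meters of ground, so covering whole parking areas means capturing and combining many overlapping frames.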
Photogrammetry Who?
We were not the first people to confront this problem. The solution is not to take a single photograph, but to take many photographs and “stitch” them together like a jigsaw puzzle through a process called photogrammetry.

Photogrammetry has a surprisingly long history. None other than Leonardo da Vinci worked out the principles of perspective and projective geometry as early as 1480, creating the conceptual framework that makes photogrammetry possible. In 1849, Frenchman Aimé Laussedat became the first person to use terrestrial photographs to compile a topographic map. In 1858, Laussedat pioneered aerial photogrammetry using kites and, later, balloons; he was subsequently hailed as the “Father of Photogrammetry.” In 1908, Italian Cesare Tardivo captured the first photographs from a manned, heavier-than-air flying machine to be used for mapmaking. The science advanced rapidly during World War II, helping the Allies identify the Peenemünde Army Research Center, where Germany was developing the V-2 rocket.

It's a testament to the ceaseless advance of technology that the same science used some 75 years earlier to help smash the Nazi war machine would be deployed to help a small town solve its parking problems. Although the pilots who flew photoreconnaissance missions over Europe relied on expert skill, steady hands, and nerves of steel, we would be relying on a flight plan that would assume control of the drone and execute the mission without our direct intervention.
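For readers curious what a flight plan of that kind looks like under the hood, the sketch below lays out a generic "lawnmower" survey grid: parallel legs spaced according to the camera's ground footprint and the photo overlap the stitching software needs. Every number here is an illustrative assumption, not a parameter from the Lacamas Lake flights, and commercial mission-planning apps compute this geometry automatically once you give them an area, an altitude, and an overlap target.

```python
# Minimal sketch of a "lawnmower" survey plan: parallel legs across a
# rectangular area, spaced so successive photos overlap enough for the
# photogrammetry software to match features between frames. All numbers
# are illustrative assumptions, not actual mission parameters.

def survey_plan(area_w_m, area_h_m, footprint_w_m, footprint_h_m,
                side_overlap=0.65, front_overlap=0.75):
    """Return (waypoints, camera trigger spacing) for a back-and-forth grid."""
    leg_spacing = footprint_w_m * (1.0 - side_overlap)        # distance between legs
    trigger_spacing = footprint_h_m * (1.0 - front_overlap)   # distance between photos
    num_legs = int(area_w_m / leg_spacing) + 1

    waypoints = []
    for i in range(num_legs):
        x = i * leg_spacing
        # Alternate direction on each leg so the aircraft sweeps back and forth.
        y_start, y_end = (0.0, area_h_m) if i % 2 == 0 else (area_h_m, 0.0)
        waypoints += [(x, y_start), (x, y_end)]
    return waypoints, trigger_spacing

wps, trigger_m = survey_plan(area_w_m=200, area_h_m=300,
                             footprint_w_m=60, footprint_h_m=45)
print(f"{len(wps)} waypoints, photo every {trigger_m:.0f} m along each leg")
```

The drone simply flies the legs in order and fires the camera at the trigger interval; the pilot's job shifts from steering the aircraft to monitoring the mission.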