As autonomous aviation systems advance, NASA seeks to keep the skies safe

Decades of effort have been dedicated to studying hazards in the national airspace system and to learning to predict, detect, and mitigate safety risks. As autonomous aviation systems proliferate, NASA has ambitious plans to ensure that the commercial airline industry maintains its high level of safety.

FierceElectronics spoke with Misty Davies, Deputy Project Manager for the System-Wide Safety Project in NASA's Airspace Operations and Safety Program, about NASA's plan for the next 25+ years to address this complex issue.

FE: What does the role of Deputy Project Manager, System-Wide Safety mean on a day-to-day basis and what does your role as outreach coordinator involve?

Davies: Deputy Project Manager is one of those ‘the-buck-stops-here’ jobs. In general, I spend most of my time working with the Project Manager to solve any major problems that arise with the project. We do everything from project planning (putting together work packages to address technical gaps) to project execution (making sure that we have the resources for our day-to-day work and are meeting our milestones) to outreach.

That outreach can be to executives in our own agency, to colleagues and executives in our sister agencies, to industry, to academia, and to the public at large. On any given day, I’ll have four to thirteen scheduled meetings, plus usually a large number of emails to send, documents to review, presentations to generate, and spreadsheets to correct. I occasionally even get to sneak in an hour or two of purely technical work, although these days, honestly, most of that work is done by students I’ve taken on.

FE: Can you describe NASA’s basic principle of “trusting humans” in the process, especially from a safety perspective?

Davies: NASA has done a wealth of research on the kinds of work that humans do best and the kinds of work that are best left to automation. For example, people are not great monitors – we get bored and distracted really easily! On the other hand, machines are not great at realizing that they are in a situation that is subtly different from situations they have been in before, or at deciding on the best course of action when the environment has changed. We talk a lot about "human-machine teaming" or "human-automation teaming" at NASA. The question we are always asking is, "What is the best division of labor between human cognition and machine automation that helps us to maintain or to improve safety?"

FE: With autonomous vehicles, dramatic shifts are taking place in how humans will be interacting with technology. How is NASA thinking about this with regard to the traditional way it has approached human-machine interaction? What is likely to change?

Davies: What we are seeing is that more and more of what we call "inner-loop" functions are best handled by automation. For example, automation may be much better at maintaining stability in a vehicle, especially after a failure (of a control surface, for example). People, meanwhile, are taking over more and more of the "outer-loop" functions, and that means we can handle missions that are much more complex than we could previously.
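A toy sketch can make that split concrete. In the hypothetical Python below (the gains, rates, and names are invented for illustration, not a NASA design), automation runs the fast inner stabilization loop while the human supplies the slower outer-loop goals:

```python
# Toy illustration of the inner-/outer-loop split: automation runs the fast
# stabilization loop (here, a simple proportional-derivative controller on
# altitude), while the human supplies outer-loop goals (target altitudes).
# All gains, rates, and names are invented for illustration.

def inner_loop_step(altitude, climb_rate, target, dt=0.1, kp=0.5, kd=0.8):
    """One step of a toy PD stabilizer (the automation's inner-loop job)."""
    accel = kp * (target - altitude) - kd * climb_rate
    climb_rate += accel * dt
    altitude += climb_rate * dt
    return altitude, climb_rate

# Outer loop: the human operator occasionally changes the mission goal;
# the automation tracks it at a much higher rate.
altitude, climb_rate = 0.0, 0.0
for target in [100.0, 150.0]:      # human-chosen waypoints (outer loop)
    for _ in range(600):           # automation stabilizes (inner loop)
        altitude, climb_rate = inner_loop_step(altitude, climb_rate, target)
    print(f"holding near {target:.0f} m: altitude = {altitude:.1f} m")
```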

There are lots of areas you can point to for evidence that this is happening, but one striking example to me is our movement towards "m:N operations." This means that there are m operators for N vehicles, and the goal is to move towards smaller m’s and larger N’s. Right now, large unmanned aerial systems often have 3 operators for a single vehicle. There is a lot of talk about swarm operations, in which one operator may be responsible for as many as 3 vehicles, on average. However, when we think about that division, we are finding that you want to divide the labor in different ways. For example, if you have 3 operators and 9 vehicles, maybe you want one operator responsible for normal flight and mission operations, one responsible for takeoffs and landings, and the third focused exclusively on troubleshooting problems.
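As a purely illustrative sketch of that role-based division (the role names, flight phases, and data structures are my assumptions, not a NASA design), routing vehicle events to operators by flight phase rather than by a flat vehicles-per-operator split might look like this:

```python
# Hypothetical sketch of role-based m:N staffing (3 operators, 9 vehicles):
# events are routed by flight phase rather than by a flat
# vehicles-per-operator split. Names and structures are illustrative only.

from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    TAKEOFF = auto()
    CRUISE = auto()
    LANDING = auto()
    CONTINGENCY = auto()


@dataclass
class Operator:
    name: str
    phases: frozenset                     # flight phases this operator owns
    queue: list = field(default_factory=list)


def route_event(operators, vehicle_id, phase):
    """Hand a vehicle event to the operator whose role covers its phase."""
    for op in operators:
        if phase in op.phases:
            op.queue.append((vehicle_id, phase))
            return op.name
    raise ValueError(f"no operator covers phase {phase}")


operators = [
    Operator("mission-ops", frozenset({Phase.CRUISE})),
    Operator("takeoff-landing", frozenset({Phase.TAKEOFF, Phase.LANDING})),
    Operator("troubleshooter", frozenset({Phase.CONTINGENCY})),
]

print(route_event(operators, "UAS-4", Phase.TAKEOFF))      # takeoff-landing
print(route_event(operators, "UAS-7", Phase.CONTINGENCY))  # troubleshooter
```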

FE: Air transportation is further along than the automotive industry when it comes to autonomous systems. What can they learn from what you’ve done, especially with regard to avoiding any adverse impacts on safety?

Davies: It is interesting to me that I have heard this both ways – that the automotive industry is ahead and that the air transportation industry is ahead! I think it depends on your perspective. We have a long tradition of highly automated systems in aviation. In general, a single aircraft costs a lot more (in dollars) than an automobile, and it also has a potentially much larger cost in terms of dollars and human safety when there is an accident.

Most of aviation’s highly automated systems are about maintaining or improving safety in vehicles that are becoming ever more capable (and more complex). Since civil air transportation is very risk-averse, as a society we’ve adopted a rigorous set of rules and processes around safety assurance. I think we are seeing similar advancements in autonomous systems for both automobiles and aviation, but so far only in domains where we are willing to accept the risk. I think we both learn a lot when we talk to each other.

FE: Has NASA developed a roadmap for transitioning where it is today to a future state with more autonomous technologies?

Davies: Sure! Which roadmap would you like to read first? We’ve been working on this for a long time now. For aviation, the 2014 National Academies report on ‘Autonomy Research for Civil Aviation’, which talks about ‘increasingly autonomous systems’, is still valid. The NASA Aeronautics Research Mission Directorate just updated its Strategic Implementation Plan in 2019.

My System-Wide Safety project is built around an In-Time Aviation Safety Management System (IASMS) that could enable increasingly autonomous systems. That IASMS was first described in a National Academies report. We continue to build out our concept of operations for both Urban Air Mobility (UAM) and the IASMS.
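In rough terms, the IASMS is organized around monitoring, assessing, and mitigating risks as a flight unfolds. A minimal, hypothetical Python sketch of that loop (the thresholds, telemetry fields, and mitigation actions are invented placeholders, not NASA's design) might look like:

```python
# Minimal sketch of an IASMS-style monitor -> assess -> mitigate loop.
# Thresholds, telemetry fields, and actions are invented placeholders.

def monitor(telemetry):
    """Derive in-time safety indicators from raw vehicle data."""
    return {
        "battery_margin_pct": telemetry["battery_pct"] - 20.0,
        "gps_ok": telemetry["gps_sats"] >= 6,
    }

def assess(indicators):
    """Turn indicators into elevated-risk flags."""
    risks = []
    if indicators["battery_margin_pct"] < 0:
        risks.append("low-energy")
    if not indicators["gps_ok"]:
        risks.append("degraded-navigation")
    return risks

def mitigate(risks):
    """Pick a response; a real system would also alert human operators."""
    if "low-energy" in risks:
        return "divert to nearest landing site"
    if "degraded-navigation" in risks:
        return "hold position and switch to alternate navigation"
    return "continue mission"

telemetry = {"battery_pct": 17.0, "gps_sats": 9}
print(mitigate(assess(monitor(telemetry))))  # divert to nearest landing site
```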

FE: When it comes to autonomous technology, safety is a very complex issue. Can you talk about the degree to which NASA is working with other organizations, industry groups, academia, and government to address these issues?

Davies: It is a very complex issue, and we will only make progress together. NASA works very hard to make sure that we are engaging with the community at large. For NASA’s research enabling new aviation concepts, we have the Advanced Air Mobility (AAM) Ecosystem Working Groups, which are NASA’s way of trying to bring all the different groups together in one forum.

Beyond that, NASA participates in as many different community activities and groups as we can: we participate on industry standards committees and form research transition teams with our regulator partners. We sponsor research by academia. We meet in working groups with our European and Canadian counterparts. And we participate as often as we can in these kinds of forums, sharing what we think we’ve discovered and listening to the community’s needs and the new state of the art.

FE: As machines become increasingly autonomous, where do you think human capabilities will remain essential?

Davies: I think a lot about the ‘Miracle on the Hudson’ – Captain Sullenberger’s brilliant rescue of US Airways Flight 1549. I’ve seen some algorithms since then that are focused on trying to replicate that decision to land on the river. What I think is interesting about that particular flight is that Captain Sullenberger and his copilot made a very quick decision: they didn’t think they could turn around and land at the airport they had just taken off from, nor reach another airport in the near vicinity. They came to that conclusion even though both airports were already full of amazing air traffic staff who were clearing the skies and the runways for them, just in case.

When we simulated that disaster afterwards, the odds of successfully landing at either airport were lower than we would prefer, so the crew probably made the right decision to ditch in the Hudson. Once the crew made that decision, they had very little time to run through the checklist for the "water landing," so they had to shortcut it. (An interesting thing to note is that the previous "water landing" occurred after an accident in which the airplane was at altitude; the accident investigation board extended the checklist after that accident.) A few items on that checklist were missed. One of the things that automation is great at is checklists.
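To illustrate why checklists suit automation, here is a minimal, hypothetical Python sketch (the items, priorities, and timings are invented, not the actual ditching procedure) in which an automated runner completes the most critical items first under time pressure and explicitly flags, rather than silently drops, anything deferred:

```python
# Hypothetical checklist runner: under time pressure it completes the most
# critical items first and reports what was deferred, rather than silently
# skipping items the way a rushed crew can. Items and timings are invented.

def run_checklist(items, time_available_s):
    """items: (name, priority, duration_s) tuples; lower priority = more critical."""
    completed, deferred = [], []
    remaining = time_available_s
    for name, _priority, duration in sorted(items, key=lambda it: it[1]):
        if duration <= remaining:
            remaining -= duration
            completed.append(name)
        else:
            deferred.append(name)   # tracked and flagged, never forgotten
    return completed, deferred


ditching_items = [
    ("signal distress", 2, 5),
    ("configure cabin for impact", 1, 10),
    ("seal fuselage vents", 1, 5),
    ("set flaps for minimum touchdown speed", 1, 10),
    ("shut down engines before touchdown", 2, 15),
]

done, skipped = run_checklist(ditching_items, time_available_s=30)
print("completed:", done)
print("deferred (flagged for the crew):", skipped)
```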

In short, I think we’ll still need humans to make the higher-level decisions. Where are we going? When something goes wrong, what do we do now? I think we’ll see more and more aids and suggestions for human operators, but we’ll still leave the higher-level decisions to them, especially when the overall environment and situation are off-nominal.

Editor’s Note: Misty Davies, Deputy Project Manager for the System-Wide Safety Project in NASA’s Airspace Operations and Safety Program, will be moderating a panel on Safety and Security in Autonomous Technology on December 16, 2020 at 12:15 pm Eastern, during AutonomousTech Innovation Week. For more information and to register for your free pass, please visit the event website.
