Crew Safety Perspective

There’s been a lot of discussion over the past year or two on a few blogs (this one, Transterrestrial Musings, and Wayne Hale’s blog, among several others) about the proper level of emphasis on crew safety for commercial crew vehicles. The basic thesis that I and several of these other bloggers have advanced is that crew safety is only one of several important metrics, and shouldn’t be overemphasized at the expense of all the others. The fear is that if “safety is our first priority,” then actually accomplishing the mission, or doing things affordably enough to enable new commercial markets, often takes a backseat (if it isn’t neglected entirely). The problem is that it’s easy to brush off this argument. After all, we’re talking about human lives here. So what if it takes an extra billion or two, adds 2-3 years to the development time, and ultimately costs so much that the resulting vehicles are only affordable for NASA, so long as we reduce the risk to the brave astronauts who’ll be flying on these risky commercial vehicles?

I had an interesting thought experiment that I think puts this line of thinking in a different perspective.

One of the most promising applications I’ve seen for microgravity research on the station is the development of vaccines. Apparently some infectious diseases (I think mostly bacterial ones) behave very differently in microgravity–they grow much faster. This increase in virulence, combined with turning off some of the confounding factors, supposedly enables researchers to more quickly isolate the cell receptors, genes, and other mechanisms that govern the spread of the disease, allowing them to craft vaccines that have fewer negative side effects, are more effective, and in theory can make it through clinical testing and to market faster than terrestrially-developed counterparts. At least that’s the theory as I understand it, in semi-layman’s terms. Two specific diseases currently being worked on by NASA and commercial firms like Astrogenetix are Salmonella and MRSA (Methicillin-Resistant Staphylococcus Aureus). The hope is that an MRSA vaccine developed on the station will beat a terrestrial version on both counts: more effective, with fewer negative side effects.

So here’s the thought experiment. Right now, according to a little googling, MRSA kills about 19,000 Americans per year. As I understand it, there are a few terrestrially developed antibiotics and vaccines in the works, but suppose a microgravity-developed MRSA vaccine was effective 10% more often (i.e. if the terrestrial versions could save someone’s life 50% of the time, the microgravity-developed vaccine could save someone’s life 60% of the time). That would equate to ~1,900 lives saved per year, ~158 lives saved per month, or approximately 5 lives saved per day. And mind you, those numbers are only for American lives saved.
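For anyone who wants to check the arithmetic, here’s the back-of-the-envelope version. Keep in mind the inputs are the assumptions from the paragraph above (a googled death toll and a hypothetical 10-percentage-point effectiveness gain), not hard clinical data:

```python
# Assumptions from the thought experiment above -- not measured values.
us_mrsa_deaths_per_year = 19_000   # rough figure from a little googling
effectiveness_gain = 0.10          # hypothetical: 50% -> 60% survival rate

# Extra lives saved by the better vaccine, at various time scales.
lives_saved_per_year = us_mrsa_deaths_per_year * effectiveness_gain
lives_saved_per_month = lives_saved_per_year / 12
lives_saved_per_day = lives_saved_per_year / 365

print(f"per year:  {lives_saved_per_year:.0f}")   # ~1900
print(f"per month: {lives_saved_per_month:.0f}")  # ~158
print(f"per day:   {lives_saved_per_day:.1f}")    # ~5.2
```

And again, that counts American lives only; the worldwide numbers would be considerably larger.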

Right now the development of vaccines like this is highly dependent on the frequency of upmass and downmass opportunities to and from the space station, as well as on the crew time available for doing research. From conversations I’ve had with CASIS, the ISS National Lab, and some others at NASA over the past few weeks, those two challenges (delivery/return frequency and crew research availability) are by far the two biggest obstacles to effective use of the ISS. While there are several potential solutions to these problems–and in fact, I’m working on some really intriguing ones on the crew research availability side at Altius at the moment–one of the simplest ways to improve the situation on both fronts would be for Commercial Crew to enter operational service sooner.

Right now, other than very tiny payloads on Soyuz, Dragon is the only way of getting payloads back from the station, and even when Elon’s team gets up to full speed, that’s only three opportunities per year. Any of the commercial crew vehicles being developed would add substantially not just to total downmass “tonnage” but, more importantly, to the frequency of downmass opportunities, increasing that number to potentially 5-6 times per year.

Additionally, once commercial crew vehicles are flying, their lifeboat capability (and I agree with Rand’s take on how necessary that really is) will enable adding an extra crew member to the ISS, bringing it to a total of seven crewmembers, with four of them on the US side. Right now, between the three crewmembers on the US side, we’re only getting about 1,800-1,900 man-hours of research work done per year on the station–an average of about 35 hours per week total between the three of them. Just adding one more crew member on the USOS side would likely double that number, potentially doubling the ROI for the station.

Between these two changes enabled by getting Commercial Crew into operation, experiments like the MRSA vaccine development process can proceed much more quickly. As Tom Pickens of Astrogenetix explained at a Space Angels Network event I was presenting at in Houston a bit over a year ago, their development process depends on the ability to do 5-6 launch/process/return iteration cycles during the development of a given vaccine. Adding flight opportunities, and making sure each experiment gets processed while the delivery vehicle is on station so it can come back down on the same vehicle (“Sortie Science,” as the National Lab folks are calling it), can greatly shorten the time it takes to get a vaccine developed and into clinical testing.
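To see how much flight cadence matters, here’s a rough sketch of the iteration math, using Tom Pickens’s 5-6 cycle figure and the flight rates mentioned above. The one-full-cycle-per-flight assumption is mine (it’s essentially the best case that “sortie science” enables), and real campaigns would add processing and manifesting delays on top:

```python
def development_years(iterations_needed: int, flights_per_year: int) -> float:
    """Years to finish a vaccine development campaign, assuming each flight
    opportunity supports one complete launch/process/return cycle (i.e. the
    sample is processed on-orbit and returns on the vehicle it arrived on)."""
    return iterations_needed / flights_per_year

# Dragon-only cadence (~3 flights/year) vs. Dragon plus commercial crew (~6/year),
# for a campaign needing 6 iteration cycles.
for flights in (3, 6):
    print(f"{flights} flights/yr -> {development_years(6, flights):.1f} years")
```

Even in this idealized sketch, doubling the flight rate cuts a two-year campaign to one year; miss a return opportunity and you lose months, which is why the frequency of downmass matters more than the tonnage.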

While shortening the development cycle has serious commercial profitability benefits (an unfinished vaccine is like a non-interest-bearing checking account with a very high monthly fee–you only see a return once the product actually hits the market), in this particular case it also has a dramatic value in saved lives. Put simply, every day a vaccine like this gets to market sooner means a certain number of people who aren’t going to die painfully and prematurely. In the particular case of a 10% better MRSA vaccine, we’re talking about roughly 5 additional American lives for every day sooner the vaccine reaches the market.

So what does this have to do with space safety? Pretty simple. If NASA isn’t blowing smoke about the benefits of microgravity research for developing vaccines (and I for one believe them in this case), then delays in Commercial Crew availability due to added safety requirements come with an impressive cost in human lives. Adding an extra year to bump the theoretical reliability of commercial crew from 99% to 99.5%, for instance, potentially just cost you almost 2,000 American lives–from this one vaccine alone. These are lives that could’ve been saved by allowing a faster, more streamlined commercial crew development process, and by not starving it of funds to pay for heavy-lift rockets without destinations.

Think about that. Just shaving 36 hours off the availability date of commercial crew could potentially save more lives than would be lost in the worst-case Commercial Crew crash. Even if expediting the process–dropping many of the NASA Human Rating requirements, dropping some of the abort tests, and sticking with Space Act Agreements instead of FAR contracts–really meant a massive decrease in actual safety (I don’t think it would), to say a 5% chance of losing a crew on a given flight, over the course of the ISS’s life you would save hundreds of times more US lives by taking that course than you would potentially risk in astronaut lives.
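As a sanity check on that claim, here’s a deliberately crude expected-value comparison. Every input is either an assumption stated in the post or one of mine, labeled in the comments; the point is the rough ratio, not the exact numbers:

```python
# Crude expected-value sketch of the tradeoff above. Inputs are assumptions,
# not measured values; the ones marked "my assumption" are not from the post.
lives_saved_per_day = 5.2     # from the MRSA estimate earlier in the post
p_loss_per_flight = 0.05      # the deliberately pessimistic 1-in-20 figure
crew_size = 7                 # worst case: a fully loaded capsule (my assumption)
flights_per_year = 4          # crew rotation cadence (my assumption)
iss_remaining_years = 10      # remaining station life (my assumption)
schedule_gain_years = 1       # availability moved up by one year

# Expected astronaut deaths over the station's remaining life at that loss rate,
# versus patient lives saved by getting the vaccine to market a year sooner.
expected_astronaut_deaths = (p_loss_per_flight * flights_per_year
                             * iss_remaining_years * crew_size)
lives_saved = lives_saved_per_day * 365 * schedule_gain_years

print(f"expected astronaut deaths: {expected_astronaut_deaths:.0f}")  # ~14
print(f"patient lives saved:       {lives_saved:.0f}")                # ~1900
```

Under these particular assumptions the ratio comes out at better than a hundred to one from this single vaccine; the exact multiple swings with the flight rate and remaining station life, but it takes implausible inputs to get it anywhere near even.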

Gives you some perspective, doesn’t it?

This entry was posted in Commercial Crew, Commercial Space, COTS, NASA, Space Policy, Space Safety, Space Transportation.

28 Responses to Crew Safety Perspective

  1. Andrew Swallow says:

    The Acceptable Quality Level (AQL) for a hazardous defect is frequently specified as 0.01%. That would be 1 failure every 10,000 launches. We are currently nowhere near that good.

    The Shuttle Loss of Crew (LOC) rate was about 1 in 50 launches. From the way people are acting, this was too high, so NASA needs to aim for a LOC of about 1 in 1,000.

  2. Chris (Robotbeat) says:

    Andrew: So, replace one arbitrary limit with another? Jon’s idea at least nails down a reasonable cost/benefit analysis (both have units of human lives lost or saved), with enough margin built-in that you can handicap the benefit side and still make it out ahead by speeding things up.

    Jon: Good argument. I think though that the best argument is that if we want perfect (short-term!) safety, we shouldn’t go at all. Like Tito has realized, we are all mortal. Life is a one-way mission, what matters is what we do with it so let’s get on with it and get on the rocket. On a broader scale, you can say the same thing about our nation or even our species.

  3. Chalmer says:

    I agree with the blogger. Unfortunately, opportunity cost is something NASA often ignores. They are not alone in that, but it is sad either way.

  4. Andrew Swallow says:

    Using the same limit as everyone else will get you out of a lot of problems.

  5. john hare says:

    Andrew,
    It may get you out of a lot of problems, but it still leaves thousands of people a year dead who could possibly have been saved. In saving the visible casualties, you sacrifice the invisible ones. By Jon’s numbers, a crew of seven could be lost every week for the next five years and it would only just match the losses from going without an improved vaccine for a single year.
    As Jon and Rand have noted on several occasions though, no commercial company would consider operating at those loss rates. They couldn’t afford the vehicles even if there were no psychological effects on the customers, or lawsuits from them. It is more often the perception of safety, as enforced by the current stakeholders, that jacks up the cost and time to market of new systems.
    The point of this post, as I interpret Jon’s meaning, is that the opportunity cost of not taking chances is thousands of lives per year from the lack of a single vaccine in the US alone. Multiply that by several vaccines/treatments, and by the world population instead of just the US, and you end up with something that would be considered a crime against humanity if it were deaths by visible causation instead of invisible deaths caused by inaction.

  6. Andrew W says:

    The first step towards safer spaceflight is to start with a safe design for the launch vehicle/spacecraft. The Shuttle was never going to be very safe, I’m betting Falcon/Dragon will prove safer at a fraction of the cost.

  7. Andrew W says:

    Extending on my above comment, because NASA was stuck with an inherently unsafe design, the efforts to improve safety were – out of necessity – elevated to a point where they were strangling shuttle operations. With an inherently safe design I don’t think that will happen.

  8. Rand Simberg says:

    From the way people are acting this was too high, so NASA needs to aim for a LOC of about 1 in 1,000.

    People are acting irrationally. The solution is not to establish an arbitrary and unrealistic (and perhaps for now unaffordable) standard, but to explain to them why we need to be more accepting of risk.

  9. Pingback: Transterrestrial Musings - How Delaying Commercial Crew Is Deadly

  10. ken anthony says:

    Andrew, bringing the shuttle into it just confuses the issue. Keep it simple. Lives are lost regardless. The point is that needless regulations cost lives that would have been saved had there been no regulation at all.

    People ‘in charge’ have a natural need to ‘do something’ with the added pressure coming from both above and below. The only real solution is for them to not be in charge.

    People need to change their mindset regarding the need for most regulations. Commercial entities are almost fully regulated simply by having customers (especially when competition exists.)

  11. Andrew W says:

    “The point is that needless regulations cost lives that would have been saved had there been no regulation at all.”

    Often true, but in the case of spaceflight I struggle to find an example of this. Did “needless regulations” cost lives during Mercury, Gemini, and Apollo? How about Soyuz? All of these are systems I’d describe as relatively simple and “inherently safe” designs. It might be true that less emphasis on safety could have saved money, but I’m skeptical that a claim could be supported that these vehicles and their launch rockets were over-engineered in the name of safety–after all, the cost of losing a vehicle isn’t only the inconvenience of losing the crew.

  12. ken anthony says:

    Andrew,

    Did you miss the point about commercial ventures having a built in incentive for safety? Mercury, Gemini, Apollo and Soyuz were all government programs.

    This is all about seeing the unseen. It has occurred to me that we have a moral obligation to create wealth because poverty kills. That means we have a moral obligation to include space in our economic sphere.

  13. Tom Billings says:

    Andrew said:

    “Often true, but in the case of spaceflight, I struggle to find anything I see as an example of this, did “needless regulations” cost lives during Mercury, Gemini and Apollo? How about Soyuz?”

    While historical analogies are sometimes useful, they should not be used in this evaluation. These programs were not commercial endeavors, but part of a decades-long propaganda campaign in World War 3. In fact, each of them was built with far higher levels of risk than is considered acceptable today, because of the high priority of winning that propaganda victory. As an example, the Gemini program was saved from catastrophe by Neil Armstrong, who stopped his Gemini capsule from tumbling out of control on orbit when a reaction control motor stuck “on”.

    The Saturn V, or at least its first stage thrust structure, *was* over-engineered for the Moon missions in some ways, because von Braun wanted the option to hang *9* F-1 engines off the bottom of the first stage, if needed to accomplish the mission, or to contribute to later Mars missions. The key is that it was *the mission* that in fact had first priority, not safety. Today, the politicians simply see nothing being done by manned spaceflight as being as important as that propaganda victory in WW3.

    Jon is pointing out that saving people from dying by horrid diseases is *worth* the risk of someone dying in a space launch. Meanwhile, the pols, and the NASA bureaucrats that have spent their professional lives under the pols, keep trying to act like it’s still January 1973.

  14. Andrew W says:

    Ken and Tom, I don’t understand the relevance of the points you make.
    I don’t dispute that commercial ventures have strong incentives for safety, and I don’t dispute that there’s merit in the idea that lives saved can be weighed against lives cost.

    The point I’m making is that the risky nature of the Shuttle is the main reason for the perceived excessive focus on safety in space flight, and that with inherently safer launch systems there would have been no need for such an emphasis on safety.

    Agree or disagree, but please stop trying to represent my point as a challenge to your ideologies.

  15. Rand Simberg says:

    The point I’m making is that the risky nature of the Shuttle is the main reason for the perceived excessive focus on safety in space flight, and that with inherently safer launch systems there would have been no need for such an emphasis on safety.

    Your point is wrong. The main reason for the not-just-“perceived” but very real and misguided focus on safety in spaceflight is that spaceflight is not perceived to be important. That’s the point of Jon’s post, and my book.

  16. Karl Hallowell says:

    The point I’m making is that the risky nature of the Shuttle is the main reason for the perceived excessive focus on safety in space flight, and that with inherently safer launch systems there would have been no need for such an emphasis on safety.

    There are two problems with that assertion. First, we see similar risk-irrational behavior in far safer fields, such as passenger aviation. We also routinely see that sort of irrational behavior in medicine, which often has very stark payoffs such as Jon outlined above. This can be readily explained by the perverse incentives facing regulators. They get punished for certain kinds of failures, such as plane crashes, bad drug reactions, or the destruction of a Space Shuttle. They don’t get punished for other failures, such as the collapse of large airliner manufacturing, the vast costs of new drug testing, or the destruction of the VAB by a hurricane. That results in far less tolerance for failures of the first kind than the second.

    Second, how do you get a vehicle as safe as you want, 1 LOC in 1,000 flights? My assertion here is that you only get that safe through a much higher launch frequency. IMHO, better vehicle design alone (keep in mind that you don’t have much idea what actually is better vehicle design unless you’re launching stuff) isn’t going to get you an order of magnitude improvement, much less the 20-fold improvement you want going from 1 in 50 to 1 in 1,000.

    And this is another area where the powers that be hold things back. There’s no benefit to that high a launch rate when the main goal is merely appearing to do something in space. A few launches a year and a 1-in-50 failure rate is adequate.

  17. Andrew W says:

    I agree with Karl almost entirely; the one point where I don’t quite agree is his assertion “that you get that safe only through a much higher launch frequency”.
    I’ll stick with my opinion that the initial design of the vehicle is the main factor in its safety, and that the Shuttle, being principally designed to serve political ends, was an unsafe design from the outset.

    Airliners, whose designs have also been significantly influenced by politics (the Comet springs to mind), have often suffered safety problems as a result of that political meddling.

    Rand said: “The main reason for the not-just-“perceived” but very real and misguided focus on safety in spaceflight is that spaceflight is not perceived to be important. That’s the point of Jon’s post, and my book.”

    I didn’t read that as “the point of Jon’s post,” though I’m happy to argue that regulation often causes more harm than good, including with respect to safety.
    I don’t see spaceflight not being perceived as important as leading to a misguided focus on safety; I would have thought the opposite more likely–an area of enterprise perceived as unimportant being subject to *less* focus on safety by regulators. (Karl’s examples of excessive regulator control are, I think, in areas most people, and the regulators, see as important.)

  18. Hey guys,
    Sorry for putting up a controversial post like this on a Saturday evening, when I usually try to avoid getting on the interwebz on Sundays.

    One important thing I noticed is that there are *two* different Andrews on this comment thread. Andrew Swallow was claiming we need some arbitrary safety level of 1:1,000 or 1:10,000. Andrew W (who I’ve actually met) was mostly just stating that starting from a fundamentally safer design (like Falcon+Dragon) is likely to get us most of the safety we need. Andrew W, is that a fair representation?

    ~Jon

  19. e.p. says:

    Allow commercial companies with commercial astronauts to assume risk. Allow commercial insurers to insure said commercial companies’ risk of failure impacting uninvolved people, as long as some minimal expected-casualty (E_c) threshold is achieved.

    No need for thresholds of success/failure if the risk is limited to participants and all other risks are insured.

    -e.p.

  20. Andrew W says:

    Jon, regrettably you’re confusing me with someone else, we haven’t met.

    Andrew W (who I’ve actually met) was mostly just stating that starting from a fundamentally safer design (like Falcon+Dragon) was likely to get us most of the safety we needed. Andrew W. is that a fair representation?

    ~Jon

    That’s a fair summation, though I’d add my point that the emphasis placed on safety inevitably increases with an unsafe design, and that that is fundamentally why this focus on safety has developed within NASA.

  21. Andrew Swallow says:

    LOC only applies to manned space flights. This means that the Falcon 9 and the Atlas V can be debugged using unmanned flights.

    Unmanned flights can be used to launch the biological material. The material can be berthed to the ISS or stay in a DragonLab. This affects the mission’s LOM but not the LOC. New biological material can be launched before the manned CST-100 flies.

    If the rocket blows up, the client could routinely be given a replacement launch vehicle for free. This gives the launch organisation and manufacturer a significant financial reason to remove the bugs. The cost of building replacement satellites gives the client a powerful financial motive to choose launch vehicles with good LOM rates.

  22. Paul451 says:

    Every loss of crew (Apollo 1 and the two Shuttle losses) has led to a roughly 2-year review/redesign period. So if there is a value of ~5 lives/day in having extra crew on the ISS, each LOC grounding of the launcher costs up to ~3,600 lives, not just the crew.

    [Of course this just means you should have multiple crew launchers, so you are never truly grounded. But I think we all know that, and know that those who actually control budgets will do everything they can to remove essential redundancy.]

  23. ken anthony says:

    a challenge to your ideologies

    My only ideology is truth above all. Correct me and you have a friend.

    the initial design of the vehicle is the main factor in its safety

    Far more important is a willingness to change when understanding increases. This means you can start with any design and end up with any other.

    From a practical viewpoint, people are resistant to change, so your point seems more reasonable than it is. People also foolishly hold to sunk costs. Being able to determine which are not is a rare gift.

  24. ken anthony says:

    Sorry for putting up a controversial post

    Controversy is a good thing Jon. All progress happens because we question our own thoughts.

  25. guthrie says:

    Excellent perspective on the opportunity cost, in human lives, of delaying commercial space activity. Thank you. Perhaps private space ventures would benefit by relocating to states that are more willing to allow such activity (like Singapore?), or by developing launch systems outside territorial jurisdiction (i.e., an ocean-borne platform)?

  26. Warren Platts says:

    I respectfully disagree with the premise of the article. Zero accidents is possible. The Shuttle accidents were both foreseeable and preventable. The problem with the Shuttle program was too much acceptance of risk–not too little! The fatalistic philosophy that “accidents will happen” inevitably leads to fatal accidents.

    But if you want to talk dollars, arguably the worst thing was the loss of the orbiters themselves. What’s the life of an astronaut worth in terms of compensation? $10M, conservatively? But the orbiters were worth billions. Throw in the costs associated with the investigations and lost time, and it all adds up to real money.

    You might be surprised, but major industrial companies like Alcoa and Shell that have adopted the zero-accident philosophy find that things run more smoothly and that their bottom lines actually improve.

  27. Sam Dinkin says:

    There are less speculative ways to turn saved money into saved lives. If lives are so important, buy a million AEDs and cut $1B out of the space program. That would probably save a good fraction of the 250,000/year who die of sudden heart attack http://www.theheartfoundation.org/heart-disease-facts/heart-disease-statistics/

    There ought to be some kind of indulgence one can buy to engage in risky activities in one area of the economy that would be more than offset by using the money to make another area safer. Sort of like carbon trading.

  28. Pingback: Crew Safety Perspective | Selenian Boondocks | The Newspace Daily
