XCOR Thoughts

For those of you who read my blog but somehow hadn’t heard the news already, three of XCOR Aerospace’s four founders left the suborbital rocket startup this past week. I got a notification last week on LinkedIn about Dan DeLong leaving, but found out today via Twitter and a press release from XCOR that Jeff Greason and Aleta Jackson had both left as well. For some reason this news feels like a little bit of a gut punch, so I feel like sharing some of my rambling, semi-coherent thoughts on the subject.

I’ve been following XCOR since they first started in 1999. I was barely 19 at the time, and had been interacting with (arguing with) Doug Jones, Jeff Greason, and Dan DeLong on the sci.space.* usenet groups for about three years by that point. I remember John Hare sending me an email while I was in the town of Bolinao on my mission (shortly before 9/11) with pictures of the EZRocket’s first flights. I remember crashing on XCOR’s hangar floor the night before watching SpaceShipOne’s first flight (and being terrified by how much that building creaks in the Mojave winds). I remember being grateful for all the legwork XCOR (particularly Randall Clague) did in trying to help shape the reusable vehicle experimental permit and launch licensing process in a way that protected the uninvolved while enabling the industry to learn and grow. I remember XCOR encouraging Masten to move down to Mojave, and later helping us in our successful Lunar Lander Challenge attempts. I remember watching the flights they did of the X-Racer, and being impressed by how technically competent their team always felt. I remember Jeff Greason serving on the Augustine Committee, acting as the voice of reason, and a sort of “Elder Statesman1” representing the commercial space industry. I could probably go on.

It just feels weird thinking of the idea of an XCOR without Jeff, Dan, and Aleta there.

While we don’t know the cause of their departures from XCOR, I’m not sure which would be worse: them being booted, or things getting bad enough that they would rather leave than stay another few years. I know one of my fears as a founder has been the idea of eventually losing control and being kicked out of my own startup. That would be almost as awful as having my family disown me. I hope that’s not the situation, and that it was more a case of the three of them deciding they needed a change of pace and/or seeing new opportunities they needed to pursue. I also hope, for the sake of my many friends still at XCOR, that the company will manage to soldier on and make it to flight with Lynx. And I hope that Jeff, Dan, and Aleta will soon find new projects that can use their skills, and that they can yet see all the hard work they put into XCOR pay off for them and for the industry.

I’m not sure if I’ve really added anything, but this news just has me in a bit of a funk. Good luck, my friends!

Posted in Random Thoughts | 3 Comments

Anti-radiation Biological Countermeasures: Amifostine

Amifostine (image rights: Ganfyd-licence user Mlj)

Whenever human spaceflight comes up, inevitably someone mentions radiation. Personally, I think the radiation risk is WAY overblown. “Compound conservatism” is rampant, I believe, and gets worse as time goes on as people keep recycling the same sources, adding another safety factor each time (see here for a slightly longer explanation). Being extra conservative with radiation risk assessment can eventually produce an estimate of tolerable risk that’s completely detached from reality, leaving very little budget to deal with the other, much bigger risks, if there’s even any money left to do the mission at all!

If we followed EVERYONE’s conservative advice for radiation risk, we’d be asking astronauts to fly in a giant sphere of polyethylene with no windows, hardly any room, and no EVAs ever (no “one small step” moment because of the risk of radiation, let alone a colony). We certainly wouldn’t be flying to ISS as we are now.

That aside, we can look at what IS a reasonably feasible and low-mass approach to dealing with radiation. Instead of the usual water or polyethylene or regolith shielding or magnetic shielding, I will look at a somewhat over-looked option: biological countermeasures. Radiation is, of course, often used to treat cancer. As such, there is a sizable body of work and several possible treatments that limit the toxicity of radiation to normal (non-cancerous) cells (thus allowing a higher dose to be used against cancerous cells, which are protected less). The most studied drug is, I believe, Amifostine. “Amifostine is the only approved radioprotective agent by FDA for reducing the damaging effects of radiation on healthy tissues.” (Cakmak et al)

Most such studies look at the ability of Amifostine to protect healthy cells from cell death and other damaging effects of radiation (such as damage that may lead to neurodegeneration), and it seems to be effective there (according to Cakmak and friends). But what is most relevant to this discussion is its effect on a specific type of radiation-induced toxicity: carcinogenesis. People have suggested that stopping cell death may actually increase tumor-related toxicity (I see their argument, but it is much more likely that, thanks to Amifostine’s free-radical scavenging, the total damage to the DNA is reduced). But is that actually true? No. No it’s not:

Paunesku et al:

Amifostine protected against specific non-tumor pathological complications (67% of the non-tumor toxicities induced by gamma irradiation, 31% of the neutron induced specific toxicities), as well as specific tumors (56% of the tumor toxicities induced by gamma irradiation, 25% of the neutron induced tumors). Amifostine also reduced the total number of toxicities per animal for both genders in the gamma ray exposed mice and in males in the neutron exposed mice.

(note: neutrons have a high quality factor, sort of like GCRs)

However, there is the argument that long-term use of a radioprotectant is not very effective, since it could reduce the body’s natural defense mechanisms.

As an aside, these very natural defense mechanisms are exactly why I think the threat posed by long-term chronic low doses of radiation is actually quite low… The body adapts to the constant radiation by ramping up its natural repair/scavenging mechanisms… But with a short, very large acute dose, the body does not have time to adapt and its repair mechanisms are overwhelmed. It is these large acute doses that the general risk of cancer is actually based on, and I find that extrapolating down from acute doses is incredibly unrealistic (on the ultra-pessimistic side). Aside over.

So, it may be that Amifostine and similar drugs are really most effective against acute doses of radiation. You might want to inject a little Amifostine when you learn a flare is on its way (once you get inside your radiation shelter). BUT I am not entirely convinced that there’s no benefit at all to Amifostine for chronic low-dose radiation. Either way, this whole field has tremendous potential. Imagine: you could potentially reduce the tumor toxicity of a really bad solar flare event by 25% with just a few grams of extra mass! And that’s on top of the benefit you might get from shielding and fast transit. On a per-mass basis, biological countermeasures are essentially unbeatable. This is why I think that if we’re going to spend any resources on solving the radiation problem, it should probably go toward maximizing whatever benefit we can get from drugs like Amifostine, and toward finding out if we can boost our bodies’ built-in repair mechanisms through, say, targeted gene therapy. There are examples of extreme radiation tolerance and gene repair in nature that put even some rad-hard electronics to shame, so the ultimate potential (on the physics side) of biological countermeasures is pretty high as well. Biology may be a lot messier and frustratingly complex, but the potential gains make this path toward radiation mitigation worth it. Once developed, a drug or treatment would be very cheap, while shielding your transit craft with tens of tons of polyethylene or something will always be fairly expensive (even with space mining), or at least cumbersome.

Posted in Uncategorized | 6 Comments

Simple SPS ??

Reading about the drawbacks of conventional Solar Power Satellites and the comments in response to Chris eventually triggered an idea, or perhaps a memory of something hinted at in something I read once. I’m somewhat less certain of the complete originality of my ideas than I used to be.

The standard SPS concept has a few drawbacks that Chris brought out quite well. I hadn’t really given it that much thought before, and found the problems more interesting than the idea itself. There are several conversions in getting from sunlight to the terrestrial power grid. Each conversion has some efficiency loss, which increases the required SPS size. Four or five conversions, multiplied together, jack up the SPS size to several times the value one would expect without doing the trades. The roughly one kilowatt per square meter that would make a square kilometer a gigawatt facility instead becomes several square kilometers of SPS to net a gigawatt on the grid.
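To make the compounding concrete, here’s a quick sketch of how a conversion chain multiplies out. The individual efficiency numbers below are my own illustrative assumptions, not figures from Chris’s posts or any particular SPS study:

```python
# Rough sketch of how conversion losses multiply up the required SPS area.
# Every efficiency value below is an illustrative assumption, not a figure
# from any specific SPS design study.

solar_flux_w_m2 = 1361.0  # solar constant in space, W/m^2

# Assumed efficiency of each step in the chain, sunlight -> grid:
steps = {
    "photovoltaic conversion":  0.20,
    "DC to RF (transmitter)":   0.80,
    "beam capture at rectenna": 0.90,
    "RF to DC (rectenna)":      0.80,
    "DC to AC and grid tie":    0.95,
}

overall = 1.0
for eff in steps.values():
    overall *= eff

# Area to net 1 GW on the grid, vs. the naive "area at 100% efficiency" figure
naive_area_km2 = (1e9 / solar_flux_w_m2) / 1e6
actual_area_km2 = naive_area_km2 / overall

print(f"overall efficiency: {overall:.1%}")
print(f"area multiplier:    {1/overall:.1f}x")
print(f"area for 1 GW:      {actual_area_km2:.1f} km^2 (vs {naive_area_km2:.2f} km^2 naive)")
```

With these placeholder numbers the chain is only ~11% efficient end to end, so the collector grows roughly 9x, which is the effect the post is describing.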

Another problem is the heat that must be disposed of to keep the solar cells and transmitters cool enough to work properly. The mass on orbit doubles again to net your gigawatt on the grid. A couple of unsettling problems if you happen to be an SPS fan.

In comments it was suggested that it would be better to simply orbit a mirror to reflect sunlight to the desired location. That doesn’t work because the sun is not a point source of light. Sunlight converges at about 1% (the Sun’s roughly half-degree angular size) onto a mirror in GEO and will diverge at the same 1% when reflected to Earth. The reflected sunlight would cover a disk well over 300 kilometers across on the ground, so a one km mirror would light the ground at roughly 1/90,000 of solar intensity.
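The geometry is easy to check in a few lines (the solar angular diameter and GEO altitude are standard values; I’m ignoring mirror shape and pointing details, so treat this as order-of-magnitude):

```python
import math

# Flat-mirror-in-GEO sanity check: reflected sunlight diverges at the
# Sun's angular diameter, so the ground spot is huge and dim.
sun_angular_diameter = math.radians(0.53)  # ~9.3 milliradians, i.e. ~1%
geo_altitude_km = 35_786
mirror_diameter_km = 1.0

# Ground spot is roughly (divergence * distance), plus the mirror itself.
spot_diameter_km = sun_angular_diameter * geo_altitude_km + mirror_diameter_km

# Ground intensity relative to full sunlight: pure area dilution.
dilution = (mirror_diameter_km / spot_diameter_km) ** 2

print(f"ground spot: ~{spot_diameter_km:.0f} km across")
print(f"intensity:   ~1/{1/dilution:,.0f} of full sunlight")
```

This gives a spot a bit over 330 km across and a dilution in the 1/100,000 neighborhood, consistent with the “well over 300 kilometers” and “1/90,000” figures above.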

So my thought is to use one mirror to focus solar radiation on a hot spot that would then act as a point source of radiation for a second mirror to send to Earth. The hot spot could be thought of as similar in intent to the tungsten filament in a flashlight bulb. This cartoon is not to scale and is meant to show the intent only.


This possibly could reduce beam spread to something reasonable at the expense of the beam being smeared across many frequencies. The visible light SPS could serve a few functions sometimes suggested by reflected sunlight advocates. If one km of sunlight could be focused such that 50% of the light was in a 10 km diameter, 1/200 of sunlight would be considerably brighter than a full moon. City lights for a large city without any conversion at all, and both storm and strike proof. Battlefield illumination as desired out of reach of interdiction. Operations lighting in the arctic for commercial and military uses. Night search and rescue. And so on for illumination as the beam would be too weak for power collection.
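A quick sanity check on those illumination numbers (the full-moon comparison uses the commonly quoted ~1/400,000 moonlight-to-sunlight ratio, which should be treated as a rough assumption):

```python
# Illumination claim: a 1 km mirror whose hot-spot optics put 50% of the
# collected light into a 10 km ground spot. The moonlight-to-sunlight
# ratio below (~1/400,000) is a commonly quoted approximation.

mirror_km = 1.0
spot_km = 10.0
capture = 0.5  # fraction of light landing inside the 10 km spot

fraction_of_sunlight = capture * (mirror_km / spot_km) ** 2  # 1/200
moonlight_fraction = 1 / 400_000                             # full moon vs. sun

print(f"ground intensity: 1/{1/fraction_of_sunlight:.0f} of sunlight")
print(f"vs full moon:     ~{fraction_of_sunlight / moonlight_fraction:,.0f}x brighter")
```

So 1/200 of sunlight really is thousands of times brighter than a full moon, plenty for area lighting even though it’s useless for power collection.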

If a full sunlight focus is possible, then zenith solar cells would be considerably more productive than current usage.

The main advantage of a scheme like this, if feasible, is that it would be relatively light, cheap, and simple with a real likelihood of being implemented with mostly ET sources such as asteroids or the moon.

Posted in Uncategorized | 35 Comments

2H15 A Good Time for Geeking Out

I have to say the last half of this year is a great time to be a space/sci-fi/RPG nerd:

  • July: New Horizons successfully completes Pluto Flyby
  • August: Shadowrun: Hong Kong by Harebrained Schemes comes out
  • September: Harebrained Schemes planning to launch a Kickstarter for a Battletech tactical Mech combat game for the PC
  • October: The Martian movie hits the theaters
  • November: Not sure, but possibly the SpaceX return to flight, and maybe if we’re lucky their first successful F9R first stage landing?
  • December: Star Wars Episode 7 hits the theaters — hopefully JJ Abrams does a better job with this than Lucas did with the prequels…

I’m sure there are other things that ought to be on the list, but we have at least one serious geek-out moment per month this year. That is all.

Posted in Administrivia | 5 Comments

Mars surface shielding from radiation

I want a short little aside here to talk about a little pet peeve of mine:
People talk as if Mars’ atmosphere does basically nothing to reduce the radiation dose compared to free space. This is definitely not true; the confusion comes from a few areas, but largely because people have not bothered to do some basic math and geometry.

1) People use the datum, or even higher-altitude sites, to calculate the surface pressure. The pressure at the datum (the sort of average height on Mars, analogous to “sea level,” but not really) is 636 Pascals (6.36 mbar, http://nssdc.gsfc.nasa.gov/planetary/factsheet/marsfact.html ). But the scale height of Mars is 11.1 km. Scale height is the constant used to determine pressure given a simple exponential model of the planetary atmosphere. The lower the altitude, the higher the pressure, as given by this equation:

P = P0 * e^(-z/H)

Where P is the pressure at the altitude “z”, and P0 is the pressure at “zero” altitude, and H is the scale height.

So at Mars, P0 = 636Pa, H=11.1km, and the lowest point on Mars is in a corner of Hellas Basin at z=-8.2km (i.e. 8.2km below the datum), whereas pretty much all of Hellas Basin is 6km below the datum. https://www.psi.edu/epo/explorecraters/hellastour.htm

That gives us an estimate of over 1300Pa surface pressure at the deepest point ( https://www.google.com/webhp?#q=636Pa*e^(8.2/11.1) ) and at least 1090Pa anywhere inside Hellas basin ( https://www.google.com/webhp?#q=636Pa*e^(6/11.1) ).

2) People forget that Mars having a lower gravity means that the mass needed to get a certain pressure is higher than on Earth. So while 1kPa on Earth would mean just 10 grams per cm^2 of shielding, on Mars it is:
https://www.google.com/webhp?#q=636Pa*e^(8.2/11.1)/(3.71m/s^2) = 35.9g/cm^2.
https://www.google.com/webhp?#q=636Pa*e^(6/11.1)/(3.71m/s^2) = 29.4g/cm^2
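Putting both steps (the exponential atmosphere, then the column-mass conversion) into one short script, using the same numbers as the Google calculator links above:

```python
import math

# Mars atmospheric shielding: exponential pressure model, then column mass.
P0 = 636.0  # Pa, pressure at the datum
H = 11.1    # km, Mars scale height
g = 3.71    # m/s^2, Mars surface gravity

def pressure(depth_km):
    """Pressure in Pa at depth_km below the datum."""
    return P0 * math.exp(depth_km / H)

def shielding(depth_km):
    """Overhead column mass in g/cm^2 (Pa / (m/s^2) = kg/m^2; /10 -> g/cm^2)."""
    return pressure(depth_km) / g / 10.0

print(f"Hellas floor (-8.2 km): {pressure(8.2):.0f} Pa, {shielding(8.2):.1f} g/cm^2")
print(f"Hellas rim   (-6.0 km): {pressure(6.0):.0f} Pa, {shielding(6.0):.1f} g/cm^2")
```

Note that the division by Mars gravity (3.71 m/s², not Earth’s 9.81) is exactly why point 2 matters: the same pressure buys you almost 2.6x the shielding mass on Mars.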

3) That’s already decent shielding. However, there’s another significant point: that’s just the shielding at the zenith of the sky, which is the thinnest part! Everywhere else the shielding is thicker, and near the horizon there is MUCH more.

To explain this, I tried to write out the concept of a “solid angle” and how it is relevant:
Solid angle and Mars sky - Sep 7, 2015, 2-04 AM - p1
And then:
Solid angle and Mars sky - Sep 7, 2015, 2-04 AM - p2

So as you can see, the vast majority of your sky shielding (at least 70%) is over 1.4 (i.e. sqrt(2) ) times your zenith shielding. So we can write that as:

https://www.google.com/webhp?#q=sqrt(2)*636Pa*e^(8.2/11.1)/(3.71m/s^2) = 50.7g/cm^2
https://www.google.com/webhp?#q=sqrt(2)*636Pa*e^(6/11.1)/(3.71m/s^2) = 41.6g/cm^2

So, anywhere in Hellas Basin has basically half the dose of free space (shielded by the planet itself) PLUS another at least 40 grams per square centimeter of shielding just from the atmosphere.
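For anyone who wants to check the solid-angle argument without deciphering my handwritten notes, here’s the same calculation, using a flat-slab approximation for the atmosphere (which actually understates the shielding near the horizon, where planetary curvature makes the path even longer):

```python
import math

# The "70% of the sky has at least sqrt(2)x the zenith shielding" claim,
# using a flat-slab atmosphere approximation.
theta = math.radians(45)

# Fraction of the sky hemisphere at zenith angle > 45 degrees:
# solid angle from theta to 90 deg is 2*pi*cos(theta); hemisphere is 2*pi.
sky_fraction = math.cos(theta)       # ~0.707, i.e. ~70% of the sky

# Path length through a flat slab scales as 1/cos(zenith angle).
slant_factor = 1 / math.cos(theta)   # sqrt(2) at 45 degrees

zenith_shielding = 35.9              # g/cm^2, Hellas floor (computed above)
print(f"{sky_fraction:.0%} of the sky has >= {slant_factor:.2f}x zenith shielding")
print(f"Hellas floor slant shielding: >= {zenith_shielding * slant_factor:.1f} g/cm^2")
```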


From www.buildtheenterprise.com, but I believe they took this from another paper. EDIT: Yes, it’s from Rapp et al 2006

EDIT: To give an idea of how much 40 g/cm² of shielding can do, here is a graph that shows roughly the attenuation capabilities of polyethylene and aluminum. Mars’ atmosphere’s shielding capability would fall somewhere between those two. While this isn’t quite enough to be comfortable with the long-term GCR dose (you’d still want shielding on your hab), it does make EVAs far less dangerous in case of a solar flare (especially any acute effects), and makes EVAs in general a much lower long-term exposure risk. But the main effect is that solar flares represent less than a tenth of the risk of the unshielded case (i.e. just the spacesuit).

(Also, as a side note: much of the northern part of Mars is far below the datum as well. Not quite 40g/cm^2 of shielding, but a solid 30-35g/cm^2 in many places… But there are MANY reasons why you might want to build your settlement at low altitude anyway.)

EDIT AGAIN, 2015-09-08:
Here is a graph from Rapp et al 2006 on which I’ve drawn roughly where the equivalent dose would fall for Hellas Basin’s >40 g/cm^2 of CO2 shielding. The red horizontal line I added is for 20 cm of water, which looks to be just under 30 cSv/year (I believe this is in open space, not on Mars). The green line is for 40 g/cm^2 of regolith, which is a worst case for CO2 (carbon has a lower atomic mass than the typical silicon, calcium, and aluminum that make up the balance of regolith besides oxygen), at about 28 cSv, and 50 g/cm^2 of CO2 (the deepest spot on Mars) comes in at 27 cSv or so for the annual GCR dose. But again, this is free space. Those are just rough numbers, so there’s a bit of false precision there, but it does show that Hellas Basin has about as much equivalent shielding as a foot of water.

Material shielding comparisons1--Rapp2006--edited2

(caption: “Figure 1. Point estimates of 5-cm depth dose for GCR at Solar Minimum as a function of areal density for various materials (figure1.jpg). (Simonsen et al. 1997)”)

Posted in Uncategorized | 27 Comments

Summary of Some ULA Papers from AIAA SPACE 2015

ULA has often used the AIAA SPACE conferences as a venue for discussing technical ideas they were working on. In fact, I’ve written several blog posts over the years summarizing or commenting on previous versions of their papers. This year’s papers represent the first batch of AIAA SPACE conference papers since Tory Bruno took over as CEO in 2014, and to me show how strongly he’s backing his team’s efforts to accelerate implementation of some of these ideas that they’ve been pursuing for years.

You can find all of these papers on their publication page, here: http://www.ulalaunch.com/Education_PublishedPapers.aspx. At my request, ULA was kind enough to label all of the SPACE 2015 papers so you can pick them out of the crowd. I haven’t read all of the new papers, but here are three I wanted to provide summaries for:

There was also a paper on the Emergency Detection System for commercial crew flights, and a presentation by George Sowers talking about potential cis-lunar architectures enabled by their Vulcan/ACES vehicles, but I won’t review those here. I should also note that while I’m a big ULA fan, I’m also a SpaceX fan, so if there were any SpaceX papers from SPACE 2015 that people would like me to review, please let me know (via email if you have it, twitter, or in the comments).

ACES Stage Concept
ULA has been interested in doing a larger upper stage to replace Centaur since shortly after this blog was created 10yrs ago. While progress has been slow and mostly theoretical for a long time, the changes at ULA have made ACES a much higher priority. While Vulcan without ACES would allow them to retire the Atlas V and Delta-IVM families, without ACES they can’t retire the Delta-IVH, which is something they really need to do to get their launch costs competitive with SpaceX.

For those of you who haven’t seen any previous articles of mine about ACES, think of it as an enlarged Centaur, with a wider diameter (5.4m, the same as the payload fairings on Vulcan), more thrust, and the Integrated Vehicle Fluids system replacing the existing RCS, battery power, and pressurization systems (and some of the avionics). According to the paper, they’re still trading 1, 2, and 4 engine versions with at least three potential LOX/LH2 upper stage engines: the Blue Origin BE-3, the Aerojet Rocketdyne RL-10, and XCOR’s piston-pump-fed RL-10 competitor. By using a lot of lessons learned from Centaur and DCSS, the ACES stage should be one of the highest-performance LOX/LH2 stages to fly, be able to operate far longer than any other high-performance upper stage in history, have very low LOX/LH2 boiloff, and be surprisingly cost competitive.

Conceptual Layout of a 4x RL-10 Version of ACES

I’d suggest reading the paper for more details, but some of the highlights that stuck out to me, as someone who has been following previous iterations of this concept, include:

  • Their goal is to have the ACES stage actually be comparable cost to the existing Centaur stages, in spite of having 3x the propellant load, and 4x the thrust.
    • Part of this is by automating more of the tank welding steps, simplifying the structure to minimize the number of attachment points to the forward and aft bulkheads, and going with a concave-up common bulkhead with a centralized LH2 sump, among other things. While bigger, the structure will be a lot simpler, and consequently easier to manufacture, than Centaur.
    • Going with integrated vehicle fluids (IVF) system instead of the existing Hydrazine RCS, high-pressure Helium pressurization system, and large one-use batteries, both saves a lot of mass and cost, and particularly saves a lot in integration and testing. ULA is working with Roush to develop the IVF modules as an integrated and separately tested module where most of the testing happens before integration with the ACES stage.1 And those cost savings are on top of the huge performance and capability increases from IVF.
    • Part of this is by using aft-mounted avionics and encapsulated payloads to avoid needing to assemble the stage in a cleanroom. The avionics have also been modernized, and some of the avionics capabilities are being offloaded to the IVF controllers. The avionics for ACES should be both cheaper and far more capable than what is currently flying on Centaur.
    • Probably the part I was most skeptical about was how they were going to get the engine costs down–if they go with RL-10 class engines, ACES would have 2-4x as many engines as a Centaur. There are definitely efficiencies of scale, since at their planned flight rate they’d be using 6x as many RL-10 class engines per year as they currently are. But some of the pricing may also be from the fact that Aerojet Rocketdyne knows they have to compete with both XCOR and Blue Origin, so everyone is trying to provide the best realistic deal. The engine cost is probably the area I’m least convinced on, but hopefully there will be more in the future about how they intend to keep the engine costs down for ACES.
  • As mentioned above, they’re going with avionics mounted on the aft bulkhead, as a way of eliminating cleanroom requirements for the stage production.
  • The wider tank diameter means that even with 3x the propellant mass, the stage is actually almost the exact same length as Centaur, possibly making ground interface modifications less drastic than they would be if the tanks were wildly different lengths.
  • IVF will add all sorts of new capabilities, including durations > 1 week, making refueling (either from depots or the “Distributed Launch” concept I’ll discuss later) far easier since you only have two fluids, and as I’ll discuss later, significantly enhanced maneuverability capabilities–up to and including rendezvous. In a way, IVF turns ACES into a sort of service module for medium-duration (up to weeks) spaceflight.
  • For long duration, low-boiloff missions, they’re looking at two options for MLI technologies that can function exposed to aerodynamic forces on the OML. They didn’t mention the vendor by name, but I have written previously about one such company working on that type of MLI technology…
  • Apparently they also have a trade for doing a smaller 2-engine ACES variant (assuming the main ACES stage is a 4x RL-10 stage), to address the lower end of the market. There wouldn’t be much savings in anything other than the engines, but that might matter for some lower-end missions.
  • They mentioned my old company (Masten Space Systems) and their XEUS horizontal lander concept that could turn an ACES stage into a lander for large lunar payloads. They did mention that once IVF is working, that IVF might be able to help provide Oxygen and Hydrogen to the landing thrusters, allowing for a much higher performance version of XEUS using O2/H2 thrusters instead of storables. With the power capability from IVF, they could possibly run electropumps for high performance landing engines 2.
Masten XEUS Lander Concept Art

All told, it’s cool to see this idea finally take shape. While Centaur-class upper stages can enable some manned BLEO mission concepts (when refueled on-orbit), the ACES upper stages have enough additional performance that they make such missions much easier, and they’re genuinely better for the application too. I really hope they can find a way to accelerate the development of ACES compared to their previously announced plans, because ACES opens up so many cool new mission possibilities. And if they can really keep ACES cost competitive with their existing Centaur stages, that’ll be even more amazing (though going through the details provided, it sounds like they have a realistic shot at pulling that off).

Distributed Launch
Which brings me to the second paper. This one was written by Bernard Kutter, who I’ve previously done a propellant depot paper with (at SPACE 2009 while I was still at Masten). His paper discusses an updated concept for in-space refueling using expendable drop-tanks, which they call “distributed launch.” I’ll first summarize the concept and then discuss the pros and cons compared to using a depot.

The primary application of Distributed Launch described in the paper uses an ACES-derived dual-fluid LOX/LH2 tanker that gets launched to orbit, followed by a separate Vulcan/ACES launch with the payload. The two stages would rendezvous, transfer propellant from the tanker to the ACES stage with the payload attached, and then the ACES stage would do the earth-departure burn to send the payload to GEO, lunar vicinity, or beyond. There are plenty of variations on the theme (using multiple tankers, having the tanker launch vehicle be something other than a Vulcan/ACES, etc.), but that’s the general concept. The picture below illustrates the concept with a Cygnus-like payload on ACES3.

ACES Distributed Launch Concept

It’s interesting to note that even though a Vulcan/ACES based tanker can only partially refill a Vulcan upper stage (30.5mT of usable propellant vs. the ACES capacity of ~70mT4), it still enables sending almost the full maximum payload launchable to LEO on a Vulcan/ACES vehicle all the way to escape velocity. If I’m doing my math right5, that’s double the escape-velocity payload of a max-performance, expendable Falcon Heavy for probably only a bit more than 2-3x the price… On the other hand, a partially-reusable Falcon Heavy will drop the price by a decent amount, but at the cost of some non-trivial payload performance. But on the gripping hand, a partially-reusable Vulcan vehicle can provide at least some fraction of the reusability savings of a partially-reusable Falcon Heavy, but at a lower performance hit. Anyhow, that observation isn’t from the paper explicitly, but was an interesting aside.
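For anyone who wants to sanity-check that aside, here’s a rough rocket-equation sketch. The stage dry mass and Isp below are my own placeholder assumptions (not figures from the ULA paper), so treat the result as order-of-magnitude only:

```python
import math

# Rough check of "a partial refill still buys a lot": how much payload can a
# refueled stage push from LEO to escape? Dry mass and Isp are ASSUMPTIONS
# for illustration, not numbers from the ULA paper.
g0 = 9.81
isp = 450.0        # s, assumed RL-10-class LOX/LH2 performance
dry = 6.0          # t, assumed ACES-class stage dry mass
transferred = 30.5 # t, usable propellant from one tanker (from the paper)
dv_escape = 3.2    # km/s, roughly LEO to escape (C3 = 0)

# Tsiolkovsky: dv = g0*isp*ln((dry + payload + prop) / (dry + payload)).
# Solve for the payload that exactly consumes the transferred propellant.
ratio = math.exp(dv_escape * 1000 / (g0 * isp))
payload = transferred / (ratio - 1) - dry

print(f"payload to escape with one tanker refill: ~{payload:.0f} t")
```

With these placeholder numbers the answer comes out in the low-20s of tonnes, i.e. comparable to the stage’s LEO payload, which is the surprising part of the distributed launch pitch.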

Distributed Launch Payload to C3 Comparisons

Ok, now for a few more details describing how the system works that I found interesting:

  • Assuming both the tanker and payload launch vehicles are Vulcan/ACES vehicles, they need to be able to handle as much as 1 month between the tanker launch and the payload vehicle launch. Which means they need to hit an aggressive boiloff rate (no LOX boiloff, and less than 0.7%/day LH2 boiloff, for a combined 0.1%/day boiloff rate) with the LH2 tank a little oversized to compensate for boiloff.
  • They make a pretty believable case that this is achievable based on previous Titan/Centaur data and the following modifications:
    • 20 layers of MLI instead of 3 for Titan Centaur, to cut down on radiative heat transfer from Earth and the Sun.
    • The tanker propellant tanks are based on ACES stage tanks, but with no MLI penetrations on the LH2 tank6, and only a ring of low-thermal-conductivity struts connecting it to the launch vehicle, cutting way down on heat leaks from the rest of the vehicle into the tanker7.
    • The common bulkhead insulation is designed so that the heat leak from the LOX to the LH2 tank balances out most of the heat leak from the vehicle and outside world into the LOX tank. The boiloff GH2 is run through vapor-cooling systems on the struts connecting the LOX tank to the vehicle, intercepting any remaining heat, so the LOX tank doesn’t heat up so long as there is LH2 on board.
  • The stage stays settled using a transverse (end over end) rotation scheme. By leaving the delivery upper stage attached, with the nice heavy engines at the bottom, the CG for the stack once the upper stage propellants are mostly empty is somewhere near the center of the tanker LOX tank.  This means the tumbling will keep LOX on most of the walls of the LOX tank, but the LH2 will be up at the “top” of the tanker tank, with a GH2 barrier between it and the LOX, which should cut down on heat transfer from the LOX to the LH2 somewhat8. The tanker would de-spin and transition to a 1 milligee axial settling acceleration once the payload Vulcan was launched and nearing rendezvous.

    Distributed Launch Tanker Illustration. Picture on Left Shows Fluid Locations While Spinning End-Over-End

  • They suggest placing the drop tank into an orbit with a repeating ground-track, with a low altitude, so that when the payload Vulcan launches, it has fast direct rendezvous windows once every day or two (depending on the orbit you pick). This minimizes the time the payload has to wait in LEO before departure.
  • Distributed Launch leans heavily on the new maneuvering capabilities provided by IVF to enable the two stages to rendezvous and then formation fly in settling mode during propellant transfer operations. I’m actually pretty confident that the rendezvous and closing operations are doable with IVF, but I’m more skeptical about the ability to formation fly while a single set of fluid hoses connect the two vehicles. The fluid hoses will be under at least some pressure, which means you’ll be transmitting forces and torques between the two vehicles, and the coupled dynamics of such a formation flying situation with those disturbing forces scares me a bit. I’m not saying the problem isn’t solvable–I haven’t run numbers on the scenario in question, so it might be totally doable GN&C wise. If it isn’t, I know exactly how I would solve the problem, but that’s a topic for a venue other than a review of their paper. Suffice it to say I think that this is a solvable problem.
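As a quick check on the boiloff bullet above: with a roughly 6:1 LOX/LH2 mixture ratio (my assumption; typical for this class of stage), hydrogen is about a seventh of the propellant mass, which is how 0.7%/day of LH2 works out to ~0.1%/day of total propellant:

```python
# Boiloff arithmetic for the distributed launch tanker. The 6:1 mixture
# ratio is my assumption (typical for RL-10-class LOX/LH2 stages); the
# 0.7%/day LH2 boiloff and 1-month loiter come from the paper.
mixture_ratio = 6.0                         # kg LOX per kg LH2 (assumption)
lh2_fraction = 1 / (1 + mixture_ratio)      # ~14% of total propellant mass

lh2_boiloff_per_day = 0.007                 # 0.7%/day of the LH2
total_boiloff_per_day = lh2_boiloff_per_day * lh2_fraction  # ~0.1%/day

# LH2 margin needed to cover a 30-day wait between tanker and payload launch:
days = 30
lh2_margin = 1 - (1 - lh2_boiloff_per_day) ** days

print(f"combined boiloff: {total_boiloff_per_day:.2%}/day")
print(f"LH2 oversizing for {days} days: ~{lh2_margin:.0%}")
```

So the “little oversized” LH2 tank needs something like 20% extra hydrogen to cover a full month of loiter, under these assumptions.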

So, how does this compare versus using a dual-fluid depot like the ones we’ve blogged about previously here on Selenian Boondocks?

Benefits of Distributed Launch over Depots:

  • Easier to deal with low flight rates due to lower fixed-infrastructure costs, and lower boiloff between missions.
  • Easier to place the tanker into the optimal plane to enable low-penalty BLEO launches to destinations with short launch windows and tricky departure declinations (NEOs, Comets, and some planetary missions).
  • The short duration minimizes requirements on the depot itself–most of the spacecraft controls can be handled via IVF if the tanker delivery vehicle has IVF, minimal need for MMOD protection, no need for the tanks to be both filled and detanked, less need for liquid mass gauging since they’re filled on the ground and you can measure boiloff.
  • Distributed Launch can also be repeated at non-LEO locations. For instance, doing this twice could enable placing a tanker in EML-1 or 2, and then sending a payload there to rendezvous with it and refuel it.

Drawbacks compared to traditional depots:

  • If you have high demand for distributed launch, launching a new tanker each time starts becoming tedious and expensive.
  • The tanker doesn’t really save that much over a depot, and what savings it does provide rapidly go away if you do end up having enough flight rate to justify a depot.

Depots make it easier to handle propellant deliveries in a launcher-agnostic fashion, including using smaller vehicles to perform the deliveries. Once you go to more than one tanker transferring propellant in distributed launch–remember that a Vulcan/ACES V564A can only deliver ~30.5mT to orbit, but the ACES stage can use 70mT of propellant–you start adding more docking events to the payload vehicle. It might be preferable to have the depot take the heightened risk of multiple tanker deliveries than to have the payload delivery upper stage take that risk. A depot can probably afford more robust rendezvous and interface hardware than a single-use drop-tank setup.
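To put rough numbers on that docking-event concern, here's a quick sketch using the two figures quoted above (~30.5mT delivered per Vulcan/ACES V564A flight, 70mT of usable ACES propellant):

```python
import math

aces_capacity_mt = 70.0      # mT of propellant an ACES stage can use
tanker_delivery_mt = 30.5    # mT a Vulcan/ACES V564A can deliver to orbit

# Each tanker flight is one more rendezvous/docking event for whoever
# receives the propellant (the payload stage, or a depot standing in for it).
flights = math.ceil(aces_capacity_mt / tanker_delivery_mt)
print(flights)   # 3 tanker launches (and docking events) for a full load
```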

All that said, distributed launch is a fascinating idea, and it helps put almost all the technology on the shelf for future depot missions while allowing you to start when there isn’t enough demand for a full-blown depot. Also, it’s interesting to note that getting ACES to the point where it can rendezvous with another space object means you could use it almost as a space tug for delivering bigger payloads to space facilities, enabling delivery of larger cargo, station modules, and raw materials to orbital manufacturing sites in addition to propellant tanking. This concept of using upper stages to deliver payloads directly to another vehicle or facility, without the need for tugs or prox-ops vehicles, is a concept near and dear to my heart at Altius, and a direction we want to encourage over time.

I’m pretty excited to see where Bernard and his team take this between now and next year. I really think that this approach of using orbital refueling to enhance launchers’ BLEO capabilities is an intriguing one. With luck, maybe I can finagle my way into being involved in next year’s paper.

Launch Vehicle Recovery and Reuse
This last paper is an update on a concept ULA first presented in 2008, with more discussion of alternative approaches and why they think this approach is better. As a review for those who haven’t read about this before: ULA’s “SMART” (Sensible Modular Autonomous Return Technology) reuse concept involves recovering just the first stage engines, instead of the whole first stage as SpaceX is trying to do with Falcon 9. The first stage engines would be connected to the stage via separable structural and fluid connections. Once the first stage burn was complete, the engine pod would separate from the stage, inflate a Hypersonic Inflatable Aerodynamic Decelerator (HIAD), and then, once it was going subsonic, release a guided ram-air parachute. A recovery helicopter would then snag the engine pod in mid-air, the way the old Corona spysat film capsules were recovered during the early space age. This would allow the engines to experience a recovery environment that is very benign relative to flight, without expending a lot of propellant or other mass on recovery.

SMART Recoverable Engine Pod and HIAD

The concept here is that the engines are half the cost of the first stage, but less than a quarter of the mass. And by doing mid-air recovery, you can keep the environments benign enough that reuse should be straightforward, and requires the minimum payload hit. You use the HIAD to decelerate instead of supersonic retropropulsion, and you have the recovery helicopter down range so you don’t need any boostback.

It’s an interesting idea, but it’ll also be interesting to see what SpaceX manages with Falcon 9. Coming from a background of VTVL powered landers at Masten, I’m definitely biased towards the SpaceX approach. It does require more of a performance hit, and it’s less clear whether propulsive landing is going to do bad things to the engines, but RTLS propulsive landing removes the constraints of downrange recovery–which we’ve seen can be a big deal from previous SpaceX recovery attempts on their Autonomous Drone Ships. And while the engines are most of the cost, the rest of the hardware is non-trivial. To me the real questions are going to be: how high a flight rate will there really be demand for (the higher the flight rate, the more gas-and-go reuse makes sense), and how much refurbishment time will SpaceX’s approach take? If the refurbishment time is low, I’m not sure how SMART will compete with that long-term.

I’m not trying to rip on SMART–most of the developers of the technology are people I’d consider friends. I’m just expressing my biases coming from a VTVL powered landing background.

Ultimately though, it’s good having different groups trying out different approaches. We’re still in the infancy of reusable orbital vehicle development, and the more ideas tried, the more likely we’ll find the right answer–and there may well be more than one right answer. Lastly, if it turns out SpaceX makes rapid progress with Falcon 9 reuse (which I wouldn’t bet against), ULA has demonstrated its ability to adapt and come up with clever outside-the-box solutions.

While ULA has presented some really interesting ideas over the years, this year’s presentations are all the more exciting because there’s a real chance we’ll get to see ULA actually try these technologies. Their situation is such that they have to innovate, but fortunately they’ve got an extremely talented and creative team. I hope these reviews were interesting to readers, and I hope they encourage everyone to read the full papers. They’re well worth the time.

Posted in Commercial Space, Launch Vehicles, Lunar Exploration and Development, Orbital Access Methodologies, Propellant Depots, Space Transportation, SpaceX, ULA | 19 Comments

Minimum viable microwave-based space-based solar power system

The Solomon Islands pay almost $1/kWh for electricity. You could provide beamed solar power to them on a demonstration basis for a fraction of that price.

Unfortunately, being in the ocean is one of the worst places for beamed power, since you have lots of clouds and moisture and also saltwater which likes to corrode things. Even so, you could probably demonstrate space-based solar power to them on a scale that would be relevant but without costing too much. It’d be competitive with their $1/kWh pricing, once you set it up.

The military is tolerant of high energy prices, too. If you could set up a transportable, lightweight 200m diameter receiver that could power a whole military base day and night, you’d have a lot of interest, even if it cost $5-10/kWh. Those prices make beamed propulsion start to look viable, even if you end up throwing away 90% of your energy due to losses and an undersized receiver.

So how would we design such a system? Well, if we’re using microwaves (to limit rain fade to only the worst conditions, like heavy rain) at 10GHz and 3cm wavelength, then we still need very large antennas if we’re at GSO. The antennas do not scale down well at all, so we can try getting closer to the Earth. That means we need multiple satellites (though we can start with just a single demo satellite). An equatorial orbit, or close to it, is necessary to keep the size of the constellation small, but this also means we must keep the altitude fairly high or we lose coverage over most of the planet. A compromise is about 1 Earth radius, 6400km in altitude. The biggest antenna you can probably fit in a (modified, most likely) Falcon Heavy fairing while having room for the rest of a satellite is about 300m, and you have an 8000km slant range to your receiver (due to being at an angle), so your receiver would need to be:

8000km × 3cm / 300m = 800m in diameter to get the vast majority of the beam.
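That receiver size is easy to sanity-check with the rough diffraction formula (spot ≈ distance × wavelength / aperture); here's a minimal Python sketch using the numbers above:

```python
def spot_diameter(distance_m, wavelength_m, aperture_m):
    """First-order diffraction-limited spot diameter at the receiver (m):
    spot ~ distance * wavelength / aperture (factors like 1.22 omitted)."""
    return distance_m * wavelength_m / aperture_m

slant_range = 8.0e6    # m: ~8000 km slant range from 6400 km altitude
wavelength = 3.0e-2    # m: 10 GHz microwave
aperture = 300.0       # m: transmit antenna diameter

print(spot_diameter(slant_range, wavelength, aperture))   # ~800 m receiver
```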

But… It turns out that the majority of the energy in the beam is in the center, so if you reduce the size of the receiver, you don’t lose power in proportion to area.

The Airy Disk diffraction pattern from a circular aperture

Let’s say you had a 250m diameter effective receiver (it probably needs to be about 300m physically, since it’s flat on the Earth and not tracking the satellite). It turns out the power inside a circle 1/3.2 times the diameter of the 800m one is about 21% of the beam’s power (as opposed to 83% for the full 800m). I’ll dig into this later, but these are rough figures for now. You throw away another half of the power to conversion losses, so let’s say you only get 10% of the solar array’s generated power.
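For an ideal, uniformly-illuminated circular aperture, the encircled-energy fraction of the Airy pattern is E(x) = 1 − J0(x)² − J1(x)², with the first dark ring at x ≈ 3.83. Here's a minimal pure-Python sketch of that formula (an idealized model only; real beams with taper, pointing error, and slant geometry will come in lower, which is why the working numbers above are more pessimistic):

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel function J_n(x) via its integral representation,
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    evaluated with a simple midpoint rule."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def encircled_energy(x):
    """Fraction of an ideal Airy beam's power inside dimensionless radius x
    (x = 3.8317 at the first dark ring)."""
    return 1.0 - bessel_j(0, x) ** 2 - bessel_j(1, x) ** 2

# The 800 m receiver sits at the first dark ring:
print(encircled_energy(3.8317))        # ~0.84, i.e. the ~83% figure above

# A receiver 3.2x smaller in diameter scales x down by the same factor
# (ideal-aperture figure only; real-world effects push this lower):
print(encircled_energy(3.8317 / 3.2))
```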


Let’s say you start with a 20MW array (1kW/kg, weighing 20 tons), a 15 ton thermal management system, a 5 ton antenna with steering system, and 5 tons of transmitters (it might make sense to electronically steer the antenna… but this is tough with a short wavelength and such a huge, lightweight aperture), plus 5 tons of structure and reaction control, for a total of 50 tons. ~$100 million is the going price for a Falcon Heavy launch, and even with scrimping and saving, you probably pay at least that much for the satellite, not counting your extensive development costs. $200 million for the satellite and launch, maybe $50 million for your 300m receiver and associated electronics. But you’ll need something like 10 of these satellites, with on average about 2 transmitting power to your receiver at any one time (4 Megawatts). That’s about $2 billion for 4 Megawatts of power, or $500/Watt. But operating for 15 years, that’s about $4/kWh, which is something the military would tolerate. If you had 4 receivers spaced evenly around the world, you’d be more efficient, so about $1.25/kWh. (You would get about 20-30 minutes of shadow at most, but 25 minutes of battery power is cheap and lightweight, easily a rounding error in the above $50 million for the receiver.)
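The $/W and $/kWh arithmetic above fits in a few lines; all the inputs are this paragraph's rough assumptions (no discounting, operations costs, or spares):

```python
# Back-of-envelope levelized cost, using the rough assumptions above.
n_sats = 10
sat_cost = 200e6          # $ each, satellite built + launched
receiver_cost = 50e6      # $ for the 300m ground receiver + electronics
avg_power_kw = 4_000      # kW average delivered (about 2 sats in view)
years = 15

total_cost = n_sats * sat_cost + receiver_cost   # ~$2 billion
kwh = avg_power_kw * years * 8766                # 8766 h in an average year

print(total_cost / (avg_power_kw * 1000))  # $/W  -> ~$500/Watt
print(total_cost / kwh)                    # $/kWh -> ~$4/kWh
```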

…however, I don’t think the slew rates required for that size of antenna are realistic. Also, I’m pretty sure solar plus storage can beat the crap out of that cost of electricity (as does nuclear, if you can find a small and portable reactor and can tolerate having one), and I will show that later on.

But this is a pretty modest constellation with very large assumed inefficiencies. If you were able to capture 55% of the power, with evenly spread out stations, and with lowered launch costs due to reusability (also, I just assumed you’d get a SEP tug from LEO to 6400km for free), you could start competing with electricity prices in Europe, parts of South America, etc.

…but again, you have the slew rate problem. If you solve in-space modular assembly of a large GSO satellite and set aside a large receiver in a desert somewhere (on the order of a 500MW solar farm footprint), you actually have a much better chance of competing in the much larger $0.20/kWh market, especially considering you’ll be providing nighttime power. I tend to think the future lies in multiple-kilometer-wide space antennas, though, since the advantage over ground-based solar power is that you offload the large area requirements to orbit (and you only need a small beam footprint). This would allow area-constrained places like Singapore or Luxembourg to get a lot of energy without requiring a lot of space.

Posted in Uncategorized | 8 Comments

Power beaming

Power beaming is clearly central to space-based solar power concepts. Here I will provide a quick overview of my understanding of power beaming, the various equations involved, and some typical example calculations.

If power beaming were efficient and cheap, I believe space-based solar power would be quite viable even for grid power. However, it’s not, and that largely has to do with the distances involved AND the fact that you need to convert energy multiple times, with losses along the way. The distances involved aren’t a complete show-stopper, since you can solve that problem just by operating at a large enough scale. However, the conversion inefficiencies (and the need to dump waste heat, etc.) are not going to go away simply by operating at greater scale (although scale helps).

The first equation we need is the diffraction limit. Roughly speaking, the spot size of a transmitted beam (microwave or laser) is:

Spot size = distance-to-spot * wavelength/(aperture diameter).

This is close enough for an order-of-magnitude estimate. More detailed work to follow.

But if we have a satellite out in Geosynchronous orbit (36000km altitude) transmitting power at roughly 10GHz (3cm wavelength, the shortest wavelength that still penetrates readily through the atmosphere) with an antenna 300m in diameter (NRO SIGINT/ELINT satellites are rumored to be that big, but maybe only around 100m in diameter), you’d have a spot size on the order of:

3.6E7m*3E-2m/(3E2m) = 3.6E3m or 3.6km in diameter…

…turns out that not all the energy of your beam is contained in this diameter (“Where’s that factor of 1.22,” you cry), but that’s a halfway decent start (and you’d need an infinitely wide aperture to collect all the energy in the beam…). 3.6km is obviously huge. The biggest full-aperture dish ever built is the half-way finished Chinese Arecibo clone at 500m. Still, there are ways to tweak this.

By contrast, a laser operating at 1 micron, in medium Earth orbit (10000km), with 1 meter diameter optics needs only a:

1E7m*1E-6m/1m=10m diameter receiver to receive the vast majority of the beam’s energy. This is much, much better, obviously. You could put a 10m diameter receiver on top of a tethered airship or drone or something that allows you to transmit it to the ground without interference from clouds.
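Both of this post's examples can be reproduced with the same one-line formula; a quick Python sketch using the numbers above:

```python
def spot_size_m(distance_m, wavelength_m, aperture_m):
    """Order-of-magnitude spot diameter: distance * wavelength / aperture
    (the factor of 1.22 and the beam's outer rings are deliberately ignored)."""
    return distance_m * wavelength_m / aperture_m

# 10 GHz (3 cm) microwave from GSO (36,000 km) with a 300 m antenna:
print(spot_size_m(3.6e7, 3e-2, 3e2))   # ~3600 m, i.e. the 3.6 km above

# 1 micron laser from MEO (10,000 km) with 1 m optics:
print(spot_size_m(1e7, 1e-6, 1.0))     # ~10 m receiver
```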

Or heck, use it to power high-altitude aircraft… but that’s a whole ‘nother blog post! (And suffice it to say, there are lots of caveats about laser transmission of energy, too.)

Posted in Uncategorized | 13 Comments

Space-based solar power preview

NASA Suntower Space-based solar tower concept

I will start out with a look at space-based solar power, including both microwave and laser based beaming approaches as well as seeing if there’s some use that can be had for that power that doesn’t involve beaming it. This will probably lead into discussion concerning interstellar propulsion. Also, I will present why I think storage makes space based solar power unnecessary and probably non-competitive on Earth, at least for grid usage. I will also explain why I think launch costs are not the greatest barrier for space-based solar power.

Posted in Uncategorized | 11 Comments

Why we won’t run out of fossil fuels and why that’s a problem

This is my first post… I guess it started because of a comment by Jon on the previous post. I was in the middle of writing a long comment, then decided to flesh it out and add some references.

I used to almost buy into the Peak Oil idea, the idea that we’ll run out of oil someday and face an energy-starved Apocalypse. I never believed that last part because I’ve always been a big believer in nuclear power and realized America had vast amounts of coal (which can be converted into oil products via the Fischer-Tropsch process which was used extensively by Nazi Germany and then later by apartheid South Africa). But I figured that it would at least mean we’d eventually be forced to use renewable energy and nuclear power at some point.

But then I learned about oil shale, also called kerogen shale (not to be confused with the shale oil produced by fracking): basically rock containing semi-solid organics that can be extracted into oil. https://en.wikipedia.org/wiki/Oil_shale

Turns out the US has the biggest such deposits in the world. Much more such oil than Saudi Arabia has traditional oil. Like 4 trillion barrels of oil shale (in place). More than ten times the size of the in-place reserves of the Bakken formation (shale oil… tight oil… basically, oil that is extracted by fracking). Of course, this is expensive to extract. But as technology improves, and if the price of oil is high enough ($50? $100? $200/barrel?), it can be extracted, just like the tar sands of Canada.

I’m a techno-optimist. I think technology will allow us to continue getting better and better at extracting carbon from the ground. We have hundreds of years of coal (at least in the US) using current reserves and methods. Total estimates of the world’s proven coal reserves come to just under 1 trillion tons, of which the US has over a quarter. To put that in perspective, the atmosphere contains about 3 trillion tons of CO2. But remember that burning carbon combines each carbon atom with 2 atoms of oxygen, which multiplies the mass severalfold, so a trillion tons of coal yields roughly 3 trillion tons of CO2. That means if you burned all the world’s proven coal reserves right now, you’d roughly double the atmospheric concentration of CO2. (Note: if you burn it slowly over decades, about half is absorbed, e.g. by the ocean in the form of carbonic acid…)
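The coal-to-CO2 arithmetic can be sketched quickly (the ~80% carbon fraction for coal is my assumption; real coals vary widely):

```python
CO2_PER_C = 44.0 / 12.0   # molar-mass ratio of CO2 to carbon, ~3.67

def co2_from_coal_tons(coal_tons, carbon_fraction=0.8):
    """CO2 (tons) produced by burning coal_tons of coal
    (carbon_fraction is an assumed average carbon content)."""
    return coal_tons * carbon_fraction * CO2_PER_C

atmosphere_co2 = 3e12     # tons of CO2 currently in the atmosphere

# ~1 trillion tons of proven world reserves:
print(co2_from_coal_tons(1e12) / atmosphere_co2)   # ~1.0: roughly doubles CO2

# ~5 trillion tons of in-place Alaskan coal:
print((atmosphere_co2 + co2_from_coal_tons(5e12)) / atmosphere_co2)  # ~6x
```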

Using more advanced tools, we have tens of thousands of years. Basically all of northern Alaska has coal underneath it (thin layers, but still): http://groundtruthtrekking.org/Issues/AlaskaCoal/HowMuchCoal.html

The total amount of in-place coal in Alaska is something like 5 trillion tons. This is a much larger estimate than in decades past. Yup, about 5 times the proven reserves of the entire world! Burn that immediately, and we’d multiply the atmospheric CO2 concentration by roughly 6x. Who knows how much is underneath northern Canada or Greenland or Russia… or countless other places in the world.

We keep finding more carbon underground. The Shale Revolution is proof of this. Tar sands in Canada is proof of this. Oil Shale is proof of this. We. Won’t. Run. Out.

But the problem is that, given current climate models (which no doubt most of you have problems with), we can’t even afford to burn all of the currently proven reserves of coal, let alone ten or a hundred times that much as technology improves. That would put as much CO2 in the air as there was when the Sun was significantly less bright (stars like the Sun slowly brighten with age, so the young Sun was a little dimmer), meaning the climate would be FAR warmer than it was the last time the atmosphere had that much CO2.


We’re not talking about a 4 degree F difference; we’re talking 20 degrees F, probably much more (depending on how good we get at removing coal from the ground… and on poorly-understood but possibly-disastrous feedback mechanisms). Even if you think the climate’s sensitivity to carbon dioxide concentration is much less than what the models suggest, at some point technology will allow us to burn enough fossil fuels to STILL dramatically change the Earth’s climate. The Earth doesn’t have quite as much carbon as Venus, but if you include all subterranean sources of carbon, it’s a lot closer than you might think.

The atmosphere of Venus is about 90 times denser than Earth’s, and it is roughly 96.5% CO2 and 3.5% nitrogen. This means that both planets have about the same amount of nitrogen in their atmospheres. Surprisingly, most of Earth’s CO2 is stored in calcite-type carbonate rocks; if you converted the CO2 in those rocks into atmospheric CO2, it would amount to roughly the same amount of CO2 as there is in Venus’ atmosphere.

This is why it’s critical to develop carbonless energy sources AND intentionally move away from fossil fuels: technological progress will pretty much guarantee we won’t run out of them. The physics of CO2 insulating the planet is well-understood from the spectroscopy of gases and fundamental physics, but the feedback mechanisms are not well-understood (if you’re going to be skeptical of the models, this is where you should look). But even if you neglect ALL feedback mechanisms, you’re still talking about perhaps a 10-20 degree F increase in temperature, plus a large increase in atmospheric circulation (i.e. weather), if you release all this CO2. Throw in some feedback mechanisms (melting and degassing of permafrost, methane clathrates), and who knows.

Luckily, energy is everywhere. In the wind, in the water, in sunlight, in the Earth, and even in atomic bonds. We can easily find better ways to harness useful energy than burning things. And of course, we aren’t going to find another planet nearby which has vast amounts of both oxidizer and fuel, so if we’re going to expand to the cosmos, we need to solve these problems anyway. This post is to provide motivation for some later posts, where I discuss all the various ways we can produce abundant energy.

Posted in Uncategorized | 30 Comments