Main Parachute Deployment (MPD)


The Pegasus II descent phase begins at 100,000 feet with the Delivery System Release (DSR), the point where Mission Control remotely separates the balloon from the craft. The craft then begins its descent using only a 2-foot drogue parachute to keep it vertical and help maintain communications. Speeds will exceed 300 mph in the rarefied air, then slow as the craft plummets to lower altitudes where the air becomes denser. Pegasus II will be traveling far too fast to land safely on the 2-foot drogue alone; the main 7-foot parachute must be deployed to slow the craft. Main Parachute Deployment (MPD) is the highest-stress moment of the flight: only 67 seconds remain before the onboard fail-safes kick in (hopefully) and attempt to force an MPD event 54 seconds before landing.

Distributed Intelligence

The Pegasus Mission is all about experimentation, and we put our craft, thousands of man-hours, and our money at great risk to run some of these experiments. MPD is the single most nerve-racking, nail-biting event of the entire mission. MPD is controlled by a processor running in the cloud that continuously estimates where the craft will be vertically 90 seconds into the future based on the current telemetry from the craft, a form of distributed intelligence. Once the processor determines that 5 consecutive forecasted values are at or below the MPD altitude (which is actually measured as air pressure), it calculates the time in seconds until the craft reaches the MPD altitude. A timer is started, and the processor sends the MPD command when the timer elapses. The field gateways receive the signal from the cloud-based processor and forward the MPD command to the craft. MPD should occur at 12,000 – 11,800 feet.
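To make the trigger logic concrete, here is a minimal Python sketch of the forecasting loop described above. The names, the linear extrapolation over a fixed sample window, and the specific constants are our own illustration and assumptions, not the actual flight code:

```python
from collections import deque

MPD_ALTITUDE_FT = 12_000       # target deployment altitude (illustrative)
FORECAST_HORIZON_S = 90        # how far ahead we project
CONSECUTIVE_REQUIRED = 5       # forecasts needed at/below threshold

class MpdTrigger:
    """Cloud-side MPD trigger sketch (illustrative, not flight code)."""

    def __init__(self):
        self.samples = deque(maxlen=10)  # recent (time_s, altitude_ft)
        self.hits = 0                    # consecutive qualifying forecasts

    def descent_rate(self):
        """Average descent rate (ft/s) over the sample window."""
        (t0, a0), (t1, a1) = self.samples[0], self.samples[-1]
        return (a0 - a1) / (t1 - t0)

    def on_telemetry(self, t, altitude_ft):
        """Process one telemetry sample. Returns the timer duration in
        seconds once 5 consecutive forecasts are at or below the MPD
        altitude; otherwise returns None."""
        self.samples.append((t, altitude_ft))
        if len(self.samples) < 2:
            return None
        rate = self.descent_rate()
        forecast = altitude_ft - rate * FORECAST_HORIZON_S
        self.hits = self.hits + 1 if forecast <= MPD_ALTITUDE_FT else 0
        if self.hits >= CONSECUTIVE_REQUIRED and rate > 0:
            return (altitude_ft - MPD_ALTITUDE_FT) / rate
        return None
```

A real implementation would work in air pressure rather than feet, as noted above, but the structure is the same: forecast, count consecutive hits, then arm a timer.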


The Big Bet

The big bet is low latency. If the processor does not receive telemetry at a fast enough rate, the calculations will be off, i.e., the craft will not be where the processor thinks it is…it will be lower. If the MPD command is not quickly sent and received, the craft will be at a lower altitude than targeted, which reduces or eliminates the safety margin. We are not just testing a special system for MPD; we are testing the entire system. MPD uses exactly the same infrastructure and software that sends telemetry to the thousands of users viewing the flight. So, we are experimenting with a huge number of messages flying around in both directions (from and to) at the same time and at scale. Risky? If it weren’t, we would not learn anything.

Pegasus II – A Detailed View of Information Paths

The previous post gave you a view into the communications architecture.  Today we provide a more detailed view on the overall system based on its functions.

Craft Telemetry [Figure 1] is the information flow from Pegasus II.  This information contains meteorological measurements as well as craft location and health.

The Ground Telemetry [Figure 2] communicates location information as well as the relative positions of the two ground stations (launch & mobile) and the craft.

The Live Video [Figure 3] streams video from Pegasus to the cloud and the launch site, where it can be viewed on the Web site.

User Messages [Figure 4] allow users to send messages from phone apps to Pegasus II while it is in flight.

SMS Notifications [Figure 5]: Pegasus II is capable of sending notes about interesting events during flight, e.g., when it is launched, altitude milestones, and the risky descent stage.  Users can sign up for these notifications and receive several text messages during the flight.

Flight Operations [Figure 6] are critical for control of Pegasus II during flight.  The rotation of the live video and the Delivery System Release (DSR) are controlled by Mission Control. DSR is the point where we release the balloon and begin the descent stage.  During descent we will plummet toward the surface of the Earth, reaching speeds around 300 mph in rarefied air.  The main parachute deployment (MPD) occurs only 1,500-3,000 feet above the surface, not much time until impact.  Therefore, we are adding an analytics package to quickly analyze the descent and automatically execute Pegasus II’s MPD command.

Finally, Mission Control can monitor aspects of the system [Figure 7] to understand the number of users connected to the system at any given time, messages sent and received, and the quantity of traffic within the system.

Launch window is 7/10/2015 – 7/24/2015 in Cheyenne, WY.  Dare Mighty Things.

Figure 1 – Craft Telemetry

Figure 2 – Ground Telemetry

Figure 3 – Live Video

Figure 4 – User Messages

Figure 5 – SMS Notifications

Figure 6 – Flight Operations

Figure 7 – System Monitoring

Pegasus II Communications Architecture

A quick look at the communications architecture for Pegasus II.  Our 7 lb payload of radios and sensors generates a considerable amount of communications to various users and services.  Phone apps, the Web site, Power BI, and Flight Ops at Mission Control all have user interfaces.  Additionally, we added SMS and email services to notify users of the flight launch and a few critical points during the flight.  We will store the telemetry as we receive it and make it publicly available through DocDB.  The video (not shown) will be streamed from its own field gateway directly into Azure Media Services, where you can see Pegasus’s eye-in-the-sky during the flight.

Also, we are likely to scratch the proposed Boulder, CO launch site due to several issues with winds. We are currently investigating Wyoming as a stronger candidate for a launch site.


A broader view of the overall Piraeus Architecture that enables the bidirectional communications for Pegasus II


Finally, a view of how we use Orleans, which is the horsepower behind our scale and low latency (real-time IOT)







Recovering Pegasus-I

Recovering Pegasus-I was an extremely interesting part of the mission. This article goes into how it was done, and the full story is here.

During the flight we immediately lost the Ham Radio, which was our backup GPS, so we were flying only on the onboard primary GPS. The craft had turned NE from East as expected, but with the mission altitude objective reached and the difficulty of the terrain 17 miles East of Othello, WA, we decided to begin the descent stage.

Figure 1 is a map of the flight path at 5 minute intervals.


The wind gods were kind to us that day, and you can see in Figure 2 that the radial distance of the flight was not very great.

Figure 2

The maximum altitude achieved by Pegasus-I was 84,899 feet, which gave us about 13 minutes during the descent stage before the craft landed. Once we cut down the craft, it began a rapid descent in rarefied air (not much friction), and the rate of descent progressively slowed as the air became denser at lower altitudes. The initial descent rate was over 200 mph, and telemetry told us the yaw, i.e., the rotation parallel to the ground, was spinning wildly, while the roll and pitch were bumpy but relatively stable.

During the descent, the latitude and longitude from the primary GPS cut out at an air pressure of 38.8 millibars because an antenna had come loose. The rapidly spinning yaw likely put significant horizontal force on the antenna. The last recorded air pressure was 37.2 millibars at an altitude of 75,555 feet. Pegasus had dropped 9,344 feet in just 36 seconds, an average of roughly 260 ft/sec, or about 177 mph.
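As a sanity check on numbers like these, air pressure can be converted to an approximate altitude by inverting the standard-atmosphere (ISA) barometric formula. This Python sketch is our own illustration, not mission code; the standard atmosphere puts 37.2 millibars at roughly 73,700 feet, within a couple of thousand feet of the recorded 75,555 (real conditions deviate from the standard model):

```python
import math

# ISA constants and layer bases:
# (base_alt_m, base_temp_K, lapse_K_per_m, base_pressure_hPa)
G, M, R = 9.80665, 0.0289644, 8.31432
LAYERS = [
    (0.0,     288.15, -0.0065, 1013.25),   # troposphere
    (11000.0, 216.65,  0.0,     226.321),  # tropopause (isothermal)
    (20000.0, 216.65,  0.001,    54.7489), # lower stratosphere
]

def pressure_to_altitude_ft(p_hpa):
    """Invert the ISA barometric formula to estimate altitude in feet."""
    for h_b, t_b, lapse, p_b in reversed(LAYERS):
        if p_hpa <= p_b:
            if lapse == 0.0:   # isothermal layer: exponential pressure decay
                h = h_b + (R * t_b / (G * M)) * math.log(p_b / p_hpa)
            else:              # layer with a linear temperature lapse
                t = t_b * (p_b / p_hpa) ** (R * lapse / (G * M))
                h = h_b + (t - t_b) / lapse
            return h * 3.28084  # meters to feet
    raise ValueError("pressure exceeds sea-level standard")
```

For example, `pressure_to_altitude_ft(37.2)` comes out near 73,700 feet under standard conditions.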

When the GPS cut out, we were in dismay. Regardless of the other telemetry still pouring in, we did not have the location of the craft. The chase team exited the vehicle and began scanning the sky, sweeping from West to North, the expected landing zone. While this was actually the correct place to look, the cloud ceiling was only about 1,000 feet, which meant we had less than a minute to visually identify the craft. Figure 3 shows the position of the craft when we lost GPS, the position of the chase team, and the last recorded position of the craft, of which the chase team was not aware at the time.

Figure 3

After arriving home 3 days later, I was checking the telemetry we captured from Pegasus-I. First, I looked at the moment the GPS cut out, Figure 4.

Figure 4

I scanned further down and noticed this with only 26 seconds left in the flight, Figure 5.

Figure 5

Cross-checking the air pressure with the location and a topo map, it was apparent that the landing point was consistent with the air pressure. Further analysis showed that the points between where the GPS cut out and where it came back online were in the expected location, given the wind velocities and descent rate. The expected location was mapped in Figure 6 and sent to Mark to attempt recovery 7 days after the flight.

Figure 6

Figure 7 shows the last GPS coordinates and where Mark actually found the craft, undamaged. The fact that we were streaming and capturing the telemetry enabled us to locate the craft and, oddly enough, provided an interesting story about the role our real-time IOT technology played in the recovery.

Figure 7


We are getting ready for Pegasus-II.  We will target the American West for the flight, slightly East of the Continental Divide, and hope to get some outstanding video of the Rocky Mountains.  Mission parameters will include LIVE video, where users will get to see the Pegasus “Eye-In-The-Sky” using Azure Media Services, as well as a target ceiling of 100,000 feet.  Not to mention an additional 4 or 5 cameras onboard the craft.

We have upped the ante with live video, but we are also using 38 sensors capable of streaming the telemetry in real-time to phone apps (Android, iOS, and Windows Phone), as well as a map displaying the positions of Pegasus-II and the chase vehicle in real-time.

Adding to the game plan, we will allow Web site and phone app users to place information onboard Pegasus-II while in flight and supply these users with an incredible photo of the craft at its apex as a thank-you.

It’s game ON for Pegasus-II and pushing the limits of real-time IOT.

Security in Piraeus

Piraeus’s architecture is very intentional in reducing the amount of effort needed to leverage the system. The security of the system is built on the concept of an “authority” that can manage resources within the system, e.g., create topic or subscription grains within Orleans. Those grains are ACL’d by access control policies such that only authorized callers can access them. A caller can be anything that can present an authenticated security token whose attributes comply with the access control policy of the resource being accessed.

The architecture greatly simplifies how Piraeus functions and also reduces latency within the system. The key is access control and how Piraeus manages it. Piraeus puts the user in control of which resources can be accessed by which callers. It does this through distributed access control policies. These policies can be cached, expired, and renewed, so they can change within Piraeus without ever stopping or redeploying the system. This clean separation of access control management from Piraeus enables it to be both fast and flexible.


One of the new technologies, other than Orleans, introduced in Piraeus is the Claims Authorization Policy Language (CAPL). CAPL is a powerful but simple, logic-based, security-token-agnostic language for access control that enables rapid authorization decisions. CAPL policies can be created externally to Piraeus and then consumed to provide control over resource access, which reduces complexity and increases flexibility within Piraeus.
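We have not shown actual CAPL syntax here, but the spirit of a logic-based, token-agnostic policy language can be sketched in a few lines of Python. The policy structure, claim types, and values below are purely illustrative:

```python
def evaluate(policy, claims):
    """Evaluate a CAPL-style policy expression against token claims.
    Claims are (type, value) pairs; the policy is a logical tree."""
    op = policy["op"]
    if op == "and":
        return all(evaluate(term, claims) for term in policy["terms"])
    if op == "or":
        return any(evaluate(term, claims) for term in policy["terms"])
    # leaf rule: require a claim of the given type with the given value
    return any(ct == policy["type"] and cv == policy["value"]
               for ct, cv in claims)

# Example: the caller must have the telemetry-sender role AND come from
# a trusted issuer (both values are made up for illustration).
policy = {"op": "and", "terms": [
    {"op": "match", "type": "role", "value": "telemetry-sender"},
    {"op": "match", "type": "iss",  "value": "https://issuer.example"},
]}
```

Because the decision is just a walk over an in-memory tree of claim checks, authorization stays fast regardless of which identity system issued the token.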

Topic grains within Piraeus include metadata when they are provisioned. Two important pieces of metadata are URIs that reference the access control policies for the topic and subscription grains. Piraeus uses these references to retrieve CAPL access control policies from a service and ACL the resources. We call this authorization service Authorization as a Service, or ZaaS. ZaaS is where the access control policies are managed; Piraeus simply uses the ZaaS policy store to get and update the access control policies for its resources.

We want the access control system to be low latency, and as such Piraeus retrieves the policies when the topic is provisioned. The policies are cached locally as well as in Redis. When the policies expire from cache, they are retrieved again and updated in the local and Redis caches. The Redis cache allows Piraeus to scale its gateways and makes the access control policies available to the entire system. Once a policy is retrieved from Redis, it is cached locally to further speed up the process. When the policies expire locally, they are immediately refreshed to the local and Redis caches from the ZaaS service.
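The two-tier caching scheme can be sketched like this. A plain dict stands in for Redis, a fetch callback stands in for the ZaaS policy store, and all names are illustrative, not the real Piraeus code:

```python
import time

class PolicyCache:
    """Two-tier policy cache sketch: a local in-process dict backed by a
    shared store (Redis in the real system; a plain dict stands in here),
    with a fetch callback standing in for the ZaaS policy store."""

    def __init__(self, shared_store, fetch_from_zaas, ttl_s=300):
        self.local = {}              # uri -> (policy, local_expiry)
        self.shared = shared_store   # shared across all gateways
        self.fetch = fetch_from_zaas
        self.ttl = ttl_s

    def get(self, uri, now=None):
        now = time.time() if now is None else now
        entry = self.local.get(uri)
        if entry is not None and entry[1] > now:
            return entry[0]                 # fresh local hit
        if uri in self.shared:
            policy = self.shared[uri]       # shared hit, no ZaaS call
        else:
            policy = self.fetch(uri)        # miss: fetch from ZaaS
            self.shared[uri] = policy       # populate the shared tier
        self.local[uri] = (policy, now + self.ttl)
        return policy
```

The shared tier is what lets a newly scaled-out gateway pick up existing policies without going back to the policy store.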

The security architecture greatly increases Piraeus’s flexibility because identity and attribute stores are completely separated from the system. Piraeus can scale and be leveraged in multiple data centers without the burden of porting these stores and/or synchronization because of this segregation, something that would not be possible without CAPL.

The benefits of the security architecture actually increase when you consider this clean separation.  Because we are using security tokens with attributes (not connection strings), no caller is implicitly trusted, e.g., device, user, etc.  This is more in line with the scale provided by the federated identity commonly used on Web sites.  Like I say, “If you don’t trust billions of people, then why would you trust devices, when they will exceed the number of people on the Internet in the near future?”  The Piraeus architecture builds on this and opens up flexibility in how devices acquire and manage security tokens.  Piraeus does NOT manage devices or provide tightly coupled mechanisms for security token acquisition…by design.  The reason is that device registration and management scenarios are so diverse that they need to be either customized or handled by an existing product.  Since Piraeus is not a principal in the identity transaction, it simply requires that the security token presented be signed by a trusted issuer.  Access to resources by the caller is based on the attributes of the security token, as evaluated by a CAPL access control policy.  Again, this cleanly separates the issue of token acquisition and management from the Piraeus architecture.  Roll your own or use an existing product; just present a security token signed by a trusted issuer, and Piraeus functions securely, using CAPL to maintain access control to resources.

The Power Of Self-Organizing Systems

Pegasus-I was a system with the overall goal of broadcasting telemetry and controlling flight operations in real-time. The system was a combination of organized and self-organized sub-systems that interacted with Pegasus-I during the flight. Let’s examine how we structured this and the role of self-organizing systems in the mission, as well as how we will leverage them for Pegasus-II and Pegasus-III in the near future.

A system is an organized set of things that form a complex whole designed to achieve a goal. The system designed and used for Pegasus-I included seven (7) interconnected actors: Pegasus-I, the Ground Station, the Chase Vehicle, the Web site, and three (3) Azure Blob Storage containers. The nodes could transmit, receive, or transmit and receive information depending on their function. The overall system was composed of nine (9) different sub-systems that managed telemetry and flight operations. These sub-systems communicated with the actors to execute specific tasks within a sub-system.

Three (3) sub-systems were organized, meaning we had predetermined and configured paths for the information to flow. These sub-systems were associated with storing telemetry received from Pegasus-I by the Ground Station and the Chase Vehicle, as well as storing the location of the Chase Vehicle itself. These sub-systems were required to be organized because our storage containers in Azure Blob Storage do not connect to Piraeus; Piraeus requires a durable subscription to be configured for any passive receiver. [Figure 1] below shows the system actors and the organized sub-system graphs for telemetry and location. You will notice that the organized sub-systems simply ingest telemetry and location from either the Ground Station or the Chase Vehicle into Azure Blob Storage. The Web site does not send or receive any information through these sub-systems.

Figure 1


Four (4) of the sub-systems were self-organizing, which means they organize themselves on-the-fly to create sub-systems that did not exist before. If these new sub-systems create ephemeral subscriptions to receive information, those subscriptions last only for the duration of the caller’s connection; they are disposed upon disconnect. The reason we made this choice for Pegasus-I was that certain sub-systems were only concerned with “now”, not any past history of events to or from the inflight craft. This also reduced the complexity of the system design, because we could depend on Piraeus to create the sub-system graphs in Orleans on demand and connect them to the appropriate parties immediately.
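The ephemeral-subscription behavior can be sketched in a few lines of Python. This is a stand-in for what Piraeus and Orleans actually do, with illustrative names; the point is simply that a subscription exists only while its caller is connected:

```python
class Gateway:
    """Ephemeral subscription sketch: a subscription is created when a
    caller connects and is disposed when the caller disconnects.
    (Illustrative stand-in for the Piraeus gateway and Orleans grains.)"""

    def __init__(self):
        self.topics = {}   # topic name -> {caller_id: callback}

    def connect(self, caller_id, topic, on_message):
        # Create the ephemeral subscription on-the-fly.
        self.topics.setdefault(topic, {})[caller_id] = on_message

    def disconnect(self, caller_id):
        # Dispose every subscription held by this caller.
        for subs in self.topics.values():
            subs.pop(caller_id, None)

    def publish(self, topic, message):
        for on_message in list(self.topics.get(topic, {}).values()):
            on_message(message)
```

A caller that reconnects simply calls `connect` again and the sub-system graph is recreated on demand, which is why no per-viewer configuration is needed in advance.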

When the Web site connected to the Piraeus gateway, it was entitled to subscribe to the same topic grains in Orleans as the Azure Blob Storage containers. This allowed the Web site to create new sub-systems on-the-fly to receive telemetry and location, shown in [Figure 2]. If the Web site disconnected, both the new subscriptions and their respective observers would be disposed, and upon reconnection the sub-systems would be recreated by Piraeus within Orleans.

Figure 2


This takes care of storing and displaying telemetry without requiring complex configuration or maintenance, but what about those critical flight operations for delivery system release and parachute deployment? We used the same self-organizing ability for those as well. We only needed to configure topics [Figure 3] for the Web site to send specific commands to the responsible parties, the Ground Station and the Chase Vehicle.

Figure 3


Once the Ground Station and Chase Vehicle connected to the Piraeus gateway, the sub-systems were created [Figure 4] and communications were enabled from the Web site to these parties.

Figure 4


When you look at the entire system and its sub-systems [Figure 5], it is a complex system. However, using self-organizing systems within Piraeus, we were able to create the sub-systems without the need to configure all of them in advance. We only configured seven (7) topic grains and three (3) subscription grains in Orleans to design the entire system and enable the communications and storage that let users view the flight in real-time and allowed Mark and me to control flight operations.

Figure 5


Instead of describing the various uses of self-organizing systems, which are numerous, I want to communicate how the planning for Pegasus-II and Pegasus-III will use them. We want to bring the excitement of high altitude science to people in a very personal way and allow them to actively participate in the experiment in real-time. In our own way, it is like being onboard the Calypso with Jacques Cousteau. Doing this requires that we leverage not only a Web site, but also phone apps to broadcast real-time telemetry, maps, and streaming video through Pegasus’s eye-in-the-sky to users. These phone apps will receive telemetry and location updates rapidly, from 100K feet in the upper atmosphere to the user’s eye. Additionally, we are working on concepts to get user-defined, personalized information onboard the craft during flight such that users can directly communicate with Pegasus and see a personal message during flight. However, we cannot control the number of people using the phone apps, when users choose to turn the apps on or off, or a phone simply dropping a connection due to poor reception. Therefore, we need a self-organizing system for these users, and Piraeus supports just that for this type of experience.

-Matt Long

Orleans Above the Cloud – Piraeus Overview


The “Internet of Things”, IOT, is a big topic and involves many important concepts about systems, e.g., open vs. closed, organized vs. self-organizing, closed- and open-loop. All are worthy topics to understand to put a foundation under IOT. However, for the sake of brevity I will resist these topics and discuss only Piraeus and the role that Orleans has in the architecture.

Piraeus is a multi-channel, multi-protocol, in-memory event broker. It enables edge devices or services to connect and transmit or receive information from other system entities without any coupling and without system entities having direct knowledge of each other. It is a high throughput, low latency, and linearly scalable Operational Technology that simplifies the ability of an open system to achieve its goal.

Diagram of the physical architecture used in Pegasus-I, i.e., how we got the real-time out of Piraeus and Orleans.


The Piraeus Architecture (Operational Technology)  that was used for Pegasus-I


A system can be modeled as a directed graph, and as such Piraeus uses this simple construct to enable communications through its gateway to components within a system or sub-system.  Information enters Piraeus through its gateway and is then distributed throughout the graph.  The leaves of the graph are connected to either active or passive system components that receive the information.

The components of Piraeus are relatively simple. A gateway exists that can receive information through a variety of channels and protocols. Once this information is received by the Piraeus gateway, it is fed into the appropriate graph for that specific information topic. The node that receives this information is a virtual actor, called a grain, inside the Orleans host process. This type of grain is considered a “topic” within Piraeus, and topics are graphed to “subscription” grains with which Orleans can communicate. Once the information enters the topic grain, it is fanned out to all subscription grains associated with the topic. The subscription grains are observable and feed information directly and immediately to the specific channel and protocol associated with the subscription.  There is no polling in the entire Piraeus architecture…ever.

That is a high-level summary, so let’s dig deeper into what is happening under the covers. Topic grains are always provisioned prior to communications being established. They describe the head node, or ingress resource, of a sub-graph that is part of a system or sub-system. The topic grain’s job is to act as a resource for subscription grains to attach to, completing the sub-graph and enabling transmitter and receiver to communicate unidirectionally. This gives us consistency with how a generalized system would behave.

The subscription grains can be either durable or ephemeral. Durable subscription grains are configured with metadata and attached to a topic grain. The metadata is part of the subscription grain-state within Orleans and therefore requires no orthogonal lookup or database to manage that would impact performance. Durable subscriptions are used by either active or passive receivers. An active receiver is one that creates a connection to the Piraeus gateway. Once that active connection is established, the identity of the actor is used to create and associate “observables” with the specific subscription grain(s) for that identity.  When information flows into the subscription grain from the topic grain, the observable for the subscription is already established and associated with the channel and protocol of the receiver. This creates a direct pipeline between transmitter and receiver.  A passive receiver is a service that does not connect directly to the Piraeus gateway. When information is passed to these durable subscriptions, the subscription grain forwards the information immediately to the service. Piraeus supports RESTful Web services, Event Hubs, Azure Blob Storage, and Service Bus for these passive subscriptions.  This means it is possible, e.g., to create a Web service and start receiving information from Piraeus without doing anything else.
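A rough Python stand-in for the topic/subscription grain relationship described above (illustrative names and a deliberately simplified model; the real grains are Orleans virtual actors):

```python
class SubscriptionGrain:
    """Durable subscription sketch: an active receiver attaches an
    observer while connected; a passive receiver is forwarded to a
    service endpoint. (Python stand-in for an Orleans grain.)"""

    def __init__(self, forward_to=None):
        self.forward_to = forward_to  # e.g. a call to a RESTful service
        self.observer = None          # set while an active channel is open

    def attach(self, observer):       # active receiver connected
        self.observer = observer

    def detach(self):                 # channel dropped
        self.observer = None

    def receive(self, message):       # invoked by the topic grain
        if self.observer is not None:
            self.observer(message)    # push straight down the channel
        elif self.forward_to is not None:
            self.forward_to(message)  # forward to the passive service

class TopicGrain:
    """Head node of the sub-graph; fans out to its subscriptions."""

    def __init__(self):
        self.subscriptions = []

    def publish(self, message):
        for sub in self.subscriptions:   # push model: no polling
            sub.receive(message)
```

Note the push-only shape: the topic grain calls each subscription, and the subscription either drives an already-established observer or forwards to a passive service, which is where the no-polling property comes from.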

Ephemeral subscriptions can be used with self-organizing systems.  These are systems that can change organically, where system actors can enter and exit arbitrarily.  Ephemeral subscriptions exist only for the duration of the active channel.  Once the channel is dropped, the subscription grain and its observable are removed and the resources are disposed.  This enables a system to self-organize without prior knowledge of edge system components.  Of course, access control plays a major role in what is allowed, but that is not a topic for this post.

Orleans is a radically different type of technology and a key enabler for the Pegasus-I mission to near space. It makes possible some of the key concepts around flowing information at low latency and organizing graphs through its concept of virtual in-memory actors, i.e., grains.  Observables in Orleans also give Piraeus a simple way to mate specific receivers to their respective subscription grains without the need to leverage any store-and-forward technologies, and they make for a remarkably elegant end-2-end communications story that is simple to use. We were able to get telemetry from inflight Pegasus-I to the Web site for a user to view in about 20 milliseconds on average, going from a radio transmission from Pegasus to our field gateways and over MiFi on our phones to Piraeus.  Of course, we also proved it works at 85,000 feet, 2.2% atmosphere, and -60 degrees Fahrenheit.

Notable Technical Details

  • Orleans is a high throughput, low latency, and linearly scalable technology that enables Piraeus to easily manage a system graph or sub-graph to enable communications between system components.
  • Subscription grains are either durable or ephemeral in the context of Piraeus.
  • Subscription grains can be associated with either active or passive receivers.
  • Piraeus enables systems to be either organized or self-organizing.
  • Information flow is in-memory by default, but can be persisted if required.
  • Piraeus uses no databases or storage accounts to perform its operations.
  • Orleans grain-state is persisted in a customized Orleans Storage Provider that leverages Redis.
  • Piraeus has a multi-channel and multi-protocol gateway that does not couple channel and protocol between transmitter and receiver.
  • There is no intrinsic limitation to message size within Piraeus.

There is a tremendous amount more technical detail, far too much for this overview post.  I will try to post more fine-grained detail at a later date for those interested.

Best Wishes,

-Matt Long

Pegasus-1 Mission Photos and Video

Photo Montage Video | Flight Video | Narrated Video | Mission Prep Video | Flickr | The Miracle of Pegasus-1 | Pegasus Prequel

Mark Nichols and Matt Long – The Pegasus Mission Engineering Team
Mark and Matt the night before flight
Matt Long (foreground) and Mark Nichols check equipment the night before launch.
Mark Nichols making a preflight adjustment with a cutting tool.
Software check
Matt Long verifying flight operations software.
Mark and Matt tie off the delivery system
Matt Long and Mark Nichols tie off the delivery system, a high altitude balloon ~6.5′ in diameter, with 165 cubic feet of helium.
Gupreet Singh Pegasus-1 Launch and Chase Team
Rohit Puri Pegasus-1 Launch and Chase Team
Sensor Payload for Pegasus-1
Payload Housing
Payload Housing
Preflight payload
Preflight payload assembly
Launch site from Pegasus-1
Launch site from Pegasus-1
Exiting the clouds
Pegasus-1 exiting the clouds in lower atmosphere
Sensor package
Sensor package

13-15 miles up
Pegasus-1 13-15 miles up
Mid-Level Flight
Mid-Level Flight of Pegasus-1
Entering the Stratosphere
Pegasus-1 entering the Stratosphere
Directional Ground Station
Directional Ground Station with extended range
Mark Nichols Directional Ground Station
Mark Nichols Directional Ground Station
Location, Location, Location
Ham Radio and GPS for tracking Pegasus-1
Getting Ready
Prep the launch site, Othello, WA 01/28/2015
Dragging the drogue
12″ Drogue parachute dragging behind Pegasus-1
~85,000 feet, 2.2% atm, -60 degrees
Flight APEX on Pegasus-1 ~85,000 feet, 2.2% atm, -60 degrees
Near Flight Apex
The beauty of our planet as seen from Pegasus-1



A planet (we think) seen from Pegasus-1 during the daytime.
Pegasus-1 recovered one week after flight.
Balloon released and caught on camera. It bursts at a 26-foot diameter.

The Miracle of Pegasus-1  | Pegasus Prequel