Introducing Hyper {Nano}: Hyper Cloud in a Bottle ⚡️🍷

In my previous post about Clean Architecture at Hyper, I briefly mentioned Hyper Nano.

In this post, I'd like to focus on Hyper Nano and how you can use it to build awesome applications using the Hyper Service Framework and Hyper Cloud.

We are currently offering Hyper Cloud tours and a free Architecture Consultation. If you think we could help you build your next great project, let us know!

What is Hyper Nano?

As discussed in my previous post, Hyper Nano is the Hyper Service Framework compiled with a set of "local development" adapters. It allows Hyper to run locally, and is simple to spin up and simple to blow away.

It currently runs the following Hyper Services:

Usage

You can find usage documentation in the README, but I wanted to highlight some key features here.

Hyper Nano supports "bootstrap" and "purge" options. Bootstrap can be used to create Hyper Services on startup; purge can be used to destroy them on startup.

These options require the --experimental flag to be set.

You can read about how to use these features here.

Given this, let's look at some common workflows you can enable with Hyper Nano.

Workflows

Here are some common workflows you can enable with Hyper Nano when building applications with Hyper. I will assume the Hyper Nano binary was downloaded and named hyper-nano, and that your application domain is named foo-app.

I will also list some pros and cons of each approach (Tyler's opinion) and share the workflow we use at hyper every day to build hyper.

Local Development with Persistent Services

This is the most common workflow. You are developing your app on your own machine using localhost, with Hyper as a services tier, and you'd like a persistent set of services. This is like running Postgres, Redis, Elasticsearch, etc. using docker-compose and mounting a volume onto each, so that the data persists across containers.

First, you would download a hyper binary.

You would most likely not include the hyper binary or its files in __hyper__ under source control, just like you wouldn't commit a local database to source control.

You would start Hyper Nano using ./hyper-nano, which starts it listening on port 6363. As a one-time setup, you would create the Hyper Services you need, either using the REST API or hyper-connect. Then your application would consume those services using hyper-connect, passing http://localhost:6363/foo-app.
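As a rough sketch, consuming those services with hyper-connect could look like the following (this assumes hyper-nano is running on port 6363 and the foo-app data service already exists; the document shape is illustrative):

```javascript
// Sketch: consuming Hyper Nano services with hyper-connect.
// Assumes hyper-nano is listening on localhost:6363 and the
// foo-app data service has already been created.
const connectionString = 'http://localhost:6363/foo-app'

async function demo () {
  // dynamic import so this sketch only needs hyper-connect when actually run
  const { connect } = await import('hyper-connect')
  const hyper = connect(connectionString)

  // add an illustrative document to the data service
  await hyper.data.add({ _id: 'movie-1', type: 'movie', title: 'Dune' })

  // read it back
  return hyper.data.get('movie-1')
}
```

The same hyper object also exposes the other services (cache, storage, queue, search) created for the domain.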

Alternatively, you could pass flags to bootstrap those services, i.e. ./hyper-nano --experimental --data --cache --domain=foo-app, which will create the data and cache services, and no-op if they already exist.

After the initial setup, any time you wanted to start developing, you would just start Hyper Nano with ./hyper-nano and develop away.

If you ever wanted to wipe all the data, you could use the REST API, hyper-connect, or simply delete the __hyper__ folder Hyper Nano uses to store data. You would then need to recreate those services, just like if you deleted a database and had to recreate it.

For data, you would either seed the services or gradually build data up in the persistent services, i.e. no local seed data is required.

Pros:

  • The most familiar development workflow for most folks
  • Sandboxed to localhost
  • Doesn't require internet connection

Cons:

  • Manual steps to create and remove services, which aren't needed when using Hyper Cloud
  • If the data in the services becomes corrupted (a common occurrence during feature development) and no seed data is being used, you have to manually correct it, or wipe the data away and build it up again
  • Each new feature uses the same Hyper Service instances, i.e. no feature sandbox
  • Must maintain a runtime locally, i.e. Node, Yarn, Deno, installing dependencies, version mismatches, etc.
  • Each developer must perform these steps to set up a local environment, which can cause divergence, i.e. "it works on my machine" scenarios

Local Development Using Ephemeral Services

This workflow is similar to above, but the idea is that you blow away and recreate the services each time you start work on a new feature.

You would start Hyper Nano with ./hyper-nano --experimental --data --cache --purge --domain=foo-app, which would destroy any old data and cache services, and then recreate them. Then you would develop as normal.

For data, since the services are deleted and recreated each time, you need some sort of seeding mechanism. hyper.data.bulk in hyper-connect is a great option for this.
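A hedged sketch of such a seed script (the documents and the seed function name are illustrative; hyper.data.bulk is the hyper-connect call named above):

```javascript
// Sketch: seeding a freshly created data service with hyper.data.bulk.
// The documents below are illustrative, not a prescribed schema.
const seedDocs = [
  { _id: 'movie-1', type: 'movie', title: 'Dune' },
  { _id: 'movie-2', type: 'movie', title: 'Arrival' }
]

async function seed (connectionString = 'http://localhost:6363/foo-app') {
  const { connect } = await import('hyper-connect')
  const hyper = connect(connectionString)
  // load all seed docs into the data service in one round trip
  return hyper.data.bulk(seedDocs)
}
```

Running this after each purge gives every fresh set of services the same starting data.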

Pros:

  • No manual steps to create and delete services
  • Sandboxed to localhost
  • Feature development is sandboxed
  • Doesn't require internet connection

Cons:

  • You must have a seeding mechanism (I personally see this as a pro, which I will cover in the next workflow)
  • Must maintain a runtime locally, i.e. Node, Yarn, Deno, installing dependencies, version mismatches, etc.

Cloud Development using Ephemeral Services

This workflow uses ephemeral Hyper Services, but an ephemeral development environment as well. Using services like Gitpod or GitHub Codespaces, you spin up a new environment for each new piece of work. The environment setup should download ./hyper-nano and start it, using bootstrapping and purging to create services. As part of environment setup, you would also run your seed script to bulk-load your services with data.

When you are done developing, you push your code to the remote and simply close the Cloud Environment. For the next piece of work, rinse and repeat.

Pros:

  • No manual steps to create and delete services
  • Sandboxed to Cloud Environment
  • Feature development is sandboxed
  • All other dependencies, i.e. the runtime and packages, are sandboxed too (no nvm!)
  • Encourages trunk-based development
  • This becomes your onboarding flow for new developers, so your onboarding flow always stays up to date and lean
  • Environment divergence is less likely, i.e. no more "it works on my machine"

Cons:

  • Environment setup must be codified for something like Gitpod to execute
  • Needs an internet connection
  • You must have a seeding mechanism (I personally see this as a pro in this setup)

What Hyper Does

You can probably guess that we use the "Cloud Development using Ephemeral Services" workflow at hyper. We develop exclusively in Cloud Environments, and spin up new services and seed data for each new feature or bug fix.

When we are done with the feature, we simply close and forget that environment and then spin up a new one.

We use a .gitpod.yml file to configure our Cloud Environment with everything to start developing.
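As a hedged sketch, a minimal .gitpod.yml for this kind of setup might look like the following (this is illustrative, not our actual config; the download step and the seed.js script are placeholders):

```yaml
# Illustrative sketch of a .gitpod.yml for a Hyper Nano workflow
tasks:
  - init: |
      # download the hyper-nano binary for your platform here, then:
      chmod +x ./hyper-nano
    command: |
      ./hyper-nano --experimental --data --cache --purge --domain=foo-app &
      node seed.js   # hypothetical script that calls hyper.data.bulk
ports:
  - port: 6363
```

The purge flag means every fresh environment destroys any stale services and starts from the same seed data.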

This means our seed data must be kept up to date, which requires some work, but I personally think this is a good thing. It keeps our seed data from becoming stale tech debt, and keeps onboarding new team members lean. It also means that we can tune our seed data to include more and more data sets to support feature development, as more features become part of the application.

Suppose you had a subscription-based platform. You could seed data where one customer is in a "paid" subscription state, another is in an "overdue" state, and another in a "no subscription" state. With that seed data, it becomes easy to build features for each of those customer states.
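For instance, such seed data might look like this (the document shape and field names are illustrative, not a prescribed schema):

```javascript
// Illustrative seed documents covering each subscription state
const customers = [
  { _id: 'customer-1', type: 'customer', subscription: 'paid' },
  { _id: 'customer-2', type: 'customer', subscription: 'overdue' },
  { _id: 'customer-3', type: 'customer', subscription: 'none' }
]

// the distinct subscription states the seed data covers
const states = customers.map(c => c.subscription)
```

Loading these with hyper.data.bulk gives every developer the same three customer states to build against.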

Deploying your Application

Eventually you're going to need to deploy your application, where your application consumes your actual services in your Hyper Cloud Application.

If you're using hyper-connect to consume your Hyper Services, all you have to do is swap out the connection string you pass to hyper-connect, and that's it! From local development to actual services without any code changes 😎, for all of your services.
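A minimal sketch of that swap (the HYPER environment variable name is an assumption here; use whatever variable your deploy environment provides):

```javascript
// Pick the Hyper connection string from the environment in production,
// falling back to the local hyper-nano instance during development.
function hyperUrl (env = process.env) {
  return env.HYPER ?? 'http://localhost:6363/foo-app'
}

// const hyper = connect(hyperUrl())  // hyper-connect usage is otherwise unchanged
```

The rest of the application never needs to know which environment it is running in.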

Conclusion

Hyper Nano has been a huge boon for us at hyper developing products like Hyper Cloud. Consider trying Hyper Nano in your development workflow and let us know what you think!

We are currently offering Hyper Cloud tours and a free Architecture Consultation. If you think we could help you build your next great project, let us know!