This article is written for interview / documentation purposes. If you have any questions about the content please contact me.

NodeCloud started as a small project in August 2013, back when Docker was in its infancy (v0.6.0) and Kubernetes didn't yet exist. NodeCloud later evolved into NodeGear (thanks to Seb, who I believe coined the name).

NodeGear is a PaaS which enables easy node.js hosting without having to worry about scaling, uptime or monitoring.

At the time, there was a similar service I liked and borrowed some ideas from: Nodejitsu. They were later acquired by GoDaddy.

From the end user's perspective, you could deploy any node app by pushing to a git repository; you would get a running instance with a domain name, served over https. All logs were captured, environment variables were protected, the app was automatically restarted if it crashed, and if you wanted to you could scale across instances (and, in the future, datacenters). We also had a partnership to provision sandbox mongodb databases automatically. By default the app was hosted under * and secured by a wildcard certificate. You could, however, provide your own domain name and upload your own certificate for production use.

It was needed because I was consulting for a few clients, and the number of projects that needed managing was getting too big; I felt a burning desire for an easier way to manage my client projects. The need was to effortlessly host a project (most commonly a website or API) without leaving the command line or logging into web hosts.

At the end of this project, once we had a small number of customers, we quickly realised the pains of hosting content that isn't yours. Foreseeing that it would grow unmanageable, the project was shut down (~ March 2015). The end goal of having an easier life as a sysadmin was not achieved; it actually got worse! It's also a bit ironic, since none of my clients' websites ended up being hosted there.

There were many iterations and lots of learning from the mistakes made. Some of those will be covered below, along with a brief walkthrough of the project.

Contributors & Team members

  • Matej Kramny
  • Seb Haigh
  • Alan Campbell & Castaway Labs LLC
  • Zia Ur Rehman

All code is available here:

Architectural diagram

This should give you an idea of how NodeGear worked:

Perhaps the most interesting component is the proxy serving the user apps ("ng-proxy" in the diagram above). It was later renamed to dproxy.js.

It acts as a reverse proxy, like nginx, but with a dynamic service table and built-in load balancing. Again, nothing like this existed at the time of making. Unlike nginx, though, it did not rewrite the request.

It's interesting because it was challenging, worked in a way that was novel (to me), and was hard to get right. It accepted TCP connections on port 443, extracted the SNI information, looked up the name and address in redis (including TLS certificate data), then did what node does best: piped data between sockets. The code is small and clean.

There are many more problems we solved (but won't go into here) such as:

  • keeping processes up (interfacing with docker api)
  • watching after their resource allocation (process table watching)
  • keeping application logs
  • managing databases for processes (mysql apis)
  • hosting & authenticating users' git repos (we used a customised gitolite)
  • keeping access logs to the apps

Marketing page

Made with vanilla html/js.


This is where users go to log in and manage their apps. It's made with AngularJS, Pug (or Jade, as it was called then) and Less. This all compiles into a directory of html, js and css that is served through nginx as a single-page application. It's quite lightweight, since additional pages are loaded as needed. The views (html partial files) are loaded by the js code, which is itself loaded through Angular's routing table. AngularJS didn't really support this at the time, but it was made possible with some dynamic component registration. In today's Angular versions this is natively supported as lazy loading.

The whole thing looks like:

(click) -> look up js file in routing table -> [load js code] -> [load html code] -> present view to user
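The flow above can be sketched framework-agnostically; the routing table and loader callbacks here are illustrative stand-ins for the actual AngularJS wiring.

```javascript
// hypothetical routing table: path -> which js and html to fetch
const routes = {
  '/apps': { script: 'js/apps.js', view: 'views/apps.html' },
  '/logs': { script: 'js/logs.js', view: 'views/logs.html' }
};

// (click) -> look up route -> [load js code] -> [load html code] -> present view
function navigate(path, loadScript, loadHtml, render) {
  const route = routes[path];
  if (!route) return Promise.reject(new Error('unknown route: ' + path));
  return loadScript(route.script)
    .then(() => loadHtml(route.view))
    .then((html) => render(html));
}
```

In the real app, `loadScript` registered the freshly fetched controller with Angular before the view was compiled — the "dynamic component registration" mentioned above.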

Since the frontend compiles to a dumb static website, it is very easy to deploy and scale through various CDNs if the need arises. The API supporting it is covered below.


  • Login / Registration / 2FA
  • Add payment methods and set up billing (using stripe)
  • Support system
  • Manage SSH keys
  • Manage your apps
  • Log viewing / download
  • View real time processes / RAM usage
  • Super admin features such as user impersonation and monitoring
  • Domain management

Below is a snapshot of the interface as it was.

Login / Registration

Login has a contextual background to indicate success/failure.

Sign up process:


The design was upgraded

Dashboard and CPU usage

Logs (old version)

Access logs

Database deployment

Behind the scenes, the server contacts a partner to provision the MongoDB database. We also supported MySQL databases, which were self-hosted.

Process deployment

Payment and settings

App deployment

A demonstration of how the user would deploy a node.js project.

Deployment by supplying a git URL (such as a ghost.js blog)

Deployment through terminal
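The terminal flow looked roughly like the following. The remote URL format here is an assumption for illustration, not the real NodeGear endpoint:

```shell
# create a trivial node app in a fresh git repo
mkdir -p myapp && cd myapp
git init -q
echo "require('http').createServer(function (q, s) { s.end('ok'); }).listen(process.env.PORT);" > app.js
git add app.js

# point git at the (hypothetical) NodeGear remote
git remote add nodegear git@git.nodegear.example:myapp.git
git remote -v

# a push would trigger clone, build and deploy on the platform:
# git push nodegear master
```

After the push, the platform would report the assigned domain, and the app would be reachable over https without any further setup.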