Long ago, in the early days of the Internet, pointing your browser at a URL meant your machine would strike up a conversation with one server, and only one: the one associated with that URL. That may still happen if you visit a personal blog, but today all of the major websites and most of the small ones are really constellations of servers, sometimes dozens, sometimes hundreds, and sometimes even thousands.
One big reason for Node’s rise may be the simplicity. A functional Node.js microservice can be built with just a few lines of code: one to listen on a port, a handful to connect to any database, and then a few more to encode the business logic. The difference between a “Hello World” example and a running microservice is only a function or two. Node.js was designed and built by people who wanted to serve up bits on the Internet.
It’s not just the simplicity of the code. The Node.js community will cite many more practical reasons why Node.js is everywhere. Some will focus on the incredibly lightweight Node.js runtime. It starts up in seconds and doesn’t chew up RAM creating threads to process each incoming and outgoing request. The I/O routines are optimized for getting data in and out quickly without spending too much time creating objects to track threads and other ephemera. The callback paradigm can be a bit of a challenge to novice programmers, but the result is blazing speed and very little load on the machine. This makes it easy to spin up multiple, fast microservices in the cloud.
The API Blueprint project is just a specification for a Markdown-based language that describes your API. The real value comes from all of the tools that can read the spec and do something intelligent with the API. There’s Drakov, a mock server for testing your code, and Dredd, a tool that tests your API documentation against the back end, making sure your API is always consistent with the current version of the blueprint. There are also standard parsers built for languages like Python and C. There are dozens and dozens of different projects that use the API Blueprint format.
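A blueprint for a hypothetical one-endpoint API might look like this (the names are made up for illustration):

```
FORMAT: 1A

# Greeting API

## Greeting [/greeting]

### Retrieve a Greeting [GET]

+ Response 200 (application/json)

        { "message": "hello" }
```

Feed that file to Drakov and you have a mock server; feed it to Dredd and you have a consistency check against the real back end.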
Tired of standing up your own servers? Claudia.js is a toolkit for moving your Node.js routines to AWS Lambda or AWS API Gateway. You grab your routines for responding to events and then Claudia will deploy them with a single invocation of the command line. (After you configure all of the Amazon account details, of course.)
The phrase “zero-configuration” sounds like music to any developer who needs a quick microservice, and that’s just what Cote offers. The word “zero” isn’t strictly correct because you’ll need to spell out some basic details of your objects and APIs, but Cote will organize itself after that, saving you the tedium of configuring IP addresses, ports, and routes. Most of the cleverness is found in the broadcast mechanisms and low-overhead protocol that allow the Cote instances to find each other and communicate.
There are dozens of web application frameworks for building out Node.js sites. Express is one of the simplest and most common. You don’t need to add all of its structure for creating a microservice, but it can help provide some standardization for the roles that it handles. It’s an interesting question whether the back-end microservices should have the same architecture as the front end. They may not need the extra templating features all of the time, but when they do, adding a simple framework like Express promotes consistency and stability.
Feathers is an open source project, with a big collection of third-party plug-ins, that creates quick REST APIs with just a few lines of configuration. The real magic, though, lies in the extra lines that you can add to quickly integrate the database, authentication, and pagination. Feathers gets even more clever by offering a real-time push mechanism for announcing new events to clients through the Socket.io and Primus libraries. These are just a few of the reasons why the Feathers developers describe their project as “batteries included.”
Hapi is another popular Node.js framework that can be used to handle the basic configuration for websites or for the microservices that back up the websites. There are dozens of plug-ins including some that handle some of the standard jobs for microservices like security and configuration. You can also integrate Hapi with common service models like Seneca.
If you think of your microservice as “middleware” that sits in front of some source like a database, Koa makes it possible to build fairly extensive pipelines for your data. Well, pipeline may not be the best metaphor for Koa because the functions are applied in a “stack-like” structure. That means your functions invoke other functions and then get another crack at the data as it’s coming back. The so-called “stack” is implemented with promises.
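The stack-like flow is easy to sketch without Koa itself. This toy `compose` function, built on plain promises, shows how each function sees the data on the way in and gets another crack at it on the way back out (Koa's real implementation lives in the koa-compose package):

```javascript
// A toy sketch of Koa's "stack-like" middleware model, using plain promises.
// Each middleware runs code before calling next(), then runs again after
// every deeper middleware has finished.
function compose(middleware) {
  return function run(ctx) {
    function dispatch(i) {
      if (i === middleware.length) return Promise.resolve();
      return Promise.resolve(middleware[i](ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

const app = compose([
  async (ctx, next) => { ctx.trace.push('a-in'); await next(); ctx.trace.push('a-out'); },
  async (ctx, next) => { ctx.trace.push('b-in'); await next(); ctx.trace.push('b-out'); },
]);
```

Running `app({ trace: [] })` leaves the trace in the order a-in, b-in, b-out, a-out: a stack, not a pipeline.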
There is a surprisingly large collection of packages built on top of Koa that pull in security, monitoring, pre-processing, and many other options. Most projects will add several of these to complete the package.
One of the downsides to splitting your back end into so many microservices is that the front end must make so many different calls. KrakenD is an aggregator that will help you organize the communications with the back end and all of the responses, making life simpler for the front end, which has to make just one call to KrakenD. If some of these back ends return more data than the front end needs at the moment, KrakenD can prune away the cruft.
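The aggregate-and-prune idea fits in a few lines. In this sketch the two “backends” are stand-in functions, not KrakenD’s actual configuration; in production they would be HTTP calls to separate microservices:

```javascript
// Stand-in backends; imagine each as an HTTP call to a separate microservice.
const userBackend = () => ({ id: 7, name: 'Ada', passwordHash: 'x91' });
const orderBackend = () => ({ orders: [101, 102], internalFlags: 3 });

// Merge several backend responses, keeping only the fields the front end
// asked for -- the "prune away the cruft" step.
function aggregate(backends, keep) {
  const merged = Object.assign({}, ...backends.map((call) => call()));
  return Object.fromEntries(keep.map((k) => [k, merged[k]]));
}
```

The front end makes one call, and fields like `passwordHash` never leave the back end.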
Node.js users who want to build a quick API can turn to LoopBack, another framework that takes your data model and turns it into a full API. You get all of the CRUD functions for manipulating the objects and a few extra maintenance functions like access control. Most of the standard databases are supported by easily installed connectors, making LoopBack one of the simpler ways to spin up a microservice that persists basic data.
Once you’ve written your code, you’ve got to keep it running. Writing a good microservice always includes adding a few test routines. While this can be a bit of a chore when you have dozens of microservices, it helps you zero in on issues faster while avoiding regressions when you think you’re improving the code base. Mocha is one of the most widely used Node.js test frameworks, and it lets you integrate good unit tests into your build routine with just a few functions per test.
One of the fastest microservices frameworks for Node.js is Moleculer. Along with the speed, Moleculer also offers many of the important features for spreading a heavy load across a cluster, with balancing strategies that range from intelligent selection based on CPU load or latency to purely random choice or round robin. You can draw on plug-ins for Redis caching, serialization, or different transports. Failed connections are rolled back and retried with multiple fallback options.
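Round robin, the simplest of those strategies, can be sketched in a closure; the node names are hypothetical, and Moleculer's real strategies also weigh CPU load and latency:

```javascript
// A toy round-robin balancer of the kind Moleculer ships out of the box.
// Each call hands back the next node in the list, wrapping around at the end.
function roundRobin(nodes) {
  let i = 0;
  return () => nodes[i++ % nodes.length];
}

const pick = roundRobin(['node-1', 'node-2', 'node-3']);
```

Every request calls `pick()` and the load spreads evenly across the cluster.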
In theory, the browser is made for the humans in meatspace, while the microservices serve the browser from behind the scenes, handling all of the data in the background using machine-oriented formats like JSON or YAML. In reality, there are often good reasons why the microservices need to delve into those human-centric formats and parse information marked up in HTML and CSS. Sometimes it’s just to turn some webpage into a PDF. Sometimes it’s to automate some ancient corner of the code base that only speaks in webpages. Sometimes it’s because you want to write elaborate test routines that truly represent what the user sees. Puppeteer is a Node.js library that drives a headless Chrome browser from inside your code, waiting for a URL before going to work. You can use Puppeteer to perform lots of background work to support the various microservices.
Does your app require some proxy work to keep the pipes connected? Redbird is a reverse-proxy generating machine, with support for the TLS encryption you need to ensure privacy.
Another option for creating a microservice is Restify, a fast framework that offers classic routing using a Sinatra-style collection of chained function invocations. You can insert your own customizing code as needed. Restify will also insert standard debugging calls (DTrace) to all of the different routes automatically, making it simpler to understand the spaghetti mess of microservices calling other microservices. Your code won’t need it, of course, but you can still embrace it because it will help others on the team.
Officially, Sails is a full MVC (model-view-controller) framework for building out web apps on Node.js. But it also comes with routines for automatically generating a REST API from the M in the MVC, aka the “blueprints” in Sails speak. If you need a REST API, you can start with Sails and add the V and C as needed later.
A good way to sketch out your microservice architecture is to create a bunch of rules that match some pattern of incoming parameters. Seneca was designed to let you do just that. You write a function and a few patterns that describe when it should fire, and then Seneca does the rest. It offers plug-ins that connect to all of the major databases as well as some examples for building out major features like the back end of a CMS or a shopping cart. Seneca aspires to be more than just a microservice foundation.
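The pattern idea can be illustrated with a toy dispatcher. This is the concept, not Seneca's actual API (Seneca's real calls are `seneca.add` and `seneca.act`, with patterns written as strings like `'role:math,cmd:sum'`):

```javascript
// A toy version of Seneca-style pattern matching: register handlers against
// patterns of message properties, then route each incoming message to the
// first handler whose pattern it matches.
const rules = [];

function add(pattern, handler) {
  rules.push({ pattern, handler });
}

function act(msg) {
  const rule = rules.find(({ pattern }) =>
    Object.entries(pattern).every(([key, value]) => msg[key] === value)
  );
  if (!rule) throw new Error('no matching pattern');
  return rule.handler(msg);
}

// One rule: fire this function whenever a message matches the pattern.
add({ role: 'math', cmd: 'sum' }, (msg) => msg.left + msg.right);
```

Send `act({ role: 'math', cmd: 'sum', left: 2, right: 3 })` and the matching rule fires; adding a new feature is just another `add` call.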
As more and more enterprise coders explore the serverless paradigm, they’ll need an abstraction layer to smooth out the differences between the various platforms, from AWS Lambda to Apache OpenWhisk. The Serverless framework is one such layer, though its name might be a bit too generic. The first thing it offers is a chance to build out APIs and get them up and running in a serverless environment. You can draw on oodles of plug-ins and examples and deploy to AWS Lambda, Azure Functions, Google Cloud Functions, Apache OpenWhisk, and more.
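Most of the work lives in a single `serverless.yml` file. A minimal sketch for AWS might look like this, with the service name, runtime version, and path all illustrative:

```yaml
# Minimal serverless.yml sketch -- names and versions are illustrative.
service: hello-service
provider:
  name: aws
  runtime: nodejs18.x
functions:
  hello:
    handler: handler.hello     # the exported function in handler.js
    events:
      - httpApi:
          path: /hello
          method: get
```

Point the same functions block at a different provider and the framework handles the platform-specific packaging.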
Microservice architectures have many parts and all of these parts have an interface to the outside world. Swagger offers a standard format for designing these APIs. You pour out your ideas about what the API will do and then Swagger turns them into both human-readable documentation and machine-readable testing code. The standardized structure enforces a consistent model across your codebase.
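A Swagger description for a hypothetical one-endpoint service might begin like this, in the OpenAPI 3.0 form of the format:

```yaml
# A minimal OpenAPI (Swagger) sketch for a hypothetical greeting endpoint.
openapi: 3.0.0
info:
  title: Greeting API
  version: 1.0.0
paths:
  /greeting:
    get:
      summary: Return a greeting
      responses:
        '200':
          description: A greeting message
          content:
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
```

The same file feeds the documentation generator, the mock servers, and the test tooling.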