
Ian Goldsmith

There seems to be a tendency amongst architects to think of microservices as a way to build entire applications, taking monoliths and completely re-implementing them as a set of microservices. This belief makes a lot of architects, especially in large enterprises, question whether microservices are right for their organization. I think there is a different way of looking at this: examine an application, identify one or more clearly defined functions that have scaling or distribution challenges, and pull those functions out of the application as microservices. As the title of this post says, this will allow you to eat your microservices elephant one bite at a time.

This sounds easy, but what does it mean to pull a function out of an application? How can the application invoke that function? How does this help with scaling? One short and simple answer is AMQP RPC. Essentially, you use a queue (scaled independently for the traffic it needs to handle) to pass messages to a set of worker processes (the microservices, scaled independently according to their ability to handle the load) that implement the function and place their responses onto a replyTo queue. Then you place a client function into your application that drops a request onto the queue and waits for a reply. To illustrate this, I have created a simple example that uses the Akana API Gateway to model the application, with a single function pulled out of it for currency conversion (just an example; in the sample you will see the function is really only a simple hello world).
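
To make that concrete, here is a rough sketch of what such a worker might look like in Node.js using the amqplib library. The queue name, the CLOUDAMQP_URL environment variable, and the hello-world logic are just illustrative placeholders of mine; the actual sample linked below differs in its details.

    // worker.js - a minimal AMQP RPC worker sketch (illustrative only)
    const amqp = require('amqplib');

    async function main() {
      const conn = await amqp.connect(process.env.CLOUDAMQP_URL);
      const ch = await conn.createChannel();
      const queue = 'rpc_queue'; // placeholder queue name

      await ch.assertQueue(queue, { durable: false });
      ch.prefetch(1); // hand each worker one message at a time

      ch.consume(queue, (msg) => {
        // The "function" pulled out of the application - here just hello world
        const name = msg.content.toString();
        const result = 'Hello, ' + name + '!';

        // Reply on the queue named in replyTo, echoing the correlationId
        // so the caller can match the response to its request
        ch.sendToQueue(msg.properties.replyTo, Buffer.from(result), {
          correlationId: msg.properties.correlationId,
        });
        ch.ack(msg);
      });
    }

    main().catch(console.error);

Because the workers only ever pull from the queue, you can run as many or as few of them as the load demands without touching the application that calls them.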

(Diagram: nodeamqp-example)

This sample is very simple, purely designed to showcase the concept. The API Gateway exposes an API; calls to this API cause the Gateway to use an AMQP RPC process activity to invoke the remote function via a CloudAMQP queue, and once it has the result, it alters the request appropriately and invokes the backend application. It's also remarkably fast, typically adding only a few milliseconds to the overall processing time for the application request.
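
The Gateway's AMQP RPC activity is configured rather than coded, but it helps to see what the equivalent call looks like if you were placing the client function directly into an application. Here is a sketch, again with amqplib; rpcCall and the queue name are my own illustrative names, not part of the product, and randomUUID needs a reasonably recent Node release.

    // client.js - sketch of the "client function" an application would use
    const amqp = require('amqplib');
    const { randomUUID } = require('crypto');

    async function rpcCall(request) {
      const conn = await amqp.connect(process.env.CLOUDAMQP_URL);
      const ch = await conn.createChannel();

      // Exclusive, server-named queue that the worker will reply to
      const { queue: replyQueue } = await ch.assertQueue('', { exclusive: true });
      const correlationId = randomUUID();

      // Resolve when a reply arrives carrying our correlationId
      const reply = new Promise((resolve) => {
        ch.consume(replyQueue, (msg) => {
          if (msg.properties.correlationId === correlationId) {
            resolve(msg.content.toString());
          }
        }, { noAck: true });
      });

      // Drop the request onto the queue and wait for the answer
      ch.sendToQueue('rpc_queue', Buffer.from(request), {
        correlationId,
        replyTo: replyQueue,
      });

      const result = await reply;
      await conn.close();
      return result;
    }

A production client would keep the connection and reply queue open across calls rather than creating them per request; the sketch trades that efficiency for brevity.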

The beauty of this approach is that each of the components involved can be scaled independently. In this example I am using Heroku as the runtime for the worker processes, and CloudAMQP as the queue provider. Scaling either of these components is as easy as executing a simple command like heroku ps:scale worker=x for Heroku, or using a web console to configure your CloudAMQP environment. The Akana API Gateway is a fully stateless, massively scalable entity, and can also be scaled on demand, independently from any backend services, the queues, or the workers.

From the diagram above you can see that I’ve used a few of my favorite tools to build this demo:

  • Cloud 9 IDE – a very nice development environment in the cloud
  • Github – where the code is stored, and a very convenient integration point between all the other pieces of the equation
  • Snap CI – a continuous integration platform that detects code changes in the Github repository and automatically deploys the updated code to Heroku
  • Heroku – Platform-as-a-Service, in this case used to run the nodejs worker processes
  • CloudAMQP – RabbitMQ in the cloud, in this case providing the exchange and queue used for my RPC demo

This combination of tools allows me to very quickly and easily update and scale the underlying worker processes (microservices) and the queues used to communicate with them.

This model for leveraging microservices to offload scale-challenged functions from applications becomes even more interesting when you combine it with parallel invocation in the Gateway or the application. With a process like the one shown below, you can invoke as many worker functions as you need at the same time without incurring the latency penalty of sequential invocations:

(Diagram: process-split-join)
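
In code terms, the split/join above is just a fan-out of RPC calls followed by a single wait for all of the replies. A minimal sketch, reusing the hypothetical rpcCall helper from the earlier client sketch:

    // All of the requests go onto the queue at once, so the overall latency
    // is roughly that of the slowest worker, not the sum of all of the calls
    async function greetEveryone(names) {
      const replies = await Promise.all(names.map((name) => rpcCall(name)));
      return replies;
    }

    greetEveryone(['world', 'Heroku', 'CloudAMQP']).then(console.log);

The Gateway's split/join process in the diagram achieves the same effect declaratively; the code is only there to show why the fan-out avoids the sequential latency penalty.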

If you want to give this approach to microservices a try yourself, you can start with this sample application. Follow the instructions in the README and have at it.
