The server-side registry pattern
By Lori MacVittie
If you've been following along in this series on service registries and microservices, you'll know we're on our final topic: The server-side registry pattern. If you haven't been following along, check out part one (Service registries: What are they and why do you need them?) and part two (The client-side service registry pattern) here on FierceDevOps.
To recap, service registries are needed in microservice architectures to enable discovery of microservices and distribution of load across them. Essentially, they're an integral component for scaling these highly volatile, short-lived application instances.
One way to deploy such a component is on the server side. One advantage of the server-side pattern is that it simplifies the client. Unlike the client-side registry pattern, the server-side pattern abstracts discovery and distribution and manages them on behalf of the client, which means the client needs no additional code to take advantage of the registry. Another advantage is that many load balancers are capable of acting as a service registry. Given that most environments (cloud and on-premises) already rely on such services to enable scale, this advantage is a significant one. The con is that integration will be required, either with a separate service registry or with the systems managing the microservice lifecycle.
The basic premise of the server-side pattern is that a load balancing proxy acts as the "endpoint" for the client; the client communicates with the proxy only. It is the role of the load balancer to determine how to distribute (route) each request. This seems simple; after all, a load balancer is designed to distribute a load across a pool (cluster) of resources, whether they be servers, apps or microservices.
True. However, most load balancers were not designed to handle the volumetric change that may be required to scale a microservices-based application. If you're using containers, you might look to something like Kubernetes, which acts as a service registry and includes basic load balancing as part of its function. Many folks prefer a broader set of options for load balancing or aren't using containers to deliver microservices (shocking, I know) and therefore will prefer a more seasoned load balancing solution. Nevertheless, there's more to it than just deploying your favorite load balancer and calling it a day.
There are two basic ways to implement a server-side registry pattern:
1. The load balancer is the server-side registry.
In this scenario the imperative is to update the pools of microservices in real-time. This requires that the load balancer have an API (most do) and that a system external to the load balancer perform these updates.
In this pattern, the load balancer's native pool management acts as the service registry. When a request is received, the load balancer selects an appropriate service instance from the pool and forwards the request to it.
The advantage of this pattern is that load balancing solutions are generally capable of monitoring the status of the services they are scaling. This means that if an instance disappears, the load balancer will automatically stop sending requests to it.
The disadvantage is that the instance still needs to be removed from the pool, since the nature of microservices is such that the IP address associated with that instance may shortly be assigned to a microservice in a different pool. Failure to keep the service registry up to date could cause the proxy to select the wrong instance of a microservice and cause all sorts of interesting dilemmas.
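To make the first pattern concrete, here is a minimal Python sketch of a load balancer pool that doubles as the registry. All of the names here (the class, its methods, the addresses) are hypothetical stand-ins, not any vendor's API; in a real deployment, an external lifecycle manager would make these add/remove calls against the load balancer's REST API.

```python
class LoadBalancerPool:
    """Sketch of pattern 1: the load balancer's pool *is* the registry.
    An external system (orchestrator) adds and removes members; the load
    balancer's own health monitor marks failing members down."""

    def __init__(self, name):
        self.name = name
        self.members = []   # instances currently registered in the pool
        self.down = set()   # instances currently failing health checks
        self._rr = 0        # round-robin cursor

    # --- called by an external lifecycle manager ---
    def add_member(self, addr):
        if addr not in self.members:
            self.members.append(addr)

    def remove_member(self, addr):
        # Critical step from the article: remove promptly, or the IP may
        # be reassigned to a microservice in a *different* pool and start
        # receiving the wrong traffic.
        if addr in self.members:
            self.members.remove(addr)
        self.down.discard(addr)

    # --- called by the load balancer's health monitor ---
    def mark_down(self, addr):
        self.down.add(addr)

    # --- request path: pick the next healthy instance ---
    def select(self):
        live = [m for m in self.members if m not in self.down]
        if not live:
            raise RuntimeError("no healthy instances in pool " + self.name)
        choice = live[self._rr % len(live)]
        self._rr += 1
        return choice
```

Note that health monitoring (`mark_down`) only stops traffic; the stale entry still sits in `members` until the external system calls `remove_member`, which is exactly the gap described above.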
A second implementation of the server-side pattern instead relies on an external service registry.
2. The load balancer is directed by the service registry.
In this scenario, the load balancing proxy continues to serve as the distribution point within the architecture. Clients still communicate directly with it and not the service registry.
The load balancer in this configuration must query the service registry via its API to determine to which instance the request should be routed.
The disadvantages here are primarily around scale and performance. Querying an external resource is a blocking action: the request cannot be completed until a response is received. If the service registry is overwhelmed or slow, requests will be similarly delayed.
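The second pattern can be sketched just as simply. Again, the class and method names are hypothetical; the `ServiceRegistry` stands in for whatever external registry (Consul, etcd, ZooKeeper or similar) the architecture uses, and its `lookup` call would be a network round trip in practice, which is precisely the blocking cost described above.

```python
class ServiceRegistry:
    """Stand-in for an external service registry. In production this
    would be a separate system reached over the network."""

    def __init__(self):
        self._services = {}   # service name -> list of instance addresses

    def register(self, service, addr):
        self._services.setdefault(service, []).append(addr)

    def deregister(self, service, addr):
        self._services.get(service, []).remove(addr)

    def lookup(self, service):
        # In a real deployment this is a blocking network call.
        return list(self._services.get(service, []))


class RegistryBackedProxy:
    """Sketch of pattern 2: the proxy holds no pool of its own; every
    incoming request triggers a registry lookup before it can be routed."""

    def __init__(self, registry):
        self.registry = registry
        self._rr = 0   # round-robin cursor

    def route(self, service):
        instances = self.registry.lookup(service)   # blocking query
        if not instances:
            raise RuntimeError("no instances registered for " + service)
        choice = instances[self._rr % len(instances)]
        self._rr += 1
        return choice
```

One common mitigation, not shown here, is to cache lookup results in the proxy for a short TTL, trading some staleness for fewer blocking round trips.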
In both patterns, consistency is a concern. In the first implementation, the load balancer may select an instance that is already "dead" because of the time it takes to update the pool. In the second, if the service registry is being updated at the same moment it is answering a query from the load balancer, it is likewise possible that a "dead" instance will be selected. Careful configuration of the load balancer, with proper monitoring and retry capabilities enabled, can alleviate some of these consistency issues.
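The retry mitigation mentioned above can be sketched in a few lines. This is a hypothetical helper, not a feature of any particular load balancer: when the selected instance turns out to be dead because the registry was stale, the proxy tries the next candidate instead of failing the client's request.

```python
def route_with_retry(instances, try_send, max_attempts=3):
    """Attempt to deliver a request to one of the candidate instances.

    instances    -- addresses as returned by the (possibly stale) registry
    try_send     -- callable that sends the request to one address, raising
                    ConnectionError if the instance is unreachable
    max_attempts -- cap on how many candidates to try before giving up
    """
    last_error = None
    for addr in instances[:max_attempts]:
        try:
            return try_send(addr)
        except ConnectionError as exc:
            last_error = exc   # stale entry -- fall through to the next one
    raise last_error or RuntimeError("no instances to try")
```

The trade-off is latency: each retry adds a connection timeout to the client's wait, which is why retries complement, rather than replace, keeping the registry current.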
Overall, scaling microservices at volume requires the use of a service registry. Whether that registry is client-side or server-side, integrated with the load balancer or separate, will be a matter of architectural preference. But if you're one of the fairly significant number of organizations exploring microservices, it's time to start considering how to scale them now, because no matter your choice, there are pieces that need to be put in place before you get to production.
Lori MacVittie is a subject matter expert on cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.