Dipping a Toe into Microservices with Search

By Justin Sweeney, October 10, 2019

Microservices have been growing in popularity over the past several years, but changing your company's application development model from a monolith to microservices can be a very scary thing.

Fortunately, it doesn’t have to be an all-or-nothing proposition. Because of the modular nature of microservices, companies can create one or two microservices in conjunction with an existing monolith to determine if the new model is right for them. ZoomInfo has chosen to do this with search to enable more rapid upgrades of the underlying technology.

What are Monoliths and Microservices?

A monolith is a traditional large application that rolls multiple features together into a single deployable system. Generally, the features are tangled together with code that calls across multiple features to accomplish specific tasks and the entire application must be tested and deployed as a unit whenever any part of it is updated. This takes significant time and resources and can limit the deployment rate of an application.

Microservices take each specific feature of a complex system and turn it into a separately designed, tested, and deployed component: a black box providing a single service to the collective whole. Each microservice advertises its function and presents APIs and other interfaces that allow the other components of the system to use the service without knowing implementation details or needing to integrate dependent libraries, direct code, or other extras into their own code bases. A feature deployed as a microservice can be upgraded independently, without testing or deploying anything else, as long as the previously advertised interfaces remain unchanged.
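The black-box idea above can be sketched in a few lines of Python. Everything here is illustrative (the class and field names are stand-ins, not taken from any real codebase): callers depend only on the advertised contract, and the implementation behind it can be swapped without touching them.

```python
# Sketch of the microservice boundary: callers see only the contract.
# All names (SearchService, SearchRequest, etc.) are hypothetical.

from dataclasses import dataclass, field
from typing import List, Protocol


@dataclass
class SearchRequest:
    query: str
    max_results: int = 10


@dataclass
class SearchResponse:
    hits: List[str] = field(default_factory=list)


class SearchService(Protocol):
    """The advertised interface; implementations are interchangeable."""

    def search(self, request: SearchRequest) -> SearchResponse: ...


class InMemorySearchService:
    """One possible implementation; callers never see these details."""

    def __init__(self, documents: List[str]):
        self._documents = documents

    def search(self, request: SearchRequest) -> SearchResponse:
        matches = [d for d in self._documents if request.query.lower() in d.lower()]
        return SearchResponse(hits=matches[: request.max_results])


service: SearchService = InMemorySearchService(
    ["Apache Solr", "gRPC basics", "Solr upgrades"]
)
print(service.search(SearchRequest(query="solr")).hits)
# → ['Apache Solr', 'Solr upgrades']
```

The point is the boundary: replacing InMemorySearchService with an implementation backed by Solr, or by a remote service, changes nothing for callers as long as the interface holds.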

Using a Hybrid Model

While entirely new development projects might have the luxury of deciding to use microservices for everything right from the start, most projects have an existing base of legacy code that's large and very tightly interdependent. Moving all of that code to a microservices model can be a complex, time-consuming project. Instead of doing this immediately, many companies adopt a hybrid model that allows for a slower, more measured migration to microservices. Other companies decide to retain their legacy monolith for existing functionality but use microservices on top of that monolith when new features are added to the mix.

These hybrid models are a good way to try out microservices and see if the model fits your needs.

The Upgrade Problem

ZoomInfo uses Apache Solr as its underlying search mechanism. Solr provides frequent upgrades, and some of these offer improvements and bug fixes that would benefit our product. However, in the original implementation, each time we upgraded Solr we faced an onerous process that took months and diverted significant development, testing, and operations resources away from new development. Because of this, at times we've found ourselves multiple major versions behind the current release of Solr and unable to use new features that would benefit our customers.

ZoomInfo decided that extracting the existing search functionality from our monolithic application and redesigning it as a microservice would make it much easier to upgrade Solr and permit us to take advantage of improvements and new features found in newer versions much more quickly. It would also minimize our dependence on Solr itself, allowing us to choose a different search technology in the future if such a decision makes sense from a technology standpoint. That type of tool substitution in the existing monolith would be a gigantic project that would bring all new development on everything to a screeching halt.

Planning the Microservice

ZoomInfo set several goals for the new microservice:

  1. Optimized for speed – we want the microservice and the resulting additional API layer to be at least as fast as internal application processing within the monolith.
  2. Highly scalable – we want to handle as many search requests as we receive at the performance level outlined above.
  3. Future proof – we don’t want to scrap the microservice and start over every time we need to change its behavior or add new functionality. We also want to be able to deploy it on different platforms and with different supporting tools as our needs in these areas change.

We were open to using any technology that helped us meet these goals, and researching what made sense turned into a project unto itself. Eventually we decided on gRPC, Kubernetes, Docker, and Istio. Initially we decided to host everything on Google Cloud Platform (GCP) using their versions of these tools (GKE and a beta version of Istio available within GKE) as well as gRPC for remote procedure calls, but porting to a different hosting platform should be relatively straightforward (requirement #3).

Decoupling from the Monolith

The first step in the process of decoupling from the monolith is identifying all of the places and ways the existing monolith incorporates search. This serves two purposes:

  1. Identifying all of the existing code that needs to be replaced with calls to the new microservice.
  2. Determining the specific search criteria that must be supported in the microservice interfaces. This becomes a minimum viable list of requirements; additional features and options may be supported as well if they seem useful (identifying some likely future needs can be part of future-proofing the microservice per requirement #3 above).

Once these requirements have been set, an interface allowing each required request can be designed and implemented. This interface is the public face of the microservice, defining and describing the functionality it will provide to both the legacy monolith and any other microservices added to the system.
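In our case that interface is defined with protocol buffers. A hedged sketch of what such a contract might look like (every message, field, and service name here is illustrative, not ZoomInfo's actual schema):

```protobuf
syntax = "proto3";

package search.v1;

// Illustrative contract only: fields and names are hypothetical.
message SearchRequest {
  string query = 1;
  repeated string filters = 2;   // e.g. industry, location
  int32 page_size = 3;
  string page_token = 4;
}

message SearchResult {
  string id = 1;
  double score = 2;
}

message SearchResponse {
  repeated SearchResult results = 1;
  string next_page_token = 2;
}

// The advertised service: callers depend only on this definition,
// never on the search engine behind it.
service SearchService {
  rpc Search(SearchRequest) returns (SearchResponse);
}
```

As long as this contract stays stable, the implementation behind it (Solr today, possibly something else tomorrow) can change freely.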

A remote call to the microservice (in our case, a remote procedure call carrying protobuf data) replaces each existing bit of search code within the monolith, with the options used in each call matched to the specifics of the search implemented by the removed code. Once this is done, no further changes are needed in the monolith unless we add new search options to existing calls or make new calls to the search microservice.
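The mechanics of that swap can be sketched in Python. Everything here is a stand-in (the client class, transport function, and message shape are hypothetical, not our generated gRPC stubs): the monolith's call site changes from a direct in-process call to a call on a remote client, and nothing else around it has to move.

```python
# Illustrative sketch of swapping an in-process search call for a remote one.
# RemoteSearchClient stands in for a generated gRPC stub; a real client would
# serialize the request to protobuf and send it over a gRPC channel.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SearchRequest:
    query: str
    max_results: int = 10


class RemoteSearchClient:
    """Thin client the monolith calls instead of the old search code."""

    def __init__(self, send: Callable[[Dict], Dict]):
        self._send = send  # transport; a real client would hold a gRPC channel

    def search(self, request: SearchRequest) -> List[str]:
        # Serialize to a wire format (a dict here; protobuf in practice).
        reply = self._send({"query": request.query, "max": request.max_results})
        return reply["hits"]


def fake_transport(payload: Dict) -> Dict:
    """Fake network + microservice, standing in for the remote side."""
    docs = ["Apache Solr", "Solr in Action", "Kubernetes"]
    hits = [d for d in docs if payload["query"].lower() in d.lower()]
    return {"hits": hits[: payload["max"]]}


client = RemoteSearchClient(send=fake_transport)

# Old call site:  results = solr_index.search("solr")   (direct, in-process)
# New call site:  one remote call, same options, same shape of result.
results = client.search(SearchRequest(query="solr"))
print(results)  # → ['Apache Solr', 'Solr in Action']
```

Because only the call site changes, each bit of search code in the monolith can be migrated and verified independently.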

Building the Microservice Framework

The microservice itself is a completely independent miniature application that services requests sent via the defined interface. Those requests can come from anywhere on the network; in the case of ZoomInfo they’ll come from the updated monolith application. It can and should use the technology, programming languages, data stores, and other components that make the most sense specifically for the microservice without regard to the choices made in the legacy system (or in other microservices if they exist).

The microservice is loaded and added to the system inside a container, called an instance, that houses the service and any necessary support components. Once determined, the exact configuration is stored and easily repeated in additional containers. To meet the performance and scalability requirements, multiple instances of the microservice may be needed depending on the load at any given time. This is where Istio and Kubernetes come in: Istio provides load balancing and network traffic routing as requests are received and responses sent, while Kubernetes orchestrates bringing up new Docker containers as load increases and tearing them down as they become superfluous.

Final Thoughts on Experimenting with Microservices

Deciding to use microservices can be intimidating, especially when you have a large base of existing legacy code inside a monolith structure. The easiest way to experiment with microservices is to take an entirely new piece of functionality and develop it as a microservice from scratch. However, it may be more beneficial to break off a single existing feature of a monolith and redeploy it as a microservice. ZoomInfo chose to do this with its search feature in order to reduce the cost of upgrading to new versions of Apache Solr and to better handle new feature development. It also wanted to improve scalability and make it easier to adjust to future changes to development processes, tooling, environments, and languages.

Having completed this initial introduction of microservices, ZoomInfo is now realizing the benefits. Over the past few months we have seen an accelerated development lifecycle for search features, including bug fixes being turned around in a matter of hours instead of days. There have been challenges too, but the benefits gained have significantly outweighed them (these benefits and challenges will be discussed in more detail in a future post). The bottom line: ZoomInfo will continue to pursue this strategy.