Building Enterprise Grade APIs with GraphQL, MySQL and Node.js

by Thomas Moser

April 2019

There are a lot of getting-started tutorials for building GraphQL APIs, but little information on tackling more sophisticated problems in an enterprise environment.

Is GraphQL ready for production in big corporations?

Our API: Fast and stable

We have done it and we can wholeheartedly say: Yes, it’s ready.

We have had an enterprise-grade GraphQL API in production for six months. Our API is fast and flexible, and it reliably handles up to 100 requests per second. Our customer is happy and so are we.

In this series we will share the patterns we use to create an enterprise-grade GraphQL API and show the pros and cons of different approaches.

You are reading the first article in this series, which will focus on our architecture.

Our Goal: A Community API for User Generated Content

Our customer, the biggest retailer in Switzerland, wanted to replace its community software. Users can write reviews for products and recipes, ask and answer questions, discuss in a forum, share ideas for new products, and much more.

The goal: a single API for user-generated content. The API would be used by the publishing division as well as by several online shops, each with different requirements. GraphQL is a perfect fit for such a scenario.

Our Architecture

At the heart of our solution is a Node.js application powered by Apollo Server and Express. The content generated by our users is stored in a MySQL database. User and product data is fetched by our GraphQL API from pre-existing REST APIs.

From SDL to generating our GraphQL Schema

When creating a GraphQL API with Apollo Server, you rely heavily on graphql-tools. This means defining your field resolvers separately from the schema, which is usually written in SDL.

At first we used SDL to create our schema. Over time the schema kept growing and became more and more redundant. This led us to a more programmatic approach instead of the SDL-first one: a pattern that lets us avoid duplicate code and compose schemas.
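To illustrate the idea (not our actual implementation), here is a minimal sketch of composing a schema programmatically: shared field blocks are defined once and reused across types, so the redundancy that creeps into hand-written SDL disappears. All type and field names below are hypothetical.

```javascript
// Shared field blocks, defined once and reused across types.
const auditFields = `
  createdAt: String!
  updatedAt: String!
`;

const userContentFields = `
  id: ID!
  author: String!
  ${auditFields}
`;

// Tiny helper that builds an SDL object type from a name and field blocks.
function objectType(name, ...fieldBlocks) {
  return `type ${name} {\n${fieldBlocks.join('\n')}\n}`;
}

// Composing the schema: each type mixes in the shared blocks.
const typeDefs = [
  objectType('Review', userContentFields, '  rating: Int!'),
  objectType('Question', userContentFields, '  answers: [String!]!'),
].join('\n\n');

console.log(typeDefs);
```

In a real project the resulting `typeDefs` string would be handed to graphql-tools' `makeExecutableSchema` together with the resolver map.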

An Almighty Resolver to Get Data from MySQL

For query resolvers we created a generic GraphQL-to-SQL connector. In a pattern we call the Almighty Resolver, a single resolver delegates every query to this connector, which translates GraphQL queries into SQL queries.
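The pattern can be sketched like this. Note the simplifications: a real Apollo resolver receives a `GraphQLResolveInfo` object that has to be walked to extract the requested fields, and the connector would run the statement against MySQL via the context; both the `toSql` helper and the `requestedFields` shape below are illustrative assumptions, not the article's actual code.

```javascript
// Hypothetical connector: builds a parameterized SELECT from the
// requested fields and the filter arguments.
function toSql(table, fields, args) {
  const where = Object.keys(args);
  const sql =
    `SELECT ${fields.join(', ')} FROM ${table}` +
    (where.length ? ` WHERE ${where.map(k => `${k} = ?`).join(' AND ')}` : '');
  return { sql, params: where.map(k => args[k]) };
}

// One "almighty" resolver factory, reused for every query field:
// it only delegates to the connector.
function almightyResolver(table) {
  return (parent, args, context, info) =>
    toSql(table, info.requestedFields, args); // context.db.query(...) in real life
}

const resolveReviews = almightyResolver('reviews');
const query = resolveReviews(null, { productId: 42 }, {}, {
  requestedFields: ['id', 'rating', 'text'],
});
console.log(query.sql);    // SELECT id, rating, text FROM reviews WHERE productId = ?
console.log(query.params); // [ 42 ]
```

The payoff is that adding a new query usually means registering a table name, not writing a new resolver.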

Accessing Existing REST APIs

We consume some data from an existing REST API. Apollo Server provides a helpful class for that: RESTDataSource. With RESTDataSource it’s very simple to fetch data from a REST endpoint, including caching. Unfortunately, RESTDataSource doesn’t provide a cache delete function; that was the only thing we needed to extend for our case.
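The extension amounts to exposing a delete on the underlying key-value cache. The sketch below is self-contained: a stub base class with an in-memory Map stands in for `RESTDataSource` from apollo-datasource-rest and its configured cache, and the methods are synchronous here for brevity (the real ones are async). The class and path names are hypothetical.

```javascript
// Stand-in for RESTDataSource: caches GET responses in a Map.
class StubRestDataSource {
  constructor() {
    this.cache = new Map(); // stands in for the configured KeyValueCache
    this.fetches = 0;
  }
  get(path) {
    if (this.cache.has(path)) return this.cache.get(path);
    const response = { path, version: ++this.fetches }; // pretend HTTP GET
    this.cache.set(path, response);
    return response;
  }
}

// The extension: an explicit cache delete, so stale entries can be
// evicted when a mutation changes the underlying resource.
class ProductAPI extends StubRestDataSource {
  deleteFromCache(path) {
    this.cache.delete(path);
  }
}

const api = new ProductAPI();
const first = api.get('/products/1');
const cached = api.get('/products/1');    // served from cache, same object
api.deleteFromCache('/products/1');
const refetched = api.get('/products/1'); // cache miss, fetched again
```

A mutation resolver that updates a product would call `deleteFromCache` right after the write, so the next query sees fresh data.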

Caching GraphQL Queries with Redis

Caching a GraphQL API is much more complex than caching REST endpoints. Since caching adds a layer of complexity, we chose to cache data only for expensive queries. When a query matches one of our pre-defined queries, our API responds with cached data from Redis. The cache is invalidated when the underlying data is updated.

Message Broker for Delayed Actions

To keep our mutations fast and clean we delegate all non-time-critical actions (e.g. notification mails) to RabbitMQ, our message broker.
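The shape of that delegation can be sketched as follows. In production this would use amqplib's `channel.sendToQueue` with a Buffer payload against a real RabbitMQ broker; here a tiny in-memory queue stands in, and the queue and event names are hypothetical.

```javascript
// Stand-in for a RabbitMQ channel: the real code would call
// amqplib's channel.sendToQueue(queue, Buffer.from(JSON.stringify(msg))).
const queues = new Map();
const channel = {
  sendToQueue(queue, payload) {
    if (!queues.has(queue)) queues.set(queue, []);
    queues.get(queue).push(payload);
  },
};

// Mutation resolver sketch: persist the review, hand the
// non-time-critical work (notification mail) to the broker,
// and return immediately.
function createReview(args) {
  const review = { id: 1, ...args }; // pretend INSERT into MySQL
  channel.sendToQueue('notifications', {
    type: 'review.created',
    reviewId: review.id,
  });
  return review;
}

const review = createReview({ productId: 42, rating: 5 });
console.log(review); // the mutation returns without waiting for the mail
```

A separate worker process consumes the `notifications` queue and sends the mails, so a slow mail server never delays a mutation response.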

Conclusion

This is the makeup of our API, and we are very happy with the setup. We haven’t had any major incidents since launch, and we’re constantly adding more features.

In the next few months we will cover some of these topics in depth. Stay tuned and subscribe to our blog.

Build Enterprise Grade GraphQL Applications

Want to become a GraphQL pro? Follow us and read our whole series on enterprise grade GraphQL applications.

Need an expert team to implement your advanced web application? Check out our portfolio.
