Getting Started with GraphQL and Apollo (Part 1)

NerdWallet recently prioritized implementing GraphQL as a major engineering initiative, with the goals of standardizing our APIs, increasing development efficiency, and reducing code duplication. At NerdWallet, engineers building mobile and web applications rely on a number of shared services, ranging from a global authentication service to more product-specific services, such as one providing personalized mortgage rates. These services are maintained by dedicated teams across engineering. Historically, to create a consistent user experience, we have relied on bespoke SDKs tied to specific data sources that power functionality across all platforms. GraphQL centralizes these service integrations, and Apollo provides a complete ecosystem that supports our initiatives.

A main goal of GraphQL, and Apollo in particular, is to reduce the need for state management systems. That’s not to say they can’t work in unison, but as we’ll see later in Part 2, Apollo Client provides resources that can adequately replace a global store for managing server data. At NerdWallet, our React apps heavily leverage Redux, with selectors, reducers, and actions defined across many shared libraries. Each product team is responsible for updating and maintaining these libraries, so cross-product integrations can get overly complicated. For example, when the shape and location of bank data differs significantly from credit card data, it is difficult to provide a “one-size-fits-all” product review, and client-side data fetching and mutating is expensive. GraphQL helps break down these silos and enables a better cross-functional development experience.

About three months ago, we identified our online shopping and rewards platform codebase as the first candidate for implementing end-to-end GraphQL integrations. This work also involved phasing out Redux and replacing existing API integrations with GraphQL. Let’s dive into the lessons from this experience in the hope of improving your understanding of the Apollo platform. We’ll also look at some common patterns we’ve found useful and implementation details to consider.

Apollo Server

Our work started with Apollo Server, which serves as a layer between backend services and front-end applications. It applies a shared “language,” or protocol, for requesting and formatting data, and provides a variety of features such as caching, testing, authentication, and more. NerdWallet’s front-end rewards experience is built on a single backend service written in Python. This product was an ideal candidate for validating the technology because the codebase is relatively young. To understand the mechanics of Apollo Server, let’s start by creating a data source.

Data sources

The first step in implementing Apollo Server is to define data sources, which “are classes that encapsulate fetching data from a particular service, with built-in support for caching, deduplication, and error handling. You write the code that’s specific to interacting with your backend, and Apollo Server takes care of the rest” (per their docs).

The data source for this integration looks like this:
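The original code listing was not preserved, so here is a minimal sketch of what such a data source class might look like. The class and method names (`RewardsAPI`, `getOffers`, `activateOffer`), the base URL, and the endpoint paths are assumptions; in practice this class would extend `RESTDataSource` from `apollo-datasource-rest`, which provides `this.get`/`this.post` with caching and deduplication built in. A stubbed HTTP client is injected here so the example is self-contained.

```javascript
// Hypothetical sketch: the real class would extend RESTDataSource from
// apollo-datasource-rest. A minimal HTTP client is injected instead so
// this example runs on its own.
class RewardsAPI {
  constructor(httpClient, baseURL = "https://rewards.internal.example.com") {
    this.http = httpClient; // stand-in for RESTDataSource's request methods
    this.baseURL = baseURL;
  }

  // One atomic method per backend endpoint: fetch offers matching a search.
  async getOffers(params) {
    return this.http.get(`${this.baseURL}/v1/offers`, params);
  }

  // Activate a single offer by id.
  async activateOffer(offerId) {
    return this.http.post(`${this.baseURL}/v1/offers/${offerId}/activate`);
  }
}
```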

The RewardsAPI class encapsulates all requests made to our Python service endpoints. We provide simply named methods that map directly to the backend (more on this soon), making them easy to use from any number of query resolvers. These atomic methods are responsible for handling requests to a single endpoint.

For reference, the existing backend Python service exposes these two endpoints:
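The original listing of the routes was not preserved; hypothetically, an offers search endpoint and an offer activation endpoint along these lines would match the data source methods described here:

```
GET  /v1/offers                      (search offers, with optional query params)
POST /v1/offers/{offer_id}/activate  (activate a single offer)
```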

During our work with GraphQL, we decided that data source methods should strictly mirror the backend without any additional operations. This maintains the purity of the endpoints while allowing flexibility, because query and mutation resolvers can use as many endpoints as needed depending on the complexity of the data requested. The big advantage here is that a query resolver can wrap requests to multiple services, allowing the client to retrieve all the data it needs without querying each service individually and aggregating the results client-side. Let’s take a look at an example query resolver to better understand this concept.

Query resolvers

Query resolvers retrieve data by encapsulating API requests through the aforementioned data source methods, then shaping the response data as defined by the schema resolvers. A top-level schema is often made up of nested lower-level schema resolvers, reflecting the depth of real-world data.

Here is the resolver that leverages the getOffers data source method defined above:
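The original resolver listing was not preserved; a minimal sketch might look like the following. The `dataSources` context wiring follows Apollo Server’s resolver conventions, while the exact argument names and the `rewardsAPI` key are assumptions.

```javascript
// Hypothetical sketch of the offers query resolver. Arguments are passed
// straight through to the data source, and the response's data property
// is returned for the schema resolvers to shape.
const resolvers = {
  Query: {
    offers: async (_parent, args, { dataSources }) => {
      const response = await dataSources.rewardsAPI.getOffers(args);
      return response.data; // shaped by the OffersSearchResult schema type
    },
  },
};
```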

The offers resolver passes its arguments directly to the data source method making the request. We then parse the response and retrieve its data property, which is then shaped by our OffersSearchResult schema. Schema resolvers are where the magic happens. Let’s take a look at this query’s schema.

Schema definitions

The schema of the query result is defined as follows, where OffersSearchResult is a schema type:
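The original SDL was not preserved; reconstructed from the surrounding text, it might look something like this (the `searchTerm` argument and exact field names are assumptions):

```graphql
# Hypothetical SDL; argument and scalar choices are assumptions.
type Query {
  offers(searchTerm: String): OffersSearchResult
}

type OffersSearchResult {
  countTotalMatched: Int
  results: [Offer]
}
```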

Our resolvers object has a schema resolver named OffersSearchResult.
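A minimal sketch of that type resolver, assuming the backend returns snake_case fields as described below, might look like:

```javascript
// Hypothetical sketch: the OffersSearchResult type resolver maps the
// Python service's snake_case response body onto the schema's fields.
const resolvers = {
  OffersSearchResult: {
    countTotalMatched: (body) => body.count_total_matched,
    results: (body) => body.results, // each item is resolved by the Offer type
  },
};
```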

This resolver parses the response body for the count_total_matched and results fields. These fields are mapped to other schema objects, as shown below. Note: it may be safe to ignore some response properties if they are not relevant to your use cases.

The OffersSearchResult schema turns our search result objects into the response objects the client expects. This top-level schema is made up of additional schema types with their own resolvers. The complexity of a top-level query response schema increases with the depth of the data. Ultimately, the whole response schema bottoms out in properties that correspond to the primitive scalars Int, Float, String, Boolean, and ID, the last of which is used as a cache key.

In the example below, the results property is defined as an array of a lower-level schema, Offer. The Offer schema is in turn made up of additional schema types; for example, actions is defined as a RewardsAction schema, and so on. If you have services that return similarly structured data, your schemas are reusable, and queries can be customized for special use cases that take advantage of modular schema definitions. For example, if we wanted to promote the top three pizza and pasta deals as a featured section, we could write a query that reuses our existing pieces to do so. The query resolver can filter all the results down to these special offers without any client-side logic, and the data is formatted according to the existing schema. In addition, since we have defined generic data source functions, we can reuse the getOffers method to accomplish what we need!
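The original query listing was not preserved; a hypothetical query consistent with the fields discussed in this section might look like:

```graphql
# Hypothetical query; field names mirror the schema sketched in the text.
query GetOffers {
  offers(searchTerm: "pizza") {
    countTotalMatched
    results {
      id
      actions {
        amount
        currency
      }
    }
  }
}
```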

This object represents the offers query with optional query arguments and the requested response fields. In this example, the nested properties accurately represent the structure of the response data provided by the underlying data source request. For example, actions is a list of objects with amount, currency, and other properties as defined in RewardsAction.

The beauty of GraphQL is the ability of clients to specify the particular fields they need; nothing more, nothing less. Developers can proactively reduce their application’s CPU and memory footprint and optimize for slower internet speeds. Likewise, many of the fields listed above may not be relevant for a given feature, so the client can ask for only the essentials. Fields like networkRank, language, brandAssets.mimeType, and brandAssets.file go unused in our production application. They are still supported by our API and in GraphQL, though, so if other applications ever need this data, it is easily retrievable.

Finally, it is important to discuss mutation resolvers, since GraphQL supports standard CRUD operations.

Mutation resolvers

Adding a mutation resolver is similar to adding a query resolver; its structure is almost identical. In this case, defining the shape of the offer ID is important.
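The original mutation resolver listing was not preserved; a minimal sketch mirroring the query resolver’s structure might look like the following. The `rewardsAPI` key and the `"activated"` response string are assumptions.

```javascript
// Hypothetical sketch of the activateOffer mutation resolver, structured
// like the offers query resolver above.
const resolvers = {
  Mutation: {
    activateOffer: async (_parent, { offerId }, { dataSources }) => {
      await dataSources.rewardsAPI.activateOffer(offerId);
      return "activated"; // the schema expects a String back
    },
  },
};
```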

Schema definitions
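The original SDL was not preserved; based on the description below, the mutation definition might look like:

```graphql
# Hypothetical SDL; the mutation's name, argument, and return type follow the text.
type Mutation {
  activateOffer(offerId: Int!): String
}
```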

The activateOffer mutation requires (as indicated by the !) an offerId, which is an Int. We expect a String as a response. Keep in mind that an update or create mutation may want to return the mutated object instead; similar to a query return type, this can be a custom schema.

Following these development techniques on Apollo Server will increase your capabilities when it comes to working with Apollo Client. Separating your data source classes, clearly defining your queries and mutations, and writing well-documented schemas will produce a documented, extensible, and cross-functional API that can be leveraged by teams in your engineering organization. In Part 2, I go over the details of working with Apollo Client and how the orchestration between these two reduces your reliance on state management systems and generates well-architected, maintainable, and modular client applications.


About Dwaine Pinson
