ReactRelayNetworkModern (for Relay Modern)

The `ReactRelayNetworkModern` is a Network Layer for Relay Modern with various middlewares which can manipulate requests/responses on the fly (change auth headers, change the request url, or perform a fallback if a request fails), batch several Relay requests by timeout into one HTTP request, cache queries and do server-side rendering.

The Network Layer for Relay Classic can be found here.

The migration guide from v1 to v2 can be found here.

`ReactRelayNetworkModern` can be used in the browser, in React Native, or on a Node server for rendering. Under the hood this module uses the global `fetch` method, so if your client is too old, please explicitly import a proper polyfill into your code (eg. `whatwg-fetch`, `node-fetch` or `fetch-everywhere`).
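For example, a minimal sketch of wiring up a polyfill (the package choice and setup below are your responsibility, not something this library does for you):

```js
// Browser bundle entry point: install a global `fetch` polyfill once,
// e.g. with `whatwg-fetch` (side-effect import).
import 'whatwg-fetch';

// Node/SSR: a common approach with node-fetch v2 is to expose it globally, e.g.
// global.fetch = require('node-fetch');
```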
Install

```sh
yarn add react-relay-network-modern
```

OR

```sh
npm install react-relay-network-modern --save
```
What if Webpack errors with `Error: Cannot find module 'core-js/modules/es6.*'`?

`core-js` is not an explicit dependency, as it adds 30kb to client bundles.

If this error occurs you can do one of the following:

- Explicitly install core-js v2, e.g. `yarn add core-js@2`
- Try referencing one of the non-default imports:

```js
import { RelayNetworkLayer } from 'react-relay-network-modern/node8';
import { RelayNetworkLayer } from 'react-relay-network-modern/es';
```
Different builds

This library contains different builds for different purposes:

```js
// Default import for use in modern browsers
// (last 5 versions and not dead, defaults)
import { RelayNetworkLayer } from 'react-relay-network-modern';

// For IE11
import { RelayNetworkLayer } from 'react-relay-network-modern/ie11';

// For SSR on Node 8 and above (native async/await)
import { RelayNetworkLayer } from 'react-relay-network-modern/node8';
```
Middlewares
Built-in middlewares
- your custom inline middleware - see the example below, where `credentials` and `headers` are added to the `fetch` method: `next => req => { /* your modification of 'req' object */ return next(req); }`
- urlMiddleware - for manipulating the fetch `url` on the fly via a thunk.
  - `url` - string for a single request. Can be a Promise or function(req). (default: `/graphql`)
  - `method` - string, request method type (default: `POST`)
  - `headers` - object with headers for fetch. Can be a Promise or function(req).
  - `credentials` - string, setting for the fetch method, eg. 'same-origin' (default: empty).
  - you may also provide the `mode`, `cache` and `redirect` options for the fetch method; for details see the fetch spec.
- cacheMiddleware - for caching identical queries. It skips (does not cache) mutations and FormData requests.
  - `size` - max number of requests in the cache, least-recently updated entries purged first (default: `100`)
  - `ttl` - number in milliseconds, how long records stay valid in the cache (default: `900000`, 15 minutes)
  - `onInit` - function(cache) which will be called once when the cache is created. As the first argument you receive the `QueryResponseCache` instance from `relay-runtime`.
  - `allowMutations` - allow caching of Mutation requests (default: `false`)
  - `allowFormData` - allow caching of FormData requests (default: `false`)
  - `clearOnMutation` - clear the cache on any Mutation (default: `false`)
  - `cacheErrors` - cache responses with errors (default: `false`)
  - `updateTTLOnGet` - refresh the cache ttl of a query on a successful cache get (default: `false`)
- authMiddleware - for adding an auth token, and refreshing it if the server returns a 401 response.
  - `token` - the token string. Can also be a function(req) or a Promise that returns the token. If a function is provided, it will be called for every request (so you may change tokens on the fly).
  - `tokenRefreshPromise` - function(req, res) which must return a promise or regular value with a new token. This function is called when the server returns a 401 status code. After receiving a new token, the middleware re-runs the query to the server seamlessly for Relay.
  - `allowEmptyToken` - allow making a request without the Authorization header if the token is empty (default: `false`)
  - `prefix` - prefix before the token (default: `'Bearer '`)
  - `header` - name of the HTTP header to pass the token in (default: `'Authorization'`)
  - If you use the `auth` middleware with `retry`, `retry` must be used before `auth`, so that if the token expires while retries apply, `retry` can call the `auth` middleware again.
- retryMiddleware - for retrying a request if the initial request fails.
  - `fetchTimeout` - number in milliseconds defining how long to wait for a request to time out after it has been sent to the server (default: `15000`). Or it may be a function `(attempt: number) => number` which returns a timeout in milliseconds (`attempt` starts from 0).
  - `retryDelays` - array of milliseconds defining the delays on which retries are based (default: `[1000, 3000]`). Or it may be a function `(attempt: number) => number | false` which returns a delay in milliseconds for the retry, or false to disable retrying (`attempt` starts from 0).
  - `statusCodes` - array of response status codes which will trigger retryMiddleware. Or it may be a function `(statusCode: number, req, res) => boolean` which triggers a retry if it returns true (default: `status < 200 or status > 300`).
  - `beforeRetry` - function(meta: { forceRetry: Function, abort: Function, delay: number, attempt: number, lastError: ?Error, req: RelayRequest }) called before every retry attempt. You get one argument with the following properties:
    - `forceRetry()` - proceed with the request immediately
    - `abort(abortMsg: string)` - abort the retry request (default abort message: `"Aborted in beforeRetry() callback"`)
    - `attempt` - number of the attempt (starts from 1)
    - `delay` - number of milliseconds until the next retry will be called
    - `lastError` - keeps the Error from the previous request
    - `req` - the retriable Request object
  - `allowMutations` - retries are disabled for mutations by default; you may allow retries for them by passing `true` (default: `false`)
  - `allowFormData` - retries are disabled for file uploads by default; you may enable them by passing `true` (default: `false`)
  - `forceRetry` - deprecated, use `beforeRetry` instead (default: `false`)
- batchMiddleware - gathers Relay requests for some period of time and sends them as one HTTP request. Your server must support batch requests and return results in the same order they were requested. See how to setup your server.
  - `batchUrl` - string, url of the server endpoint for batch request execution. Can be function(requestList) or a Promise. (default: `/graphql/batch`)
  - `batchTimeout` - integer in milliseconds, the period of time for gathering multiple requests before sending them to the server. It delays sending of the requests by the specified period of time, so be careful and keep this value small. (default: `0`)
  - `maxBatchSize` - integer, maximum size of a request to be sent in a single batch. Once a request hits the provided size, a new batch request is started. Relevant for the hardcoded 100kb-per-request limit in the express-graphql module. (default: `102400` characters, roughly 100kb for 1-byte characters or 200kb for 2-byte characters)
  - `allowMutations` - batching is disabled for mutations by default; you may enable it by passing `true` (default: `false`)
  - `method` - string, request method type (default: `POST`)
  - `headers` - object with headers for fetch. Can be a Promise or function(req).
  - `credentials` - string, setting for the fetch method, eg. 'same-origin' (default: empty).
  - you may also provide the `mode`, `cache` and `redirect` options for the fetch method; for details see the fetch spec.
  - alternatively, you can use legacyBatchMiddleware, which sends a request ID with each query and expects the GraphQL server to include the request ID with each result.
- loggerMiddleware - for logging requests and responses.
  - `logger` - log function (default: `console.log.bind(console, '[RELAY-NETWORK]')`)
- perfMiddleware - simple time measurement for network requests.
  - `logger` - log function (default: `console.log.bind(console, '[RELAY-NETWORK]')`)
- errorMiddleware - displays `errors` data from the graphql response in the console. If you want to see a stack trace for errors, you should provide `formatError` to `express-graphql` (see the example below, where `graphqlServer` accepts a `formatError` function).
  - `logger` - log function (default: `console.error.bind(console)`)
  - `prefix` - prefix message (default: `[RELAY-NETWORK] GRAPHQL SERVER ERROR:`)
- progressMiddleware - enables an onProgress callback for modern browsers that support the Stream API.
  - `onProgress` - progress callback function (`function(bytesCurrent: number, bytesTotal: number | null) => void`; the total size will be null if the size header is not set)
  - `sizeHeader` - response header with the total size of the response (default: `Content-Length`; useful when `Transfer-Encoding: chunked` is set)
- uploadMiddleware - extracts `File`, `Blob` and `ReactNativeFile` instances from query variables to be consumed with graphql-upload (see the sketch after this list).
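As a rough sketch of how uploadMiddleware is typically used: pass a `File`/`Blob` instance directly in the mutation variables and the middleware extracts it into a multipart request compatible with `graphql-upload` on the server. The mutation name and the `uploadAvatar(file: Upload!)` schema field below are illustrative assumptions, not part of this library:

```js
import { commitMutation, graphql } from 'react-relay';

// Hypothetical mutation: the `uploadAvatar` field and `Upload` scalar are
// assumptions about your own schema.
const mutation = graphql`
  mutation UploadAvatarMutation($file: Upload!) {
    uploadAvatar(file: $file) {
      url
    }
  }
`;

function uploadAvatar(environment, file /* a File or Blob instance */) {
  commitMutation(environment, {
    mutation,
    variables: { file }, // uploadMiddleware extracts File/Blob/ReactNativeFile from here
    onCompleted: (response) => console.log('uploaded', response),
    onError: (err) => console.error(err),
  });
}
```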
Standalone package middlewares

- react-relay-network-modern-ssr - client/server middleware for server-side rendering (SSR). On the server side it makes requests directly via `graphql-js` and your `schema`, caches payloads and serializes them for inclusion in the HTML. On the client side it loads the provided payloads and renders them in sync mode without visible flashes and loaders.
Example of injecting NetworkLayer with middlewares on the client side
```js
import { Environment, RecordSource, Store } from 'relay-runtime';
import {
  RelayNetworkLayer,
  urlMiddleware,
  batchMiddleware,
  // legacyBatchMiddleware,
  loggerMiddleware,
  errorMiddleware,
  perfMiddleware,
  retryMiddleware,
  authMiddleware,
  cacheMiddleware,
  progressMiddleware,
  uploadMiddleware,
} from 'react-relay-network-modern';

const network = new RelayNetworkLayer(
  [
    cacheMiddleware({
      size: 100, // max 100 requests
      ttl: 900000, // 15 minutes
    }),
    urlMiddleware({
      url: (req) => Promise.resolve('/graphql'),
    }),
    // Deprecated batch middleware
    // legacyBatchMiddleware({
    //   batchUrl: (requestMap) => Promise.resolve('/graphql/batch'),
    //   batchTimeout: 10,
    // }),
    batchMiddleware({
      batchUrl: (requestList) => Promise.resolve('/graphql/batch'),
      batchTimeout: 10,
    }),
    __DEV__ ? loggerMiddleware() : null,
    __DEV__ ? errorMiddleware() : null,
    __DEV__ ? perfMiddleware() : null,
    retryMiddleware({
      fetchTimeout: 15000,
      retryDelays: (attempt) => Math.pow(2, attempt + 4) * 100, // or a simple array [3200, 6400, 12800, 25600, 51200, 102400, 204800, 409600]
      beforeRetry: ({ forceRetry, abort, delay, attempt, lastError, req }) => {
        if (attempt > 10) abort();
        window.forceRelayRetry = forceRetry;
        console.log('call `forceRelayRetry()` for an immediate retry! Or wait ' + delay + ' ms.');
      },
      statusCodes: [500, 503, 504],
    }),
    authMiddleware({
      token: () => store.get('jwt'),
      tokenRefreshPromise: (req) => {
        console.log('[client.js] resolve token refresh', req);
        return fetch('/jwt/refresh')
          .then((res) => res.json())
          .then((json) => {
            const token = json.token;
            store.set('jwt', token);
            return token;
          })
          .catch((err) => console.log('[client.js] ERROR can not refresh token', err));
      },
    }),
    progressMiddleware({
      onProgress: (current, total) => {
        console.log('Downloaded: ' + current + ' B, total: ' + total + ' B');
      },
    }),
    uploadMiddleware(),

    // example of a custom inline middleware
    (next) => async (req) => {
      req.fetchOpts.method = 'GET'; // change the default POST request method to GET
      req.fetchOpts.headers['X-Request-ID'] = uuid.v4(); // add `X-Request-ID` to request headers
      req.fetchOpts.credentials = 'same-origin'; // allow sending cookies (credentials for the same domain)
      // req.fetchOpts.credentials = 'include'; // allow sending cookies for CORS (credentials for other domains)

      console.log('RelayRequest', req);
      const res = await next(req);
      console.log('RelayResponse', res);

      return res;
    },
  ],
  opts // as the second argument you may pass advanced options for RRNL
);

const source = new RecordSource();
const store = new Store(source);
const environment = new Environment({ network, store });
```
Advanced options (2nd argument after middlewares)
RelayNetworkLayer may accept additional options:
```js
const middlewares = []; // array of middlewares
const options = {}; // optional advanced options
const network = new RelayNetworkLayer(middlewares, options);
```
Available options:
- `subscribeFn` - if you use subscriptions in your app, you may provide this function, which will be passed to the Relay Network.
- `noThrow` - EXPERIMENTAL (may be deprecated in the future) - set to true to not throw when the server returns an error response, and instead handle errors in your app code.
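A minimal sketch of passing these options; `mySubscribeFn` is only a placeholder name for your own subscribe function, not something this library provides:

```js
const network = new RelayNetworkLayer(middlewares, {
  // handle GraphQL errors in your own code instead of having the layer throw
  noThrow: true,
  // placeholder: your own subscribe function, forwarded to the Relay Network
  // subscribeFn: mySubscribeFn,
});
```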
Server-side rendering (SSR)
See react-relay-network-modern-ssr for SSR middleware.
How middlewares work internally
Middlewares at the bottom layer use the fetch method, so `req` is compliant with `fetch()` options, and `res` can be obtained via `resPromise.then(res => ...)`, which is returned by `fetch()`.

A middleware that needs access to the raw response body from fetch (before it has been consumed) can set `isRawMiddleware = true` (see `progressMiddleware` for an example). It is important to note that `response.body` can only be consumed once, so make sure to `clone()` the response first.
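As an illustration, a raw middleware that inspects the fetch Response before it is parsed might look roughly like this. This is only a sketch based on the description above (it assumes `next(req)` resolves to the raw fetch Response for raw middlewares); the built-in `progressMiddleware` is the reference implementation:

```js
function responseSizeMiddleware() {
  const middleware = (next) => async (req) => {
    const res = await next(req); // raw fetch Response for raw middlewares
    // clone the response so its body can still be consumed by the network layer
    const text = await res.clone().text();
    console.log('[RELAY-NETWORK] raw response length:', text.length);
    return res;
  };
  // tell the network layer this middleware needs the raw fetch Response
  middleware.isRawMiddleware = true;
  return middleware;
}
```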
Middlewares have 3 phases:

- `setup phase`, which runs only once, when the middleware is added to the NetworkLayer
- `capturing phase`, when you may change the request object and pass it down via `next(req)`
- `bubbling phase`, when you may change the response promise, make a re-request or pass it up unchanged
Basic skeleton of a middleware:

```js
export default function skeletonMiddleware(opts = {}) {
  // [SETUP PHASE]: here you can process `opts`, when you create the middleware

  return (next) => async (req) => {
    // [CAPTURING PHASE]: here you can change the `req` object before it is passed to the following middlewares
    // ...some code which modifies `req`

    const res = await next(req); // pass the request to the following middleware and get a response promise from it

    // [BUBBLING PHASE]: here you may change the response of the underlying middlewares, via promise syntax
    // ...some code which processes `res`

    return res; // return the response to the upper middleware
  };
}
```
Middlewares use a LIFO (last in, first out) stack - or simply put, a `compose` function. So if you pass middlewares `[M1(opts), M2(opts)]` to the NetworkLayer, it will work this way:

- call the setup phase of `M1` with its opts
- call the setup phase of `M2` with its opts
- for each request:
  - call the capture phase of `M1`
  - call the capture phase of `M2`
  - call the `fetch` method
  - call the bubbling phase of `M2`
  - call the bubbling phase of `M1`
  - chain to `resPromise.then(res => res.json())` and pass this promise for resolving/rejecting Relay requests
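The ordering can be seen with two trivial logging middlewares built from the skeleton above:

```js
const logOrder = (name) => (next) => async (req) => {
  console.log(`${name}: capture`); // capture phases run top-down: M1, then M2
  const res = await next(req);
  console.log(`${name}: bubble`); // bubbling phases run bottom-up: M2, then M1
  return res;
};

// For a request through `new RelayNetworkLayer([logOrder('M1'), logOrder('M2')])`
// the console output is:
//   M1: capture
//   M2: capture
//   (fetch)
//   M2: bubble
//   M1: bubble
```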
Batching several requests into one
Joseph Savona wrote: "For legacy reasons, Relay splits 'plural' root queries into individual queries. In general we want to diff each root value separately, since different fields may be missing for different root values."

Also, if you use react-relay-router and have multiple root queries in one route pass, you may notice that the default network layer produces several HTTP requests.

So, to avoid multiple HTTP requests, the `ReactRelayNetworkModern` is the right way to combine them into a single HTTP request.
Example of how to enable batching

...on server

First, you should prepare the server to process batch requests:
```js
import express from 'express';
import graphqlHTTP from 'express-graphql';
import { graphqlBatchHTTPWrapper } from 'react-relay-network-modern';
import bodyParser from 'body-parser';
import myGraphqlSchema from './graphqlSchema';

const port = 3000;
const server = express();

// setup standard `graphqlHTTP` express-middleware
const graphqlServer = graphqlHTTP({
  schema: myGraphqlSchema,
  formatError: (error) => ({
    // better errors for development; `stack` is used by the client-side `errorMiddleware`
    message: error.message,
    stack: process.env.NODE_ENV === 'development' ? error.stack.split('\n') : null,
  }),
});

// declare route for batch queries
server.use('/graphql/batch', bodyParser.json(), graphqlBatchHTTPWrapper(graphqlServer));

// declare standard graphql route
server.use('/graphql', graphqlServer);

server.listen(port, () => {
  console.log(`The server is running at http://localhost:${port}/`);
});
```
See a more complex example of how you can use a single DataLoader for all (batched) queries within one HTTP request.

If you are on Koa@2, koa-graphql-batch provides the same functionality as `graphqlBatchHTTPWrapper` (see its docs for a usage example).
...on client
As soon as the server side is ready to accept batch queries, you may enable batching on the client:
```js
const network = new RelayNetworkLayer([
  // deprecated "legacy" batch middleware
  // legacyBatchMiddleware({
  //   batchUrl: '/graphql/batch', // <--- route for batch queries
  // }),
  batchMiddleware({
    batchUrl: '/graphql/batch', // <--- route for batch queries
  }),
]);
```
How batching works internally
Internally, batching in the NetworkLayer prepares a list of queries `[ { query, variables }, ... ]` and sends it to the server. The server returns a list of results `[ { data }, ... ]` and is expected to return the results in the same order as the requests.
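Conceptually, the request and response bodies for the order-based `batchMiddleware` look like this (a sketch of the shapes described above, with made-up queries, not an exact wire format):

```js
// what the client sends to `batchUrl` (order matters)
const batchRequestBody = [
  { query: 'query Q1 { viewer { id } }', variables: {} },
  { query: 'query Q2($id: ID!) { node(id: $id) { id } }', variables: { id: '42' } },
];

// what the server is expected to return: one result per query, in the same order
const batchResponseBody = [
  { data: { viewer: { id: 'viewer-1' } } },
  { data: { node: { id: '42' } } },
];
```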
As of v4.0.0, the batch middleware that utilizes request IDs in queries and corresponding results has been renamed to `legacyBatchMiddleware`. The legacy middleware included a request ID with each query in the batch and expected the server to return each result with the corresponding request ID. The new `batchMiddleware` simply expects results to be returned in the same order as the batched queries.

NOTE: `legacyBatchMiddleware` does not correctly deduplicate queries when batched, because query variables may be ignored in the comparison. This means that two identical queries with different variables will show the same results due to a bug (#31). It is highly encouraged to use the new order-based `batchMiddleware`, which still deduplicates queries but includes the variables in the comparison.
Contribute
I actively welcome pull requests with code and doc fixes. Also, if you made a great middleware and want to share it within this module, please feel free to open a PR.