slonik-dataloaders

slonik-dataloaders is a set of utilities for creating DataLoaders using Slonik. These DataLoaders abstract away some of the complexity of cursor-style pagination when working with a SQL database, while preserving the flexibility that comes with writing raw SQL statements.

createNodeByIdLoaderClass

Example usage:

import { createPool, sql } from "slonik";
import { createNodeByIdLoaderClass } from "slonik-dataloaders";

// User refers to a runtime schema (e.g. a zod object) describing a row of the user table
const UserByIdLoader = createNodeByIdLoaderClass({
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});

const pool = createPool("postgresql://");
const loader = new UserByIdLoader(pool);
const user = await loader.load(99);

By default, the loader will look for an integer column named id to use as the key. You can specify a different column to use like this:

const UserByIdLoader = createNodeByIdLoaderClass({
  column: {
    name: "unique_id",
    type: "text",
  },
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});

createConnectionLoaderClass

Example usage:

const UserConnectionLoader = createConnectionLoaderClass<User>({
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});

const pool = createPool("postgresql://");
const loader = new UserConnectionLoader(pool);
const connection = await loader.load({
  where: ({ firstName }) => sql.fragment`${firstName} = 'Susan'`,
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
});

When calling load, you can include where and orderBy expression factories that will be used to generate each respective clause. These factory functions allow for type-safe loader usage and abstract away the actual table alias used inside the generated SQL query. Note that the column names passed to each factory reflect the type provided when creating the loader class (i.e. User in the example above); however, each column name is transformed using columnNameTransformer as described below.

Usage example with forward pagination:

const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: first,
  cursor: after,
});

Usage example with backward pagination:

const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: last,
  cursor: before,
  reverse: true,
});
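
In both cases, the cursor would typically come from a previous page's pageInfo. As a minimal sketch of paging forward through results, assuming pageInfo exposes the standard Relay-style endCursor field:

const firstPage = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: 10,
});

// Feed the previous page's end cursor back in to fetch the next page.
const secondPage = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: 10,
  cursor: firstPage.pageInfo.endCursor,
});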

Conditionally fetching edges and count based on requested fields

In addition to the standard edges and pageInfo fields, each connection returned by the loader also includes a count field. This field reflects the total number of results that would be returned if no limit were applied. Fetching both the edges and the count requires two separate database queries; however, the loader can determine whether it needs to run only one or both of them by looking at the GraphQL fields that were actually requested. To do this, pass in the GraphQLResolveInfo parameter provided to every GraphQL resolver:

const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: first,
  cursor: after,
  info,
});
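
For context, here is a minimal sketch of how this might sit inside a GraphQL resolver; the Query.users field, its argument names, and the context.userConnectionLoader instance are assumptions for illustration, not part of the library's API:

const resolvers = {
  Query: {
    users: async (_parent, args, context, info) => {
      // context.userConnectionLoader is assumed to be a per-request
      // instance of UserConnectionLoader.
      return context.userConnectionLoader.load({
        orderBy: ({ firstName }) => [[firstName, "ASC"]],
        limit: args.first,
        cursor: args.after,
        info,
      });
    },
  },
};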

Working with edge fields

It's possible to request columns that will be exposed as fields on the edge type in your schema, as opposed to on the node type. These fields should be included in your query and the TypeScript type provided to the loader. The loader returns each row of the results as both the edge and the node, so all requested columns are available inside the resolvers for either type. Note: each requested column should be unique, so if there's a name conflict, you should use an appropriate alias. For example:

const UserConnectionLoader = createConnectionLoaderClass<
  User & { edgeCreatedAt: number }
>({
  query: sql.unsafe`
    SELECT
      user.id,
      user.name,
      user.created_at,
      friend.created_at edge_created_at
    FROM user
    INNER JOIN friend ON
      user.id = friend.user_id
  `,
});

In the example above, if the field on the Edge type in the schema is named createdAt, we just need to write a resolver for it and resolve the value to that of the edgeCreatedAt property.
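
A minimal sketch of such a resolver, assuming the edge type in the schema is named UserFriendEdge (a hypothetical name):

const resolvers = {
  UserFriendEdge: {
    // The loader returns the same row as both the edge and the node,
    // so the aliased column is available as the camelCased edgeCreatedAt property.
    createdAt: (edge) => edge.edgeCreatedAt,
  },
};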

columnNameTransformer

Both types of loaders also accept a columnNameTransformer option. By default, the transformer converts the field names you reference into snake_case column names. This default assumes two things: your database columns use conventional snake_case names, and the fields you reference when calling the loader (i.e. the property names on the TypeScript type you provide) are camelCased.

By using the columnNameTransformer, fields can be referenced by their names as they appear in the results when calling the loader, while still referencing the correct columns inside the query itself. If your usage doesn't meet the above two criteria, consider providing an alternative transformer, like an identity function.
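
As a minimal sketch, an identity transformer could be passed when creating a loader class; this assumes the option receives the referenced field name as a string and returns the column name to use:

const UserByIdLoader = createNodeByIdLoaderClass({
  // Identity transformer (assumed (fieldName: string) => string signature):
  // use field names exactly as written, with no snake_case conversion.
  columnNameTransformer: (fieldName) => fieldName,
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});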