
@ndlib/marble-models

v1.0.17


Downloads

277

Readme

Overview

This repo contains the various components of the backend infrastructure for the Marble project.

In this repo, you will find code for:

  • Our database models, located in the prisma subdirectory. These models are specified using the Prisma ORM.
  • Migrations for our database, located in the prisma/migrations subdirectory. These migrations are SQL migrations needed to set up and modify a database according to our schema.
  • Scripts for migrating data into a database from the legacy DynamoDB database, located in the src/scripts subdirectory.
  • All the necessary code to run our GraphQL API, located in the src/ subdirectory.

Getting Started

Postgres Setup

To get started with this project, you will need access to a PostgreSQL database. You can set up a local database using Homebrew, following the instructions here.

Access to your Postgres database is managed by setting the PG_DATABASE_URL environment variable. This variable should be formatted as follows:

postgres://<username>:<password>@<host>:<port>/<database>

For example, on your local machine, this might look like:

postgres://mynetid:[email protected]:5432/mynetid
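As a sketch, the connection string can be assembled from its parts (the helper name and values here are hypothetical, for illustration only):

```javascript
// Build a Postgres connection URL from its components.
// Hypothetical helper; the values are placeholders, not real credentials.
function buildPgUrl({ username, password, host, port, database }) {
  return `postgres://${username}:${password}@${host}:${port}/${database}`;
}

const url = buildPgUrl({
  username: "mynetid",
  password: "secret",
  host: "localhost",
  port: 5432,
  database: "mynetid",
});
// → "postgres://mynetid:secret@localhost:5432/mynetid"
```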

You can also run Postgres using Docker if you have Docker installed.

Start Postgres in Docker

docker-compose up -d

Stop Postgres in Docker

  1. find your container id: docker ps
  2. docker stop CONTAINER_ID

Once you have set up your database, you can run the following commands to get started:

npm install
npx prisma migrate dev

DynamoDB Setup

Since our GraphQL API also uses the legacy DynamoDB database for some operations, you will need to set up your environment with permissions to access the database you want to use.

AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""
AWS_SESSION_TOKEN=""

These values can be found by following the Access keys link on your AWS IAM login page. Note that these values are regenerated every day, so you will need to copy them into your .env file daily.

You will also need to set the name of the DynamoDB table you want to use in your environment.

DYNAMO_TABLE="..."
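A small sketch of how a startup check for these variables might look (this helper is hypothetical, not part of this repo):

```javascript
// Required environment variables for the DynamoDB connection.
const REQUIRED_VARS = [
  "AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY",
  "AWS_SESSION_TOKEN",
  "DYNAMO_TABLE",
];

// Return the names of any required variables missing from `env`.
// Hypothetical helper for illustration.
function missingVars(env) {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

const missing = missingVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```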

Running the API

To run the API, use the following command:

yarn start

This will start the API on port 4000. You can access the GraphQL playground by visiting http://localhost:4000 in your browser.

GraphQL

GraphQL is a query language for APIs that allows clients to request exactly the data they need. This makes it a powerful tool for optimizing data fetching and reducing the number of round trips to the server.

In order to implement a graphql API, you need two things:

  • A schema that defines the types and queries that are available
  • Resolvers that implement the logic used to fetch the data for a given query or type

In this project, our schema is defined in the src/schema.graphql file and our resolvers are defined in the src/resolvers directory. We use the apollo-server library to run our GraphQL server.

Resolvers

Resolvers are functions that implement the logic for fetching the data.

GraphQL fulfills a query by cascading sequentially through the structure of the query and calling the appropriate resolver for each field, as defined on the parent type. The result of each field is then passed to the resolvers for that field's type in order to resolve the field's children.

Example

It is easiest to explain this with a simple example. Consider the following GraphQL schema, resolvers, and query:

Schema:

type Query {
  user(id: ID!): User
}

type User {
  id: ID!
  name: String!
  posts: [Post!]!
}

type Post {
  title: String!
  author: String!
}

Resolvers:

const resolvers = {
  Query: {
    user: (parent, args, context, info) => {
      return { id: args.id, name: "Alice" };
    }
  },
  User: {
    posts: (parent, args, context, info) => {
      return [{ title: "My first post", author: parent.name }];
    }
  }
}

Query:

{
  user(id: "1") {
    id
    name
    posts {
      title
      author
    }
  }
}

Execution

The first level of fields in a GraphQL query must be defined as fields on the Query type or the Mutation type. In this case, the user field is defined on the Query type.

When the query is executed, the first operation to run will be the resolver for the user field on the Query type. This resolver is a function specified at the Query.user property path in our resolvers object.

Because Query.user maps to the User type, after the Query.user resolver function runs, its return value will then be passed to the resolvers for the User type.

Notice that some of the fields on the User type are scalar types (ID and String in this case). For these fields, the value of the field will resolve to the value on the parent object.
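This default behavior can be pictured as a trivial resolver that reads the field off the parent object (a sketch modeled on the default field resolver used by GraphQL servers):

```javascript
// Default resolver sketch: when no explicit resolver exists for a field,
// the field's value is read directly from the parent object.
function defaultFieldResolver(parent, args, context, info) {
  return parent[info.fieldName];
}

const user = { id: "1", name: "Alice" };
defaultFieldResolver(user, {}, {}, { fieldName: "name" }); // → "Alice"
```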

However, the posts field on the User type is a list of Post objects, i.e. not a scalar.

In this case, the resolver for the User.posts field will be called. When this function is called, it will receive the value of Query.user as its first argument (parent).

This resolver will return an array of Post objects.

The last step of the execution is to resolve the fields on the Post type. Since these are all scalar types, each field simply resolves to the corresponding property on the parent object.

After the final layer resolves, the values are passed back up following the structure of the query:

{
  user: { // executes Query.user resolver
    id: "1", // resolves to the id property of the Query.user result
    name: "Alice", // resolves to the name property of the Query.user result
    posts: [{ // executes User.posts resolver
      title: "My first post", // resolves to the title property of the User.posts result
      author: "Alice" // resolves to the author property of the User.posts result
    }]
  }
}
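The cascade above can be simulated in plain JavaScript, without a GraphQL runtime, by calling the resolvers in order and reading scalar fields off each parent. This is a simplified sketch; a real executor also walks the query document to decide which fields to resolve:

```javascript
const resolvers = {
  Query: {
    user: (parent, args) => ({ id: args.id, name: "Alice" }),
  },
  User: {
    posts: (parent) => [{ title: "My first post", author: parent.name }],
  },
};

// 1. Resolve Query.user; its return value becomes the parent
//    for resolving the fields of the User type.
const user = resolvers.Query.user(null, { id: "1" });

// 2. Scalars (id, name) resolve to properties on the parent;
//    posts has an explicit resolver, called with `user` as parent.
const result = {
  user: {
    id: user.id,
    name: user.name,
    posts: resolvers.User.posts(user),
  },
};
// result.user.posts[0].author === "Alice"
```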