
calvium-node-api-lib v0.4.1: Utilities for Node APIs running on Azure

calvium-node-api-lib

This repo contains common code for API servers that we write:

  • in node
  • using express
  • providing JSON APIs
  • running on Microsoft Azure

Historically, this code has all come from projects in which we provided the entire vertical: API server, web app and mobile app.

This is not a framework; it's just a library. Users opt into features individually by calling them.

Tests

This library has two sets of tests:

  • Unit tests. Invoke them by running npm install and then npm run test.
  • Integration tests. They have some dependencies.
    • You need Docker. This is tested using Docker for Mac.
    • You need to make sure no other programs or Docker containers are listening on TCP port 1433.
    • Once you've sorted that out, invoke them by running npm run test-integration.
    • They're likely to be a bit flaky. Sorry.
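
In other words, the two suites boil down to:

# Unit tests
npm install
npm run test

# Integration tests (Docker required; nothing else may be listening on TCP port 1433)
npm run test-integration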

Debugging simpleQuery

You can set the following environment variables to 'y' to get output for debugging:

  • SIMPLEQUERY_PRINT_QUERIES to print every SQL query you execute.
  • SIMPLEQUERY_PRINT_QUERY_ON_ERROR to print SQL queries that fail.
  • SIMPLEQUERY_PRINT_TIMINGS to print how long each query took.
  • SIMPLEQUERY_PRINT_TIMINGS_PARAMS to print parameter names with the times (makes it easier to see which timing is for which SQL query).
  • SIMPLEQUERY_DEBUG to turn all of the above on at once.

I would recommend setting SIMPLEQUERY_PRINT_QUERY_ON_ERROR, SIMPLEQUERY_PRINT_TIMINGS and SIMPLEQUERY_PRINT_TIMINGS_PARAMS for development, then turning on SIMPLEQUERY_PRINT_QUERIES when you're having specific problems with dynamically generated queries returning wrong results.

See the getDBConfigSync() bit in examples/db.example.js for how you could pass these in as environment variables.
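
For example, assuming your database config passes these through from process.env as in examples/db.example.js, a development shell might look like:

# Recommended day-to-day debugging settings
export SIMPLEQUERY_PRINT_QUERY_ON_ERROR=y
export SIMPLEQUERY_PRINT_TIMINGS=y
export SIMPLEQUERY_PRINT_TIMINGS_PARAMS=y

# When a dynamically generated query is returning wrong results, print everything:
export SIMPLEQUERY_PRINT_QUERIES=y
# ...or switch all of the above on at once:
# export SIMPLEQUERY_DEBUG=y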

start-local-db

This library provides a binary called start-local-db. It starts up a local instance of SQL Server and then loads your SQL schema into it.

If you are using a Mac with an "Apple Silicon" CPU, see the section "start-local-db on macOS with Apple Silicon" below.

Initial setup:

To use it in your project:

  • Create a directory sql/
  • Create a shell script sql/startLocalClean.sh with contents like the following:
#!/bin/sh
set -e
cd "$(dirname "$0")"
# Name for the Docker container that will run the local SQL Server
export CONTAINER_NAME="mssql-FILL_IN_PROJECT_NAME-dev"
# Data variant to load when no argument is given (see "Variants" below)
export SQL_DATA_VARIANT="${1:-main}"
export AZ_MSSQL_PASSWORD="FILL_IN_DEFAULT_PASSWORD"
export AZ_MSSQL_DATABASE="FILL_IN_DATABASE_NAME"
# uses default 1433 if not defined
export AZ_MSSQL_PORT="FILL_IN_DB_PORT"

../node_modules/.bin/start-local-db "$@"
  • Fill in the variables in that script.
  • Put the initial schema in sql/original.sql
  • For each migration, create a file in sql/migrations with a filename like sql/migrations/0001-describe-change-here.sql
  • Put "start-local-db": "sh sql/startLocalDB.sh" in the scripts section of your package.json.

Running it:

  • Run npm run start-local-db to start the DB server.
  • The script will start a local database, load original.sql, then load each of the migrations in ascending order.
  • If you're using multiple variants, run npm run start-local-db -- variant-name instead. (See below.)

Data files:

You will probably want to load test data separately from the migrations.

The start-local-db script will load data files from several different subdirectories of sql/data.

The data files must be numbered, with numbers that match a migration; each data file is run just after the migration with the same number.

e.g. You might put CREATE TABLE unicorn (id bigint IDENTITY PRIMARY KEY, name nvarchar(64) NOT NULL); in sql/migrations/0005-add-unicorns.sql, and then INSERT INTO unicorn (name) VALUES (N'Sammy the Iridescent Unicorn King'); in sql/data/test/common/0005-add-unicorn-sammy.sql. Because the data file is run just after the migration, the table already exists when the insert runs.

Files with the following patterns will be run:

  • sql/data/real/common/*.sql
  • sql/data/test/common/*.sql
  • sql/data/real/${variant-name}/*.sql
  • sql/data/test/${variant-name}/*.sql

Variants:

The variant-name is the first argument to start-local-db. If you don't supply one, it defaults to the value you set in SQL_DATA_VARIANT above. The idea is that you might keep multiple variations of your test data.

For example, I might have two different test data files (say sql/data/test/unicorns/0005-add-unicorn-sammy.sql and sql/data/test/goblins/0005-add-goblin-king-greg.sql). When I run npm run start-local-db -- unicorns, the files in sql/data/test/unicorns/*.sql will be run and the ones in sql/data/test/goblins/*.sql will be ignored. The files in sql/data/test/common/*.sql will always be run, regardless of which variant is selected.
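
Putting that together, a sketch of a variant run, using the illustrative file names from above:

# Data layout (illustrative):
#   sql/data/test/common/*.sql                           loaded for every variant
#   sql/data/test/unicorns/0005-add-unicorn-sammy.sql    loaded only for the "unicorns" variant
#   sql/data/test/goblins/0005-add-goblin-king-greg.sql  loaded only for the "goblins" variant

# Start the local DB with the "unicorns" test data
npm run start-local-db -- unicorns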

Cached snapshots:

start-local-db may take snapshots of the SQL Server instance partway through in order to save time on later runs. The snapshots show up in the output of docker images, and it's harmless to delete them. They are named from the hashes of everything that went into them (a bit like how git SHAs depend on the contents of the files in each commit), so the script won't reuse a snapshot once the source SQL files have changed.

You can disable this caching by running start-local-db like env START_LOCAL_DB_IGNORE_CACHE=y npm run start-local-db; this prevents existing snapshots from being used and new snapshots from being taken. You hopefully shouldn't ever need to do this, though, since existing snapshots are never reused once the SQL used to create them has changed.
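
If you do want to inspect or clear the cache, the ordinary Docker image commands apply; the image name below is just a stand-in for whatever the script generates:

# The snapshots appear alongside your other images
docker images

# Deleting one is harmless; it will simply be rebuilt on the next run
docker rmi <snapshot-image>

# Run once without reading or writing any snapshots
env START_LOCAL_DB_IGNORE_CACHE=y npm run start-local-db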

Post-setup SQL code:

After running all of the other SQL files (the initial schema, the migrations and the data files), start-local-db will run one last file, if it exists:

  • sql/postStartLocalDB.sql

If your example data files create entities in the database that expire over time, use sql/postStartLocalDB.sql to reset all the timers on them.

start-local-db on macOS with Apple Silicon

  • If your computer has an Intel or AMD CPU, you can just install Docker normally. Skip this section.

  • If your computer has an "Apple Silicon" CPU and is running on macOS Ventura (macOS 13) or later, you can just install Docker normally, then

    • In Settings > General, choose VirtioFS as the file sharing implementation
    • In Settings > Features in development > Beta features, enable "Use Rosetta for x86/amd64 emulation on Apple Silicon"
  • If you have an Apple Silicon Mac with an older macOS version, you need an amd64 virtual machine to run Docker; follow the steps below.

There's an incompatibility between current versions of MS SQL Server and Docker on "Apple Silicon" CPUs. This leads to SQL Server crashing with an error like:

Invalid mapping of address 0x40080f9000 in reserved address space below 0x400000000000

To fix this, use an amd64 virtual machine.

  • Install UTM:
    • You should be able to install it with brew, by running: brew install utm
    • (If the brew version doesn't work for you, instead install it by downloading the dmg from https://mac.getutm.app/ and dragging the UTM app to the Applications folder.)
  • Create an amd64 virtual machine with docker in it, listening on port 12376
  • Port forward ports 1433 and 12376 to the virtual machine
  • See instructions below for creating the VM image
  • (Start the VM by double clicking it in UTM. When it reaches the login prompt, Docker should be running.)
  • Tell docker to use the virtual machine to create containers:
  • docker context create slowvm --docker 'host=tcp://127.0.0.1:12376'
  • docker context use slowvm
  • Now if you run docker images, you should see output like the following:
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
hello-world   latest    feb5d9fea6a5   9 months ago   13.3kB
  • Now you should be able to use start-local-db.
  • Run npm run test-integration to verify that everything is working.

Creating the amd64 virtual machine

The following instructions were based on https://kitloong.medium.com/how-to-run-sql-server-2019-on-macbook-pro-m1-d1448525f805

  • install UTM
  • create a new "slow" emulated amd64 virtual machine
  • set the VM to have 2GB of RAM, 2 CPUs and 60GB of disk space
  • use debian-11.3.0-amd64-netinst.iso to install Debian 11
  • (say no to most stuff in the installer, use guided partitioning for the entire disk)
  • inside the Debian VM, run the following as root:
# Install Docker Engine from Docker's apt repository (their standard Debian steps)
apt-get update
apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
# Smoke test: run a throwaway container
docker run -t -i --rm hello-world
  • Edit /etc/systemd/system/sockets.target.wants/docker.socket
  • Comment out the existing lines under the [Socket] section by putting '#' in front of each
  • Add the line ListenStream=2376 so the Docker daemon listens on TCP port 2376
  • Run /usr/sbin/shutdown -h now to switch the VM off
  • Add port forwards:
    • 12322 -> 22, for ssh
    • 12376 -> 2376, for the docker daemon
    • 1433 -> 1433, for the MS SQL server
  • Export the VM at this point
  • Now the VM works as a docker context from outside; a quick check is sketched below.
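
With the VM exported and running, a quick check from the host might look like this; the ssh user is whatever account you created in the Debian installer, and slowvm is the context created in the previous section:

# Shell access to the VM through the forwarded ssh port
ssh -p 12322 your-debian-user@127.0.0.1

# Containers now run inside the VM
docker context use slowvm
docker run --rm hello-world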