
backbone-postgresql v0.0.1

A storage adapter for PostgreSQL when running Backbone.js on the server


backbone-postgresql


This is a sync library for server-side Backbone.js running on node.js that enables it to store data in PostgreSQL. It takes a hybrid approach to the schema: you can mix old-school columns, with all of their associated relational integrity, with an hstore column that lets you store arbitrary attributes with no schema changes.
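The hybrid idea can be illustrated with a small sketch (hypothetical, not the library's actual code): given the table's real column names, a model's attributes are split between proper columns and the hstore column.

```javascript
// Sketch: real columns keep their own values, everything else is
// bundled into the hstore column. `columnNames` is an assumption here;
// the real library inspects the table schema itself.
function partitionAttributes(attributes, columnNames) {
  var columns = {};
  var hstore = {};
  Object.keys(attributes).forEach(function (key) {
    if (columnNames.indexOf(key) !== -1) {
      columns[key] = attributes[key];
    } else {
      hstore[key] = attributes[key];
    }
  });
  return { columns: columns, hstore: hstore };
}

// With the badgers schema below, `name` is a real column and `age` is not:
var split = partitionAttributes({ name: 'Bodger', age: 2 }, ['id', 'name']);
// split.columns -> { name: 'Bodger' }
// split.hstore  -> { age: 2 }
```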

See the PostgreSQL hstore documentation for more details. To set up your database with the hstore extension, log into postgres as an admin, connect to the database you want to add it to, and run "CREATE EXTENSION hstore". It should then be ready for you to use.
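hstore stores every key and value as text, in a `"key"=>"value"` literal format. As a rough sketch of what serialising an attributes object into that format looks like (hypothetical code, not the library's actual serialiser):

```javascript
// Sketch: turn a plain object into an hstore literal.
// Keys and values are quoted, and embedded quotes are backslash-escaped.
function toHstore(obj) {
  return Object.keys(obj).map(function (key) {
    var value = String(obj[key]).replace(/"/g, '\\"');
    return '"' + key.replace(/"/g, '\\"') + '"=>"' + value + '"';
  }).join(', ');
}

// toHstore({ age: 2, weight: 12 }) produces: "age"=>"2", "weight"=>"12"
```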

The library uses all of the standard Backbone methods for syncing objects and works with both models and collections. Here's an example; it assumes you've created a database, enabled the hstore extension on it, and created the following schema:

CREATE TABLE badgers (
  id SERIAL,
  name VARCHAR(128),
  attributes hstore
);

These examples won't really run as one script, because each call would have to be nested inside the previous one's callback to guarantee ordering, but this way is a lot easier to read.

// Require the library as Backbone so that it can extend its prototype
Backbone = require('backbone-postgresql');

// Tell the Postgres connector how to connect to the database
Backbone.pg_connector.config = {db: 'pg://username:password@localhost/backbone-pg-demo'};

// Define our models and collections. We're using the urlRoot parameter to define the table name
var Badger = Backbone.Model.extend({
  urlRoot: '/badgers'
});

var BadgerCollection = Backbone.Collection.extend({
  urlRoot: '/badgers',
  model: Badger
});

// Let's create a new badger
var badger = new Badger({name: 'Bodger', age: 2});
// Here we use the event emitter approach, so bind the listeners before saving
badger.on('sync', function(model){
  console.log('Badger saved: ' + model.get('name'));
});
badger.on('error', function(model, err){
  console.log('Error saving badger');
});
// and save it - with this schema the age attribute will be stored in hstore, whilst the name attribute will go in a proper column
badger.save();

// then we could change some attributes - obviously we should really put this in the success callback, but to keep the examples simple I haven't
badger.set('age', 3);
// then let's save it again, but this time using callbacks passed as an argument
// the first argument is null because you can pass in new attributes to save. Always catches me out.
badger.save(null, {
  success: function(model){
    console.log('Badger updated: ' + model.get('name'));
  },
  error: function(model, err){
    console.log('Error updating badger');
  }
});

// Let's say that the id of a badger is 345 - here is how we retrieve it from the database:
var another_badger = new Badger({id: 345});
another_badger.fetch({
  success: function(model){
    console.log("Successfully retrieved badger: " + model.attributes.name + " (Age: " + model.attributes.age + ")");
  },
  error: function(model, err){
    console.log(err instanceof Error);
    console.log(err.message); // "Not found"
    console.log('Error retrieving badger');
  }
});

// If it can't be found, the error callback will be fired and the "err" parameter will return a "Not found" error

// If you want to put some constraints on what you're fetching (e.g. if you want to fetch a resource, but wanted to make sure it was owned by the correct person)
// you can add a filter parameter like so:

another_badger.fetch({
  filter: {owner_id: 123},
  success: ...,
  error: ...
});

// The filter parameter can be an object, in which case each of the key:value pairs are turned into where constraints ANDed together, e.g.
// {owner_id: 123, name: 'bob'}
// will get turned into a snippet of SQL like:
// WHERE owner_id = 123 AND name = 'bob'

// If you want to apply more complex conditions, you can pass an array of SQL fragments, which will also be ANDed together, e.g.
// ['owner_id = 123', "name = 'bob'"]
// produces the same clause:
// WHERE owner_id = 123 AND name = 'bob'
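As a sketch of how the two filter styles reduce to a WHERE clause (hypothetical code; the real library builds its SQL internally, with proper parameter escaping):

```javascript
// Sketch: reduce either filter style to a WHERE clause.
function filterToWhere(filter) {
  var conditions;
  if (Array.isArray(filter)) {
    // Array form: each entry is already a SQL condition
    conditions = filter;
  } else {
    // Object form: each key/value pair becomes an equality constraint
    conditions = Object.keys(filter).map(function (key) {
      var value = filter[key];
      return typeof value === 'number'
        ? key + ' = ' + value
        : key + " = '" + value + "'";
    });
  }
  return 'WHERE ' + conditions.join(' AND ');
}

filterToWhere({ owner_id: 123, name: 'bob' });
// -> "WHERE owner_id = 123 AND name = 'bob'"
filterToWhere(['owner_id = 123', "name = 'bob'"]);
// -> "WHERE owner_id = 123 AND name = 'bob'"
```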

// And if we wanted to delete it we could just do:
another_badger.destroy({
  success: function(model){
    console.log("You just destroyed an endangered animal");
  },
  error: function(model, err){
    console.log("No badger was harmed");
  }
});

Collections

You can also work with collections in the same way as you are used to in Backbone, and you can apply filters to a collection's fetch as well.

// Let's make a collection of badgers
var badgers = new BadgerCollection();

badgers.fetch({
  success: function(collection){
    console.log("Successfully fetched badgers");
  },
  error: function(collection, err){
    console.log("Error fetching badgers");
  }
});

// You can also apply a filter to this fetch as in the fetch example for Models, e.g.

badgers.fetch({
  filter: {name: 'bob'},
  success: ...,
  error: ...
});

// will return all badgers named bob

Relations

If you set the urlRoot attribute on a model or collection to a nested resource (e.g. /resource_a/3/resource_b), it will automatically add this as a query constraint when fetching the model, and extract the table name from the last part of the urlRoot. The idea is that this should keep the impedance mismatch between a RESTful API and your models pretty low. Examples:

var badgers = new BadgerCollection();
badgers.urlRoot = '/location/3/badger';
badgers.fetch({success: function(){
  console.log("Successfully fetched");
}});
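The parsing described above could be sketched roughly like this (hypothetical code, not the library's actual implementation): the last path segment names the table, and each parent resource/id pair becomes a foreign-key constraint.

```javascript
// Sketch: interpret a nested urlRoot as a table name plus scope.
function parseUrlRoot(urlRoot) {
  var parts = urlRoot.replace(/^\//, '').split('/');
  var table = parts[parts.length - 1];
  var scope = {};
  // Each parent resource/id pair becomes a foreign-key constraint
  for (var i = 0; i + 1 < parts.length - 1; i += 2) {
    scope[parts[i] + '_id'] = parseInt(parts[i + 1], 10);
  }
  return { table: table, scope: scope };
}

parseUrlRoot('/location/3/badger');
// -> { table: 'badger', scope: { location_id: 3 } }
```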

Also, if you add a new model to a collection, it will use that scoping to automatically set attributes on the model, e.g.

var new_badger = new Badger();
badgers.add(new_badger);
new_badger.save(null, {success: function(){
  console.log("New badger just got saved with a location_id of 3");
}});

You can also use it to fetch, save and destroy models. This is primarily as a way of enforcing things like ownership before doing something like deleting an object. You might set the urlRoot to /users/432/products and it would only work on a model whose user_id matched the expected value, e.g.

// saving
var badger = new Badger();
badger.urlRoot = '/location/32/badger';
badger.save(null, {success: function(){
  console.log("This will have set the location_id attribute to 32 before saving it");
}});

// updating
var badger = new Badger({id: 42});
badger.urlRoot = '/location/32/badger';
badger.save(null, {success: function(){
  console.log("This will only succeed if there is a badger with id: 42 and location_id: 32");
}});

// destroying
var badger = new Badger({id: 42});
badger.urlRoot = '/location/32/badger';
badger.destroy({success: function(){
  console.log("This will only succeed if there is a badger with id: 42 and location_id: 32");
}});

By default, these relation constraints work by appending _id to the parent resource name in the path; however, if you like the Rails style of having pluralised APIs, you can specify the pluralisations as follows:

Backbone.pg_connector.singulars = {
  locations: 'location'
}
var badgers = new BadgerCollection();
badgers.urlRoot = '/locations/3/badgers';
badgers.fetch({success: function(){
  console.log("Successfully fetched");
}});

(Aside: I always used to like the pluralised API style, but now that I've implemented this I think it's a bad idea that just adds application complexity.)

Contributions

The best contribution is a pull request! Let me know if there are things you'd like to see it do. If you do submit a pull request, make sure you add tests. It's pretty well covered so if there are any bugs let me know and I'll get them tested and fixed.

Credits

Obviously without Backbone this wouldn't even exist, so thanks to Jeremy Ashkenas for creating that to begin with. I used the backbone-mongodb module as inspiration for the structure of the code and shamelessly stole the hstore serialisation method from backbone-hstore. Thanks all.