
dataflower v0.8.1

Downloads: 2

DataFlower - The dataflow project

DataFlower eases async programming in JavaScript.

Installation

npm install dataflower
bower install dataflower

Environment compatibility

This framework supports the same environments as the error polyfill lib.

I used Karma with Browserify to test the framework in browsers and I used Yadda to run the BDD tests.

Requirements

The ObjectZone and the ErrorZone libs are required.

Usage

In this documentation I used the framework as follows:

var df = require("dataflower"),
    Flow = df.Flow,
    Pump = df.Pump;

Flows

The main purpose of data flows is data delivery, but you can also use them to buffer data until somebody needs it.

Creating a flow is simple.

var flow = new Flow();

But you cannot extract data from an unsustained flow:

flow.extract(); // DryExtract: Attempting to extract data from a dry flow.

So you need to sustain it somehow.

flow.sustain(123);
console.log(flow.extract()); // 123

By extracting the data, the flow releases it, so you won't be able to extract it again.

You can check whether a flow is dry.

while(!flow.isDry())
    console.log(flow.extract());

If you need to drain a flow, you don't have to write loops every time, you can use the drain method instead.

console.log(flow.drain());

It will always return an array of the extracted data.

If you want to stop a flow, then you can block it.

flow.block();
console.log(flow.isBlocked()); // true
console.log(flow.isSustainable()); // false

After blocking the flow, you won't be able to sustain it again. All you can do is extract the remaining data and then discard the flow.
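The flow behaviour described above can be sketched in a few lines of plain JavaScript. This is not the dataflower implementation, just a minimal illustration of the documented semantics (sustaining, extraction, draining and blocking):

```javascript
// Sketch of a Flow: a buffer with sustain/extract/drain/block semantics.
// Illustrative only -- not the real dataflower code.
function Flow() {
    this.buffer = [];
    this.blocked = false;
}
Flow.prototype.sustain = function () {
    if (this.blocked)
        throw new Error("Cannot sustain a blocked flow.");
    // accept any number of data arguments, like flow.sustain(1, 2, 3)
    this.buffer.push.apply(this.buffer, arguments);
};
Flow.prototype.extract = function () {
    if (this.isDry())
        throw new Error("DryExtract: Attempting to extract data from a dry flow.");
    return this.buffer.shift(); // extraction releases the data
};
Flow.prototype.isDry = function () {
    return this.buffer.length === 0;
};
Flow.prototype.drain = function () {
    var data = [];
    while (!this.isDry())
        data.push(this.extract());
    return data; // always an array, possibly empty
};
Flow.prototype.block = function () {
    this.blocked = true;
};
Flow.prototype.isBlocked = function () {
    return this.blocked;
};
Flow.prototype.isSustainable = function () {
    return !this.blocked;
};

var flow = new Flow();
flow.sustain(1, 2, 3);
console.log(flow.drain()); // [ 1, 2, 3 ]
flow.block();
console.log(flow.isSustainable()); // false
```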

Pumps

To use flows in an async way, you need pumps. Creating a pump is just as simple.

var pump = new Pump();

By default the pump creates a new flow, but you can inject an existing flow if you want.

var pump = new Pump(flow);

You can always replace the current flow, either by overriding the flow property directly or by calling merge.

pump.flow = newFlow;
pump.merge({flow: newFlow});

Refresh and transactions

As I already mentioned, the pump is for handling async code. That means it maintains a queue of callbacks, which you can add with await or pull.

These callbacks are called when data is available, but in order to notify them we need to refresh the pump.

pump.await(function (){
    console.log("called");
});
flow.sustain(1, 2, 3);
pump.refresh(); // called

Of course, calling refresh manually is not very convenient; that's why the transaction method exists.

pump.await(function (){
    console.log("called");
});
pump.transaction(function (theFlow){
    theFlow.sustain(1, 2, 3);
});
// called

Every Pump method with a callback (including await) runs in such a transaction, so they refresh the pump automatically.

That's why it is recommended to use these pump methods if you want the pump to work properly. If you cannot, for example because multiple pumps share a single flow, then you need to call refresh manually.
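This refresh/transaction behaviour can be sketched in plain JavaScript. Again, this is an illustration of the documented semantics, not the real dataflower code; it assumes a minimal flow object with the sustain/extract/isDry methods used in this README:

```javascript
// Minimal stand-in flow with the methods used in this README.
function makeFlow() {
    var buffer = [];
    return {
        sustain: function () { buffer.push.apply(buffer, arguments); },
        extract: function () { return buffer.shift(); },
        isDry: function () { return buffer.length === 0; }
    };
}

// Sketch of a pump: a callback queue plus refresh/transaction.
function Pump(flow) {
    this.flow = flow || makeFlow();
    this.queue = []; // callbacks added with await
}
Pump.prototype.await = function (callback) {
    this.queue.push(callback);
    this.refresh(); // await itself runs in a transaction, so it refreshes
};
Pump.prototype.refresh = function () {
    // Notify the waiting callbacks while there is data in the flow.
    while (this.queue.length && !this.flow.isDry())
        this.queue.shift()(this.flow);
};
Pump.prototype.transaction = function (callback) {
    callback(this.flow); // let the caller sustain the flow ...
    this.refresh();      // ... then refresh automatically
};

var pump = new Pump();
pump.await(function (flow) {
    console.log("called: " + flow.extract());
});
pump.transaction(function (flow) {
    flow.sustain(1, 2, 3);
});
// called: 1
```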

Push - pushed - await loops

If you know there can be people waiting for your data, then you can sustain your flow with push. Push means that you decide when to send the data.

pump.await(function (flow){
    console.log(flow.extract());
});

pump.push(function (flow){
    flow.sustain(1);
});

You could do the same with a transaction, so how is push different?

It triggers a pushed event, which you can listen to. So if you want to be notified every time data arrives, subscribe to this event.

pump.on("pushed", function (flow){
    console.log(flow.drain());
});

pump.push(function (flow){
    flow.sustain(1, 2, 3);
});

The drain method uses a sync loop to extract all the available data from the flow. You can do it with an async loop too if you want to.

pump.on("pushed", function (flow){
    pump.await(function next(flow){
        console.log(flow.extract());
        setTimeout(function (){
            if (!flow.isDry())
                pump.await(next);
        }, 10);
    });
});

Just be sure that nobody else is extracting data from the flow in parallel, because once the flow is blocked, awaits on the unsustainable flow will throw errors.

Pull - pulled loops

If you have waited for data almost forever and nothing happened, you may need to use pull instead of await. Pull means that you decide when you get the data, and the pump waits for a sign to start sustaining the flow.

pump.pull(function (flow){
    console.log(flow.extract());
});

To start the process you need a pulled event handler on the pump.

pump.on("pulled", function (flow){
    flow.sustain(Math.floor(Math.random() * 10));
    if (Math.random() < 0.1)
        flow.block();
});

You can call pull recursively to get a continuous data flow.

pump.pull(function again(flow){
    console.log(flow.extract());
    if (flow.isSustainable())
        pump.pull(again);
});

This kind of solution can be very useful when handling, for example, file streams.

License

MIT - 2014 Jánszky László Lajos