

ppl (Promise Pipeline)

Promise implementation with advanced flow control capabilities.

Getting Started

Install ppl:

npm install ppl --save

Overview

At the highest level, ppl is another Promise implementation. It adds a number of useful extensions to the default Promise behaviors that support a variety of flow patterns. The additions are described below, but think of ppl as a toolset for creating data pipes that can be split, mapped, joined, and passed around while remaining Promise compatible at every turn.

Should you use ppl as your everyday Promise implementation if you don't need any advanced flow control? Nope. It would be another library to manage with little added value. So when should you use it? Use it when you want to create complex functional flows and switch data cardinality while still maintaining Promise-like result and failure handling. It may also come in handy when you want to create promise chains with predictable this context management.

Usage

Import ppl

pipeline = require 'ppl'

Creating a pipeline

You have a couple of options for creating pipelines. The suggested option is to create a pipeline with .source and some seed data. You can also use .resolve (just an alias for source), or you can use the good ol' promise constructor.

Using a source:

pl = pipeline.source {some: 'data'}

Using resolve:

pl = pipeline.resolve {some: 'data'}

Using the constructor:

pl = new pipeline (fulfill, reject) ->
  fulfill {some: 'data'}

Hmmm. That looks familiar...

Promise = require 'ppl'
pl = new Promise (fulfill, reject) ->
  fulfill {some: 'data'}

Creating pipe segments

One of the most common tasks when working with a pipeline is creating pipe segments. A segment is a function that acts on the data flowing through the pipe. At any point in the pipeline you can .pipe to another function. This is similar to how .then works in a traditional Promise. There are a couple of differences, called out below, but it is generally safe to assume you can use .pipe and .then interchangeably. The intended usage is .pipe for intermediate steps and .then or .done for the final step in the chain, but this is just a preference for how the code reads.

Each pipe segment receives incoming data and is expected to return outgoing data on completion. The outgoing data from a segment is used as the incoming data for the next segment in the pipeline.

A basic pipe:

data = 1
pipeline
  .source data
  .pipe (data)-> data + 1
  .then (data) -> console.log data # 2

The example above is very much just promise behavior, with the exception of the function execution context (covered below) and the fact that the .pipe method does not support a reject callback like .then does.
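Because pipelines stay Promise compatible, the familiar two-callback form of .then should still work for handling rejections; with .pipe, rejections are instead handled further down with .catch. A quick sketch, assuming standard Promise semantics for the two-callback .then:

onOk = (data) -> console.log 'ok', data   # not called
onErr = (err) -> console.log 'err', err   # logs the rejection

pipeline
  .source 1
  .pipe (data) -> Promise.reject 'nope'
  .then onOk, onErr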

A multi step pipeline:

Pretty much what you expected. Right?

data = 1
func1 = (data) -> data + 1
func2 = (data) -> data + 10
func3 = (data) -> data + 100

pipeline
  .source data
  .pipe func1
  .pipe func2
  .pipe func3
  .then (data) -> console.log data # 112

Unlike .then, the .pipe method also supports an array of functions as a parameter. Each function in the array becomes an additional segment in the pipeline, and execution order is maintained. This just helps keep things tidy and reduces the number of lines of code.

# functionally equivalent to the last example

data = 1
func1 = (data) -> data + 1
func2 = (data) -> data + 10
func3 = (data) -> data + 100
pipeline
  .source data
  .pipe [func1, func2, func3]
  .then (data) -> console.log data # 112

Error handling:

Much like traditional promises, errors can be handled with the .catch method. Any error that occurs while processing the pipeline propagates down the pipeline until it is handled by a .catch or it runs out of segments.

data = 1
pipeline
  .source data
  .pipe (data)-> Promise.reject 'Some Error'
  .then (data) -> # Not called
  .catch (err) -> console.log err # 'Some Error'

data = 1
pipeline
  .source data
  .pipe (data)-> data + 1
  .pipe (data)-> data + 1
  .pipe (data)-> data + 1
  .pipe (data)-> Promise.reject 'Some Error'
  .pipe (data)->  # Not called
  .then (data) -> # Not called
  .catch (err) -> console.log err # 'Some Error'
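Assuming standard Promise recovery semantics (which ppl maintains), a .catch handler that returns a value should let the pipeline recover and continue:

data = 1
pipeline
  .source data
  .pipe (data) -> Promise.reject 'Some Error'
  .catch (err) -> 0                 # recover with a fallback value
  .pipe (data) -> data + 1
  .then (data) -> console.log data  # 1, if recovery works as in standard promises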

Splitting pipelines

One of the powerful additions ppl provides is the ability to split a pipeline into multiple pipes when processing array data. As an example, you may have a segment that returns an array of items, and you want those items to be manipulated by a series of functions before continuing down the standard pipeline. With traditional Promise implementations you would need to write your functions to iterate over the data and return the results in each segment, so the segment function itself has to deal with the array. This isn't the end of the world, but it can get tedious. It also doesn't let you reuse segments that are written to act on a single item when working with arrays.

This is where the .split and .map functionality comes in handy. You can split the pipeline and apply segments that are intended to run on a single item. When you are done processing single items, you can join back to the parent pipeline and continue working with the array in the data flow.

Basic split and join:


data = [1, 2, 3, 4]
pipeline
  .source data
  .pipe (data) ->
    console.log data # [1, 2, 3, 4]
    data
  .split()
  .pipe (item) ->
    console.log item # 1 # 2 # 3 # 4
    item + 1
  .pipe (item) ->
    console.log item # 2 # 3 # 4 # 5
    item + 1
  .join()
  .then (data) ->
    console.log data # [3, 4, 5, 6]

Multi-level split and join:


data = [
  {items: [1, 2, 3]}
  {items: [1, 2, 3]}
  {items: [1, 2, 3]}
]
total = 0
pipeline
  .source data
  .split()
  .pipe (item) ->
    item.items
  .split()
  .pipe (item) ->
    console.log item # 1 # 2 # 3 # 1 # 2 # 3 # 1 # 2 # 3
    total += item
  .join()
  .join()
  .then ->
    console.log total # 18
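.map is mentioned above but not shown in the examples. Assuming it is shorthand for a .split()/.pipe()/.join() round trip on each item (an assumption; check the source for the exact signature), it might look like:

data = [1, 2, 3, 4]
pipeline
  .source data
  .map (item) -> item + 1           # assumed: runs the segment once per item
  .then (data) -> console.log data  # [2, 3, 4, 5], if the assumption holds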

Setting the context

Every pipeline has a context for executing functions while processing the pipe. By default this is set to an empty object. Segments in the pipeline can use this object as they see fit, which makes it useful as a shared place to store state throughout processing. It is also super useful for piping to detached object functions while still running them in the context of their object.

Setting the context:

context = some: 'context'
pipeline
  .source {some: 'data'}
  .context context

Using the context:

context =
  some: 'context'
  func: ->
    'bar'
pipeline
  .source {some: 'data'}
  .context context
  .then (data) ->
    @foo = @func()
    console.log @  # { some: 'context', foo: 'bar' }
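To illustrate the detached object function use case mentioned above, here is a sketch with a hypothetical logger object, assuming .context behaves as in the previous example:

logger =
  prefix: '[ppl]'
  log: (data) ->
    console.log @prefix, data
    data

pipeline
  .source {some: 'data'}
  .context logger
  .pipe logger.log  # detached from logger, but @ is still logger via .context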

It is possible to change the context at any point in the pipeline.

Changing context:

context1 = some: 'context'
context2 = some: 'other context'
pipeline
  .source {some: 'data'}
  .context context1
  .pipe -> console.log @  # { some: 'context'}
  .context context2
  .pipe -> console.log @  # { some: 'other context'}