@undercat/esutil

v1.1.0

Grab-bag of useful ES6 convenience functions.

Miniature power tools for ECMAScript.

A collection of powerful utility functions that are generally too small to merit implementation as stand-alone modules.

Installation

npm install @undercat/esutil

Or clone this repository and include util.js directly in an HTML file. Although the module works in the browser, it will not benefit from module isolation if loaded directly, and the code required to use it differs somewhat:

Node.js:

const util = require('@undercat/esutil'); // load the whole wrapper
util.Type(x);
const { Type } = require('@undercat/esutil'); // load a component (or several)
Type(x);

Browser:

<script src='util.js'></script>
<script>
  const { Type } = $UC_esutil;
  Type(x);
</script>

The file itself is written as a CommonJS module, and when it is used in the browser it suffers from unavoidable load-order dependencies. Most of my other modules require @undercat/esutil, so if you're loading it directly in a browser, you should generally include it first.

Incidentally, it is impossible to write a single file that loads both as a CommonJS module and as an ECMAScript module. The export keyword cannot be conditionally executed or overloaded; it throws if it is not satisfied, or if it is encountered in a file that was not loaded as a module, and because the keyword is restricted to the top level of a module, the error cannot be wrapped in a try/catch block. If export failed with a non-fatal warning, like much of the rest of ECMAScript that deals with resources, there would be no problem writing actual modules that could load as CommonJS and ES6 simultaneously.

API Description

Type(value) returns a string describing the type of the supplied value. This string is similar to the type tag that can be retrieved from an object using Object.prototype.toString.call(), but it is not wrapped in superfluous text and cannot be fooled by defining custom string tags on an object.

| object | typeof | toString.call | Type() |
|-|-|-|-|
| null | object | [object Null] | Null |
| new Date() | object | [object Date] | Date |
| /re/ | object | [object RegExp] | RegExp |
| async function(){} | function | [object AsyncFunction] | AsyncFunction |
| new ArrayBuffer(8) | object | [object ArrayBuffer] | ArrayBuffer |
| a = []; a[Symbol.toStringTag] = 'Bogus'; | object | [object Bogus] | Array |
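
For instance, the table above corresponds to calls like these (a minimal sketch, assuming the module is loaded as shown in the installation section):

const { Type } = require('@undercat/esutil');

console.log(Type(null));               // 'Null'
console.log(Type(new Date()));         // 'Date'
console.log(Type(async function(){})); // 'AsyncFunction'

const a = [];
a[Symbol.toStringTag] = 'Bogus';
console.log(Type(a));                  // 'Array' (the bogus tag is ignored)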


Similar to Object.prototype.toString.call(), but reads directly from the relevant symbol on the object or its prototype. It does not wrap the value returned in [object ...], preferring to return just the tag itself ('Map', 'Array', etc.). Use this when you want to be 'fooled' by toStringTag overloads.


Equivalent to Object.getOwnPropertyDescriptors(obj).


The attribute digit selects which of configurable (C), writable (W) and enumerable (E) are set on the property:

| # | C | W | E |
|:-:|:-:|:-:|:-:|
| 0 | - | - | - |
| 1 | - | - | ✓ |
| 2 | - | ✓ | - |
| 3 | - | ✓ | ✓ |
| 4 | ✓ | - | - |
| 5 | ✓ | - | ✓ |
| 6 | ✓ | ✓ | - |
| 7 | ✓ | ✓ | ✓ |

The default attributes can be overridden on a per-property basis by appending a colon and digit to the property-name-string (see example below).

The source object may contain special key-like-strings that specify...

const o = Type.create(Object.prototype, {
  'foo,alias:0': 123,
  'bar:5': function() { return this.foo; },
  'cat,dog,mammal,animal': true,
  'paws,mits*': { value: 4, set: "if (typeof v == 'number') $v = v;" },
  raw: 'no aliases'
});

console.log(Object.getOwnPropertyDescriptors(o));
{
  raw: {
    value: 'no aliases',
    writable: true,
    enumerable: true,
    configurable: false
  },
  foo: {
    value: 123,
    writable: false,
    enumerable: false,
    configurable: false
  },
  alias: {
    get: [Function: get],
    set: [Function: set],
    enumerable: false,
    configurable: false
  },
  bar: {
    value: [Function: bar:5],
    writable: false,
    enumerable: true,
    configurable: true
  },
  cat: {
    value: true,
    writable: true,
    enumerable: true,
    configurable: false
  },
  dog: {
    get: [Function: get],
    set: [Function: set],
    enumerable: true,
    configurable: false
  },
  mammal: {
    get: [Function: get],
    set: [Function: set],
    enumerable: true,
    configurable: false
  },
  animal: {
    get: [Function: get],
    set: [Function: set],
    enumerable: true,
    configurable: false
  },
  '$': {
    value: { paws: 4 },
    writable: false,
    enumerable: false,
    configurable: false
  },
  paws: {
    get: [Function (anonymous)],
    set: [Function: setter],
    enumerable: true,
    configurable: false
  },
  mits: {
    get: [Function (anonymous)],
    set: [Function: setter],
    enumerable: true,
    configurable: false
  }
}

...as you can see, the resulting object 'o' has four ordinary properties, one hidden backing value (o.$.paws) and six aliases:

  • o.raw ⇒ an ordinary property that inherits the default attributes (enumerable+writable)
  • o.foo ⇒ a 'hidden' number that is not enumerable, writable or configurable
  • o.alias ⇒ an alias to o.foo
  • o.bar ⇒ an enumerable+configurable function reference
  • o.cat ⇒ a Boolean value
  • o.dog | o.mammal | o.animal ⇒ aliases to o.cat
  • o.paws | o.mits ⇒ aliases to o.$.paws

The first property name in a key-list is used to label the actual value; all the other names in a list are accessors (getters/setters) of that first property name. If the optional '*' suffix is added to a name list, the value for all the names in the list is instead stored in a “hidden” object—the first name in the list winds up being just another accessor to that value. These '*'-lists take objects as arguments, and the sub-properties of those objects are very similar to those used by Object.defineProperty(), except that they are restricted to...

Getters and setters defined using the '*' syntax need not actually be functions, although that is still an option. Simple strings giving the relevant logic of the getter or setter will suffice. In those strings, the value supplied to a setter will be identified by the variable v. The actual backing variable itself can be accessed through $v. If you write an actual function, no such translation will be performed, and you will have to use the “real” object path to the backing variable, so you might have to write something like, 'foo,bar*': { value: 123, set(x) { this.$.foo = x; } } instead of 'foo,bar*': { value: 123, set: '$v = v;' }. Note that the path can change if you supply a container parameter, which sets the identifier name for the backing object.

Performance-critical code may elect to bypass accessors and handle values directly, even for objects defined with the '*' syntax. Ordinarily, the first name in the property name list is used to label the actual value, but if the '*' syntax is used, all the names will be aliases, and if you want direct access to the value, you will need to do it through the backing object reference, something like myObject.$.myValue. If you invoke Type.fixPD(), Type.create() or Type.add() with a null value for the container parameter, no reference to the backing object will appear in the resulting object, so there will be no “fast path” access to any such values.
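
Using the object 'o' created above (and the default container name '$'), the fast path would look something like this sketch:

o.paws = 4;            // goes through the generated setter
console.log(o.$.paws); // 4, read directly from the backing object
o.$.paws = 5;          // bypasses the accessors entirely
console.log(o.mits);   // 5, both aliases read the same backing value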

The drawback to this property label processing scheme is that it precludes the possibility of defining “ordinary” property names containing commas, colons or asterisks. Since most people probably do not consider such property names to be “ordinary” anyway, this was considered to be an acceptable trade-off.

Another example will help clarify all this.

const o = Type.create(Array.prototype, {
  // The first property uses a functional style setter, so the data must be explicitly referenced.
  'foo,a1*7': { value: 'text', set(x) { this.$$.foo = x + ' text'; }},
  'bar,a2*1': { value: 123 },
  // Using a string as a setter results in automatic source/target translation.
  'a3*': { value: new Date(), set: "$v = (typeof v == 'string') ? new Date(v) : v;" }
}, null, null, '$$');

console.log(Object.getOwnPropertyDescriptors(o));

-------
// The data object gets reflected into the result with the name '$$' instead of the default '$'
// It is never made enumerable, writable or configurable. Moreover, it is sealed (though the values it holds are not).
// If null had also been used for the 'container' parameter, the data would not have been reflected
// and direct access would be impossible.
  '$$': {
    value: { foo: 'text', bar: 123, a3: '2020-04-30T14:17:25.866Z' },
    writable: false,
    enumerable: false,
    configurable: false
  },
// 'foo' and 'a1' are both accessors for the value in '#.$$.foo'
// The first name in the list is always used to label the value in the reflected data structure.
  foo: {
    get: [Function (anonymous)],
    set: [Function: set],
    enumerable: true,
    configurable: true  // defined with '7' attribute flag := all options, including configurable
  },
  a1: {
    get: [Function (anonymous)],
    set: [Function: set],
    enumerable: true,
    configurable: true
  },
// 'bar' and 'a2' are accessors for '#.$$.bar'
// Since no setter was declared for these, the value is read-only (unless read directly)
  bar: {
    get: [Function (anonymous)],
    set: undefined,
    enumerable: true,
    configurable: false  // attribute flag was '1', so only enumerable
  },
  a2: {
    get: [Function (anonymous)],
    set: undefined,
    enumerable: true,
    configurable: false
  },
// 'a3' accesses '#.$$.a3'
  a3: {
    get: [Function (anonymous)],
    set: [Function: setter],
    enumerable: true,
    configurable: false  // default attribute flag is '3' := enumerable+writable
  }
// Note that accessors never have a 'writable' attribute, so that flag value is irrelevant.
-------

console.log(o.foo);
// => 'text'

o.foo = 'hello'; console.log(o.a1);
// => 'hello'

o.a3 = '1980-10-01'; console.log(o.a3);
// => 1980-10-01T00:00:00.000Z

If you do not reflect the data object into the result (i.e., if you use null as the container parameter), then you must declare all getters and setters using string style, as there will not be any way to access the data with 'this'. Since string-style getters and setters are translated at construction time, there is no need to use 'this' to reference any data.
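
A hedged sketch of that case, assuming the same parameter order as the examples above, with null passed in the container position (the 'count,total*' property is purely illustrative):

const p = Type.create(Object.prototype, {
  'count,total*': { value: 0, set: "if (Number.isInteger(v)) $v = v;" }
}, null, null, null);

p.count = 3;
console.log(p.total);  // 3; both names are string-style accessors for the same hidden value
console.log('$' in p); // false; no backing reference was reflected into the result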


Assigns selected properties of source, which may be an Object or Map, to target, which must be an Object. Properties are selected either by being present in the target object already (presumably with a different value), or by being present in a filter object, which may itself be an Array, Object, Map or Set. Only the keys of Maps and Objects are referenced in a filter.

If the target object has aliases defined on it (see above), multiple keys from source may map to the same actual value in target, in which case the last property encountered in source as it is being iterated will prevail. This ambiguity is actually quite useful for allowing options to be satisfied by multiple keywords, as the following example shows:

const prog_opt = Type.create(Object.prototype, {
  'color,fgColor,foregroundColor': 'black',
  'bgColor,backgroundColor': 'white'
});
let user_opt1 = { color: 'red', bgColor: 'yellow', foo: 'purple' };
let user_opt2 = { fgColor: 'red', bgColor: 'yellow', bar: 'purple' };
let user_opt3 = { foregroundColor: 'red', backgroundColor: 'yellow' };

Type.fill(prog_opt, user_opt1);
Type.fill(prog_opt, user_opt2);
Type.fill(prog_opt, user_opt3);

In this example, all of the user option structures produce the same program option object. Essentially, the user can choose between label aliases in specifying a given option instead of being forced to use a canonical form. Any 'invalid' properties will be ignored.
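
A quick check of the result (assuming any one of the fills above has run):

console.log(prog_opt.color, prog_opt.bgColor); // 'red' 'yellow'
console.log('foo' in prog_opt);                // false; unrecognized keys were ignored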


console.log(Type.merge(new Map([['foo',64]]), { bar: 23 })); // Map(2) { 'foo' => 64, 'bar' => 23 }
console.log(Type.merge([1,2,3], { foo: 4, bar: 6 })); // [ 1, 2, 3, 4, 6 ]
console.log(Type.merge([1,2,3], { foo: 4, bar: 6 }, true)); // [ 1, 2, 3, 'foo', 'bar' ]
console.log(Type.merge([1,2,3], Object.defineProperty([4,5,6], 'foo', { enumerable: true, value: 23 }), null, true));
// [ 1, 2, 3, 4, 5, 6, foo: 23 ]

Copies all the elements from a source Array, Map or Set, or all the enumerable, intrinsic ("own") properties from a source Object, into a new stand-alone Map object. Useful for allowing code to be written against a single interface (namely, Map's) while accepting data from any container type. Array elements supply the keys for the resulting Map entries, while their indices (i.e., order) supply the values. Set elements are used as both the keys and the values.

If the recursive flag is set to a Number or true, Type.asMap() will recursively convert nested containers to Maps, down to the given number of object levels (or all of them). Because Maps can use object instances, like Arrays and Sets, as actual keys, it would be difficult to distinguish objects intended to be keys from objects intended to be collections on a level-by-level basis, so keys are never recursively converted.

console.log(Type.asMap({ foo: 12, bar:64 }));    // Map(2) { 'foo' => 12, 'bar' => 64 }
console.log(Type.asMap(['foo', 'bar']));         // Map(2) { 'foo' => 0, 'bar' => 1 }
console.log(Type.asMap(new Set(['foo','bar']))); // Map(2) { 'foo' => 'foo', 'bar' => 'bar' }
console.log(Type.asMap({ foo: 12, bar: { yes: 23, no: 34 }}, true));
// Map(2) { 'foo' => 12, 'bar' => Map(2) { 'yes' => 23, 'no' => 34 } }

let a = [10,20,30,40,50], i = 45;
console.log(a[Type.asIndex(i, 5, 0)]); // 10
console.log(a[Type.asIndex(i - 41, 5, 0)]); // 50

If no argument is supplied, a function is returned that increments a counter from zero each time it is called (the default is to increment by one, unless an argument is supplied giving the increment value). It returns the pre-increment value. This allows post-increment expressions to be created in-line, like pointers in C++.

If a Number is supplied as the initial argument, that value is used to set the counter's initial value. If an optional integer mask is supplied, it will be ANDed with the counter after each operation...cheap modulus wrapping for rings.

If an Array is supplied, it may take two forms: [block_size, base1, base2, base3, ...] or [[base1, length1], [base2, length2], ...]. The first form uses a uniform block size built on any number of block bases; the second form supplies both the base and length of each block explicitly. Index() will thread blocks transparently, updating the counter with the next block base when the space in the current block is exhausted. If an increment is too large to fit in the remaining space within a block, the next block of suitable size is returned.

If a Function is supplied as the initial argument, it is invoked to supply the base and length of the initial block and of each subsequent block. The callback is invoked as block_threader(null, null, ...threader_arguments) to supply the first block, and as block_threader(block_base, block_length, ...threader_arguments) to supply all subsequent blocks. It should return an array [block_base, block_length] describing the base and length of the next block; those values will be supplied back to it, in turn, when another block is needed (i.e., they are 'stable' and can be used as keys).

The object returned by Index() is itself a function. That function takes one optional parameter: the number by which to increment (or decrement, for negative numbers) the value of the counter. It returns the unincremented value, however, turning the function into a sort of 'pseudo-post-incremented-pointer' for pointerless languages, like ECMAScript.

There are also a few properties defined on the returned function-object:

  • value — returns the current value of the internal index
  • octets — returns the total number of increments (minus decrements) applied to the internal index; this is not the same as the difference between the starting and ending index values, owing to block spanning and the fact that some blocks may be left partially (or completely) empty if they are not big enough to satisfy a request
  • reset() — sets the internal index value to zero

A few examples will be much more instructive than further verbiage expended in explanation:

let t = Index();
console.log(t.value); // 0
console.log(t());     // 0
console.log(t(8));    // 1
console.log(t(-1));   // 9
console.log(t.value); // 8
console.log(t.octets);// 8

t = Index(128, 0xff);
console.log(t.value); // 128
console.log(t(64));   // 128
console.log(t(64));   // 192
console.log(t(64));   // 0
console.log(t(64));   // 64
console.log(t.value); // 128
console.log(t.octets);// 256

t = Index([100, 0, 200]);
console.log(t.value); // 0
console.log(t(50));   // 0
console.log(t(50));   // 50
console.log(t(50));   // 200
console.log(t.value); // 250
console.log(t.octets);// 150

t = Index([[100, 100],[300, 100]]);
console.log(t.value); // 100
console.log(t(50));   // 100
console.log(t(50));   // 150
console.log(t(50));   // 300
console.log(t.value); // 350
console.log(t.octets);// 150

let base = [1000, 2000], i = 0;
t = Index(() => [base[i++], 256]);
console.log(t.value); // 1000
console.log(t(128));  // 1000
console.log(t(128));  // 1128
console.log(t(128));  // 2000
console.log(t.value); // 2128
console.log(t.octets);// 384

A very simple object pooler that stores references to freed objects, then provides them again in preference to invoking the allocator the next time one is needed. Obviously, this only works for objects that can be reused (not Promises, for instance), and it only pays off when the objects are expensive to construct. It basically functions as a push-down stack that keeps discarded objects "out of your way" until they are needed again, with the benefit that it automatically allocates new objects if the cache of freed objects is empty.

function alloc(...s) { this.version = s[0]; }
const pool1 = Pool(4, alloc, 'first');  // holds sixteen freed objects, parameterized with 'first'
const pool2 = Pool(4, alloc, 'second'); // another sixteen objects, parameterized with 'second'

let x = pool1(), y = pool2();
console.log(x.version, y.version); // 'first' 'second'
pool1.free(x); pool2.free(y);
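
Presumably a subsequent request is then satisfied from the freed objects instead of calling the allocator again; a quick, hypothetical check:

let z = pool1();
console.log(z === x);   // true, if the pool hands back the instance freed above
console.log(z.version); // 'first'; the reused object keeps its original state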

Run the validate.js or validate.html files to obtain some time comparisons.


Creates a ring buffer with 2^n elements (default 2^10 = 1024), of which (2^n - 2) elements are available for data (the remaining two elements are used as the 'head' and 'tail' elements, which cannot overlap). The resulting object supports the following methods:

const r = RingBuffer();
r.queue(42); r.queue(18); console.log(r.pop(), r.pop()); // 42 18
r.queue(10); r.queue(20); console.log(r.skip()); // 20
r.queue(35); r.queue(45); console.log(r.unqueue(), r.unqueue()); // 45 35

Because the RingBuffer never reallocates its storage or copies its elements, it benefits from constant insertion/removal time at both ends of the ring. For push()/pop() operations, it is as fast as an Array at ring sizes of about 512 elements, and faster for larger stacks; for queue()/unqueue() operations, it is faster than an Array at any size.

Of course, since it never reallocates, you can exhaust the ring's buffer space if you do not size it sufficiently. An Array, on the other hand, will be dynamically and automatically resized as the number of elements increases, though it will get slower as its reallocation window gets larger.
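
A sketch of using both ends of a default-sized ring (the head/tail behavior here is inferred from the example above, not documented):

const ring = RingBuffer();   // default: 2^10 slots, 1022 usable elements
for (let i = 0; i < 1000; i++) ring.queue(i);
console.log(ring.pop());     // 0, if pop() takes from the head as in the example above
console.log(ring.unqueue()); // 999, if unqueue() takes from the tail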


Remembers 2^n (default: 2^5 = 32) invocations of a function. Both the call parameters and result generated are recorded and can be recalled at any subsequent time through the history virtual property added to the resulting function-object.

This feature is implemented by defining a Proxy on the wrapped function, and proxies are slow. Do not use this on functions that support critical loops in production code!

const f = callHistory(s => 'result of ' + s);
f('first call'); f('second call'); f('third call'); f('fourth call');
console.log(f.history);
[
  [ 4, 'result of fourth call', 'fourth call' ],
  [ 3, 'result of third call', 'third call' ],
  [ 2, 'result of second call', 'second call' ],
  [ 1, 'result of first call', 'first call' ]
]
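
Each entry appears to be [invocation number, result, ...arguments], with the most recent call first, so individual calls can be inspected directly:

const [n, result, arg] = f.history[0];
console.log(n, result, arg); // 4 'result of fourth call' 'fourth call'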

Applies regular_expression to string and returns an array containing only the captures found, if any. Default values provided in the 'rest' parameter will substitute for any missing captures.

If the default value for a particular capture is a Number or a Date, the string from that capture position will be converted to a Number or Date value before it is returned. The default will also be returned if the capture cannot be converted to the default's type, even if the capture matches something.

console.log(capture('12345 number 2000-01-01 date', /(\d+) (number) ([^ ]*) (date)/,
    999, 999, new Date(), new Date()));
[ 12345, 999, 2000-01-01T00:00:00.000Z, 2020-02-14T21:02:15.021Z ]

Notice that the second capture returns the default value, because 'number' cannot be converted to an actual Number. Likewise, since 'date' is not convertible to a real Date, the fourth capture returns the default (compare the date values).


console.log(forEach(['foo','bar'], (v,k,a) => a += `<li>${v}</li>`, '', a => `<ul>${a}</ul>`));
<ul><li>foo</li><li>bar</li></ul>
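
The callback receives (value, key, accumulator), followed by the initial accumulator and a finalizer; a hedged sketch that sums an array using an identity finalizer:

console.log(forEach([1, 2, 3, 4], (v, k, a) => a + v, 0, a => a)); // 10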

const { partition } = require('./util.js');
const grade_partition = [[59, 'F'], [69, 'D'], [79, 'C'], [89, 'B']];
const score = [-1, 59, 60, 69, 70, 79, 80, 89, 90, 99, 100, 110];
const grade = new Map();
for (const e of score) grade.set(e, partition(e, grade_partition, 'A'));
console.log(grade);
Map(12) {
  -1 => 'F',
  59 => 'F',
  60 => 'D',
  69 => 'D',
  70 => 'C',
  79 => 'C',
  80 => 'B',
  89 => 'B',
  90 => 'A',
  99 => 'A',
  100 => 'A',
  110 => 'A'
}

This function is implemented as a binary search on partition_array, so it can scale to very large partitions with little run-time penalty. The downside, if any, is that partition_array must be sorted in strictly-increasing order by cut-off values.

Minifier Utility

Included in this distribution is a modest (read that as: "not very robust") minification utility. Unlike most minifiers, it preserves line numbers between the source and target files. The newline character is itself a statement terminator in ECMAScript and can often substitute for a semicolon, so there isn't much to be gained from eliminating newlines.

The benefit of preserving line structure in the minified file is that you can use it without a source map: simply open the un-minified file and navigate directly to the reported line. There's very little difference in compressibility between files that use newlines as terminators and files that do not...provided that you do not leave in huge, vertical blocks of comments that minify to vast, empty expanses of whitespace. Comments that are placed at the end of a line will minify out completely, however.

To run the minifier, use node minify my_source_file.js my_target_file.js. If you leave out the target, the result will be dumped to standard output.

Help Undercat buy kibble to fuel his long nights of coding! Meow! 😺