@solveq/path-decimation v1.1.0

Provides a few of the path decimation algorithms we use when handling large GPS datasets.

path-decimation

At SolveQ, a large number of our projects deal in some way with GPS data. From tracking lost pets to mapping that cool bike trip you've just done, we're no strangers to dealing with very large geo datasets.

Unfortunately, these datasets pose unique problems when storing, transmitting and retrieving them. After all, when a one-hour walk recorded for your dog runs into the tens of thousands of records, retrieval quickly turns into unnecessary megabytes of data and time spent watching a loader. Worse still, displaying all of these points in most map components on mobile devices consumes a large amount of memory, and a lot of processing power when panning to other areas.

Some approaches, such as segmenting tracks and occluding segments that are out of view of the current zone, can help, and we've used those aggressively wherever possible. Sometimes, however, it may simply be worth removing unnecessary points from a track to condense it. After all, if you've been walking in a straight line for a while, do you really need to transfer all the intermediate points?

Algorithms

We've summarized the algorithms in this repository, their performance and their drawbacks below:

| Algorithm       | Temporal? | Batch                      | Online       |
|-----------------|-----------|----------------------------|--------------|
| Douglas-Peucker | No        | Yes (O(n log n))           | No           |
| STTrace         | Yes       | Yes (O(1/n log N/M log M)) | Yes (O(n^2)) |
| Bellman's       | No        | Yes (O(n^2))               | Yes (O(n^2)) |
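For reference, a classic batch decimator such as Douglas-Peucker can be sketched in a few lines. This is a generic illustration over planar points with a distance tolerance, not the implementation shipped in this package:

```ts
// Generic Douglas-Peucker sketch over planar points; the package's own
// implementation and point types may differ.
type XY = { x: number; y: number };

// Distance from p to the infinite line through a and b.
function perpendicularDistance(p: XY, a: XY, b: XY): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

function douglasPeucker(points: XY[], epsilon: number): XY[] {
  if (points.length < 3) return points.slice();
  const first = points[0];
  const last = points[points.length - 1];
  let maxDist = 0;
  let index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) {
      maxDist = d;
      index = i;
    }
  }
  // Everything between the endpoints is "straight enough": drop it.
  if (maxDist <= epsilon) return [first, last];
  // Otherwise split at the farthest point and simplify both halves.
  const left = douglasPeucker(points.slice(0, index + 1), epsilon);
  const right = douglasPeucker(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);
}
```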

Interfaces

Two interfaces, DecimateOnline and DecimateBatch, are provided. They let developers easily distinguish algorithms designed to run over a fixed set of data (Batch) from algorithms that allow on-the-fly insertion of points (Online).
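The exact type declarations aren't reproduced in this README; a minimal sketch of the split, with hypothetical member names, might look like this:

```ts
// Hypothetical shapes for the two interfaces described above; the real
// package may use different method and property names.
interface Point {
  lat: number;
  lon: number;
  timestamp?: number; // needed by temporal algorithms such as STTrace
}

// Batch algorithms operate on a complete, fixed set of points.
interface DecimateBatch {
  decimate(points: Point[]): Point[];
}

// Online algorithms accept points one at a time and maintain a
// decimated buffer that can be read back at any moment.
interface DecimateOnline {
  insert(point: Point): void;
  current(): Point[];
}
```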

Implementation

STTrace

STTrace was originally described in M. Potamias, K. Patroumpas, and T. Sellis, "Sampling Trajectory Streams with Spatiotemporal Criteria", in 18th Intl. Conf. on Scientific and Statistical Database Management (SSDBM '06), pages 275–284, 2006, and uses a Euclidean norm as its sampling criterion. Whenever a point is inserted, the Euclidean norm of each point with respect to its neighbors is calculated. This norm is then used to decimate the buffer: the points with the smallest norms (i.e. those closest to their respective neighbors) are removed.

For our implementation, we use a static compression value as input; for our use cases, this allowed us to quickly and efficiently derive a path with a targeted length while removing extraneous and near-duplicate points, which stem both from GPS jitter and from people simply taking breaks.
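The package's actual STTrace class isn't shown in this README. The following is a simplified, purely spatial sketch of the buffer-and-evict idea behind an online decimator with a fixed target size (the "static compression value"); names and the exact norm are illustrative, not this package's API:

```ts
// Illustrative online decimation with a fixed buffer size; the real
// STTrace implementation also weighs the temporal component, which is
// omitted here for brevity.
type TrackPoint = { x: number; y: number; t: number };

// Distance from p to the segment between a and b (clamped projection).
function pointToSegmentDistance(p: TrackPoint, a: TrackPoint, b: TrackPoint): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const lenSq = dx * dx + dy * dy;
  if (lenSq === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  let t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lenSq;
  t = Math.max(0, Math.min(1, t));
  return Math.hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
}

class FixedSizeDecimator {
  private buffer: TrackPoint[] = [];

  constructor(private maxPoints: number) {}

  insert(p: TrackPoint): void {
    this.buffer.push(p);
    if (this.buffer.length > this.maxPoints) this.evictLeastSignificant();
  }

  current(): TrackPoint[] {
    return this.buffer.slice();
  }

  // Remove the interior point that contributes least to the path's shape,
  // i.e. the one closest to the straight line between its two neighbors.
  private evictLeastSignificant(): void {
    let worstIndex = 1;
    let worstNorm = Infinity;
    for (let i = 1; i < this.buffer.length - 1; i++) {
      const norm = pointToSegmentDistance(this.buffer[i], this.buffer[i - 1], this.buffer[i + 1]);
      if (norm < worstNorm) {
        worstNorm = norm;
        worstIndex = i;
      }
    }
    this.buffer.splice(worstIndex, 1);
  }
}
```

With a cap of, say, 1,000 points, each incoming GPS fix is pushed through insert and the retained track never grows past the target length, which is the behaviour the static compression value is meant to give.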