git-imitate

v0.0.16

git-imitate

This is the working draft solution specification of git-imitate. Its goal is to show you the why, what and how of the solution.

Motivation

Working on multiple devices sometimes puts you in a situation where you

  1. can't remember where you left off (which branch)
  2. are not sure whether you have committed/pushed everything
  3. have to boot another system just to commit and push some changes before you can carry on with your work

In those cases you have the option to use git fetch, git log, or to check what's in progress in the issue tracker. While this makes it absolutely possible to recover from situation 1, it is time-consuming and does not help you with situations 2 or 3.

How can git-imitate help you?

Example 1:

On your laptop, you are working on a bug and decide to switch to a temporary branch. The next day you start working at your desktop. While your machine boots, git-imitate checks out the new branch in the background. You start your dev environment and immediately continue where you left off. Smoothly.

Example 2:

It's late in the afternoon and you're getting ready to head home. That's when your colleague asks whether you have seen "that bug". You open your laptop again, open a suspect file, and there it is: a missing "$" in the regex. You fix it on the spot. The next day you start working at your desktop. While your machine boots, git-imitate detects that you were naughty and did not care to commit your hotfix. It will inform you (politely) about your failure and (if possible) allow you to pull in your uncommitted changes where you left off. Nice.

Both examples assume the ideal situation, where you have no modifications on your desktop. This specification tries to sort out all those nitty-gritty details and to draw the picture of a possible architecture for the software.

Detecting Checkouts

Working on different systems often means working on different checkouts. How can we detect a checkout? There is an easy way (easy to implement, but a worse experience) and a hard way (hard to implement, but easy to use). Let me expand on both solutions, which I think are viable approaches at different stages of this software's development.

Easy Implementation - Bad UX

The principle of least astonishment: applied. Git hooks, especially post-checkout, allow us to run a script after we change to another branch or commit (e.g. when you git pull). While easily implemented, git hooks bear the following problems:

  1. you have to get them into your repositories
  2. you cannot easily update them
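A post-checkout hook can be any executable, so a minimal sketch in Python is possible; note that the "notify the service" step below is a placeholder, since the draft has not specified that interface yet:

```python
#!/usr/bin/env python3
# Sketch of a post-checkout hook (.git/hooks/post-checkout). Git invokes
# it with three arguments: previous HEAD, new HEAD, and a flag that is
# "1" for a branch checkout. The notification step is a placeholder.
import sys

def should_report(old_ref: str, new_ref: str, branch_flag: str) -> bool:
    """Only a branch checkout that actually moved HEAD is worth reporting."""
    return branch_flag == "1" and old_ref != new_ref

def main(argv: list) -> None:
    old_ref, new_ref, branch_flag = argv[1:4]
    if should_report(old_ref, new_ref, branch_flag):
        # A real hook would hand this off to the git-imitate service
        # instead of printing to the terminal.
        print("checkout detected: {} -> {}".format(old_ref[:7], new_ref[:7]))

if __name__ == "__main__" and len(sys.argv) >= 4:
    main(sys.argv)
```

This is exactly the script that would have to be copied into every repository's `.git/hooks/` directory, which is the maintenance problem described above.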

So preparing a repository for git-imitate means the hook must be installed/updated manually. And with every new repository you add, you have to remember to install the darn hook again. What can we do to mitigate all this manual effort?

We could scan a certain folder, provided while installing git-imitate, for git repositories. And we would have to do so again and again to detect new repositories. I hardly dare to say it: polling. While polling is in most cases not a very good idea, there is another point that weighs almost heavier: statefulness (is that a real word?). We have to litter our system with state: a configuration parameter with the path that has to be scanned for repository changes, a parameter that has to be migrated on updates and might become outdated.
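The scan itself is simple enough; here is a sketch of it, with the root path standing in for exactly the configuration parameter complained about above:

```python
# Sketch of the "easy" approach: scan a configured root folder for git
# repositories. The root argument stands in for the stateful
# configuration parameter; an agent would rerun this scan periodically
# (polling) to pick up new repositories.
from pathlib import Path

def find_repositories(root: Path) -> list:
    """Return every directory below `root` that contains a .git directory."""
    return sorted(p.parent for p in root.rglob(".git") if p.is_dir())
```

The `is_dir()` check skips `.git` files (as used by git worktrees), which this easy variant simply does not handle.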

What's the alternative?

Hard Implementation - Better UX

Why not watch and hook into git invocations, retrieve the current working directory, and deduce whatever change happened to the repo? Tools like WMI (Windows) or forkstat (Linux) allow us to tap into process invocations. From there it is easy to find out where the executable was called and then query the repository for, e.g., the current branch or its status. This solution requires no polling and no configuration, but is hard to implement. The platform dependency, and the dependency on and integration of those tools, increase the complexity of the solution a lot.
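Assuming the process watcher hands us a working directory, mapping it back to its repository is the easy part; this sketch covers only that step (the platform-specific watcher code is not shown):

```python
# Once a process watcher (forkstat on Linux, WMI on Windows) reports a
# git invocation plus its working directory, the agent maps that
# directory to the enclosing repository. Querying the branch afterwards
# would be e.g. `git -C <repo> rev-parse --abbrev-ref HEAD`.
from pathlib import Path
from typing import Optional

def enclosing_repository(cwd: Path) -> Optional[Path]:
    """Walk upward from `cwd` until a directory containing .git is found."""
    for candidate in (cwd, *cwd.parents):
        if (candidate / ".git").exists():
            return candidate
    return None
```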

And that was our keyword, ladies and gentlemen: complexity. Nothing sophisticated or optimal comes without complexity. That's why we're aiming for good enough in the first run. We can always switch to the more sophisticated solution when we identify problems with the easy one or have plenty of time (never). But we have to embrace this possibility in terms of software architecture.

Another hard problem we have to take into account, if we want to preserve work-in-progress files, is the situation where you just close your laptop and rush out of the office because you are late. This means we have to detect when the system decides to suspend its operation, which is again platform-dependent and thus difficult to handle.

Detection Architecture

Talking about change detection and synchronization, the primary goal of git-imitate is clearly set in the automation domain. We want to avoid tedious and repetitive tasks, and we want this automation to be unobtrusive and invisible (but not opaque). That's why we need an autonomous software component running somewhere in the background, supporting us without requiring our intervention. In terms of operating systems: a service.

This service will take on the task of detecting and synchronizing checkouts or working copies. While users will not interact with the service directly, we need other user-facing components, like a CLI or even a GUI, to give users an interface for configuration, information, notification, conflict resolution, etc. Due to its autonomous nature, let's call this service the git-imitate-agent.
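As a sketch of the agent's core, assuming pluggable detection and transport (every callable here is a placeholder, not a defined API):

```python
# Skeleton of a git-imitate-agent main loop: detect local state, fetch
# the shared state, reconcile on differences. Detection and transport
# are injected so that the easy and the hard implementation stay
# swappable, as argued above.
import time
from typing import Callable, Optional

def run_agent(detect: Callable[[], dict],
              fetch_remote: Callable[[], dict],
              reconcile: Callable[[dict, dict], None],
              interval: float = 30.0,
              max_cycles: Optional[int] = None) -> None:
    """Poll-style loop; an event-driven variant would replace the sleep."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        local, remote = detect(), fetch_remote()
        if local != remote:
            reconcile(local, remote)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
```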

Storing Repository State

If we are looking to act on the changes we have detected, we have to make sure that each agent on each workstation is able to communicate those changes. While most of the time only one of our agents will be running (because you hopefully don't work on two workstations concurrently), this communication is asynchronous. Communication, and even worse asynchronous or pull-based communication, confronts us with a couple of challenges:

  • contracts for data exchange formats
  • inability to communicate (offline)
  • concurrency

While those problems are important to the implementation, the more interesting question for the user is: where is my data stored? And the definitive answer is: it depends. Or better: you have a choice! Asynchronous pull communication allows a transport as simple as a JSON file accessed via FTP, SSH, HTTP, git, Google Drive, Dropbox, S3, etc. This means we need an architecture flexible enough to support various storage solutions and friendly enough to allow users to extend the means of transport.
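To make the contract point concrete, here is one possible shape for such a per-repository state document; all field names are illustrative, since pinning down this contract is exactly what the draft still has to do:

```python
# Hypothetical per-repository state record exchanged between agents as
# JSON. The fields are illustrative, not a defined format.
import json
from dataclasses import asdict, dataclass

@dataclass
class RepoState:
    repository: str   # stable identifier, e.g. the remote URL
    branch: str       # branch checked out on the last active machine
    head: str         # commit hash HEAD pointed to
    dirty: bool       # were there uncommitted changes?
    updated_at: str   # ISO-8601 timestamp, to sort out concurrent updates

def to_json(state: RepoState) -> str:
    return json.dumps(asdict(state), indent=2)

def from_json(raw: str) -> RepoState:
    return RepoState(**json.loads(raw))
```

Because the record is plain JSON, any of the transports listed above can carry it; the timestamp field is the minimal handle for the concurrency challenge from the list.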

Syncing Repository State

WIP:

  • how to deal with work in progress? just switch? what about conflicts?
  • introduce the idea of a gui-client posting notifications

Sync Work in Progress

WIP:

  • how to effectively store work in progress files
  • when to sync? after successful switch to current branch?
  • how to sync them? automatically/manually: git stash, copy, git stash pop?
  • how to use the gui client and notifications?