

Envoy Node


This is a boilerplate to help you adopt Envoy.

There are multiple ways to configure Envoy. One convenient way to manage different egress traffic is to route it by hostname (using virtual hosts). By doing so, you can use a single egress port for all your egress dependencies:

static_resources:
  listeners:
  - name: egress_listener
    address:
      socket_address: 
        address: 0.0.0.0
        port_value: 12345
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          use_remote_address: true
          stat_prefix: http.test.egress
          route_config:
            name: egress_route_config
            virtual_hosts:
            - name: foo_service
              domains:
              - foo.service:8888  # Do not miss the port number here
              routes:
              - match:
                  prefix: /
                route:
                  cluster: remote_foo_server
            - name: bar_service
              domains:
              - bar.service:8888  # Do not miss the port number here
              routes:
              - match:
                  prefix: /
                route:
                  cluster: remote_bar_server
          http_filters:
          - name: envoy.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              dynamic_stats: true

But this brings a new problem: your code becomes verbose. You now have to:

  1. route traffic to 127.0.0.1:12345, where the egress port is listening
  2. set the host header for each request
  3. propagate the tracing information

This library helps you deal with these things elegantly.
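
For contrast, here is roughly what the manual approach looks like without this library. This is only an illustrative sketch: `node-fetch` and the Express-style handler are assumptions, and only a couple of Envoy's standard tracing headers are shown:

const fetch = require("node-fetch");

async function manualAwesomeAPI(req, res) {
  // 1. send the request to the local egress listener instead of the real host
  const response = await fetch("http://127.0.0.1:12345/path/to/rpc", {
    method: "POST",
    body: JSON.stringify({ message: "ping" }),
    headers: {
      // 2. tell Envoy which virtual host this request is really for
      host: "foo.service:8888",
      "content-type": "application/json",
      // 3. propagate the tracing headers by hand (x-request-id, the B3 headers, ...)
      "x-request-id": req.headers["x-request-id"] || "",
      "x-b3-traceid": req.headers["x-b3-traceid"] || "",
      "x-b3-spanid": req.headers["x-b3-spanid"] || "",
    },
  });
  res.send(await response.text());
  res.end();
}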

First, let's tell the library where the egress port is bound. A recommended way is to set this information on the ingress headers using request_headers_to_add:

request_headers_to_add:
- header:
    key: x-tubi-envoy-egress-port
    value: "12345"
- header:
    key: x-tubi-envoy-egress-addr
    value: 127.0.0.1

You can also set this via the constructor parameters of EnvoyContext.
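
For instance, based on the EnvoyContext constructor shown in the low-level API section below (headers or metadata first, then the egress port and address), you could pass the values explicitly; the numbers here are just the ones from the example config above:

const { EnvoyContext } = require("envoy-node");

// headers (or grpc.Metadata) first, then the egress port and address,
// which are used when the x-tubi-envoy-egress-* headers and the
// ENVOY_DEFAULT_EGRESS_* environment variables are not available
const context = new EnvoyContext(req.headers, 12345, "127.0.0.1");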

High level APIs

HTTP

For HTTP, you can create the client like this:

const { EnvoyHttpClient, HttpRetryOn } = require("envoy-node");

async function awesomeAPI(req, res) {
  const client = new EnvoyHttpClient(req.headers);
  const url = `http://foo.service:10080/path/to/rpc`;
  const request = {
    message: "ping",
  };
  const optionalParams = {
    // timeout 1 second
    timeout: 1000,
    // envoy will retry if the server returns HTTP 409 (for now)
    retryOn: [HttpRetryOn.RETRIABLE_4XX],
    // retry 3 times at most
    maxRetries: 3,
    // each retry will timeout in 300 ms
    perTryTimeout: 300,
    // any other headers you want to set
    headers: {
      "x-extra-header-you-want": "value",
    },
  };
  const serializedJsonResponse = await client.post(url, request, optionalParams);
  res.send({ serializedJsonResponse });
  res.end();
}

gRPC

For gRPC, you can create the client like this:

General RPC

const grpc = require("grpc");
const { envoyProtoDecorator, GrpcRetryOn } = require("envoy-node");

const PROTO_PATH = __dirname + "/ping.proto";
const Ping = grpc.load(PROTO_PATH).test.Ping;

// the original client will be decorated as a new class
const PingClient = envoyProtoDecorator(Ping);

async function awesomeAPI(call, callback) {
  const client = new PingClient("bar.service:10081", call.metadata);
  const request = {
    message: "ping",
  };
  const optionalParams = {
    // timeout 1 second
    timeout: 1000,
    // envoy will retry if the server returns DEADLINE_EXCEEDED
    retryOn: [GrpcRetryOn.DEADLINE_EXCEEDED],
    // retry 3 times at most
    maxRetries: 3,
    // each retry will timeout in 300 ms
    perTryTimeout: 300,
    // any other headers you want to set
    headers: {
      "x-extra-header-you-want": "value",
    },
  };
  const response = await client.pathToRpc(request, optionalParams);
  callback(undefined, { remoteResponse: response });
}

Streaming API

The streaming APIs are also decorated to send the Envoy context. You can likewise pass the optional params (as the last argument) for the timeout / retryOn / maxRetries / perTryTimeout features provided by Envoy (a minimal sketch follows the streaming examples below).

NOTE:

  1. The streaming APIs are not implemented with an async signature.
  2. The optional params (timeout etc.) are not tested, and Envoy does not document how it handles them for streaming calls.
Client streaming

const stream = innerClient.clientStream((err, response) => {
  if (err) {
    // error handling
    return;
  }
  console.log("server responses:", response);
});
stream.write({ message: "ping" });
stream.write({ message: "ping again" });
stream.end();

Server streaming

const stream = innerClient.serverStream({ message: "ping" });
stream.on("error", error => {
  // handle error here
});
stream.on("data", (data: any) => {
  console.log("server sent:", data);
});
stream.on("end", () => {
  // ended
});

Bidirectional streaming

const stream = innerClient.bidiStream();
stream.write({ message: "ping" });
stream.write({ message: "ping again" });
stream.on("error", error => {
  // handle error here
});
stream.on("data", (data: any) => {
  console.log("server sent:", data);
});
stream.on("end", () => {
  // ended
});
stream.end();
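
As mentioned above, the optional params can also be passed to these streaming calls. A minimal, untested sketch, assuming the decorated streaming methods accept the params object as the final argument:

// assumption: the optional params object is accepted as the last argument (untested)
const stream = innerClient.serverStream(
  { message: "ping" },
  { timeout: 1000, maxRetries: 2, perTryTimeout: 300 }
);
stream.on("data", data => console.log("server sent:", data));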

Low level APIs

If you want to have more control of your code, you can also use the low level APIs of this library:

const { envoyFetch, EnvoyContext, EnvoyHttpRequestParams, EnvoyGrpcRequestParams, envoyRequestParamsRefiner } = require("envoy-node");

// ...

const context = new EnvoyContext(
  headerOrMetadata,
  // specify the port if it cannot be determined from
  // - the `x-tubi-envoy-egress-port` header or
  // - the environment variable ENVOY_DEFAULT_EGRESS_PORT
  envoyEgressPort,
  // specify the address if it cannot be determined from
  // - the `x-tubi-envoy-egress-addr` header or
  // - the environment variable ENVOY_DEFAULT_EGRESS_ADDR
  envoyEgressAddr
);

// for HTTP
const params = new EnvoyHttpRequestParams(context, optionalParams);
envoyFetch(params, url, init /* init like original node-fetch */)
  .then(res => {
    console.log("envoy tells:", res.overloaded, res.upstreamServiceTime);
    return res.json(); // or res.text(), just use it as what node-fetch returned
  })
  .then(/* ... */);

// are you using the `request` library?
const yourOldRequestParams = {}; /* url or options */
request(envoyRequestParamsRefiner(yourOldRequestParams, context /* or headers, grpc.Metadata */));

// for gRPC
const client = new Ping(
  `${context.envoyEgressAddr}:${context.envoyEgressPort}`, // envoy egress address and port
  grpc.credentials.createInsecure()
);
const grpcParams = new EnvoyGrpcRequestParams(context, optionalParams);
const requestMetadata = grpcParams.assembleRequestMeta();
client.pathToRpc(
  request,
  requestMetadata,
  {
    host: "bar.service:10081"
  },
  (error, response) => {
    // ...
  })

Check out the detailed documentation if needed.

Context store

Do you find it too painful to propagate the context information through function call parameters?

If you are using Node.js v8 or above, here is a solution for you:

import { envoyContextStore } from "envoy-node"; // import the store

envoyContextStore.enable(); // call this once when your application initializes

// for each incoming request, store its context:
envoyContextStore.set(new EnvoyContext(req.headers));

// later, wherever you need the context, simply:
envoyContextStore.get();
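
For example, a deeply nested helper can pick the context back up without any parameters being passed down; this sketch reuses the low-level EnvoyHttpRequestParams / envoyFetch API shown earlier (the URL and timeout are just illustrative):

const { envoyContextStore, EnvoyHttpRequestParams, envoyFetch } = require("envoy-node");

async function deeplyNestedHelper() {
  // no headers or context parameter needed: read it back from the store
  const context = envoyContextStore.get();
  const params = new EnvoyHttpRequestParams(context, { timeout: 1000 });
  const res = await envoyFetch(params, "http://foo.service:10080/path/to/rpc");
  return res.json();
}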

IMPORTANT

  1. According to the implementation, the set method must be called exactly once per request, or you will get an incorrect context. Please check the documentation for more details. (TBD: We are working on a blog post covering the details.)
  2. According to the async_hooks implementation, destroy is not called if the code is using HTTP keep-alive. Please use setEliminateInterval to set an interval for deleting old context data, or you may have a memory leak. The default (5 minutes) is used if you don't set it (see the sketch below).
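
A minimal sketch of adjusting the cleanup interval; the unit of the argument is an assumption (the library only documents a 5-minute default), so verify it against the typings before relying on it:

const { envoyContextStore } = require("envoy-node");

envoyContextStore.enable(); // once, at application init
// assumption: the interval is given in milliseconds; check the typings to confirm
envoyContextStore.setEliminateInterval(10 * 60 * 1000);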

For dev and test, or migrating to Envoy

If you are developing the application locally, you probably do not have Envoy running and may want to call the service directly:

Either:

new EnvoyContext({
  meta: grpcMetadata_Or_HttpHeader,

  /**
   * For dev or test environments, we usually don't have Envoy running. Setting directMode = true
   * makes all the traffic be sent directly.
   * If you set directMode to true, envoyManagedHosts will be ignored and treated as an empty set.
   */
  directMode: true,

  /**
   * To make it easier to migrate services to Envoy step by step, we can route traffic to Envoy
   * only for the services that have already been migrated. Fill this set with the migrated services.
   * This field defaults to `undefined`, which means all traffic will be routed to Envoy.
   * If this field is `undefined`, this library will also try to read it from `x-tubi-envoy-managed-host`.
   * You can set it in the Envoy config, like this:
   *
   * ```yaml
   * request_headers_to_add:
   * - header:
   *     key: x-tubi-envoy-managed-host
   *     value: hostname:12345
   * - header:
   *     key: x-tubi-envoy-managed-host
   *     value: foo.bar:8080
   * ```
   *
   * If you set this to an empty set, no traffic will be routed to Envoy.
   */
  envoyManagedHosts: new Set(["some-hostname:8080"]),
});

or:

export ENVOY_DIRECT_MODE=true # 1 works as well

Contributing

To develop or run the tests of this library, you probably need to:

  1. Have an envoy binary in your PATH, or:

$ npm run download-envoy
$ export PATH=./node_modules/.bin/:$PATH

  2. To commit your code changes:

$ git add . # or the things you want to commit
$ npm run commit # and answer the commit message prompts accordingly

  3. For each commit, the CI will auto-release based on the commit messages. To keep the version aligned with Envoy, use fix instead of feature unless we want to bump the minor version.

License

MIT

Credits