Meteor Galaxy Auto Scaling (mgas)
NodeJS command line tool to monitor and auto-scale Meteor Galaxy
⚠️ Important: we have been using it in production for at least two months, but please monitor your containers to make sure everything is behaving as expected, and please report any issues. ⚠️
Features
- Monitoring: reads Galaxy and Meteor APM metrics
- Alerts: reports containers and apps not running as expected
- Auto-scaling: takes actions when specific conditions are met
- Slack notifications: updates, alerts and auto-scaling messages
How it works
We use puppeteer to read the Galaxy and Meteor APM dashboards and then execute actions based on your desired settings.
How to use
Set up a job in your CI server (or equivalent) to call the mgas (meteor-galaxy-auto-scaling) command from time to time. It's probably a good idea to use a short period, such as every 2 minutes, so you are always monitoring your containers.
If you need help configuring your CI, check our configuration examples. Please open an issue if you are having a hard time so we can improve the documentation. If you have already configured your CI, please open a Pull Request adding the instructions to the configuration examples.
First, install mgas (meteor-galaxy-auto-scaling) using yarn or npm:
yarn global add @pathable/meteor-galaxy-auto-scaling
npm install -g @pathable/meteor-galaxy-auto-scaling
Then run it, passing a settings file with your alerts and auto-scaling rules:
mgas --settings settings.json
You can have different settings for different purposes.
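For example, on a server with cron (a hypothetical setup; adjust the schedule and paths to your environment), entries like these run it every 2 minutes, using separate settings files for separate purposes:

*/2 * * * * mgas --settings /home/deploy/production-settings.json
*/2 * * * * mgas --settings /home/deploy/staging-settings.json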
Updates
Check the log of changes.
Settings
{
  "appName": "your app host (required)",
  "username": "your Galaxy username (required)",
  "password": "your Galaxy password (required)",
  "slackWebhook": "your Slack webhook URL",
  "silentSlack": false,
  "simulation": false,
  "persistentStorage": "full path to where you want to store the scraped info",
  "infoRules": {
    "send": true,
    "channel": "#galaxy-updates"
  },
  "alertRules": {
    "channel": "#alerts",
    "messagePrefix": "@channel",
    "maxInContainers": {},
    "maxInApp": {
      "pubSubResponseTime": 200,
      "methodResponseTime": 300,
      "cpuUsageAverage": 1,
      "memoryUsageByHost": 1500,
      "sessionsByHost": 5
    }
  },
  "autoscaleRules": {
    "channel": "#auto-scaling",
    "containersToScale": 2,
    "minContainers": 2,
    "maxContainers": 10,
    "addWhen": {
      "pubSubResponseTimeAbove": 300,
      "methodResponseTimeAbove": 300,
      "cpuAbove": 50,
      "memoryAbove": 70,
      "sessionsAbove": 50
    },
    "reduceWhen": {
      "pubSubResponseTimeBelow": 300,
      "methodResponseTimeBelow": 300,
      "cpuBelow": 25,
      "memoryBelow": 25,
      "sessionsBelow": 30
    }
  },
  "minimumStats": 5,
  "puppeteer": {
    "headless": true
  }
}
Auto scale rules
The auto-scaling (autoscaleRules) behavior is meant to smartly adjust the containers on the Galaxy server, taking into account the data collected there and a predefined configuration.
The following actions are supported:
- add containers (conditions are configured in the addWhen JSON key)
- reduce containers (conditions are configured in the reduceWhen JSON key)
The available conditions are "[pubSubResponseTime|methodResponseTime|cpu|memory|sessions][Above|Below]". Check which value each condition refers to: cpu, memory and sessions come from the Galaxy panel, and pubSubResponseTime and methodResponseTime come from the APM panel.
The conditions are evaluated against the average of the property across the active containers. Active containers are those that are running; containers that are starting or stopping are ignored.
Multiple conditions can be provided, and they are evaluated differently depending on the action (see the sketch after this list):
- the add action evaluates with OR: one match is enough to add a new container
- the reduce action evaluates with AND: one mismatch is enough to not remove a container
- The addWhen and reduceWhen behaviors never go outside a containers count range. This range is described by the minContainers and maxContainers settings.
- The addWhen and reduceWhen behaviors won't run while a scaling operation is already happening. If a condition still passes, it will run on the next execution.
- A Slack message is sent any time a scaling behavior is triggered, if you set a Slack webhook; the messages are sent to the default webhook channel. You will receive messages like this
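The snippet below is a minimal sketch of the evaluation described above, not the tool's actual code; the decideScaling function, the averages object and the runningContainers count are made up for illustration.

// Minimal sketch of the addWhen/reduceWhen evaluation.
// `averages` holds the metric averages across active containers and
// `rules` is the parsed autoscaleRules object from settings.json.
function decideScaling(averages, runningContainers, rules) {
  const { addWhen, reduceWhen, minContainers, maxContainers } = rules;

  // "add" evaluates with OR: one match is enough to add a container.
  const shouldAdd =
    averages.pubSubResponseTime > addWhen.pubSubResponseTimeAbove ||
    averages.methodResponseTime > addWhen.methodResponseTimeAbove ||
    averages.cpu > addWhen.cpuAbove ||
    averages.memory > addWhen.memoryAbove ||
    averages.sessions > addWhen.sessionsAbove;

  // "reduce" evaluates with AND: one mismatch is enough to keep the container.
  const shouldReduce =
    averages.pubSubResponseTime < reduceWhen.pubSubResponseTimeBelow &&
    averages.methodResponseTime < reduceWhen.methodResponseTimeBelow &&
    averages.cpu < reduceWhen.cpuBelow &&
    averages.memory < reduceWhen.memoryBelow &&
    averages.sessions < reduceWhen.sessionsBelow;

  // Never go outside the minContainers/maxContainers range.
  if (shouldAdd && runningContainers < maxContainers) return 'add';
  if (shouldReduce && runningContainers > minContainers) return 'reduce';
  return 'noop';
}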
Alerts
You can set maximum limits for container metrics (CPU, memory and connected clients) and also for Meteor app metrics (response time for publications and methods).
Example:
"alertRules": {
"maxInContainers": {
"cpu": 1,
"memory": 10,
"clients": 5
},
"maxInApp": {
"pubSubResponseTime": 200,
"methodResponseTime": 300
}
},
"minimumStats": 5,
You will receive an alert like this when the current value has been above the maximum expected at least minimumStats times in a row. So if you run mgas every 2 minutes and use minimumStats as 5, you will get an alert when your metric has been above the maximum expected for at least 10 minutes.
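As a rough illustration (a hypothetical sketch, not the tool's code; shouldAlert and recentValues are made up for the example), the check could look like this:

// Alert only when the metric exceeded its limit `minimumStats` times in a row.
function shouldAlert(recentValues, limit, minimumStats) {
  if (recentValues.length < minimumStats) return false;
  return recentValues.slice(-minimumStats).every((value) => value > limit);
}

// Running every 2 minutes with minimumStats = 5 means roughly 10 minutes
// of sustained breach before an alert is sent.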
minimumStats is set at the top level of the settings because we may also use this information for auto-scaling in the future; for now, auto-scaling does not consider the minimumStats value.
Info rules
- Set the channel; by default, messages go to the default webhook channel
- You will receive messages like this
Advanced
Remote settings
If you need dynamic settings coming from an external location, such as an API, you can configure:
"remote": {
"url": "https://yourapi.com/v1/auto-scaling?yourkey=XXX&anySetting=YYY"
}
Then the JSON returned by this API will be merged (using lodash.merge) with your local settings. If the request to this URL throws an error, the local settings will be used anyway and a console.error (Error getting remote options from ${url}) will be printed.
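A minimal sketch of this behavior, assuming a Node version with a global fetch and that remote values take precedence over local ones (the loadSettings name is made up for the example):

// Merge remote settings over local ones; fall back to local on failure.
const merge = require('lodash.merge');

async function loadSettings(localSettings) {
  const { url } = localSettings.remote || {};
  if (!url) return localSettings;
  try {
    const response = await fetch(url);
    const remoteSettings = await response.json();
    // Remote values win over local ones for overlapping keys.
    return merge({}, localSettings, remoteSettings);
  } catch (error) {
    console.error(`Error getting remote options from ${url}`);
    return localSettings;
  }
}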
Developing
If you want to add new features that involve reading new data from Galaxy or Meteor APM, you will probably want to run puppeteer while watching its actions; to do so, change the headless setting to false.
"puppeteer": {
"headless": false
}
Troubleshooting
Fixing Puppeteer on Ubuntu 16.04
sudo apt-get install libx11-xcb1 libxcomposite1 libxi6 libxext6 libxtst6 libnss3 libcups2 libxss1 libxrandr2 libasound2 libpangocairo-1.0-0 libatk1.0-0 libatk-bridge2.0-0 libgtk-3-0
Contributions
Please open issues to discuss improvements and report bugs. Also feel free to submit PRs; it is always a good idea to discuss your PR idea in an issue first.
Contributors ✨
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!