RunBA
[Instructions for how to build a basic Broadcaster Application](https://a3fa.atlassian.net/servicedesk/customer/portal/4/topic/06a35a2b-437b-4253-8506-283cf4e7e542/article/261193759)
Installation
If this is your first time doing a fresh install
# Create a blank directory for your application
$ mkdir my-a3fa-app
# Change current working directory
$ cd my-a3fa-app
# Optional: log in to npmjs.com
$ npm login --scope=@a3fa
# Init package
$ npm init -y
# Get an NPM access token from the package owner (Albo)
# Add the following lines to the .npmrc file in your home folder (Users/your-username):
@a3fa:registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=YourAuthToken
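# Optional: a minimal sketch (assuming a Unix-like shell) that appends the two lines above to ~/.npmrc in one step
$ cat >> ~/.npmrc << 'EOF'
@a3fa:registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=YourAuthToken
EOF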
# Import dependencies
$ npx @a3fa/a3fa-broadcaster-starter-kit@latest
# Install dependencies
$ npm i
# Set up permissions
$ chmod +x node_modules/@a3fa/a3fa-atsc3-emulator/atsc3-receiver-emulator-macos-x64
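# Optional sanity check (assumes the install above succeeded): list the scoped packages that were pulled in
$ ls node_modules/@a3fa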
If you have already configured your .npmrc file and imported dependencies
- Clone this repository to your local machine via SSH.
- Run npm i to install the packages.
- Run chmod +x node_modules/@a3fa/a3fa-atsc3-emulator/atsc3-receiver-emulator-macos-x64 to grant execute permission to the emulator.
Launch Command {env:local,int,prod}
Windows:
npm run win conf="./emulator/sinclair-atscCmd-2024-{env}.mc.json"
macOS:
npm run macos conf="./emulator/sinclair-atscCmd-2024-{env}.mc.json"
Linux:
npm run linux conf="./emulator/sinclair-atscCmd-2024-{env}.mc.json"
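For example, to launch against the INT environment on macOS, substitute int for {env} (this assumes the corresponding config file ships under the emulator directory):
npm run macos conf="./emulator/sinclair-atscCmd-2024-int.mc.json"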
Overview of RunBA Infrastructure
The infrastructure for RunBA is minimal and outlined below. For the backend powering alerts, see Klaxon.
RunBA uses three main environments (INT, QA, and PROD) across two AWS accounts (ATSC3_NonProd, ATSC3_Prod). INT and QA belong to ATSC3_NonProd, and PROD belongs to ATSC3_Prod. There are also INT environments for individual developers or developer groups which can be used as part of a workflow to test changes in isolation before merging to INT.
Each of these environments is linked to a unique CloudFront distribution that acts as the entry point for the corresponding S3 origin. An S3 origin is a deployment bucket containing the contents of the public
directory in the source code. Each environment will also pull assets and station logos from the corresponding S3 buckets.
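As an illustration only (the bucket name below is a placeholder, and actual deployments go through the CI pipeline described next), you can inspect what a given environment is serving by listing its deployment bucket with the AWS CLI:
aws s3 ls s3://<deployment-bucket>/ --recursive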
Our CI pipeline defines several jobs, some of which are available to be run on Merge Request, and others which are only made available on Merge. We do not use Continuous Deployment. Merging to the master branch will not automatically trigger a deployment, and deployments to all environments always require a manual action.
Infrastructure changes should always be made using Terraform. Our remote Terraform backend is stored in S3 with a state-lock table in DynamoDB, which is a common configuration for AWS. For an additional layer of security, the state-lock table is server-side encrypted with a KMS key. If you encounter an error where you are unable to acquire the state lock, make sure that no other plan or apply jobs are running simultaneously. If this still doesn't work, you can add the -lock=false flag to your command to disable locking, or run force-unlock on the LockID to manually unlock the state.
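For example (the lock ID below is a placeholder; Terraform prints the real one in the lock error message):
terraform plan -lock=false
terraform force-unlock <LockID>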
Deployments with ATSC 1.0 versus ATSC 3.0
The CloudFront domain URLs are the base URLs for our deployments, and the path to the entry page is <CloudFront_Distribution_URL>/run3tv-common/index.html.
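As a quick sanity check that an environment is serving the app (the distribution URL is a placeholder for whichever environment you are targeting), you can request the entry page headers:
curl -I https://<CloudFront_Distribution_URL>/run3tv-common/index.html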
Deploying via 1.0
The run-ba-watermark-PROD deployment URL is embedded into the audio stream using Verance's VP1 Watermark technology and injected into the signal by packagers at stations. We simply supply Verance with this URL, along with GSIDs for each service and other optional information, to hook up Content Recovery.
Deploying via 3.0
The run-ba-PROD deployment URL is entered by Sinclair employees into the HELDs inside of the packagers of broadcast air chain software owned by Digicaster, Enensys, and Triveni. A HELD entry follows an XML schema and contains information about the entry page of the app and the appContextId, which is used to link services with the bridge file in the source code. Here is an example:
<?xml version="1.0" encoding="UTF-8"?>
<HELD
xmlns="tag:atsc.org,2016:XMLSchemas/ATSC3/AppSignaling/HELD/1.0/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<HTMLEntryPackage bbandEntryPageUrl="https://d3o3enj75dempy.cloudfront.net/run3tv-common/index.html" appContextId="tag:sinclairplatform.com,2020:SEALAB1:2165" />
</HELD>
The Run3TV framework that RunBA is built on establishes the WebSocket connection it needs when running in an ATSC 3.0 receiver environment or on the emulator.
Canary Deployments for ATSC 3.0
The QA environment acts as our KOMO PROD environment (and it should be renamed soon to reflect this). The purpose of this environment is to minimize the chances of deploying a BA with issues across the nation. Once a deployment has been tested in INT, the next step is to push a deployment to QA where our canary testers can verify that things are working according to plan. If something breaks, we can easily revert to a previous working deployment simply by running the pipeline against a prior version of the codebase. As QA is actually KOMO PROD, analytics will roll up to our production analytics property in GA.
CloudFront Cache Invalidation
One way to invalidate the BA's CloudFront cache is to use the AWS Management Console. Navigate to the CloudFront distribution, select the Invalidations tab, and select Create invalidation. Add the path for the object(s) you want to invalidate. To invalidate the entire cache, use the path "/*".
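Alternatively, the same invalidation can be created from the AWS CLI (the distribution ID below is a placeholder for the target environment's distribution):
aws cloudfront create-invalidation --distribution-id <DISTRIBUTION_ID> --paths "/*"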
Using Home Assistant to test the BA in SEALAB
Here is a short video on how to use Home Assistant (HA) in the Seattle Lab. Make sure Zscaler is turned on to use HA.