@joergmittaglawo/dmvconfig
DMV Configuration scripts for Lawo V__matrix Distributed Multiviewers.
DMV-Configuration
I rewrote the DMV scripts completely from scratch, without basing them on Aleksei's scripts at all.
This version is based on three fundamental ideas:
- The configuration should be complete: there should be no need for extra steps such as setting up IP addresses, choosing the FPGA, etc.
- The configuration should be only data; no scripting required.
- The DMV will be controlled by VSM (or another control system) and theWALL.
The only things that need to be pre-configured are the IP addresses that the script uses to connect to the cluster nodes. Everything else, including the IP addresses of the media interfaces, the hostname, etc., is done by the script.
All the configuration can be done purely in JSON. It is, however, still possible to use scripts, e.g. to generate hostnames and IP addresses for large clusters.
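For example, instead of listing every node by hand, a dynamic config script for a large cluster could generate the entries in a loop. The sketch below is purely illustrative: the field names (`hostname`, `media_ip`) and the `module.exports` convention are assumptions, not the actual schema these scripts use.

```js
// Purely illustrative sketch: the field names and the export convention
// are assumptions, not the actual dmvconfig schema.
const nodes = [];
for (let i = 1; i <= 16; i++) {
  nodes.push({
    hostname: `dmv-blade-${String(i).padStart(2, "0")}`, // dmv-blade-01 ... dmv-blade-16
    media_ip: `10.10.20.${100 + i}`,
  });
}
module.exports = nodes;
```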
There is no magic for generating SDPs, for assigning sources to MOs, for creating auto-layouts on heads, etc. It is assumed that this will be taken care of by VSM (or another control system) and theWALL.
You should never need to modify anything outside of the `config` directory.
If you take a look into the `examples` directory, you will see a file named `current_config.json`. This file defines the current configuration, which is just a subdirectory. Also in the `examples` directory, you will see that I basically scoured the SSE SharePoint for existing DMV configurations and converted them to these new scripts, including the Videohouse Tokyo 2020, WDR Euro 2020, and Gamecreek Gridiron and Celtic ones. There is also an example of a single-node DMV as a replacement for a Classic MV, and a configuration that converts Roland's whole testkit into a DMV.
What I would like from you is to look at those configurations and tell me whether the way I structured them, and the way you can seamlessly choose between static JSON and dynamic scripts, makes sense to you or not. Also, tell me if you see any missing features or flexibility that you need. (Other than documentation; I know that is missing as well.)
If you want to, you can also test it, of course. All you need is a somewhat recent version of Node.js installed. Then just edit `config/current_config.json` to point to the right configuration (e.g. `"biskit"` or `"gamecreek/ob4celtic"`) and run

`node bin/dmv.js`
And that's (hopefully) all there is to it.
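For reference, a minimal `current_config.json` could look like the following. The key name here is an assumption on my part; only the subdirectory value comes from the examples above. The comment is an annotation only, the real file is plain JSON.

```jsonc
{
  // Hypothetical key name; the value selects a subdirectory of examples.
  "config": "gamecreek/ob4celtic"
}
```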
You will notice that I have not implemented a lot of the functionality from the existing scripts: no SDP patching into the sources, no routing of sources to MOs, no routing of MOs to PIPs, no automatic layouts. That functionality is very nice and useful for development labs, test labs, PoCs, and demos, but I do not deem it necessary for commissioning. What is still missing, but I will definitely put back in, is the head-to-SDI routing.
So much for what is missing. What is new? The main thing I put in for now is a distinction between the IP address that the script uses to connect to the nodes and the IP addresses that the nodes use to connect to each other. We have a couple of customers now that use an out-of-band control network but want the intra-cluster communication to be on the streaming network, because their control network is not set up for multicast. This is very easy: in `cluster.json`, you configure which interface is going to be used for scripting and which interface is going to be used for the cluster.
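A minimal sketch of what I mean, with purely illustrative key names (the real `cluster.json` schema may differ); the comments are annotations only:

```jsonc
{
  // Hypothetical keys: which interface the script connects through,
  // and which interface carries the intra-cluster (multicast) traffic.
  "script_interface": "control",
  "cluster_interface": "media"
}
```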
Another thing that I did is naming. In `nodes.json`, you can set a hostname for every node. The script will set the hostname (which is also used for LLDP and as the prompt when you log in via SSH), and it will also use it for `system.usrinfo.short_desc`, which appears in syslog messages, in the little banner on top of the advanced web GUI, and in the window title of the browser. Also, all the sources now have names like "2110-20 Video 1", "2022-6 Video 1", "Head 1", "Mipmap 1", and so on. This one will cause some trouble, since the CoolTool currently searches for "source #0", "source #1", etc., but ultimately I believe this will make configuration easier and also make the cluster easier to support for the support guys. Note: the Gadget Server uses this name only for the EmBER+ description, not for the parameter name. So this will not break any existing configuration, because the gadget path is still the same (`source[0]`, etc.); it only affects the display name in the gadget explorer, which, however, is what the CoolTool is based on.
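As a sketch, per-node naming in `nodes.json` could look something like this; apart from the idea of a per-node hostname, which is described above, the structure and key name are assumptions:

```jsonc
[
  // Hypothetical structure; only the per-node hostname itself is real.
  { "hostname": "dmv-blade-01" },
  { "hostname": "dmv-blade-02" },
  { "hostname": "dmv-blade-03" }
]
```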
Just a reminder: do not use this for any deployment. The only tools allowed for deployments are the scripts from the DMV Bundle and the vscript version that comes with them. In particular, I have not been able to test these scripts on any cluster larger than 3 blades, and I have not been able to verify that the DMV actually works, since I was only connected via VPN and had no VSM, no theWALL, no sources, and no way of seeing the heads.
Anyway, please let me know what you think!