by Jason Dusek
Server inventory systems provide a way to manage and query servers en masse. As part of their function, systems like Puppet, Bcfg2 and Chef Server provide server inventories that group servers together and associate them with configuration data and basic state information. Those familiar with enterprise environments may think of Remedy, a purpose-built inventory system, or of the humble LDAP machine inventories that are part of directory service products like OpenDirectory or Active Directory.
For platform automation, a server inventory provides a way to find servers by type or function, a way to run commands en masse on a group of servers, and a way to associate helpful mnemonics with servers as needed. In the past, I’ve used YAML files to describe small clusters, where each nickname, IP address and grouping was assigned by hand. A simple CLI tool maps the names and groupings to resolvable addresses. To connect as the admin user to the utility0 server, you might use:
ssh admin@$(little-tool utility0)
The utility may even provide a way to run commands on multiple machines, get uptimes across a cluster, &c.:
little-tool utility-servers --run cat /var/log/cron.log
Here, you find yourself writing a lot of code around job management, SSH option handling and general systems plumbing. The helpful name-based configuration offered by SSH is of no use to you, since the tool resolves the nicknames to IPs or to some kind of generic, provider-assigned domain name. Using a job automation tool like GNU Parallel requires thoughtful filtering and reformatting of the generated names. Your browser doesn’t resolve the nicknames, either; you end up deploying and maintaining DNS in addition to your tool’s system of nicknames, so that developers and staff can easily link to staging environments and internal services, like issue tracking or reporting tools.
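To make the reformatting problem concrete, here is a hedged sketch. The tool’s name, its “nickname address” output format and the addresses are all invented for illustration, and `xargs … echo` stands in for a real `parallel ssh` so the generated commands are simply printed:

```shell
# Hypothetical stand-in for `little-tool utility-servers`, which we
# imagine printing "nickname address" pairs, one per line.
little_tool_output() {
  printf 'utility0 10.0.1.5\nutility1 10.0.1.6\n'
}

# A fan-out tool only wants the addresses, so every invocation needs
# a filtering step before the real work can begin.
little_tool_output |
  awk '{ print $2 }' |
  xargs -I{} echo ssh admin@{} uptime
```

With GNU Parallel the shape is the same: the awk filter has to run first, on every invocation, because the tool’s output was never meant for other programs to consume.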
You may also find that deploying your little tool is not so simple: you wisely chose a high-level language and a variety of libraries when authoring it, but these complicate deployment relative to an all-in-one shell script.
Why We Made Zonify
These considerations led us in the direction of DNS as a server inventory storage system. The little tool can be split into two halves: something to regenerate the DNS zone and push it into service (which probably needs to run in only one or two places), and a little shell wrapper around `dig` for querying server groups in an idiomatic way. Zonify, a Ruby library, provides the first part of the system: it translates EC2 instance metadata, like tags, security groups and load balancer membership, into DNS records, using SRV records to store groups of servers. Each EC2 instance gets a unique CNAME; simple rewrite rules allow you to ensure that the “Name” tag is translated to a short name just above the root of your zone.
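The second half, the `dig` wrapper, can then be very small. As a sketch of the query side (the zone and record names below are invented, not Zonify’s actual naming scheme), `dig +short SRV` prints one “priority weight port target” line per group member, so extracting hostnames is a one-line awk filter:

```shell
# srv_targets: reduce SRV answers to bare hostnames.
# In real use, input would come from something like:
#   dig +short SRV utility.srv.example.com
# which prints lines such as "0 0 0 ip-10-0-1-5.example.com." --
# field 4 is the target, with a trailing dot to strip.
srv_targets() { awk '{ sub(/\.$/, "", $4); print $4 }'; }

# Offline stand-in for the dig output:
printf '0 0 0 ip-10-0-1-5.example.com.\n0 0 0 ip-10-0-1-6.example.com.\n' |
  srv_targets
```

Because the results are real, resolvable hostnames rather than bespoke nicknames, they drop straight into ssh, SSH config, GNU Parallel or a browser with no further translation.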
Zonify creates a prototype zone from EC2 information and constructs a changeset against a zone stored in Route 53, adding and removing records as needed to bring a chosen domain suffix into conformance with what’s in EC2. The zone information, from Route 53 as well as EC2, can be stored in YAML files, as can the changeset; this makes Zonify helpful for working with your Route 53 zone even when you aren’t generating records from EC2. Although we haven’t tried it, the YAML format could easily be translated to the zone file format for BIND, TinyDNS or another DNS server.
Check out the Zonify README for more information.
Making Life Easier for Everyone
While not a substitute for a true server inventory system, DNS records provide the basics: listing all of your servers, finding the servers in a particular group and working with nicknames. On your way to true automation and management, and as a fallback, these facilities help a lot. By dynamically generating the zone from EC2 instance metadata, Zonify makes it easy for everyone in your organization to contribute to maintaining your server inventory: use the built-in EC2 tagging facility and your record is there.
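Putting the pieces together, the cron-log query from earlier can be phrased directly against DNS groups. This is a hedged, offline sketch: all names are invented, `group_hosts` substitutes a canned answer for what would really be a `dig +short SRV` lookup, and `echo` stands in for ssh so the generated commands are visible:

```shell
# Hypothetical sketch: with groups in DNS, the fan-out driver is a DNS
# lookup rather than a bespoke tool. In real use, group_hosts would run
#   dig +short SRV "$1"
# here a canned SRV answer is substituted so the sketch runs offline.
group_hosts() {
  printf '0 0 0 ip-10-0-1-5.example.com.\n0 0 0 ip-10-0-1-6.example.com.\n' |
    awk '{ sub(/\.$/, "", $4); print $4 }'
}

group_hosts utility.srv.example.com |
  xargs -I{} echo ssh admin@{} cat /var/log/cron.log
```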
Have a look at Zonify on GitHub.