The problem and the solution
I work at a company that creates websites and applications. I joined them in 2008, when there were 2 front-end developers and 3 back-end, including me. The company slowly grew to 7 back-end and 6 front-end developers. The following problem started to get in our way more and more: we were all working on one single development server, which means that from time to time we would step on each other's toes. What happened is, one guy would open a file, the other guy would open the same file, and whoever saves and closes first is the loser. Also, that server had to have every possible service, program and PHP extension imaginable, because it needed to support a lot of diverse projects.
If you have ever worked in that kind of environment, you know exactly how we felt. So, what can one do to solve that problem? The best thing to do is to set up a local environment using virtual machines, and configure them for each project. For creating virtual machines we use Vagrant, a tool that builds an appliance for VirtualBox (amongst others) and gives us a default, tabula rasa machine. Next, we need to configure that machine with all the packages and configuration needed to run a certain app.
So, how does one do that? Well, you could SSH into your box, run a couple of commands, edit a couple of files, then run some more commands, then edit some more files. By the time you are running files and editing commands, you start to look for a better solution. We developers tend to automate things, so why not automate setting up the server? That is called provisioning.
There are a number of provisioning systems out there, the most common being:
- Puppet
- Chef
- Ansible
- SaltStack
Ansible differs from the other three by being agentless, that is to say, no software needs to be installed on the servers we're configuring.
I started to learn and use Puppet in the beginning, but it threw me off with its verbose configuration and Ruby. Not that I don't like Ruby, but I am more of a YAML guy.
While searching for an alternative to Puppet, I came across SaltStack, and it was love at first sight. Let's dive in!
SaltStack – what is it made of?
Ok, now that we know the major players in this field, let's take a closer look at SaltStack.
- SaltStack is comprised of a main server that communicates with all the servers we're configuring. That server is called the "Master"
- All the servers we are configuring are called “Minions”
- Execution modules are the heart of SaltStack – those guys actually run stuff. There are a lot of modules, for example, modules for installing packages, running commands, configuring Apache or running Composer, to name a few
- State modules describe what state those execution modules must create
- Pillars provide variables for states
- Grains provide information about Minions
Yes, a lot to grasp, so let's take a deeper look at each of these.
The Master server keeps states, pillars and top files. It talks to Minions via ZeroMQ, SSH or RAET. ZeroMQ is the default message transport, responsible for SaltStack being able to provision hundreds or thousands of servers in no time, asynchronously. SSH is good for talking to systems where you cannot install the agent, for example routers, printers and such. RAET (Reliable Asynchronous Event Transport) is still in early development as of this writing.
Minions provide information about themselves (Grains), listen for commands that the Master sends, and execute modules.
Top files are configuration files that provide structure for pillars and states; they tell the Master which states or pillars need to be sent to which Minion for each environment.
State files describe a set of desired states that a system needs to be in, for example “apache needs to be installed and running”.
Pillars are a set of variables that are used in state files. For example, apache on redhat based distributions is named httpd, while on debian based it is named apache2. It is nice to be able to put that name in a variable so state configuration is agnostic to the name of packages.
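As a quick sketch of the idea (the file name and keys below are hypothetical, not from the original article), a pillar could hold the package name:

```yaml
# /srv/pillar/apache.sls -- hypothetical pillar file
apache:
  pkg_name: apache2
```

A state file would then reference {{ pillar['apache']['pkg_name'] }} instead of hardcoding apache2.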
Lastly, we have Grains, which are sets of variables provided by the Minions. Grains provide us with information like the operating system name and family, disk and memory information, number of CPU cores and so on.
I will constrain this tutorial to Ubuntu, but you can consult the documentation for your Linux distribution.
Add the SaltStack PPA:
sudo add-apt-repository ppa:saltstack/salt
Install master on the server that is going to be our main server which communicates with minions:
sudo apt-get update && sudo apt-get install salt-master
Install minion on servers we’re going to provision with master:
sudo apt-get update && sudo apt-get install salt-minion
Configure master and minion for communication:
# Tell minions where the master is by editing /etc/salt/minion on the minion machine
master: 10.10.0.1

# Run master
master $ sudo salt-master    # or (re)start salt-master service

# Run minions
minion $ sudo salt-minion    # or (re)start salt-minion service
After a successful connection, the minion will report:
[ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
When you see this, go to your master, stop the salt-master service and list the keys with sudo salt-key. Then accept the minion key with sudo salt-key --accept minion.dev and start the service again. You have now enabled a secured connection between master and minions. To test the communication, ensure the minion is still running and, on the master server with the salt-master service running, issue sudo salt '*' test.ping. You should see True for all connected minions.
There is an option to run SaltStack in masterless mode, which is handy for working with your Vagrant box, or even for setting up your own system. To set up the masterless environment, stop the minion daemon and edit the minion configuration file:
# Tell minion that there is no remote file client by editing /etc/salt/minion on the minion machine
file_client: local
From this point on, we are going to write the state configuration on the master, but you can opt for writing states on the masterless minion; just remember to replace salt '*' with salt-call --local in the commands.
By default, configuration is YAML, which is great; better still, configuration files are first run through the Jinja templating language, which is even greater because it gives us support for conditional statements, for loops and variables. Jinja is to Python what Twig is to PHP, so if you have worked with the Twig templating language, or you use Jinja, guess what: you already know how to write SaltStack configuration!
There are other ways to write the configuration: for example, you can use JSON instead of YAML, write a Python DSL (Pyobjects), or even plain Python. Best part: XML is not supported! Go, SaltStack!
Let's see a couple of examples.
Basic example – install Apache
In our /srv/salt folder, create a top file named top.sls. Also, create a folder apache with an init.sls file inside, giving us /srv/salt/top.sls and /srv/salt/apache/init.sls.
Put the contents from the gist in top.sls and init.sls respectively, and run salt '*' state.highstate. This way, you are telling all the minions ('*') to put themselves into highstate, that is, to execute all states.
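The gists are not embedded in this version, so here is a rough sketch of what they could contain (assuming a Debian/Ubuntu minion, hence the apache2 package name):

```yaml
# /srv/salt/top.sls
base:
  '*':
    - apache
```

```yaml
# /srv/salt/apache/init.sls
apache2:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache2
```

The top file assigns the apache state to every minion; the state ensures the package is installed and its service is running.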
Pillars and grains to the rescue
We know that packages are not named the same way across different distributions. Apache is the first example that comes to mind: on RedHat based distributions it is httpd, on Debian based ones it is apache2, and on some it is simply apache. Let's rewrite our basic example with this in mind.
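A sketch of that rewrite, using the os_family grain to pick the package name (the exact mapping below is my assumption, not the article's original gist):

```yaml
# /srv/salt/apache/init.sls
{% if grains['os_family'] == 'RedHat' %}
{% set apache = 'httpd' %}
{% elif grains['os_family'] == 'Debian' %}
{% set apache = 'apache2' %}
{% else %}
{% set apache = 'apache' %}
{% endif %}

{{ apache }}:
  pkg.installed: []
  service.running:
    - require:
      - pkg: {{ apache }}
```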
Including / extending states
It is a great practice to be DRY, and with SaltStack you can write your states to be extendable. That way, your elaborate state for installing Apache that manages package repositories, sets up services, watchers and whatnot can be reused across projects, enabling you to specify only the custom, project-specific configuration directives:
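For illustration only (state and file names here are made up), a project state could include the generic apache state and extend it:

```yaml
# /srv/salt/project/init.sls
include:
  - apache

extend:
  apache2:
    service:
      - watch:
        - file: /etc/apache2/sites-enabled/project.conf

/etc/apache2/sites-enabled/project.conf:
  file.managed:
    - source: salt://project/files/project.conf
```

The extend block adds a watch on the project's vhost file to the service defined in the reused apache state, so the service restarts when the vhost changes.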
Whole example: Nginx
In this example we have two states and an extra file. The first state is nginx, where we use pkgrepo to set up the PPA for nginx and mark it as a requirement for the pkg module. Next, we tell the pkg module to install nginx and, lastly, we tell the service module to run the nginx service and to watch for changes to the package, the sites-enabled directory and the nginx.conf file. If anything changes in those places, the service will be restarted. Neat!
The second state manages the nginx.conf file. With this state, we're telling Salt that it should create a file at the given path, from the provided source. The source is a file on the Salt file server, which points to a location on our master server. Additionally, that file is treated as a Jinja template.
Lastly, we have the nginx.conf.jinja file itself, where we can see how pillars and grains can also be used in configuration files: setting the number of worker processes, in this example.
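The original gists are not embedded in this version, so here is a hedged reconstruction of what the states and the template could look like (the PPA name, paths and the worker_processes line are assumptions):

```yaml
# /srv/salt/nginx/init.sls
nginx_repo:
  pkgrepo.managed:
    - ppa: nginx/stable

nginx:
  pkg.installed:
    - require:
      - pkgrepo: nginx_repo
  service.running:
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-enabled

/etc/nginx/sites-enabled:
  file.directory: []

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/nginx.conf.jinja
    - template: jinja
```

```nginx
# /srv/salt/nginx/nginx.conf.jinja (fragment)
user www-data;
worker_processes {{ grains['num_cpus'] }};
```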
To wrap it up, let's see what is possible when using SaltStack. You can install and configure packages, start and stop services, and execute commands on remote servers. This gives you the possibility to set up your deployment procedure by:
- pulling code from git
salt '*' git.pull /var/www/application
- installing dependencies with composer
salt '*' composer.install /var/www/application no_dev=True optimize=True
- and doing a cache warm-up (Symfony example)
salt '*' cmd.run 'cd /var/www/application && php app/console cache:clear --env=prod --no-debug'
You can set up servers in all major clouds, such as AWS, Linode, DigitalOcean, Rackspace and others, with Salt Cloud, so if you are running Salt for your Vagrant based dev environment, you can use that configuration for production as well, with certain environment based differences, of course. Furthermore, since the configuration is just a bunch of files, you can version them with git and roll back your server to any previous state. You can even fork server configurations: https://github.com/saltstack-formulas holds a lot of formulas for various services such as Redis, Postgres and so on.
I hope this post helped you understand SaltStack and encouraged you to provision your dev box.