My previous entry described how to test Ansible roles with molecule and gitlab-ci; however, most of the time you probably want to test only the role you've just edited. To help with that I've updated my repo on github, and I will show you how to make it work.
I've changed my .gitlab-ci config file so that it tests a single Ansible role with molecule if the commit description contains the name of the role to test; without a commit description it tests all the molecule-enabled roles, and when no role matches the commit description it fails.
The most important change in the file is shown below:
script:
  - |
    cd playbooks/roles/
    if [[ ! -z "$CI_COMMIT_DESCRIPTION" ]]; then
      cd "$CI_COMMIT_DESCRIPTION"/ && molecule test
    else
      for role in $(find . -type d -name "molecule" | cut -d'/' -f 2); do
        cd "$role" && molecule test
        cd ..
      done
    fi
As you can see, I've just replaced the basic script that tested a single role with this more complex one that does the job.
How to use it? It is as simple as committing to your repo; however, when you want to test just one particular role you have to add a new line at the end of the commit message and put the name of the role in that new line, like in this example (just a test commit message and description):
This is just a test commit
basic
In the above example the first line is the commit message (This is just a test commit) and the second line is the commit description that tells gitlab to test only the basic role.
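If you commit from the command line, one way to produce such a message/description pair is to pass -m twice (git turns each -m into its own paragraph):

git commit -m "This is just a test commit" -m "basic"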
Another example can be found here. There, Testing new pipeline config with commit message is the commit message and basic is the commit description that tells gitlab to test just the basic role.
Hopefully someone can make use of this trick. Of course feel free to clone and play with the repo and molecule.
When you are developing Ansible playbooks you will reach the point where you have to (or want to) automate testing of the playbooks/roles before promoting them to production. You may even want to set up a CI/CD system that plays your playbooks against your production environment automatically, but of course only after testing them first. Just follow this blog entry and I will show you, in a couple of simple steps, how to fully automate testing Ansible playbooks using a gitlab-ci pipeline and molecule.
Molecule is designed to aid in the development and testing of Ansible roles.
Molecule provides support for testing with multiple instances, operating systems and distributions, virtualization providers, test frameworks and testing scenarios.
Molecule encourages an approach that results in consistently developed roles that are well-written, easily understood and maintained.
Why use molecule instead of just ansible-lint, Ansible syntax checking and yamllint? Because with molecule you can spin up a docker image with a given operating system and test playbooks against it. It will do linting and syntax checking, but it will also take care of checking whether the playbooks/roles are idempotent. What is more, you can write your own scripts that will be executed to verify the playbooks ran properly and the system is configured the way you wanted. In my opinion the basic tests that molecule runs are enough for the beginning. I will try to include more complex tests in the next articles, so please stay tuned.
Installing gitlab-ci runner and docker components
In order to test Ansible playbooks you will have to install the gitlab-ci runner and docker components on the hosts that will run all the tests. The simplest way to install and register a gitlab runner is to follow the instructions on gitlab's webpage. Please be sure to use the shell executor, as other types of gitlab runners are not suitable for running docker containers (I know there are the docker-machine and docker executors, but we would not be able to run docker images inside a docker environment, so we have to run docker images on top of the operating system).
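For reference, a non-interactive registration with the shell executor could look something like the sketch below; the URL and token are placeholders for your own gitlab instance:

# hedged sketch: register a runner with the shell executor
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR-REGISTRATION-TOKEN" \
  --description "molecule-shell-runner" \
  --executor "shell"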
The next step is to install the docker components on your freshly installed and registered gitlab runner. To install docker just visit docker's webpage and follow the instructions.
I am using a CentOS 7 host as a gitlab runner, and after installing the above software you will not be able to run the gitlab runner with molecule out of the box. To be able to run tests you have to fix/install some dependencies.
First of all python-six package is quite out of date in the CentOS 7 installation so you have to update it manually:
pip install six --upgrade
The second thing is that you have to install docker python bindings so that molecule can spin up new docker containers:
pip install docker-py
After that you have a fully working gitlab runner that is able to run molecule tests. Of course you can configure the above things in a VM, which would be nicer than messing with the system's python libraries, or even use virtualenv to run the tests (idea for the next blog post? :) ).
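A rough sketch of the virtualenv variant, if you prefer to keep the system python untouched:

# create and activate an isolated environment, then install the test deps
virtualenv ~/molecule-venv
source ~/molecule-venv/bin/activate
pip install six docker-py molecule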
Adding molecule tests to your roles
Hopefully you've followed the official way to install molecule on the host (pip or a virtualenv); otherwise you will end up with an old molecule installation (version 1.x) which is not compatible with the newest version. All the setup and configuration steps presented here were done on the newest available version (at the time of writing, 2.19.0).
To start testing we have to either init a new role with molecule within our Ansible setup or add a molecule config to an existing role.
Init new molecule role
If you are creating a new role from scratch you can use:
molecule init role --role-name my-new-role --driver-name docker
Where my-new-role is the name of the new role and docker is the driver we will be using. It will also create directories for tasks/vars/handlers and it will set up a basic config file for molecule testing.
Add molecule config to existing role
If you already have a role and just want to add molecule testing to it, use the following command:
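(For molecule 2.x, run from inside the role's directory; default is molecule's standard scenario name:)

molecule init scenario --scenario-name default --role-name my-role-name --driver-name docker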
It will set up the molecule testing files for your my-role-name with the docker driver (as we are going to use docker containers to test our playbooks/roles).
Configuring molecule
After we've added the basic config files for the roles in the above section, we have to configure molecule. The whole configuration of the molecule tests is done in the molecule.yml file (for example in ansible/playbooks/roles/my-role-name/molecule/default/molecule.yml). Here is an example of the configuration file (not the original one, it's modified by me):
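Something along these lines; the centos/systemd image name and the systemd-related options are illustrative, not the exact content of the repo's file:

---
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: centos7
    image: centos/systemd
    command: /usr/sbin/init
    privileged: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
provisioner:
  name: ansible
  lint:
    name: ansible-lint
verifier:
  name: testinfra
  lint:
    name: flake8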
driver: as you can see, the driver is set to docker, so the playbooks are tested in docker containers.
platforms: in this section we can set up as many platforms as we wish molecule to test on.
In the above example I am setting up a systemd-enabled CentOS 7 docker image (that's why there are a lot of options under the centos7 section) to test my roles against. Of course you can add other platforms like CentOS 6, Ubuntu (any version) or Debian. It all depends on your roles and what you would like to test them against.
Basically with this setup we can run molecule manually to test our role:
cd ansible/playbooks/roles/my-role-name
molecule test
After invoking the above command, molecule will:
lint all the *.yml files found in the given location/role using the following tools:
yamllint
flake8
ansible-lint
try to destroy all orphaned docker instances that could potentially interfere with the tests
resolve all the dependencies using ansible-galaxy
check syntax
prepare docker images
converge the docker instances using the given roles
test idempotence of the given role
execute verification scripts from molecule/default/tests/
destroy all the created docker instances
If all tests pass without any issues, your role is written correctly and can be considered production-ready.
Configuring gitlab runner (.gitlab-ci.yml file)
To execute molecule tests automatically when new changes are pushed to the git repo on our gitlab server, we have to configure the gitlab runner. It is as simple as putting one file in the main directory of your repository. The file should be named .gitlab-ci.yml.
Here is an example of the gitlab runner configuration file:
stages:
  - molecule

molecule:
  stage: molecule
  tags:
    - shell
  script:
    - cd playbooks/roles/basic/ && molecule test
The most important settings/sections are:
stages: we declare what stages we are going to have in our pipeline. In this case just testing with molecule, but we could add more stages (every stage runs after the preceding one has been successful, so stages can be used to play a role against production servers)
molecule: this describes and configures the molecule stage. The most important subsection is script; the rest could be omitted. Script is just a set of commands used to run the tests in this stage.
All other settings are not so important for our purpose and you can read about them here.
Summary
After following all the configuration steps your gitlab pipeline should be configured properly, so after pushing new changes to the git repo you should see a green tick in the web gui near your latest commit, or a red cross if the tests didn't pass.
I've prepared a small repository on github showing all the functionality described here. The repo can be found here.
Please feel free to clone/comment/play with it.
P.S. Hopefully this entry will be helpful. I have some ideas for the next blog entries, so stay tuned. VY 73!
Check_mk is quite nice for monitoring hosts in your own network; however, if you have a remote server that you would like to monitor, it's not so secure, because the check_mk agent sends all its data as clear text. Of course you can limit the connection to only one remote IP with a firewall, or even with xinetd, but what about monitoring hosts running on a dynamic external IP, or securing the data transfer between hosts? You simply cannot put a DNS name into a firewall or xinetd rule; that's why you can use stunnel to secure the connection.
Stunnel will act in two ways:
securing data transfer through the internet
adding an authentication layer in front of the check_mk agent
Please follow this how-to; it will show you how to secure the connection between a check_mk server running CentOS 7 and a check_mk agent running on CentOS 7. This how-to does not describe how to install check_mk or the check_mk agent on the hosts.
Both sides (remote host and check_mk server)
First of all install stunnel from the CentOS 7 base repo:
yum install stunnel
After the installation has completed you have to create a systemd unit file for this service in /etc/systemd/system/stunnel.service:
[Unit]
Description=SSL tunnel for network daemons
After=syslog.target network.target

[Service]
ExecStart=/usr/bin/stunnel
Type=forking
PrivateTmp=true

[Install]
WantedBy=multi-user.target
The cert file mentioned in the config file will be generated on the remote side, so just copy the cert after you've generated it; then you can enable and start the service.
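To generate a self-signed certificate for stunnel you can use something like this sketch (adjust the path and validity to your needs):

# create a combined key+cert PEM file for stunnel
openssl req -new -x509 -days 3650 -nodes \
  -out /etc/stunnel/stunnel.pem -keyout /etc/stunnel/stunnel.pem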
Basically you should add the host as usual, just configuring some additional parameters:
The IPv4 address should be changed to localhost, and you should create a rule for the agent's TCP port for this host:
Adding more hosts this way
You can add as many hosts as you wish; all you need to do is duplicate the [check_mk_remote] section on the monitoring host (of course changing the name and port for each section) and add more rules in check_mk.
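For illustration, a client-mode section on the monitoring host could look like this sketch; the hostname and the accept port are placeholders, and 6556 is the default check_mk agent port:

; hedged sketch of stunnel.conf on the check_mk server side
client = yes
cert = /etc/stunnel/stunnel.pem

[check_mk_remote]
accept = 127.0.0.1:6557
connect = remote-host.example.com:6556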
You can even add other services this way. Just modify the stunnel.conf file accordingly.
If you have a problem with deleting tons of files in Linux (when for example you've come across this error: /bin/rm: Argument list too long.), you can try find, perl scripts and so on (you can find some more info here), but if the methods described there are not successful, you can try my way of coping with this problem using rsync.
First of all you need to tune your system a little bit (run these commands as root in the console):
Now use rsync to copy an empty directory over your directory (warning: it will delete all the contents of the destination directory):
rsync -a --delete /empty/ /todelete/
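Note that the empty source directory has to exist, so create it first if needed:

mkdir /empty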
It can take a lot of time, and maybe you want to use ionice to be nice to other processes and services that are using the disk during this operation. It is also good to run this command in screen.
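For example (ionice -c3 gives the idle I/O class, screen -d -m starts the job detached):

screen -d -m ionice -c3 rsync -a --delete /empty/ /todelete/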
After the above command has finished its job, just do:
rmdir /todelete
Of course this can also be an I/O-consuming operation, so you may also want to use the ionice command here.
The above method helped me delete almost 100 million files in one directory where find, perl and other approaches were not successful.
Recently my Zabbix MySQL database was corrupted. Unfortunately I needed the historical data (the database backup was too old), so there was only one way: restore everything I could from the corrupted database. On the other hand I had every table in a separate file (innodb_file_per_table=1 in my.cnf), which was very helpful.
There are three ways to restore corrupted InnoDB databases (you should decide which one to choose; sometimes you will need to use more than one):
manually importing files into a newly created database
using Percona InnoDB recovery tools
using innodb_force_recovery
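(For reference, the third method is controlled by a my.cnf setting along these lines, typically raised step by step from 1 to 6 until mysqld starts and the data can be dumped; the value shown is just an example:)

[mysqld]
innodb_force_recovery = 4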
For the above methods you will need the files from your datadir (for example: /var/lib/mysql), so copy them somewhere.
Manually importing files
For this method you need the .ibd files from MySQL's datadir and you need to know how the table was created (the whole CREATE statement).
The first step is to create a new database, so log in to MySQL and create it:
create database corrupted;
Now create the table:
use corrupted;
CREATE TABLE `maintenances` (
`maintenanceid` bigint unsigned NOT NULL,
`name` varchar(128) DEFAULT '' NOT NULL,
`maintenance_type` integer DEFAULT '0' NOT NULL,
`description` text NOT NULL,
`active_since` integer DEFAULT '0' NOT NULL,
`active_till` integer DEFAULT '0' NOT NULL,
PRIMARY KEY (maintenanceid)
) ENGINE=InnoDB;
And here is the tricky part: you need to discard the tablespace by invoking this command in MySQL:
use corrupted;
ALTER TABLE maintenances DISCARD TABLESPACE;
The next step is to copy the old file to the correct place (using the OS shell, not MySQL):
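Assuming the default datadir and the example table above, that could look like this sketch (the backup path is illustrative):

# copy the saved .ibd file in place of the discarded tablespace
cp /root/mysql-backup/zabbix/maintenances.ibd /var/lib/mysql/corrupted/maintenances.ibd
chown mysql:mysql /var/lib/mysql/corrupted/maintenances.ibd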
After that you need to log in to MySQL again and import the new tablespace:
use corrupted;
ALTER TABLE maintenances IMPORT TABLESPACE;
In some cases after the above steps you will be able to dump this table using the mysqldump tool, but very often MySQL will produce this error:
ERROR 1030 (HY000): Got error -1 from storage engine
When that happens, simply go to the MySQL log file and see why it is happening. In my case it was:
InnoDB: Error: tablespace id in file './zabbix/maintenances.ibd' is 263, but in the InnoDB data dictionary it is 5.
If the above error occurred, you need to start from the beginning with another method.
Percona InnoDB recovery tools
First you need the tools: simply visit the Percona site, download the archive, unpack it and build the tools (you will find more info on how to do this inside the archive). After that you are ready to repair the above MySQL error. To do this follow the next steps:
Drop the table from the corrupted database and create it again (the same way it was created before).
If you are facing one of the following problems after upgrading CentOS and the Zabbix agent to the latest stable release (CentOS 6.5 and Zabbix agent 2.2.1):
you cannot get autodiscovered items' data (for example: network interface bandwidth)
Zabbix cannot collect data from MySQL (using the MySQL template provided by the Zabbix authors)
Recently I needed to configure more advanced monitoring of Dell servers with hardware RAID, because of a disk failure in one of my servers. I came across a lot of information on how to configure OMSA on CentOS, but every description and how-to says you need to install every component of OMSA to run it properly, so I decided to write my own note on how to do it without installing unnecessary software from the OMSA repository. Here are the steps to install OMSA snmpd support:
First of all install the OMSA repository on your CentOS system. To do this follow the instructions on this site:
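At the time of writing, Dell's bootstrap script set up the repo; a sketch (the exact package set may differ between OMSA releases):

# enable the Dell OMSA repo and install only the base instrumentation
# (which provides the dataeng services used below)
wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
yum install srvadmin-base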
Of course you need to set up snmpd to run as you wish (you will find how to install and configure snmpd on google ;) ), and you need to configure snmpd to get values from OMSA: just edit this file:
/etc/snmp/snmpd.conf
And add this line at the end:
smuxpeer .1.3.6.1.4.1.674.10892.1
You are almost ready to read the OMSA variables via snmp; just fire up all the needed services:
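(With the SysV init scripts that would be:)

service dataeng start
service snmpd restart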
And add those services to start automatically during boot:
chkconfig dataeng on
chkconfig snmpd on
Now you can check if everything works as you wish (this command is for the default configuration of snmpd on CentOS, and of course you need the net-snmp-utils package installed):
snmpwalk -v 2c -c public 127.0.0.1 .1.3.6.1.4.1.674.10892.1
If everything is working as needed, you should see a lot of lines of information.
Now it is time to configure Zabbix to read this data. I've found a nice Zabbix template on the Zabbix forums. Just download it from here: https://www.zabbix.com/forum/showthread.php?t=22054 and import it into Zabbix.
The last step to get it working in Zabbix is to link this template to the servers you need monitored.
As I mentioned earlier (Zabbix templates for Raspberry PI), I want to develop a Zabbix template for the Raspberry Pi, but unfortunately my Raspberry Pi cannot be used for such things right now, so I decided to run Raspbian in QEMU.
Here are the steps to run raspbian in QEMU on Fedora 19.
First of all you need to install some packages and their dependencies:
yum install qemu-system-arm
Now we are ready to download the things we will need to run Raspbian in QEMU.
# Linux Kernel for QEMU
wget http://xecdesign.com/downloads/linux-qemu/kernel-qemu
# Raspbian Wheezy
wget http://raspberry.mythic-beasts.com/raspberry/images/raspbian/2013-07-26-wheezy-raspbian/2013-07-26-wheezy-raspbian.zip
Now we are ready to set everything up. It will take a few simple steps:
# unzip Raspbian Wheezy image
unzip 2013-07-26-wheezy-raspbian.zip
# put everything in one place:
mkdir rpi-qemu
cp kernel-qemu rpi-qemu/
cp 2013-07-26-wheezy-raspbian.img rpi-qemu/
cd rpi-qemu
# before mounting the image you need to figure out some things:
file 2013-07-26-wheezy-raspbian.img
# now get "startsector" value for parition 2 and multiply by 512
mount 2013-07-26-wheezy-raspbian.img -o offset=<multiplied_value> /mnt
# modify some files
cd /mnt/etc/
# edit this file: ld.so.preload
nano ld.so.preload
# comment out the line that is there by putting "#" before it
# save this file
# umount image:
umount /mnt
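For example, if file reports that partition 2 starts at sector 122880, the offset is 122880 * 512 = 62914560 (illustrative numbers; use the startsector your image actually reports):

# compute the byte offset from the start sector
echo $((122880 * 512))   # prints 62914560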
After the above steps we are ready to fire Raspbian up.
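The usual invocation for this kernel-qemu build looks like this (a sketch based on the xecdesign instructions; adjust memory and paths as needed):

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb \
  -no-reboot -serial stdio \
  -append "root=/dev/sda2 panic=1" \
  -hda 2013-07-26-wheezy-raspbian.img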
After issuing the above command you should see something like this:
After booting for some time, QEMU will give you a root shell to fsck the corrupted filesystem; just enter these commands:
# check filesystem
fsck -y /dev/sda2
# reboot system
shutdown -r now
After those steps you are ready to power it up again and start using it almost like a normal Raspberry Pi.
Default username is pi and password is raspberry.
Of course there are some points that must be mentioned:
ping will not work, but networking will
you cannot use the commands from /opt/vc; they just don't work (why? because all of them are hardware-related commands)
You can log in to your Raspbian via ssh from localhost:
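This assumes QEMU was started with a port redirect for ssh, e.g. by adding -redir tcp:2222::22 to the command line:

ssh -p 2222 pi@localhost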
Hello,
recently I was searching for Zabbix templates for the Raspberry Pi, but I didn't succeed. I've decided I will make some Zabbix templates for the Raspberry Pi myself. Now I am preparing a LAB for it. So stay tuned: I will post the templates soon.