Before jumping to the scripts and udev rules, there are a few things I'm taking "as is" from my Vagrant setup, and this Vagrantfile example will help you understand/recreate what I'm doing.
When the OS boots up, all of the services start. I'm using php5-fpm and nginx, and both of those services require certain paths to exist to run properly. The NFS shares, however, are mounted in the Vagrant guest OS only after the services have started.
I'm using a very simple script to restart the services. I'm saving it to /root/.udev-mount-restart-services.sh with Ansible provisioning.
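The script body itself isn't included in this copy of the post; a minimal sketch of what it could look like (the service names come from the text above, everything else is an assumption):

```shell
#!/bin/sh
# /root/.udev-mount-restart-services.sh -- hypothetical reconstruction.
# Restart the services that need the NFS-mounted paths to exist.

restart_services() {
    for svc in php5-fpm nginx; do
        # "|| echo" keeps the loop going if one restart fails
        service "$svc" restart || echo "could not restart $svc"
    done
}

restart_services
```

Since udev will later invoke this file through an absolute path to the shell, the script itself can stay this trivial.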
But to make this work, the script has to run after the directory is mounted over NFS. This is where I used udev.
First of all, I wanted to know which event and subsystem are triggered when I mount the directory. To get this information I opened two terminals and used vagrant ssh to get into the guest OS.
In one terminal I started udevadm monitor to get information about the events triggered by mounting/unmounting the /project directory.
In the second terminal I became root (with sudo su in my case) and checked what was mounted with df -h (because I'm too lazy to type it out).
So, here's what I did and what the effect was:
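The terminal capture didn't survive in this copy; the session went roughly like this (the paths are from the text, the bdi detail is my assumption about what udevadm monitor typically reports for NFS mounts):

```shell
# Terminal 1 -- watch events while the mount changes:
#   udevadm monitor
#
# Terminal 2 -- as root, unmount and remount the share:
#   umount /project
#   mount /project      # or the full mount -t nfs ... line that df -h showed
#
# Terminal 1 then prints "remove" and "add" events; for NFS mounts these
# come from the bdi (backing device info) subsystem.
```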
With this information (and a wiki on how to create udev rules) I could finally create a rule and make it run the script that restarts nginx and php5-fpm:
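The rule file itself is also missing from this copy; a plausible reconstruction, assuming the monitor output showed an "add" event in the bdi subsystem, would be:

```
# /etc/udev/rules.d/50-vagrant-mount.rules (reconstruction)
# When a new backing device appears (e.g. an NFS mount), restart the services.
SUBSYSTEM=="bdi", ACTION=="add", RUN+="/bin/bash /root/.udev-mount-restart-services.sh"
```

RUN+= executes the command as part of handling the udev event, which is exactly the hook needed to fire the restart script once the NFS share is mounted.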
With provisioning I'm putting the script in /root/.udev-mount-restart-services.sh and the udev rule in /etc/udev/rules.d/50-vagrant-mount.rules.
After vagrant up, the default vagrant user has exactly the same uid and gid as my local user, so there are no more problems with reading/writing in /project.
There are also a few caveats to this method:
- the udev event is not triggered when bringing the machine back up after vagrant suspend
- if you're provisioning this within the Vagrantfile, you might need an additional vagrant reload to make it work after provisioning (I'm not sure, since I'm using Packer to build our own boxes with scripts like this preinstalled)