more virtual big data love w/ gluster, vagrant, and mattf's little fake disk hack 🙂
For those of you who need to spin up virtual gluster clusters for development and testing:
It uses Fedora 19, but since it's all Vagrant-powered, you don't need to grab or download a distro or ISO or anything: just clone the git repo, run vagrant up, and let Vagrant automagically pull down and manage your base box and set up the rest for you.
clone it here: https://forge.gluster.org/vagrant
So what does this do? This basically means that you can spin up 2 VMs, from scratch, by installing Vagrant and then, literally, typing:
git clone git@forge.gluster.org:vagrant/fedora19-gluster.git
cd fedora19-gluster
ln -s Vagrantfile_cluster Vagrantfile
vagrant up --provision
vagrant ssh gluster1
And destroy the same two node cluster by running:
vagrant destroy
How it works
To grok the basics of how it works:
– First: Check out the Vagrantfile. That file defines some of the basic requirements, in particular the static IPs that the peers have.
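For flavor, here is a minimal sketch of what a two-node Vagrantfile with static IPs looks like. This is a hypothetical fragment, not the repo's actual file; the 10.10.10.x addresses match the ones used in the ssh example later in the post:

```ruby
# Hypothetical two-node Vagrantfile fragment (the real one lives in the repo).
Vagrant.configure("2") do |config|
  config.vm.define "gluster1" do |node|
    node.vm.box = "fedora19"
    node.vm.network "private_network", ip: "10.10.10.11"
  end
  config.vm.define "gluster2" do |node|
    node.vm.box = "fedora19"
    node.vm.network "private_network", ip: "10.10.10.12"
  end
end
```

Static IPs matter here because the peers need to find each other by a known address when the provisioning scripts run.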
– How does the cluster install get coordinated? Now look at the provisioning scripts referenced from the Vagrantfile. Those run after the basic box is set up. There is a little bit of hacking to ensure that the peer probing only happens once the final box is set up, but otherwise it's pretty straightforward. (Actually, that could be de-hacked by simply having a second provision script on the 2nd node in the Vagrantfile; I only just learned on the Vagrant IRC channel that you can have multiple provisioners.)
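The "probe only from the last node" idea can be sketched roughly like this. This is a hypothetical illustration, not the repo's actual provision script, and it prints the probe command instead of running it (node names and the helper function are my own inventions):

```shell
#!/bin/sh
# Sketch: every node runs the same provisioner, but only the node that
# boots LAST performs the peer probe, so both glusterd daemons are
# guaranteed to be up by the time probing happens.
# probe_if_last NODE LAST_NODE PEER_IP -> emits the probe command only
# when NODE is the last node to come up (in a real script you would run
# the command instead of echoing it).
probe_if_last() {
    node=$1; last=$2; peer=$3
    if [ "$node" = "$last" ]; then
        echo "gluster peer probe $peer"
    fi
}

# gluster1 provisions first: no probe yet, the peer isn't running.
probe_if_last gluster1 gluster2 10.10.10.11
# gluster2 provisions last: both daemons are up, safe to probe.
probe_if_last gluster2 gluster2 10.10.10.11
```

With multiple provisioners (as noted above) you could instead attach the probe step only to the second node and drop the conditional entirely.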
– What about installing gluster? Thankfully, Fedora now ships all the gluster RPMs as supported packages in standard F19, so it was super easy to yum install them.
– Finally, the bricks: you will also see that there is a special methodology for creating gluster "bricks" on each machine: a simple little trick for setting up a fake disk (using the truncate command) [+1 to spinningmatt.wordpress.com at Red Hat for showing me this]!
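Here's a minimal sketch of the fake-disk trick itself. The path and size are illustrative, not what the repo uses; the point is that truncate makes a sparse file, which reports a big size while allocating almost no real blocks until data is written, so each VM gets a cheap "disk" to use as a brick:

```shell
#!/bin/sh
# Create a sparse 1 GB file: big apparent size, near-zero actual usage.
# (/tmp path and 1G size are illustrative.)
truncate -s 1G /tmp/fake-brick.img

# Compare the apparent size in bytes with the blocks actually allocated:
stat -c 'bytes=%s blocks=%b' /tmp/fake-brick.img

# On the VM the file would then be formatted and loop-mounted to serve
# as the brick, roughly along these lines:
#   mkfs.xfs /tmp/fake-brick.img
#   mkdir -p /export/brick1
#   mount -o loop /tmp/fake-brick.img /export/brick1
```

The formatting and mounting steps are left as comments because they need root; the sparse-file part is the whole trick.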
………………………… Example …………………………………..
[vagrant@gluster1 ~]$ sudo touch /mnt/glusterfs/a
[vagrant@gluster1 ~]$ ssh 10.10.10.12
[vagrant@gluster2 ~]$ ls /mnt/glusterfs/