{% img right http://www.hastexo.com/system/files/imagecache/sidebar/20120221105324808-f2df3ea3e3aeab8_250_0.png %} As I promised on Twitter, this is how I automate a GlusterFS deployment. I’m making a few assumptions here:
{% img https://docs.google.com/drawings/d/1XA7GH3a4BL1uszFXrSsZjysi59Iinh-0RmhqdDbt7QQ/pub?w=673&h=315 'simple gluster architecture' %}
The diagram above shows the basic hardware layout to start from. In terms of software, you just need a basic CentOS 6 install and a working Puppet setup.
I use a pair of Puppet modules (both in the Forge): thias/glusterfs and puppetlabs/lvm. The GlusterFS module *can* do the LVM config, but that strikes me as not the best idea: the UNIX philosophy of “do one job well” holds up for Puppet modules as well. You will also need my yumrepos module.
Pull those three modules into your modules directory:

```
cd /etc/puppet/
git clone git://github.com/chriscowley/puppet-yumrepos.git modules/yumrepos
puppet module install puppetlabs/lvm --version 0.1.2
puppet module install thias/glusterfs --version 0.0.3
```
I have specified the versions because those were the latest at the time of writing. The latest versions should work too, but leave a comment if you hit any differences. That gives you the core of what you need, so you can now move on to your nodes.pp.
```
class basenode {
  class { 'yumrepos': }
  class { 'yumrepos::epel': }
}

class glusternode {
  class { 'basenode': }
  class { 'yumrepos::gluster': }

  # Turn /dev/sdb into an LVM physical volume, then build vg0 and gv0 on it
  physical_volume { '/dev/sdb':
    ensure => present,
  }
  volume_group { 'vg0':
    ensure           => present,
    physical_volumes => '/dev/sdb',
    require          => Physical_volume['/dev/sdb'],
  }
  logical_volume { 'gv0':
    ensure       => present,
    volume_group => 'vg0',
    size         => '7G',
    require      => Volume_group['vg0'],
  }

  # Brick directory, with an SELinux type GlusterFS can work with
  file { [ '/export', '/export/gv0' ]:
    ensure  => directory,
    seltype => 'usr_t',
  }

  package { 'xfsprogs':
    ensure => installed,
  }
  filesystem { '/dev/vg0/gv0':
    ensure  => present,
    fs_type => 'xfs',
    options => '-i size=512',
    require => [ Package['xfsprogs'], Logical_volume['gv0'] ],
  }
  mount { '/export/gv0':
    ensure  => mounted,
    device  => '/dev/vg0/gv0',
    fstype  => 'xfs',
    options => 'defaults',
    require => [ Filesystem['/dev/vg0/gv0'], File['/export/gv0'] ],
  }

  class { 'glusterfs::server':
    peers => $::hostname ? {
      'gluster1' => '192.168.1.38', # Note these are the IPs of the other nodes
      'gluster2' => '192.168.1.84',
    },
  }
  glusterfs::volume { 'gv0':
    create_options => 'replica 2 192.168.1.38:/export/gv0 192.168.1.84:/export/gv0',
    require        => Mount['/export/gv0'],
  }
}

node 'gluster1' {
  include glusternode
  file { '/var/www': ensure => directory }
  glusterfs::mount { '/var/www':
    device => $::hostname ? {
      'gluster1' => '192.168.1.84:/gv0',
    },
  }
}

node 'gluster2' {
  include glusternode
  file { '/var/www': ensure => directory }
  glusterfs::mount { '/var/www':
    device => $::hostname ? {
      'gluster2' => '192.168.1.38:/gv0',
    },
  }
}
```
What does all that do? Starting from the top:
- The `basenode` class does all your basic configuration across all your hosts. Mine actually does a lot more, but these are the relevant parts.
- The `glusternode` class is shared between all your GlusterFS nodes. This is where all your server configuration is.
- `/dev/sdb` is turned into an LVM physical volume, on which the `vg0` volume group and `gv0` logical volume are built and formatted as XFS.
- `/export/gv0` is created (with the `usr_t` SELinux type) to act as the brick directory.
- The XFS filesystem is then mounted at `/export/gv0`.
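Once Puppet has applied all of that, a quick sanity check of the storage layout (just a sketch using the standard LVM and coreutils tools) would be:

```
# Confirm the PV, VG and LV that Puppet built
pvs /dev/sdb
lvs vg0

# Confirm the XFS brick is mounted where GlusterFS expects it
df -h /export/gv0
```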
This is now all ready for the GlusterFS module to do its stuff. All this happens in those last two sections.
`glusterfs::server` sets up the peering between the two hosts. This will actually generate an error, but do not worry: it happens because gluster1 successfully peers with gluster2, so when gluster2 then tries to peer with gluster1 it fails, as they are already peered.
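You can verify the peering from either node with the Gluster CLI (the exact output format varies between GlusterFS versions):

```
# Should list the other node, in state 'Peer in Cluster (Connected)'
gluster peer status
```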
`glusterfs::volume` creates a replicated volume across gluster1 and gluster2, having first ensured that the LV is mounted correctly. All that creates the server side very nicely. It will need a few passes to get everything in place, and will give a few red-herring errors while it does. It does work, however; all the errors are covered in the README for the GlusterFS module on the Puppet Forge, so do not panic.
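In practice those passes look something like this on each node (a sketch assuming a standard agent/master setup; adapt to however you trigger Puppet runs):

```
# Run the agent repeatedly until a run completes with no changes;
# the first pass or two will log the red-herring errors mentioned above
puppet agent --test
puppet agent --test

# Then confirm the volume was created and started
gluster volume info gv0
```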
A multi-petabyte scale-out storage system is pretty useless if the data cannot be read by anything, so let's use those nodes and mount the volume. This could also be done on a separate node (but once again I am being lazy); the process is exactly the same.
Each node mounts the volume at `/var/www` with `glusterfs::mount`, using any of the hosts in the cluster as the device.
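A simple way to prove the replication end to end is to write through the mount on one node and read it back on the other (a hypothetical smoke test; the file name is arbitrary):

```
# On gluster1
echo 'hello from gluster1' > /var/www/test.txt

# On gluster2, the same file should appear via the replicated volume
cat /var/www/test.txt
```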
Voilà, that should all pull together and give you a fully automated GlusterFS setup. The sort of scale that GlusterFS can reach makes this kind of automation absolutely essential in my opinion. It should be relatively easy to convert to Chef or Ansible, whatever takes your fancy; I have just used Puppet because of my familiarity with it.
This is only one way of doing this, and I make no claims to being the most adept Puppet user in the world. All I hope to achieve is that someone finds this useful. Courteous comments welcome.