Amazon Web Services provides highly available hosting for our applications, but are they prepared to run on more than one server?
When you design a new application you can follow the AWS best-practice guides, but if the application is inherited it either requires many modifications or needs a shared POSIX storage that behaves as if it were local.
That's where GlusterFS comes into play: besides adding storage flexibility with horizontal growth in distributed mode, it offers a replicated mode, which lets you replicate a volume (or a single folder of a filesystem) across multiple servers.
Before carrying out a proof of concept with two servers in different availability zones, replicating an EBS volume formatted with ext4, let's first list the cases where GlusterFS should not be used:
The Amazon Linux AMI includes GlusterFS packages in the main repository, so there's no need to add external repositories. If yum complains about the GlusterFS packages, just enable the EPEL repo. We can install the packages and start the services on each of the nodes:
yum install fuse fuse-libs glusterfs-server glusterfs-fuse nfs-utils
chkconfig glusterd on
chkconfig glusterfsd on
chkconfig rpcbind on
service glusterd start
service rpcbind start
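As a quick sanity check, we can confirm that the daemon is installed and set to start on boot (a minimal sketch; the exact output of these commands will vary):

gluster --version
service glusterd status
chkconfig --list glusterd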
The FUSE and NFS packages are needed to mount GlusterFS volumes; we recommend using the NFS mode for compatibility.
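For reference, a GlusterFS volume can be mounted either with the native FUSE client or through Gluster's built-in NFS server. A minimal sketch of both options, using the webs volume and the /home/webs mount point that we set up later in this article:

# Native FUSE client: the client talks to all bricks directly
mount -t glusterfs localhost:/webs /home/webs
# NFS v3 through the Gluster NFS server: the mode we use here
mount -t nfs -o vers=3 localhost:/webs /home/webs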
We prepare an ext4 partition, though any POSIX-compatible filesystem would do. In this case the partition points to an EBS volume; we could also use ephemeral storage, bearing in mind that we would need to keep at least one instance running to keep the data consistent. These commands must be run on each node:
mkfs.ext4 -m 1 -L gluster /dev/sdg
echo -e "LABEL=gluster\t/export\text4\tnoatime\t0\t2" >> /etc/fstab
mkdir /export
mount /export
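Before moving on, we can check that the brick filesystem is in place (a small verification sketch based on the label and mount point above):

# Verify the label, the fstab entry and the mounted filesystem
blkid /dev/sdg
grep gluster /etc/fstab
df -h /export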
Now select one of the nodes and run the commands that create the GlusterFS volume. The instances must have full network access to each other, with no firewall or security group limitations:
gluster peer probe $SERVER2
gluster volume create webs replica 2 transport tcp $SERVER1:/export $SERVER2:/export
gluster volume start webs
gluster volume set webs auth.allow '*'
gluster volume set webs performance.cache-size 256MB
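We can verify that the peer was probed and that the volume is up with the following commands (their output will differ per environment):

# Check that the remote peer is connected
gluster peer status
# Check the volume definition, its bricks and the options we set
gluster volume info webs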
We must replace $SERVER1 and $SERVER2 with the instances' DNS names, $SERVER1 being the local instance and $SERVER2 the remote one. We can use either the public or the internal DNS name, since Amazon resolves both to the internal IP from within EC2. If we are not working with VPC we don't have fixed internal IPs, so we would have to use a dynamic DNS service or assign Elastic IPs to the instances.
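As an illustration only, the local DNS name can be read from the EC2 instance metadata service instead of typing it by hand; the remote name shown here is a hypothetical placeholder:

# On the node where we run the gluster commands ($SERVER1 is the local instance)
SERVER1=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
# $SERVER2 must be the remote instance's DNS name (hypothetical example value)
SERVER2=ip-10-0-1-23.eu-west-1.compute.internal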
Two non-default options were set: the first is auth.allow, which allows access from any IP since we will restrict access with Security Groups, and the second is performance.cache-size, which allocates part of the memory as cache to improve performance.
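If we wanted to restrict access at the GlusterFS level as well, auth.allow accepts comma-separated addresses with wildcards; for example, assuming a hypothetical 10.0.0.0/16 internal range:

# Allow only clients coming from the (hypothetical) 10.0.*.* range
gluster volume set webs auth.allow '10.0.*'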
The volume is now created; next we have to choose a mount point (creating it if it doesn't exist), mount the volume, and modify /etc/fstab if we want it mounted automatically on reboot. This must be done on both nodes:
mkdir -p /home/webs
mount -t nfs -o _netdev,noatime,vers=3 localhost:/webs /home/webs
# If we want to mount it automatically, we need to modify /etc/fstab
echo -e "localhost:/webs\t/home/webs\tnfs\t_netdev,noatime,vers=3\t0\t0" >> /etc/fstab
chkconfig netfs on
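A quick way to confirm the mount is active and writable on each node (a generic check, nothing GlusterFS-specific):

# Confirm the NFS mount and try a throwaway write
df -h /home/webs
mount | grep /home/webs
touch /home/webs/.write-test && rm /home/webs/.write-test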
Now we can store content in /home/webs and it will be automatically replicated to the other instance. We can force a sync by running a simple ls -l on the folder in question, since stat() forces GlusterFS to check the health of the replica.
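To see the replication in action we can run a small test: create a file on one node and list the directory on the other (the file name is just an example):

# On node 1: create a test file in the replicated volume
echo "hello from node 1" > /home/webs/replication-test.txt
# On node 2: listing the directory triggers stat() and shows the replicated file
ls -l /home/webs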
http://www.gluster.org/community/documentation/index.php/Main_Page