We’ve just wrapped up a great week at LinuxCon Europe 2016 in Dublin with a strong showing from the Gluster community! Talks included BitRot Detection in GlusterFS – Gaurav Garg, Red Hat & Venky Shankar; Advancements in Automatic File Replication in Gluster – Ravishankar N; Gluster for Sysadmins – Dustin Black; Open Storage in the Enterprise with …Read more
We organized Docker Global Hack Day at the Red Hat Office on 19th Sep’15. Though there were lots of RSVPs, the turnout for the event was lower than expected. We started the day by showing the recording of the kick-off event. The … Continue reading →
The Oh-My-Vagrant project became public about one year ago and at the time it was more of a fancy template than a robust project, but 188 commits (and counting) later, it has gotten surprisingly useful and mature. james@computer:~/code/oh-my-vagrant$ git rev-list … Continue reading →
The Gluster Community currently provides GlusterFS packages for the following distributions: 3.5 3.6 3.7 Fedora 21 ¹ × × Fedora 22 × ¹ × Fedora 23 × × ¹ Fedora 24 × × ¹ RHEL/CentOS 5 × × RHEL/CentOS 6 × × × RHEL/CentOS 7 × × × Ubuntu 12.04 LTS (precise) × × Ubuntu …Read more
Git submodules are actually a very beautiful thing. You might prefer the word powerful or elegant, but that’s not the point. The downside is that they are sometimes misused, so as always, use with care. I’ve used them in projects … Continue reading →
Once again, time for the annual trek to Portland, Oregon for OSCON — perhaps for the last time! Next year, OSCON is going to be in Austin, TX — which seems like a bit of a mistake to me. Portland and OSCON go together like milk and cookies. If you’re going to be at OSCON, […]
The Gluster community is pleased to announce the release of GlusterFS-3.4.7. The GlusterFS 3.4.7 release is focused on bug fixes: 33608f5 cluster/dht: Changed log level to DEBUG 076143f protocol: Log ENODATA & ENOATTR at DEBUG in removexattr_cbk a0aa6fb build: argp-standalone, conditional build and build with gcc-5 35fdb73 api: versioned symbols in libgfapi.so for compatibility 8bc612d …Read more
The release source tar file and packages for Fedora {20,21,rawhide}, RHEL/CentOS {5,6,7}, Debian {wheezy,jessie}, Pidora2014, and Raspbian wheezy are available at http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/ (Ubuntu packages will be available soon.) This release fixes the following bugs. Thanks to all who submitted bugs and patches and reviewed the changes. 1184191 – Cluster/DHT : Fixed crash due to null …Read more
Recently I got myself an APC NetShelterCX mini. It is a 12U rack, with integrated fans for cooling. At the moment it is populated with some ARM boards (not rack mounted), their PDUs, a switch and (for now) one 2U server. Surprisingly, the fans of the N…
In the past I used to test with RAM-disks, provided by /dev/ram*. Gluster uses extended attributes on the filesystem, which makes it impossible to use tmpfs. While thinking about improving some of the GlusterFS regression tests, I noticed that Fedora 20 (and possibly earlier versions too) does not provide the /dev/ram* devices anymore. I could not find the needed kernel module quickly, so I decided to look into the newer zram module.
Getting zram working seems to be pretty simple. By default one /dev/zram0 is made available after loading the module. But, if needed, the module offers a parameter num_devices to create more devices. After loading the module with modprobe zram, you can do the following to create your high-performance volatile storage:
# SIZE_2GB=$(expr 1024 \* 1024 \* 1024 \* 2)
# echo ${SIZE_2GB} > /sys/class/block/zram0/disksize
# mkfs -t xfs /dev/zram0
# mkdir -p /bricks/fast
# mount /dev/zram0 /bricks/fast
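Before putting a brick on it, it does not hurt to check that the device really got the configured size and is mounted; the value below is simply the 2GB from above expressed in bytes:
# cat /sys/class/block/zram0/disksize
2147483648
# df -h /bricks/fast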
With this mountpoint it is now possible to create a Gluster volume:
# gluster volume create fast ${HOSTNAME}:/bricks/fast/data
# gluster volume start fast
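To actually exercise the volume, it can be mounted with the regular GlusterFS FUSE client; the mountpoint /mnt/fast below is just an example:
# mkdir -p /mnt/fast
# mount -t glusterfs ${HOSTNAME}:/fast /mnt/fast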
Once done with testing, stop and delete the Gluster volume, and free the zram like this:
# umount /bricks/fast
# echo 1 > /sys/class/block/zram0/reset
Of course, unloading the module with rmmod zram would free the resources too.
It is getting more important for Gluster to be prepared for very fast disks. Hardware like Fusion-io Flash drives and, in the future, Persistent Memory/NVM will become more widely available in storage clouds, and of course we would like to see Gluster stay part of that!
With the release of RHEL-6.6 and CentOS-6.6, there are now glusterfs packages in the standard channels/repositories. Unfortunately, these are only the client-side packages (like glusterfs-fuse and glusterfs-api). Users that want to run a Gluster Server…
On 1st Nov’14, Red Hat offices in Bangalore and Pune hosted Docker meetups and a Hackathon. ~40 people attended the Bangalore meetup. Before the hackathon we had the following presentations: Docker Global Hackday opening by Avi Cavale, Co-founder and CEO, Shippable. Introduction to … Continue reading →
I had an itch to scratch, and I wanted to get a bit more familiar with Openshift. I had used it in the past, but it was time to have another go. The app and the code are now available. … Continue reading →
NFS-Ganesha is a user-space NFS-server that is available in Fedora. It contains several plugins (FSAL, File System Abstraction Layer) for supporting different storage backends. Some of the more interesting ones are FSAL_VFS for exporting a locally mounted filesystem, FSAL_CEPH which uses libcephfs to export a Ceph filesystem, and FSAL_GLUSTER for exporting Gluster volumes.
Exporting a mounted filesystem is pretty simple. Unfortunately this failed for me when running with the standard nfs-ganesha packages on a minimal Fedora 20 installation. The following changes were needed to make NFS-Ganesha work for a basic export:
When these initial things have been taken care of, a configuration file needs to be created. The default configuration file mentioned in the environment file is /etc/ganesha.nfsd.conf. The sources of nfs-ganesha contain some examples; the vfs.conf is quite usable as a starting point. After copying the example and modifying the paths to something more suitable, starting the NFS-server should work:
# systemctl start nfs-ganesha
In case something failed, there should be a note about it in /var/log/ganesha.log.
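For illustration, a minimal export section in /etc/ganesha.nfsd.conf for FSAL_VFS could look roughly like the sketch below; the path, export id and pseudo path are made-up values, and the exact syntax depends on the nfs-ganesha version (this follows the 2.x style of the shipped examples):
EXPORT
{
    # unique identifier for this export
    Export_Id = 1;
    # local directory that gets exported
    Path = "/export/vfs";
    # where the export appears in the NFSv4 pseudo filesystem
    Pseudo = "/vfs";
    Access_Type = RW;
    FSAL {
        Name = VFS;
    }
}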
This assumes you have a working Ceph Cluster which includes several MON, OSD and one or more MDS daemons. The FSAL_CEPH from NFS-Ganesha uses libcephfs, which seems to be the same as what the ceph-fuse package for Fedora uses. The easiest way to make sure that the Ceph filesystem is functional is to try and mount it with ceph-fuse.
The minimal requirements to get a Ceph client system to access the Ceph Cluster seem to be an /etc/ceph/ceph.conf with a [global] section and a suitable keyring. Creating the ceph.conf on the Fedora system was done with ceph-deploy:
$ ceph-deploy config push $NFS_SERVER
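For reference, the pushed ceph.conf typically contains little more than a [global] section; the values shown here are placeholders and not from a real cluster:
# cat /etc/ceph/ceph.conf
[global]
fsid = <uuid-of-the-cluster>
mon_host = ceph-mon1.example.net
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx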
In my setup I scp‘d the /etc/ceph/ceph.client.admin.keyring from one of my Ceph servers to the $NFS_SERVER. There probably are better ways to create/distribute a keyring, but I’m new to Ceph and this worked sufficiently for my testing.
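One such better way is probably a dedicated, restricted key for the NFS-server instead of the admin keyring; something along these lines should work when run on a node with admin access (the client name and capabilities are only an example):
# ceph auth get-or-create client.nfsganesha mon 'allow r' mds 'allow' osd 'allow rw' \
      -o /etc/ceph/ceph.client.nfsganesha.keyring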
When the above configuration was done, it was possible to mount the Ceph filesystem on the Ceph client that is becoming the NFS-server. This command worked without issues:
# ceph-fuse /mnt
# echo 'Hello Ceph!' > /mnt/README
# umount /mnt
The first write to the Ceph filesystem took a while. This is likely due to the initial work the MDS and OSD daemons need to do (like creating pools for the Ceph filesystem).
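Whether that initial setup happened can be verified from any host that has the ceph.conf and a suitable keyring, for example by listing the pools that back the filesystem (purely optional):
# ceph osd lspools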
After confirming that the Ceph Cluster and Filesystem work, the configuration for NFS-Ganesha can just be taken from the sources and saved as /etc/ganesha.nfsd.conf. With this configuration in place, and after restarting the nfs-ganesha.service, the NFS-export becomes available:
# showmount -e $NFS_SERVER
Export list for $NFS_SERVER:
/ (everyone)
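The interesting part of that configuration is the export section for FSAL_CEPH; a rough sketch is shown below, where the export id is made up and the pseudo path is inferred from the directory listing further down:
EXPORT
{
    Export_Id = 2;
    # FSAL_CEPH exports the root of the Ceph filesystem
    Path = "/";
    # matches the path that shows up under the NFSv4 pseudo root
    Pseudo = "/nfsv4/pseudofs/ceph";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}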
NFSv4 uses a ‘pseudo root’ as mentioned in the configuration file. This means that mounting the export over NFSv4 results in a virtual directory structure:
# mount -t nfs $NFS_SERVER:/ /mnt
# find /mnt
/mnt
/mnt/nfsv4
/mnt/nfsv4/pseudofs
/mnt/nfsv4/pseudofs/ceph
/mnt/nfsv4/pseudofs/ceph/README
Reading and writing to the mountpoint under /mnt/nfsv4/pseudofs/ceph works fine, as long as the usual permissions allow that. By default NFS-Ganesha enables ‘root squashing’, so the ‘root’ user may not do a lot on the export. Disabling this security measure can be done by placing this option in the export section:
Squash = no_root_squash;
Restart the nfs-ganesha.service after modifying /etc/ganesha.nfsd.conf, and writing files as ‘root’ should work now too.
For me, this was a short “let’s try it out” while learning about Ceph. At the moment, I have no intention of working on the FSAL_CEPH for NFS-Ganesha. My main interest in this experiment with exporting a Ceph filesystem through NFS-Ganesha on a plain Fedora 20 installation was to learn about the usability of a new NFS-Ganesha configuration/deployment. In order to improve the user experience with NFS-Ganesha, I’ll try and fix some of the issues I ran into. Progress can be followed in Bug 1144799.
In the future, I will mainly use NFS-Ganesha for accessing Gluster Volumes. My colleague Soumya posted a nice explanation on how to download, build and run NFS-Ganesha with support for Gluster. We will be working on improving the out-of-the-box support in Fedora while stabilizing the FSAL_GLUSTER in the upstream NFS-Ganesha project.
If you’re a reader of my code or of this blog, it’s no secret that I hack on a lot of puppet and vagrant. Recently I’ve fooled around with a bit of docker, too. I realized that the vagrant, environments … Continue reading →
When I’m enjoying the sun/wind/rain on the balcony, I tend to use my XO-1.75 for duties where most people would use a tablet. Reading/writing emails, browsing the internet, bug triaging or writing small fixes, release notes and all can be done fine on …
Earlier this year, R.I.Pienaar released his brilliant data in modules hack; a few months ago, I got the chance to start implementing it in Puppet-Gluster, and today I have found the time to blog about it. What is it? R.I.’s … Continue reading →
Vagrant has become the de facto tool for devops. Faster iterations, clean environments, and less overhead. This isn’t an article about why you should use Vagrant. This is an article about how to get up and running with Vagrant on … Continue reading →
GlusterFS 3.5 has not been released yet, but that should hopefully happen anytime soon (it is currently in beta). The RPM-packaging in this version has changed a little, and now offers a glusterfs-cli package. This package mainly contains the gluster commandline interface (and pulls in any dependencies). One of the very useful things that is now made possible, is to list the available volumes on Gluster Storage Servers. Similar functionality is used by the /etc/auto.net script to list NFS-exports that are available for mounting. The auto.net script is by default enabled after installing and starting autofs:
# yum install autofs
# systemctl enable autofs.service
# systemctl start autofs.service
Checking and mounting NFS-exports is then as easy as:
$ ls /net/nfs-server.example.net
archive media mock_cache olpc
$ ls /net/nfs-server.example.net/mock_cache/fedora-rawhide-armhfp/
yum_cache
Making this functionality available for Gluster Volumes is simple, just follow these steps:
install the gluster command
# yum install glusterfs-cli
save the file below as /etc/auto.glfs
#!/bin/bash
# /etc/auto.glfs -- based on /etc/auto.net
#
# This file must be executable to work! chmod 755!
#
# Look at what a host is exporting to determine what we can mount.
# This is very simple, but it appears to work surprisingly well
#
key="$1"
# add "nosymlink" here if you want to suppress symlinking local filesystems
# add "nonstrict" to make it OK for some filesystems to not mount
opts="-fstype=glusterfs,nodev,nosuid"
for P in /usr/local/bin /usr/local/sbin /usr/bin /usr/sbin /bin /sbin
do
    if [ -x "${P}/gluster" ]
    then
        GLUSTER_CLI="${P}/gluster"
        break
    fi
done

# bail out if no gluster command was found
[ -x "${GLUSTER_CLI}" ] || exit 1

# ask the server (the autofs key) for its volumes and turn each volume
# into a "/<volume> <server>:/<volume>" entry of an autofs multi-mount
${GLUSTER_CLI} --remote-host="${key}" volume list | \
    awk -v key="$key" -v opts="$opts" -- '
    BEGIN { ORS=""; first=1 }
          { if (first) { print opts; first=0 }; print " \\\n\t/" $1, key ":/" $1 }
    END   { if (!first) print "\n"; else exit 1 }' | \
    sed 's/#/\\#/g'
make the script executable
# chmod 0755 /etc/auto.glfs
add an automount point to the autofs configuration
# echo /glfs /etc/auto.glfs > /etc/auto.master.d/glfs.autofs
reload the autofs configuration
# systemctl reload autofs.service
After this, autofs should have created a new /glfs directory. The directory itself is empty, but running ls /glfs/gluster.example.net will show all the available volumes on the gluster.example.net server. These volumes can now be accessed through the autofs mountpoint. When the volumes are not used anymore, autofs will automatically unmount them after a timeout.
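As a hypothetical example, assuming gluster.example.net exposes a single volume named media, a session would look something like this:
$ ls /glfs/gluster.example.net
media
$ ls /glfs/gluster.example.net/media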
I’ve been afraid of RPM and package maintaining [1] for years, but thanks to Kaleb Keithley, I have finally made some RPM’s that weren’t generated from a high level tool. Now that I have the boilerplate done, it’s a relatively … Continue reading →