[Gluster-users] Migration questions

Scott Barber scott at imemories.com
Mon Mar 7 23:43:23 UTC 2011


I'm looking for some help with a few technical questions about setting up
Gluster. I have set up a few tests using Gluster v3.1.2 on CentOS 5 with 2
bricks, first with a distributed volume and then with a distributed striped
volume. The install and initial configuration are very straightforward and
simple. I'm now looking to start setting up a Gluster production environment
and have a few questions.
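
For reference, the test setup was basically the stock 3.1 CLI flow; the
hostnames and paths below are made up, but this is roughly what I ran:

    # on gluster1, after peering the second server
    gluster peer probe gluster2
    gluster volume create test-dist transport tcp \
        gluster1:/export/brick1 gluster2:/export/brick1
    gluster volume start test-dist

    # on a test client, native (FUSE) mount
    mount -t glusterfs gluster1:/test-dist /mnt/gluster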

We currently use Lustre and would like to move to Gluster.

Current Setup:
  Our environment consists of about 200 grid nodes reading and writing files
to a ~210TB Lustre volume. In addition we have 30-40 clients mounting the
Lustre volume via 6 Lustre/NFS reshare servers. The Lustre servers and grid
node clients are CentOS 5 and the desktop clients are a mix of mostly Mac /
Linux desktops with a few Windows desktops. The desktop clients usually only
read from the volume. The grid node clients read and write to the volume
heavily, although typically each file on the volume will only be read by one
grid node at a time. The files on the volume are currently striped across
about 5 OSTs (Lustre "bricks") to help speed up the writing and reading of
the files. Our file stripe size is 1MB. Our file size ranges from ~5MB to
100GB, but the average size is 2-3 GB. In the Lustre tradition we run RAIDed
volumes behind the scenes and worry about data integrity outside of the FS.

- I've not seen anyone running a volume of this size with this many clients
on Gluster. It seems as though it shouldn't have issues at this scale. Am I
missing anything?

- With the latest version of Gluster is there still a concept of
"translators"? I could find little explanation of them in the 3.1 docs, but
some articles allude to them.
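
For what it's worth, the client volfiles that my 3.1.2 test install generated
do look like stacks of translators. A trimmed sketch of one for a two-brick
distributed volume (treat the exact names as illustrative):

    volume test-dist-client-0
        type protocol/client
        option remote-host gluster1
        option remote-subvolume /export/brick1
        option transport-type tcp
    end-volume

    volume test-dist-client-1
        type protocol/client
        option remote-host gluster2
        option remote-subvolume /export/brick1
        option transport-type tcp
    end-volume

    volume test-dist-dht
        type cluster/distribute
        subvolumes test-dist-client-0 test-dist-client-1
    end-volume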

- In the documentation it says "For best results, you should use distributed
striped volumes only in high concurrency environments accessing very large
files." Even though in our current environment each file is typically
accessed by only 1 client at a time, wouldn't you still get some speed-up
from striping? Besides the risk to the data, is there another reason why
striping is not encouraged? What do you consider "very large files"? Do our
100GB files meet that criterion?
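
For context, the sort of distributed striped volume I am weighing would be
created with something like this (hostnames illustrative; with "stripe 2" and
four bricks my understanding is that 3.1 builds two stripe sets of two bricks
each):

    gluster volume create grid-vol stripe 2 transport tcp \
        gluster1:/export/brick1 gluster2:/export/brick1 \
        gluster3:/export/brick1 gluster4:/export/brick1
    gluster volume start grid-vol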

- How do I know if the grid nodes should be native clients? What metrics are
used to determine that choice?
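
Is a crude throughput comparison like the one below, run on a single grid
node, a reasonable way to decide, or are there better metrics? (Mount options
are my reading of the 3.1 NFS notes; hostnames, paths, and sizes are made
up.)

    # native (FUSE) client
    mount -t glusterfs gluster1:/grid-vol /mnt/fuse
    # built-in Gluster NFS (NFSv3 over TCP only, as far as I can tell)
    mount -t nfs -o vers=3,mountproto=tcp gluster1:/grid-vol /mnt/nfs

    # time the same large sequential write over each mount
    dd if=/dev/zero of=/mnt/fuse/ddtest bs=1M count=4096
    dd if=/dev/zero of=/mnt/nfs/ddtest bs=1M count=4096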

- In my tests it seems that on a client I can mount the volume via NFS using
any of the "brick" machines with the same result. When mounting the volume
via NFS, which host should the client mount? If I want to mount 60 desktops
to the volume via NFS, would I point them all at the same gluster server
hostname, or set up something like round-robin DNS so the clients spread the
mounts across all the gluster servers? I'm just worried that if all 60
desktops mount the volume using the same server hostname / IP address it
could cause that server to slow to a crawl.
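
In other words, is something like the following round-robin setup the
recommended approach for the desktops, or is there a better way? (Zone
syntax, names, and addresses are only an illustration.)

    ; DNS zone: one alias with an A record per gluster server
    glusternfs   IN  A   10.0.0.11
    glusternfs   IN  A   10.0.0.12
    glusternfs   IN  A   10.0.0.13

    # every desktop then mounts the same name
    mount -t nfs -o vers=3,mountproto=tcp glusternfs:/grid-vol /mnt/gluster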

Thanks
Scott Barber
Senior Sysadmin
iMemories.com

