[Gluster-users] Inter-Continent Challenge

Count Zero countz at gmail.com
Wed Apr 7 21:36:26 PDT 2010


Hi Guys,

I have an interesting situation, and I'm wondering whether there's a solution for it within GlusterFS itself, or whether I'll have to complement GlusterFS with other tools (such as rsync or unison).

I have 9 servers in 3 locations on the internet (3 servers per location). Unfortunately, the network distance between the locations is such that setting up a single Distribute or NUFA cluster spanning them all is impractical (I'm not saying impossible; it may be possible and I just don't know how to pull it off).

There are 3 servers in each data center, and they are all clustered via NUFA:

DC-A
-+ NUFA-Cluster
---+ SRV-A1
---+ SRV-A2
---+ SRV-A3

DC-B ( >> rsync from A)
-+ NUFA-Cluster
---+ SRV-B1
---+ SRV-B2
---+ SRV-B3

DC-C ( >> rsync from B)
-+ NUFA-Cluster
---+ SRV-C1
---+ SRV-C2
---+ SRV-C3

My reasons for doing it this way, so far:

1) I needed file reads to be fast on each local node, so I use the "option local-volume-name `hostname`" trick in my glusterfs.vol file (as in the cookbook).

2) Bandwidth between DC-A, DC-B, and DC-C is quite low, and since GlusterFS waits for the slowest server to respond, putting the remote bricks in one cluster severely slows down every operation, even just listing the files in a directory.


Question 1:

Is there a better way to implement this? All the examples I can find cover four-node replication and the like.
What about inter-continent replication of data between NUFA Clusters?
Any advice would be greatly appreciated :-)
At the moment, for lack of better options, I plan to sync between the 3 NUFA clusters with unison or rsync.


Question 2:

Sometimes when I mount this cluster, I get duplicate '.' and '..' entries, like so:

root at srv-a1:/mnt/gfs# ls -la
total 188
drwxr-xr-x 12 root        root        12288 2010-04-08 06:45 .
drwxr-xr-x 12 root        root        12288 2010-04-08 06:45 .
drwxr-xr-x  4 root        root         4096 2010-04-08 06:14 ..
drwxr-xr-x  4 root        root         4096 2010-04-08 06:14 ..

A reboot of the machine fixes it, but I'm wondering why this happens and how to avoid it.
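As a side note, this is the quick check I use to spot the duplicates without eyeballing the listing; /tmp/demo-mnt stands in for my real mount point /mnt/gfs here:

```shell
# Print any directory entry that appears more than once in the listing;
# a healthy mount prints nothing. DIR defaults to a local placeholder
# directory instead of the real /mnt/gfs mount.
DIR="${DIR:-/tmp/demo-mnt}"
mkdir -p "$DIR"
ls -a "$DIR" | sort | uniq -d
```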


Thanks,
Count Zero

P.S. Below is my configuration file, from /etc/glusterfs/glusterfs.vol:

---------------------8<--------------------8<------------------

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume srv-a1
  type protocol/client
  option transport-type tcp
  option remote-host srv-a1
  option remote-subvolume brick
end-volume

volume srv-a2
  type protocol/client
  option transport-type tcp
  option remote-host srv-a2
  option remote-subvolume brick
end-volume

volume srv-a3
  type protocol/client
  option transport-type tcp
  option remote-host srv-a3
  option remote-subvolume brick
end-volume

volume nufa
  type cluster/nufa
  option local-volume-name `hostname`
  subvolumes srv-a1 srv-a2 srv-a3
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes nufa
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

--------------------->8-------------------->8------------------
