[Gluster-users] [Gluster3.2 at Grid5000] 128 nodes failure and rr scheduler question

Pavan T C tcp at gluster.com
Fri Jun 10 14:38:02 UTC 2011


On Wednesday 08 June 2011 06:10 PM, Francois THIEBOLT wrote:
> Hello,
>
> I'm running some experiments on Grid'5000 with GlusterFS 3.2 and, as a
> first point, I've been unable to start a volume with 128 bricks (it works
> fine with 64).
>
> Also, due to the round-robin scheduler, as the number of nodes increases
> (every node is also a brick), the performance of an application on an
> individual node decreases!

I would like to understand what you mean by "increase of nodes". You 
have 64 bricks and each brick also acts as a client. So, where is the 
increase in the number of nodes? Are you referring to the mounts that 
you are doing?
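
In case it helps to state it precisely: something like the line below, run 
on one of the nodes, would show exactly which glusterfs mounts are in place 
there (just a generic check, not specific to your setup):

  # list the GlusterFS (FUSE) mounts present on this node
  grep glusterfs /proc/mounts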

What is your Gluster configuration - I mean, is it a distribute-only setup, 
or is it a distributed-replicate setup? [From your command sequence, it 
should be pure distribute, but I just want to be sure.]
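
For instance, the output of the commands below would confirm that (this is 
just a sketch, using the volume name myVolume from your command sequence; 
adjust to your actual names):

  # show the volume type (Distribute / Distributed-Replicate), brick count and options
  gluster volume info myVolume

  # confirm how many peers glusterd actually sees
  gluster peer status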

What is your application like? Is it mostly I/O intensive? It will help 
if you provide a brief description of typical operations done by your 
application.

How are you measuring the performance? What metric tells you that you 
are seeing a decrease in performance as the number of nodes increases?
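
For example, a simple streaming test on one mount point, repeated as you 
grow the setup, would give directly comparable numbers. This is only a 
rough sketch; /mnt/glusterfs stands in for your actual mount point:

  # write 1 GB to the Gluster mount, forcing the data out at the end
  dd if=/dev/zero of=/mnt/glusterfs/ddtest.$(hostname) bs=1M count=1024 conv=fsync

  # drop the page cache (as root), then read the file back
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/glusterfs/ddtest.$(hostname) of=/dev/null bs=1M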

Pavan

> So my question is: how do I STOP the round-robin distribution of files
> over the bricks within a volume?
>
> *** Setup ***
> - I'm using GlusterFS 3.2 built from source
> - every node is both a client node and a brick (storage)
> Commands:
> - gluster peer probe <each of the 128 nodes>
> - gluster volume create myVolume transport tcp <128 bricks:/storage>
> - gluster volume start myVolume (fails with 128 bricks!)
> - mount -t glusterfs ...... on all nodes
>
> Feel free to tell me how to improve things
>
> François
>



