[Gluster-users] Failure migration/recovery question

Graeme Davis graeme at graeme.org
Wed Feb 2 16:16:14 UTC 2011


Rookie question.  I've been tinkering with a 10-node 
distributed-replicated setup and wanted to test what would happen if one 
machine died and had to be rebuilt from scratch.

gluster> volume info all
Volume Name: data
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: dl180-101:/data
Brick2: dl180-102:/data
Brick3: dl180-103:/data
Brick4: dl180-104:/data
Brick5: dl180-105:/data
Brick6: dl180-106:/data
Brick7: dl180-107:/data
Brick8: dl180-108:/data
Brick9: dl180-109:/data
Brick10: dl180-110:/data

I took down dl180-102 (dl180-101 is its replica partner) and reinstalled 
the machine, as if it had suffered some horrible failure and we simply 
had to start over.
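
To be clear about the state 102 is in after the reinstall: the brick 
directory is empty and the old glusterd configuration is gone.  In 
effect it was the equivalent of something like this before the OS 
reinstall (commands and paths approximate, just to illustrate the state):

service glusterd stop           # on dl180-102
rm -rf /data/* /etc/glusterd    # wipe the brick and the glusterd config
                                # (config dir location may vary by version)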

What would be the best method to get the new 102 back into the cluster 
without data loss?  I tried removing the 101 and 102 bricks, thinking 
that would migrate the data (still on 101) to the other nodes, but it 
didn't do that.  Do I have to manually copy the data from 101:/data onto 
the GlusterFS mount and then add the 101/102 bricks back and rebalance?  
Or could I have used replace-brick to move the data onto other existing 
bricks?
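
In case it helps, here is roughly what I tried and what I was 
considering.  Brick names are from the volume above; /mnt/glusterfs is 
just wherever the client mount happens to be, and dl180-103:/data2 in 
the replace-brick example is a made-up spare directory on an existing 
server, not something I actually have:

# What I tried: drop the replica pair, expecting the data on 101 to be
# migrated off first (it wasn't -- the bricks were simply removed)
gluster volume remove-brick data dl180-101:/data dl180-102:/data

# Option A: copy 101's data back in through a client mount, then re-add
# the (now empty) pair and rebalance
cp -a /data/. /mnt/glusterfs/           # run on dl180-101
gluster volume add-brick data dl180-101:/data dl180-102:/data
gluster volume rebalance data start

# Option B (the replace-brick question): would this have migrated the
# pair's data onto some other existing server instead?
gluster volume replace-brick data dl180-102:/data dl180-103:/data2 start
gluster volume replace-brick data dl180-102:/data dl180-103:/data2 status
gluster volume replace-brick data dl180-102:/data dl180-103:/data2 commit

I'm also not sure whether replace-brick can even start when the source 
brick's server is gone, which is part of why I'm asking.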

Thanks,

Graeme
