[Gluster-users] How to convert a Distribute volume to Distribute-Replicate when adding a new brick

Jacob Shucart jacob at gluster.com
Mon Feb 14 17:00:19 UTC 2011


James,

If you mount the volume as a GlusterFS client, even on one of the storage
nodes, you should be able to run:

find /mnt/gluster -print0 | xargs --null stat

This should stat every file and force a self-heal to get the files over to
the new host.
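
If the volume isn't already mounted anywhere, a plain client mount on one
of the storage nodes will do. The server and mount point below are just an
example based on your pfs-rw1 volume:

# mkdir -p /mnt/gluster
# mount -t glusterfs jc1letgfs16-pfs1:/pfs-rw1 /mnt/gluster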

Jacob Shucart | Gluster
Systems Engineer
E-Mail	: jacob at gluster.com
Direct	: (408)770-1504

-----Original Message-----
From: Burnash, James [mailto:jburnash at knight.com] 
Sent: Monday, February 14, 2011 8:55 AM
To: 'Jacob Shucart'; gluster-users at gluster.org
Subject: RE: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

Jacob,

Thank you for the response. I think I didn't state my key goal clearly -
which is to turn a volume that is currently Distribute-only into
Distribute-Replicate, using a newly added server with new bricks
(identical to those on the original server).

I've done the peer probe, and that's fine.

I've added the bricks, and they show up.

But how can I get mirroring started between the previously solo node and
the new one?
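
For illustration, what I'm imagining is something along these lines
(jc1letgfs15-pfs1 standing in for the new server - that name is made up):

# gluster volume add-brick pfs-rw1 replica 2 \
    jc1letgfs15-pfs1:/export/read-write/g01 \
    jc1letgfs15-pfs1:/export/read-write/g02

though I'm not sure add-brick accepts a replica count change in 3.1.1.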

Thanks,

James Burnash, Unix Engineering
T. 201-239-2248 
jburnash at knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-----Original Message-----
From: Jacob Shucart [mailto:jacob at gluster.com] 
Sent: Monday, February 14, 2011 11:39 AM
To: Burnash, James; gluster-users at gluster.org
Subject: RE: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

James,

The problem is that there are a bunch of extended attributes present on
the files/directories that point to the volume name.  If you were taking
bricks and putting them on the new volume while keeping the same volume
ID, it would be much easier.  If you scrubbed the extended attributes off
of the files and then added the bricks to the new volume, it would
probably work.  Ideally, you would add the new bricks fresh to the new
volume and then copy the data in through the Gluster mount point so it
puts everything where it should be.
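
As a rough sketch of that last approach - the volume name, server names,
and paths below are examples, not your exact layout:

# gluster volume create pfs-rw2 replica 2 transport tcp \
    server1:/export/rw/g01 server2:/export/rw/g01
# gluster volume start pfs-rw2
# mount -t glusterfs server1:/pfs-rw2 /mnt/pfs-rw2
# cp -a /mnt/old-volume/. /mnt/pfs-rw2/

Copying through the client mount lets the distribute hashing and
replication place the files correctly, instead of writing to the bricks
directly.  If you go the scrubbing route instead, getfattr shows what is
set on a brick (attribute names vary by release, so check before
removing anything):

# getfattr -d -m trusted -e hex /export/read-write/g01
# setfattr -x trusted.gfid /export/read-write/g01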

-Jacob

-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Burnash, James
Sent: Monday, February 14, 2011 7:18 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

If anybody at all has any ideas about this, they would be warmly welcomed.
The key thing I'm trying to accomplish is to have no service interruption
on the volume already in use ...

Thanks!

James Burnash, Unix Engineering


-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Burnash, James
Sent: Thursday, February 10, 2011 3:42 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

Well, I've searched through the mailing list and have been unable to find
an answer to the problem I'm currently facing.

I am currently in the middle of migrating from my old 6-server storage
pool running GlusterFS 3.0.4 to a new pool that will eventually consist of
the same machines, all running GlusterFS 3.1.1.

My initial setup had all 6 servers set up as Distribute-Replicate, so that
I had 3 pairs of mirrors.

To effect the conversion and minimize downtime, I took down one server
from each mirror pair and converted them (with different storage hardware)
to run GlusterFS 3.1.1. This all worked fine, and production is currently
running on the following setup:

One server running a read-write storage pool in Distribute mode,
configured like this:
# gluster volume info

Volume Name: pfs-rw1
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs16-pfs1:/export/read-write/g02
Options Reconfigured:
performance.stat-prefetch: on
performance.cache-size: 2GB

Then I have:

Two servers running the read-only pool in this configuration:
# gluster volume info

Volume Name: pfs-ro1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: jc1letgfs17-pfs1:/export/read-only/g01
Brick2: jc1letgfs18-pfs1:/export/read-only/g01
Brick3: jc1letgfs17-pfs1:/export/read-only/g02
Brick4: jc1letgfs18-pfs1:/export/read-only/g02
Brick5: jc1letgfs17-pfs1:/export/read-only/g03
Brick6: jc1letgfs18-pfs1:/export/read-only/g03
Brick7: jc1letgfs17-pfs1:/export/read-only/g04
Brick8: jc1letgfs18-pfs1:/export/read-only/g04
Brick9: jc1letgfs17-pfs1:/export/read-only/g05
Brick10: jc1letgfs18-pfs1:/export/read-only/g05
Brick11: jc1letgfs17-pfs1:/export/read-only/g06
Brick12: jc1letgfs18-pfs1:/export/read-only/g06
Brick13: jc1letgfs17-pfs1:/export/read-only/g07
Brick14: jc1letgfs18-pfs1:/export/read-only/g07
Brick15: jc1letgfs17-pfs1:/export/read-only/g08
Brick16: jc1letgfs18-pfs1:/export/read-only/g08
Brick17: jc1letgfs17-pfs1:/export/read-only/g09
Brick18: jc1letgfs18-pfs1:/export/read-only/g09
Brick19: jc1letgfs17-pfs1:/export/read-only/g10
Brick20: jc1letgfs18-pfs1:/export/read-only/g10
Options Reconfigured:
performance.cache-size: 2GB
performance.stat-prefetch: on

Now that all of production is off the old setup, I want to take one of
those servers (suitably reconfigured with 3.1.1) and make it participate
in the read-write pool, but change the Type to Distributed-Replicate.

Is this possible?
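
For reference, the end state I'm after would look something like this
(jc1letgfs15-pfs1 standing in for the reconfigured old server - that name
is made up), with consecutive bricks forming the mirror pairs as in the
read-only volume above:

Volume Name: pfs-rw1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs15-pfs1:/export/read-write/g01
Brick3: jc1letgfs16-pfs1:/export/read-write/g02
Brick4: jc1letgfs15-pfs1:/export/read-write/g02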

Thanks

James Burnash, Unix Engineering


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


