[Gluster-users] Gluster-users Digest, Vol 34, Issue 20

Pan, Henry Henry.Pan at ironmountain.com
Mon Feb 14 18:08:57 UTC 2011


Good Morning Gluster Gurus,

2 more silly questions:

When will GlusterFS support the Iron Mountain ASP cloud?

When will GlusterFS support MS SQL 2008?

Thanks

Henry PAN
Sr. Data Storage Engineer
Iron Mountain
(650) 962-6184 (o)
(650) 930-6544 (c)
Henry.pan at ironmountain.com


-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gluster-users-request at gluster.org
Sent: Monday, February 14, 2011 10:07 AM
To: gluster-users at gluster.org
Subject: Gluster-users Digest, Vol 34, Issue 20

Send Gluster-users mailing list submissions to
        gluster-users at gluster.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
        gluster-users-request at gluster.org

You can reach the person managing the list at
        gluster-users-owner at gluster.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."


Today's Topics:

   1. Re: How to convert a Distribute volume to
      Distribute-Replicate when adding a new brick (Burnash, James)
   2. Re: How to convert a Distribute volume to
      Distribute-Replicate when adding a new brick (Jacob Shucart)
   3. Mounting a volume in an IRIX system (Marcelo Amorim)
   4. replace-brick not working (Dan Bretherton)


----------------------------------------------------------------------

Message: 1
Date: Mon, 14 Feb 2011 11:55:12 -0500
From: "Burnash, James" <jburnash at knight.com>
Subject: Re: [Gluster-users] How to convert a Distribute volume to
        Distribute-Replicate when adding a new brick
To: 'Jacob Shucart' <jacob at gluster.com>, "gluster-users at gluster.org"
        <gluster-users at gluster.org>
Message-ID:
        <9AD565C4A8561349B7227B79DDB98873683A048774 at EXCHANGE3.global.knight.com>

Content-Type: text/plain; charset="us-ascii"

Jacob,

Thank you for the response. I don't think I stated my key goal clearly: I want to turn a volume that is currently Distribute-only into Distribute-Replicate, using a newly added server with new bricks (identical to those on the original server).

I've done the peer probe, and that's fine.

I've added the bricks, and they're seen.

But how can I get mirroring started between the previously solo node and the new one?

Thanks,

James Burnash, Unix Engineering
T. 201-239-2248
jburnash at knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-----Original Message-----
From: Jacob Shucart [mailto:jacob at gluster.com]
Sent: Monday, February 14, 2011 11:39 AM
To: Burnash, James; gluster-users at gluster.org
Subject: RE: [Gluster-users] How to convert a Distribute volume to Distribute-Replicate when adding a new brick

James,

The problem is that there are a bunch of extended attributes present on
the files and directories that point to the volume name.  If you were
taking bricks and putting them on the new volume while the volume ID
stayed the same, it would be much easier.  If you scrubbed the extended
attributes off the files and then added them to the new volume, it would
probably work.  Ideally, you would add the new bricks fresh to the new
volume and then copy the data in through the Gluster mount point so
everything lands where it should.
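
For reference, a minimal sketch of inspecting and scrubbing brick-side
extended attributes. The attribute names differ between GlusterFS
versions and /export/brick is a placeholder, so list what is actually
present before removing anything:

# list the trusted.* extended attributes GlusterFS has set on a brick
# (run as root, against the brick's backend filesystem)
getfattr -d -m . -e hex /export/brick
# remove one attribute by name; trusted.glusterfs.dht is the layout
# attribute the distribute translator sets -- only remove attributes
# you have confirmed in the getfattr output above
setfattr -x trusted.glusterfs.dht /export/brick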

-Jacob

-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Burnash, James
Sent: Monday, February 14, 2011 7:18 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

If anybody at all has any ideas about this, they would be warmly welcomed.
The key thing I'm trying to accomplish is to have no service interruption
on the volume already in use.

Thanks!

James Burnash, Unix Engineering


-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Burnash, James
Sent: Thursday, February 10, 2011 3:42 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] How to convert a Distribute volume to
Distribute-Replicate when adding a new brick

Well, I've searched through the mailing list and have been unable to find
an answer to the problem I'm currently facing.

I am currently in the middle of migrating from my old six-server storage
pool running GlusterFS 3.0.4 to a new pool that will eventually consist of
the same machines, all running GlusterFS 3.1.1.

My initial setup had all six servers configured as Distribute-Replicate,
giving me three pairs of mirrors.

To effect the conversion and minimize downtime, I took down one server
from each mirror pair and converted it (with different storage hardware)
to run GlusterFS 3.1.1. This all worked fine, and production is currently
running on the following setup:

One server running a read-write storage pool in Distribute mode,
configured like this:
# gluster volume info

Volume Name: pfs-rw1
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: jc1letgfs16-pfs1:/export/read-write/g01
Brick2: jc1letgfs16-pfs1:/export/read-write/g02
Options Reconfigured:
performance.stat-prefetch: on
performance.cache-size: 2GB

Then I have:

Two servers running the read-only pool in this configuration:
gluster volume info

Volume Name: pfs-ro1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: jc1letgfs17-pfs1:/export/read-only/g01
Brick2: jc1letgfs18-pfs1:/export/read-only/g01
Brick3: jc1letgfs17-pfs1:/export/read-only/g02
Brick4: jc1letgfs18-pfs1:/export/read-only/g02
Brick5: jc1letgfs17-pfs1:/export/read-only/g03
Brick6: jc1letgfs18-pfs1:/export/read-only/g03
Brick7: jc1letgfs17-pfs1:/export/read-only/g04
Brick8: jc1letgfs18-pfs1:/export/read-only/g04
Brick9: jc1letgfs17-pfs1:/export/read-only/g05
Brick10: jc1letgfs18-pfs1:/export/read-only/g05
Brick11: jc1letgfs17-pfs1:/export/read-only/g06
Brick12: jc1letgfs18-pfs1:/export/read-only/g06
Brick13: jc1letgfs17-pfs1:/export/read-only/g07
Brick14: jc1letgfs18-pfs1:/export/read-only/g07
Brick15: jc1letgfs17-pfs1:/export/read-only/g08
Brick16: jc1letgfs18-pfs1:/export/read-only/g08
Brick17: jc1letgfs17-pfs1:/export/read-only/g09
Brick18: jc1letgfs18-pfs1:/export/read-only/g09
Brick19: jc1letgfs17-pfs1:/export/read-only/g10
Brick20: jc1letgfs18-pfs1:/export/read-only/g10
Options Reconfigured:
performance.cache-size: 2GB
performance.stat-prefetch: on

Now that all of production is off the old setup, I want to take one of
those servers (suitably reconfigured with 3.1.1) and make it participate
in the read-write pool, but change the Type to Distributed-Replicate.

Is this possible?

Thanks

James Burnash, Unix Engineering


DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the
addressee(s) named herein and may contain legally privileged and/or
confidential information. If you are not the intended recipient of this
e-mail, you are hereby notified that any dissemination, distribution or
copying of this e-mail, and any attachments thereto, is strictly
prohibited. If you have received this in error, please immediately notify
me and permanently delete the original and any copy of any e-mail and any
printout thereof. E-mail transmission cannot be guaranteed to be secure or
error-free. The sender therefore does not accept liability for any errors
or omissions in the contents of this message which arise as a result of
e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at
its discretion, monitor and review the content of all e-mail
communications. http://www.knight.com
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


------------------------------

Message: 2
Date: Mon, 14 Feb 2011 11:00:19 -0600 (CST)
From: "Jacob Shucart" <jacob at gluster.com>
Subject: Re: [Gluster-users] How to convert a Distribute volume to
        Distribute-Replicate when adding a new brick
To: "'Burnash, James'" <jburnash at knight.com>
Cc: gluster-users at gluster.org
Message-ID: <1203978768.4528493.1297702819247.JavaMail.root at mailbox1>
Content-Type: text/plain;       charset="us-ascii"

James,

If you mount the volume as a GlusterFS client, even on one of the storage
nodes, you should be able to run:

find /mnt/gluster -print0 | xargs --null stat

This should stat every file and force a self-heal to get the files over to
the new host.
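
For context, a minimal sketch of the overall sequence this implies. Note
that raising the replica count at add-brick time is not available in
every 3.1.x release (it appeared in later versions), so treat the first
command as illustrative; newserver and its brick paths are placeholders:

# hypothetical: add matching bricks while raising the replica count
# (supported in later GlusterFS releases -- verify against your version)
gluster volume add-brick pfs-rw1 replica 2 \
    newserver:/export/read-write/g01 newserver:/export/read-write/g02
# then walk the mount so self-heal replicates existing files
find /mnt/gluster -print0 | xargs --null stat > /dev/null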

Jacob Shucart | Gluster
Systems Engineer
E-Mail  : jacob at gluster.com
Direct  : (408)770-1504



------------------------------

Message: 3
Date: Mon, 14 Feb 2011 15:07:24 -0200
From: Marcelo Amorim <marcelodeamorim at gmail.com>
Subject: [Gluster-users] Mounting a volume in an IRIX system
To: gluster-users at gluster.org
Message-ID:
        <AANLkTini-NPDroPAM-3QujM4RmJcgPAN2rRon3u8SwyW at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,
I'm trying to mount a gluster volume on an IRIX system, but I'm getting the
following error:


server:/> mount -o vers=3 -t nfs 192.168.99.33:/vg_dados /mnt

mount: NFS version 3 mount failed, trying NFS version 2.
mount:cannot get file handle for /vg_dados from 192.168.99.33 - Procedure
unavailable
mount: giving up on:
   /mnt
server:/>

Does anybody know how I can solve this problem?

thanks


--
*"A honestidade ? voc? falar o que pensa e fazer o que fala."*

------------------------------

Message: 4
Date: Mon, 14 Feb 2011 18:05:25 +0000
From: Dan Bretherton <d.a.bretherton at reading.ac.uk>
Subject: [Gluster-users] replace-brick not working
To: gluster-users <gluster-users at gluster.org>
Message-ID: <4D596EE5.6090006 at reading.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello All,

I need to migrate some data from a pair of mirrored bricks (i.e. a brick
and its mirror).  Unfortunately the "... replace-brick ... start" and
"... replace-brick ... status" commands give a null response.  The
commands I used, followed by the relevant log file errors, are shown
below.  The bdan11 server (where I want the data to go) is new and
doesn't have any data on it at the moment.  However, glusterd is running
and the host is part of the peer group.  The fuse module is loaded on
all servers.  I tried restarting all the gluster daemons on all the
servers, but it didn't make any difference.  I also tried a similar
operation on another volume using other servers (all with existing
GlusterFS bricks this time) with the same result.  Any suggestions would
be much appreciated.

[root at bdan0 ~]# gluster volume replace-brick atmos
bdan5:/vg2lv1/glusterfs2 bdan11:/atmos/glusterfs start
[root at bdan0 ~]# gluster volume replace-brick atmos
bdan5:/vg2lv1/glusterfs2 bdan11:/atmos/glusterfs status
[root at bdan0 ~]#
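
A few hedged sanity checks that may help narrow this down; in particular,
a replace-brick that was started earlier and never committed can leave
later attempts silent, and the abort subcommand clears it (use with care):

# confirm bdan11 is fully connected and the volume definition is intact
gluster peer status
gluster volume info atmos
# clear any half-finished replace-brick before retrying
gluster volume replace-brick atmos bdan5:/vg2lv1/glusterfs2 bdan11:/atmos/glusterfs abort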

---- on bdan11 (where I want the data to go to) ---

[2011-02-14 16:57:52.296610] I
[glusterd-handler.c:673:glusterd_handle_cli_list_friends] glusterd:
Received cli list req
[2011-02-14 16:58:21.56781] I
[glusterd-handler.c:425:glusterd_handle_cluster_lock] glusterd: Received
LOCK from uuid: 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:21.56828] I [glusterd-utils.c:236:glusterd_lock]
glusterd: Cluster lock held by 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:21.56868] I
[glusterd-handler.c:1973:glusterd_op_lock_send_resp] glusterd:
Responded, ret: 0
[2011-02-14 16:58:21.96404] I
[glusterd-handler.c:464:glusterd_handle_stage_op] glusterd: Received
stage op from uuid: 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:21.96466] I
[glusterd-utils.c:730:glusterd_volume_brickinfo_get_by_brick] : brick:
bdan5:/vg2lv1/glusterfs2
[2011-02-14 16:58:21.99679] I
[glusterd-utils.c:2144:glusterd_friend_find_by_hostname] glusterd:
Friend bdan5 found.. state: 3
[2011-02-14 16:58:21.99720] I
[glusterd-utils.c:701:glusterd_volume_brickinfo_get] : Found brick
[2011-02-14 16:58:21.102568] I
[glusterd-handler.c:2065:glusterd_op_stage_send_resp] glusterd:
Responded to stage, ret: 0

---- on bdan5 (current location of data) ---

[2011-02-14 16:58:59.159290] I
[glusterd-handler.c:425:glusterd_handle_cluster_lock] glusterd: Received
LOCK from uuid: 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:59.159567] I [glusterd-utils.c:236:glusterd_lock]
glusterd: Cluster lock held by 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:59.159621] I
[glusterd-handler.c:1973:glusterd_op_lock_send_resp] glusterd:
Responded, ret: 0
[2011-02-14 16:58:59.198752] I
[glusterd-handler.c:464:glusterd_handle_stage_op] glusterd: Received
stage op from uuid: 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:59.198799] I
[glusterd-utils.c:730:glusterd_volume_brickinfo_get_by_brick] : brick:
bdan5:/vg2lv1/glusterfs2
[2011-02-14 16:58:59.199123] I
[glusterd-utils.c:701:glusterd_volume_brickinfo_get] : Found brick
[2011-02-14 16:58:59.199183] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199201] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199214] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199227] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199240] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199251] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199263] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199275] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199287] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199299] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199312] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.201616] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.201667] I
[glusterd-handler.c:2065:glusterd_op_stage_send_resp] glusterd:
Responded to stage, ret: 0

---- on bdan0 (where volume commands were executed)---

[2011-02-14 16:58:59.159586] I
[glusterd-handler.c:1222:glusterd_handle_replace_brick] glusterd:
Received replace brick req
[2011-02-14 16:58:59.159694] I [glusterd-utils.c:236:glusterd_lock]
glusterd: Cluster lock held by 473c4e80-d55c-45ee-8a52-a4aba4da8317
[2011-02-14 16:58:59.159717] I
[glusterd-handler.c:2832:glusterd_op_txn_begin] glusterd: Acquired local
lock
[2011-02-14 16:58:59.160085] I
[glusterd3_1-mops.c:1091:glusterd3_1_cluster_lock] glusterd: Sent lock
req to 14 peers
[2011-02-14 16:58:59.160168] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: cc81118c-ff50-4823-8e75-f710b88f9d9c
[2011-02-14 16:58:59.160204] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160259] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 66f88fb4-5aa7-44c0-870f-a2d41fd88c47
[2011-02-14 16:58:59.160282] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160323] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 29566f7b-fa3a-43fd-9751-086479cedd01
[2011-02-14 16:58:59.160349] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160396] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 8e037099-d853-4fc3-a4b7-28980fd2b131
[2011-02-14 16:58:59.160419] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160468] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 814b274a-8dbe-40d2-a406-c41d347f8ad1
[2011-02-14 16:58:59.160538] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160577] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 80363690-44c6-4d86-9607-5af0764e8900
[2011-02-14 16:58:59.160600] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.160638] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 344ba772-9c6f-4263-a380-79a24fb6f741
[2011-02-14 16:58:59.160660] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.169445] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 6753c5fa-e33c-4724-930e-53efa6e95563
[2011-02-14 16:58:59.169481] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.172089] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 39422893-699c-4f4a-a1c4-e038a4c928b8
[2011-02-14 16:58:59.172121] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.172382] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 681f3a0d-4fad-4b18-a625-da747f551155
[2011-02-14 16:58:59.172406] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.177922] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: fca1393e-ee59-4b4c-9205-b3dcdd2ee1ae
[2011-02-14 16:58:59.177946] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.181466] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: a2cd8e72-151a-4fb1-936d-1fa9ef9c1ca4
[2011-02-14 16:58:59.181506] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.181548] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: fa313e90-3e8e-4f4d-a266-43f34777ed07
[2011-02-14 16:58:59.181570] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.195991] I
[glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received
ACC from uuid: 551ca306-46a9-423a-92a8-0e33ccbb3b2b
[2011-02-14 16:58:59.196023] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.196072] I
[glusterd-utils.c:730:glusterd_volume_brickinfo_get_by_brick] : brick:
bdan5:/vg2lv1/glusterfs2
[2011-02-14 16:58:59.197837] I
[glusterd-utils.c:2144:glusterd_friend_find_by_hostname] glusterd:
Friend bdan5 found.. state: 3
[2011-02-14 16:58:59.197875] I
[glusterd-utils.c:701:glusterd_volume_brickinfo_get] : Found brick
[2011-02-14 16:58:59.197950] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.197976] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.197999] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198023] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198046] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198068] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198091] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198155] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198180] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198204] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.198228] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199251] I
[glusterd-utils.c:2105:glusterd_friend_find_by_hostname] glusterd:
Friend bdan11 found.. state: 3
[2011-02-14 16:58:59.199580] I
[glusterd3_1-mops.c:1233:glusterd3_1_stage_op] glusterd: Sent op req to
14 peers
[2011-02-14 16:58:59.202555] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 814b274a-8dbe-40d2-a406-c41d347f8ad1
[2011-02-14 16:58:59.202582] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.203934] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 6753c5fa-e33c-4724-930e-53efa6e95563
[2011-02-14 16:58:59.203957] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204128] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 8e037099-d853-4fc3-a4b7-28980fd2b131
[2011-02-14 16:58:59.204156] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204281] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: fca1393e-ee59-4b4c-9205-b3dcdd2ee1ae
[2011-02-14 16:58:59.204303] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204528] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: fa313e90-3e8e-4f4d-a266-43f34777ed07
[2011-02-14 16:58:59.204564] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204608] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 681f3a0d-4fad-4b18-a625-da747f551155
[2011-02-14 16:58:59.204631] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204727] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 66f88fb4-5aa7-44c0-870f-a2d41fd88c47
[2011-02-14 16:58:59.204763] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204806] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 344ba772-9c6f-4263-a380-79a24fb6f741
[2011-02-14 16:58:59.204829] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.204939] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 29566f7b-fa3a-43fd-9751-086479cedd01
[2011-02-14 16:58:59.204970] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.205012] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: a2cd8e72-151a-4fb1-936d-1fa9ef9c1ca4
[2011-02-14 16:58:59.205034] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.206075] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 80363690-44c6-4d86-9607-5af0764e8900
[2011-02-14 16:58:59.206099] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.206724] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 551ca306-46a9-423a-92a8-0e33ccbb3b2b
[2011-02-14 16:58:59.206747] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 16:58:59.227190] I
[glusterd3_1-mops.c:594:glusterd3_1_stage_op_cbk] glusterd: Received ACC
from uuid: 39422893-699c-4f4a-a1c4-e038a4c928b8
[2011-02-14 16:58:59.227214] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster
[2011-02-14 17:07:16.354075] E
[glusterd3_1-mops.c:1357:glusterd_handle_rpc_msg] : Unable to set cli op: 16
[2011-02-14 17:07:16.354152] E
[glusterd-utils.c:399:glusterd_serialize_reply] : Failed to encode message
[2011-02-14 17:07:16.354179] E
[glusterd-utils.c:441:glusterd_submit_reply] : Failed to serialize reply
[2011-02-14 17:07:16.354200] W
[glusterd3_1-mops.c:1498:glusterd_handle_rpc_msg] : Returning -1

-Dan.



------------------------------

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


End of Gluster-users Digest, Vol 34, Issue 20
*********************************************



The information contained in this email message and its attachments is intended only for the private and confidential use of the recipient(s) named above, unless the sender expressly agrees otherwise. Transmission of email over the Internet is not a secure communications medium. If you are requesting or have requested the transmittal of personal data, as defined in applicable privacy laws by means of email or in an attachment to email, you must select a more secure alternate means of transmittal that supports your obligations to protect such personal data. If the reader of this message is not the intended recipient and/or you have received this email in error, you must take no action based on the information in this email and you are hereby notified that any dissemination, misuse or copying or disclosure of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email and delete the original message. 



