[Gluster-users] Problem: rsync files to glusterfs fail randomly~

bonn deng bonndeng at gmail.com
Fri May 21 02:50:26 UTC 2010


    Hello, everyone~
    A few days ago I asked the same question, and I got a reply from Joel
Vennin (thanks again). He said they had hit a similar problem, and once they
removed the read-ahead translator from their configuration everything worked
fine. I took his advice and things improved for a while, but today the same
problem occurred again, and I have verified that our configuration no longer
includes the read-ahead translator (a sketch of our current client volfile is
included below the log excerpt). Here's the log from the gfs client where the
rsync operation failed:
……
[2010-05-21 10:25:03] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/web/201005/20100521) inode (ptr=0x18ea9960, ino=231986435693, gen=5471125106254156105) found conflict (ptr=0x190032b0, ino=231986435693, gen=5471125106254156522)
[2010-05-21 10:25:03] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 35641: /uigs/pblog/web/201005/20100521/.pb_access_log.201005211020.10.15.4.61.nginx1.5FyvzI => -1 (No such file or directory)
[2010-05-21 10:25:03] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 35644: /uigs/pblog/web/201005/20100521/pb_access_log.201005211020.10.15.4.61.nginx1 => -1 (No such file or directory)
……
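
    Note that the failing creates above hit rsync's temporary dot-file first
(the one with the random ".5FyvzI" suffix) and then the final name, so the
failure seems to be triggered while rsync creates its temp file. One
workaround we are thinking of testing (only a guess on our side; the paths
below are illustrative, not our real ones):

  # --inplace makes rsync write directly into the destination file instead
  # of creating a temporary dot-file and renaming it into place, which is
  # where the failing CREATE calls above come from (paths are placeholders):
  rsync -av --inplace /data/pblog/web/ /mnt/glusterfs/uigs/pblog/web/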

    Does anybody know what may cause such a problem? Thanks in
advance, any suggestion would be appreciated~
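
    For reference, this is roughly the shape of our client volfile now that
the read-ahead translator is gone. The server names, volume names and brick
paths below are placeholders, not our real 200-server configuration:

  # Illustrative 3.0-style client volfile; the performance/read-ahead
  # translator that used to sit on top of "dist" has been removed, so the
  # fuse mount now uses the distribute volume directly.
  volume client-001
    type protocol/client
    option transport-type tcp
    option remote-host server001
    option remote-subvolume brick
  end-volume

  volume client-002
    type protocol/client
    option transport-type tcp
    option remote-host server002
    option remote-subvolume brick
  end-volume

  volume dist
    type cluster/distribute
    subvolumes client-001 client-002
  end-volume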

On Mon, May 10, 2010 at 9:12 PM, bonn deng <bonndeng at gmail.com> wrote:

>
>     Hello, everyone~
>     We're using glusterfs as our data storage tool. After we upgraded gfs
> from version 2.0.7 to 3.0.3, we encountered some weird problems: we need to
> rsync some files to the gfs cluster every five minutes, but randomly some
> files are not transferred correctly, or are even not transferred at all. I
> sshed to the machine where the rsync operation failed and checked the log
> under the directory "/var/log/glusterfs", which reads:
>
> ……
> [2010-05-10 20:32:05] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 4499440: /uigs/sugg/.sugg_access_log.2010051012.10.11.89.102.nginx1.cMi7LW => -1 (No such file or directory)
> [2010-05-10 20:32:13] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 4499542: /sogou-logs/nginx-logs/proxy/.proxy_access_log.2010051019.10.11.89.102.nginx1.MnUaIR => -1 (No such file or directory)
>
> [2010-05-10 20:35:12] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaaac010fb0, ino=183475774468, gen=5467705122580597717) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
> [2010-05-10 20:35:16] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x1d783b0, ino=245151107323, gen=5467705122580597722) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
>
> [2010-05-10 20:40:08] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaab806cca0, ino=183475774468, gen=5467705122580597838) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
> [2010-05-10 20:40:12] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x1d7c190, ino=245151107323, gen=5467705122580597843) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
>
> [2010-05-10 20:45:10] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaab00a6a90, ino=183475774468, gen=5467705122580597838) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
> [2010-05-10 20:45:14] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x2aaab80960e0, ino=245151107323, gen=5467705122580597669) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
> ……
>
>     Does anybody know what's wrong with our gfs? And another question: in
> order to trace the problem, we want to know which machine a failed file
> should have been placed on. Where can I get this information, or what can
> I do?
>     By the way, we're now using glusterfs version 3.0.3, and we have nearly
> 200 data servers in the gfs cluster (in distribute mode, not replicate).
> What else should I include here to make our problem clear, if it isn't
> already?
>     Thanks for your help! Any suggestion would be appreciated~
>
>
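
P.S. Regarding the placement question quoted above, this is what we plan to
try in order to find which server a file hashes to under cluster/distribute.
It is only a sketch: the pathinfo xattr may not exist in 3.0.3, and the
hostnames, export path and file name below are placeholders:

  # Newer GlusterFS releases expose the brick location as a virtual xattr
  # on the client mount; on 3.0.3 this may simply return an error:
  getfattr -n trusted.glusterfs.pathinfo /mnt/glusterfs/uigs/pblog/web/201005/20100521/somefile

  # Fallback: probe every server's backend export directly (hostnames and
  # export path are placeholders for our ~200 servers):
  for h in $(seq -f "server%03g" 1 200); do
    ssh "$h" test -e /export/brick/uigs/pblog/web/201005/20100521/somefile \
      && echo "found on $h"
  done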

