[Gluster-users] Settings for VM hosting

Martin Toth snowmailer at gmail.com
Thu Apr 18 13:13:25 UTC 2019


Hi,

I am also curious about your setup and settings. I have exactly the same setup and use case.

- why do you use sharding on replica 3? Do you have various sizes of bricks (disks) per node? (For reference, enabling it looks like the sketch just below.)
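
For anyone following along, enabling it looks something like this (a sketch only; the volume name "glusterfs" and the 64MB shard size are taken from the config quoted below):

    # Split large files (VM images) into fixed-size shards. On replica 3
    # this keeps self-heal granular: only changed shards get healed, not
    # whole multi-GB images. Only files created after this are sharded.
    gluster volume set glusterfs features.shard on

    # Shard size, matching the quoted config. Changing this later, or
    # disabling sharding on a volume that already holds sharded files,
    # can corrupt data, so it should be set once up front.
    gluster volume set glusterfs features.shard-block-size 64MB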

I wonder if someone will share their settings for this setup.
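
In the meantime, a quick way to dump a volume's effective values for comparison (a sketch, assuming the volume is named "glusterfs" as in the quote below):

    # Print all options for the volume and keep the VM-relevant ones.
    gluster volume get glusterfs all | grep -E 'shard|quorum|eager-lock|remote-dio|ping-timeout'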

BR!

> On 18 Apr 2019, at 09:27, lemonnierk at ulrar.net wrote:
> 
> Hi,
> 
> We've been using the same settings, found in an old email here, since
> v3.7 of gluster for our VM hosting volumes. They've been working fine,
> but since we've just installed v6 for testing, I figured there might
> be new settings I should be aware of.
> 
> So for access through libgfapi (qemu), for VM hard drives, are those
> settings still optimal and recommended?
> 
> Volume Name: glusterfs
> Type: Replicate
> Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ips1adm.X:/mnt/glusterfs/brick
> Brick2: ips2adm.X:/mnt/glusterfs/brick
> Brick3: ips3adm.X:/mnt/glusterfs/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> features.shard: on
> features.shard-block-size: 64MB
> cluster.data-self-heal-algorithm: full
> network.ping-timeout: 30
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> 
> Thanks!
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
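
For what it's worth, attaching a disk through libgfapi from qemu looks roughly like this (a sketch; the image name and sizes are placeholders, the host and volume names are taken from the output above):

    # Create a qcow2 image directly on the volume over libgfapi (no FUSE mount).
    qemu-img create -f qcow2 gluster://ips1adm.X/glusterfs/vm1.qcow2 20G

    # Boot a guest with that image as a virtio disk, again over libgfapi.
    # cache=none pairs with the network.remote-dio / O_DIRECT tuning above.
    qemu-system-x86_64 -m 2048 \
      -drive file=gluster://ips1adm.X/glusterfs/vm1.qcow2,format=qcow2,if=virtio,cache=none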
