2020 has not been a year we could have predicted. With a worldwide pandemic and lives thrown out of gear, as we head into 2021 we are thankful that our community and project continued to welcome new developers and users and to make small gains. For that and a lot of other things we …Read more
The initial rounds of conversation around the planning of content for release 8 have helped the project identify one key thing: the need to stagger features and enhancements over multiple releases. Thus, while release 8 is unlikely to be as feature-heavy as previous releases, it will be the starting point in the set …Read more
The Gluster 4.0 release is coming out, one of the most important releases for the Gluster community in quite some time. The bump in the major version is being brought about by a few new changes, namely a change in the on-wire protocol, and the new management framework, GlusterD2 (GD2 for short). GD2 has been …Read more
Gluster and Ceph are delighted to be hosting a Software Defined Storage devroom at FOSDEM 2017. Important dates: Nov 16: Deadline for submissions Dec 1: Speakers notified of acceptance Dec 5: Schedule published This year, we’re looking for conversations about open source software defined storage, use cases in the real world, and where the future …Read more
So let’s look at how this is done.
[Slice]
CPUQuota=200%
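As a quick sketch of how that slice could be put in place (the file path and the use of a heredoc are my assumptions; only the glusterfs.slice name and the CPUQuota value come from this post):

# Hypothetical way to create the slice unit shown above.
cat > /etc/systemd/system/glusterfs.slice <<'EOF'
[Slice]
CPUQuota=200%
EOF
systemctl daemon-reload

With the slice defined, the glusterd service can be pointed at it with a drop-in, as shown next: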
# cd /etc/systemd/system
# mkdir glusterd.service.d
# echo -e "[Service]\nCPUAccounting=true\nSlice=glusterfs.slice" > glusterd.service.d/override.conf
# systemctl daemon-reload
# systemctl stop glusterd
# killall glusterfsd && killall glusterfs
# systemctl daemon-reload
# systemctl start glusterd
# systemctl show glusterd | grep slice
Slice=glusterfs.slice
ControlGroup=/glusterfs.slice/glusterd.service
Wants=glusterfs.slice
After=rpcbind.service glusterfs.slice systemd-journald.socket network.target basic.target
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 19
├─glusterfs.slice
│ └─glusterd.service
│   ├─ 867 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
│   ├─1231 /usr/sbin/glusterfsd -s server-1 --volfile-id repl.server-1.bricks-brick-repl -p /var/lib/glusterd/vols/repl/run/server-1-bricks-brick-repl.pid
│   └─1305 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log
├─user.slice
│ └─user-0.slice
│   └─session-1.scope
│     ├─2075 sshd: root@pts/0
│     ├─2078 -bash
│     ├─2146 systemd-cgls
│     └─2147 less
└─system.slice
# systemctl set-property glusterfs.slice CPUQuota=350%
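If you want to confirm the change took effect, recent systemd versions expose the quota as a property (a 350% quota shows up as 3.5s of CPU time per second of wall clock); this is just one way to check it, not from the original post:

# Hedged verification step.
systemctl show glusterfs.slice | grep -i cpuquota

Note that systemctl set-property persists the change by default; adding --runtime would make it temporary.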
[Unit]
Description=CPU soak task

[Service]

[Install]
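A complete, self-contained version of the soak service above might look roughly like the following; the file name, the ExecStart busy loop, and the Slice= placement are all my assumptions, included only to show how a test workload can be confined to the same slice:

# /etc/systemd/system/cpu-soak.service -- hypothetical example
[Unit]
Description=CPU soak task

[Service]
# Burn CPU in a tight loop; placing it in glusterfs.slice makes it share the slice's CPUQuota.
ExecStart=/bin/sh -c 'while true; do :; done'
Slice=glusterfs.slice

[Install]
WantedBy=multi-user.target

After systemctl daemon-reload and systemctl start cpu-soak.service, top should show the soak process and the Gluster daemons together staying under the configured quota.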
With that said…happy hacking 🙂
Recently I got a chance to consolidate the demo videos which cover how GlusterFS can be used in Docker, Kubernetes, and OpenShift. I have placed everything in a single channel for better tracking. The demo videos are analogous to previously published blog entries in this space. Here is a brief description…
In a previous blog post, I explained a method (oci-systemd-hook) to run systemd Gluster containers in a locked-down mode. Today we will discuss how to run Gluster systemd containers without 'privileged' mode! Awesome, isn't it? I owe this blog to a few people, the latest being…
In previous blog posts we discussed how to use GlusterFS as persistent storage in Kubernetes and OpenShift. In a nutshell, GlusterFS can be deployed/used in a Kubernetes/OpenShift environment as: *) Containerized GlusterFS (Pod) *) GlusterFS as an OpenShift Service and Endpoint *) GlusterFS volume…
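As a rough illustration of the "Service and Endpoint" approach mentioned in the excerpt above, the manifests below follow the pattern of the upstream Kubernetes GlusterFS volume example; the object name and IP addresses are placeholders, not values from the original post:

# Hypothetical Endpoints + Service pointing Kubernetes at an external Gluster cluster.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.121.10
      - ip: 192.168.121.11
    ports:
      - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
EOF

Pods can then reference the glusterfs-cluster endpoints in a glusterfs volume definition.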
In this context I am talking about the dynamic provisioning capability of the 'glusterfs' plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present, there are no network storage provisioners in Kubernetes, even though there are cloud providers. The idea here is…
OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub. OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus…
We hosted a small meetup/birds of a feather session at USENIX's FAST conference. FAST is a conference focused on File And Storage Technologies, held in Santa Clara, California. Vijay Bellur, Gluster Project Lead, gave a short talk on Gluster.Next, our ongoing architectural evolution in Gluster to improve scaling and enable new use cases like storage as a …Read more
The last week of January and the first week of February were packed with events and meetings.
This blog contains my observations, opinions, and ideas in the hope that they will be useful or at least interesting for some.
The day before FOSDEM starts, the CentOS project organizes a community meetup in the form of their Dojos, this time at an IBM office in Brussels. Because Gluster participates in the CentOS Storage SIG (special interest group), I was asked to present something. My talk was well attended, with questions about different aspects of the Storage SIG's goals.
Many people are interested in the Storage SIG, mainly other SIGs that would like to consume the packages being produced. There is also increasing interest from upcoming architectures (Aarch64 and ppc64le) in getting Gluster running on their new hardware. The CentOS team is working on getting that hardware into the build infrastructure and testing environment, and the Gluster packages will be among the first SIG projects to use it.
I was surprised to see two engineers from Nutanix attend
the talk. They were very attentive when others asked about VM workloads
and hyper-convergence-related topics.
The CentOS team maintains a Gluster environment for virtual machines. CentOS projects can request a VM, and that VM will be located on their OpenNebula "cloud" backed by Gluster. This is a small environment with four servers, connected over InfiniBand. Gluster is set up to use IPoIB, not its native RDMA support. Currently this runs glusterfs-3.6 with a two-way replica, and OpenNebula runs the VMs over QEMU+libgfapi. In the future, this will most likely be replaced by a similar setup based on oVirt.
At FOSDEM, we had a very minimal booth/table. The 500 stickers that Amye Scavarda brought and the bunch of ball pens imported by Humble and Kaushal had all been handed out by around noon on the second day. Lots of people were aware of Gluster, and many were not. We definitely need a better presence next year; visitors should easily see that Gluster is about storage and not only about the good-looking ant. Kaushal and Humble have already written detailed blog posts about FOSDEM.
Some users who knew about Gluster also had questions about Ceph. Unfortunately, I could not point them to a booth where Ceph experts were hanging around. It would really be nice to have some Ceph people staffing a (maybe even shared) table. Interested users should get good advice on picking the best storage solution for their needs, and of course we would like them to try Gluster or Ceph in the first place. Good suggestions for users are important to prevent disappointment and possibly negative promotion.
The talk I gave at FOSDEM attracted an estimated 400-500 people. The auditorium was huge, but "only" filled somewhere between 25-50%, with a lot of people arriving late and some leaving a few minutes early. After the talk there were a lot of questions, and we asked the group to move to a slightly more remote spot so that the next presentation could start without the background noise. Kaleb helped answer some of the visitors' questions, and we directed a few to the people at the Gluster booth as well. The talk seemed to have gone well, and I got a request to present something at the next NLUUG conference.
This was mainly informal chats about the different topics listed in this Google Doc. We encouraged each topic owner to add a link to an etherpad where notes are kept. The presenters of the sessions are expected to send a summary based on those notes to the (community) mailing lists, so I won't cover them here. Some notes I made during conversations that were not really planned:
Richacl is needed for multiprotocol support. Rajesh will post his work-in-progress patches to Gerrit so that others can continue from his start and get it in for glusterfs-3.8. (Michael Adam)
QE will push downstream helper libraries for testing with distaf to the upstream distaf framework repo or the upstream tests repo. MS and Jonathan are the main contacts for defining and enforcing an "upstream first" process. "Secret sauce" tests will not become part of upstream (like some performance things), but all basic functionality should. At the moment we only catch basic functionality problems downstream; when we test upstream we should find them earlier and have more time to fix them, with less chance of slipping release dates.
Downstream QA will ultimately consist of running the upstream distaf framework, the upstream tests repo, and a downstream tests repo.
Paolo Bonzini (KVM maintainer) and Kevin Wolf (QEMU block maintainer) are interested in improved Gluster support in QEMU. Not only would SEEK_DATA/SEEK_HOLE be nice, but also something that makes it possible to detect "allocated but zero-filled." lseek() cannot detect this yet; it might be a suitable topic for discussion during LSF/MM in April.
One of the things the libvirt team (prompted by oVirt/RHEV) asked about was support for a "backup-volfile-server" option. This was a question from Doron Fediuck at FOSDEM as well, and it was the first time I had heard about it. Adding it seemed simple, and a train ride from Brussels to Amsterdam was enough to get something working. I was then informed that someone had already attempted this earlier… That work was not shared with other Gluster developers, so the progress on it was also not clear :-/ After searching for proposed patches, we found that Prasanna had already done quite some work on this (the patch is at v13). He was expected to arrive only after the meetup with the virtualization team had been planned. Kevin sent me a detailed follow-up (in Dutch!) after he reviewed the current status of QEMU/Gluster. There are five suggestions on his list; I will follow up on those later (with Prasanna and gluster-devel@).
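For reference, the FUSE client already understands this as a mount option (the exact spelling of the option has varied between Gluster releases); a hedged example of what it looks like on the client side, as opposed to the libgfapi/QEMU support discussed above, with hostnames and volume name as placeholders:

# Hypothetical hosts and volume; if server-1 is unreachable at mount time,
# the volume file is fetched from server-2 instead.
mount -t glusterfs -o backup-volfile-server=server-2 server-1:/myvolume /mnt/gluster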
Snapshots of VM images can be done already, but they would benefit
from reflink support. This most likely will require a REFLINK FOP in
Gluster, and the appropriate extensions to FUSE and libgfapi.
Something we might want to think about for after Gluster 4.0.
Finally, I met Csaba Henk in real life. He will be picking up adding
support for Kerberos in the multitude of Gluster protocols. More
on that will follow at some point.
Unfortunately, there was no Gluster swag or stickers at DevConf.cz, but this time there were Ceph items! It feels like the Ceph and Gluster community managers should work a little more closely together so that we are evenly represented at events. The impression I heard voiced was something like "Gluster is a project for community users, Ceph is what Red Hat promotes for storage solutions." I'm pretty sure that is not the message we want to relay to others. The talks on Ceph and Gluster at the event(s) were more evenly distributed, so maybe visitors did not notice it like I did.
During the Gluster Workshop (and most of the conference) there was very bad Internet connectivity. This made it very difficult for participants to download the Gluster packages for their distributions, so instead of a very "do-it-yourself" workshop, it became more of a presentation and demonstration. Of the few people who had the courage to open their laptops, only a handful managed to create a Gluster volume and try it out. The attendees of the workshop were quite knowledgeable and did not hesitate to ask good questions.
After the workshop, there were more detailed questions from users and developers. Some were about split-brain resolution and prevention, others (again) about the "backup-volfile-server" mount option for QEMU. We definitely need to promote features like "policy based split-brain resolution," "arbiter volumes," and "sharded volumes" much more. Many users store VM images on Gluster, and anything that helps improve performance and stability gets a lot of interest.
Nir Soffer (working on oVirt/RHEV) wanted to discuss improving their support for Gluster. They currently use FUSE mounts and should move to QEMU+libgfapi to improve performance and work around difficulties with their usage of FUSE filesystems. At least two things could use assistance from the Gluster team:
Speaking to Heinz Mauelshagen (LVM/dm developer) about different aspects of Gluster reminded me of something a FOSDEM visitor asked: would it be possible to have a tiered Gluster volume with a RAM disk as the "hot" tier? This is not something we can do in Gluster now, but it seems that dm-cache can be configured like this. dm-cache just needs a block device, and that can be created at boot. With some configuration options it is possible to set up dm-cache as a write-through cache. This is definitely something I need to check out and relay back to the person who asked (he is in the interesting situation where they can fill up all the RAM slots in their server if they want).
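To give an idea of what that could look like, here is a very rough, untested sketch using lvmcache (the dm-cache frontend in LVM); the module parameters, volume group, and LV names are all placeholders I made up, not anything from the conversation:

# All names and sizes below are hypothetical.
modprobe brd rd_nr=1 rd_size=8388608           # 8 GiB ramdisk shows up as /dev/ram0
pvcreate /dev/ram0
vgextend gluster_vg /dev/ram0                  # add the ramdisk to the brick's volume group
lvcreate --type cache-pool -l 100%PVS -n ram_cache gluster_vg /dev/ram0
lvconvert --type cache --cachemode writethrough \
          --cachepool gluster_vg/ram_cache gluster_vg/brick_lv

With write-through caching no data lives only in RAM, so losing the ramdisk at reboot should not lose writes, though the cache would need to be detached or recreated after every boot.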
Upstream testing in the CentOS CI is available for many open source projects. Gluster will be using it soon for regular distaf test runs and for integration tests with other projects. NFS-Ganesha and Samba are natural candidates for that, so I encouraged Michael and Guenter to attend the CentOS CI talk by Brian Stinson.
Because the (part-time) sysadmins for the Gluster infrastructure (Jenkins, Gerrit, other servers and services) have too little time to maintain everything, OSAS suggested drawing on the expertise of the CentOS team. Many of the CentOS core members are very experienced in maintaining a large number of servers and services, and the Gluster community could probably move much of its infrastructure to the CentOS project and benefit from that expertise. KB Singh sent an email with notes from a meeting on this topic to the gluster-infra list. It is up to the Gluster community to accept their assistance and enjoy a more stable infrastructure.
Wow, did you really read this up to here?! Thanks 🙂
This article originally appeared on
community.redhat.com.
Richard Wareing gave a phenomenal talk at the Southern California Linux Expo on Saturday, January 23, about scaling GlusterFS at Facebook. In his own words: GlusterFS is an open-source (mostly) POSIX-compliant distributed filesystem originally written by Gluster Inc. and now maintained by Red Hat Inc. Here at Facebook it had humble beginnings: a single rack of …Read more
We’re kicking off an updated Monthly Newsletter, coming out mid-month. We’ll highlight special posts, news and noteworthy threads from the mailing lists, events, and other things that are important for the Gluster community. Community Survey Followup Our community survey results from November are out on the blog! News and Noteworthy Threads from the Mailing Lists …Read more
In November 2015, we did our annual Gluster Community Survey, and we had some great responses and turnout! We've taken some of the highlights and distilled them down for our overall community to review. Some interesting things: 68% of respondents have been using Gluster for less than 2 years. 3 shall be the number: The most …Read more
Sometimes the solutions we put in place turn out even better than what we originally hoped. That could sum up the experience of Belgian Internet Service Provider RIS Belgium, which turned to Gluster to solve the problem of distributed storage and ended up getting more benefit from the solution than it expected. Initially RIS, a web …Read more
NFS-Ganesha 2.3 is rapidly winding down to release and it has a bunch of new things in it that make it fairly compelling. A lot of people are also starting to use Red Hat Gluster Storage with the NFS-Ganesha NFS server that is part of that package. Setting up a highly available NFS-Ganesha system using …Read more
We've just wrapped up a great week at LinuxCon Europe 2015 in Dublin with a great showing from the Gluster community! BitRot Detection in GlusterFS – Gaurav Garg, Red Hat & Venky Shankar Advancements in Automatic File Replication in Gluster – Ravishankar N Gluster for Sysadmins – Dustin Black Open Storage in the Enterprise with …Read more
Closing a lot of bugs in Bugzilla through the (Red Hat) Bugzilla web site is a royal pain in the a**. So: 1) get the bugzilla CLI utility: `dnf install -y python-bugzilla` (or `yum install python-bugzilla`) 2) sign in to Bugzilla using your Bugzilla account and password: `bugzilla login` 3) get a list of bugs …Read more
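A hedged sketch of where those steps lead; the flag names are from memory and may differ between python-bugzilla versions, and the product, status, bug ID, and resolution below are placeholders:

# Possible follow-on commands (placeholders throughout; verify against `bugzilla query --help`).
bugzilla query --product GlusterFS --status NEW
bugzilla modify 1234567 --close CURRENTRELEASE --comment "Fixed in a released version"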
We organized Docker Global Hack Day at the Red Hat office on 19th Sep '15. Though there were lots of RSVPs, the turnout for the event was lower than expected. We started the day by showing the recording of the kick-off event. The … Continue reading →