The Gluster community is pleased to announce the release of Gluster 3.9. This is a major release that includes a number of changes. Many improvements contribute to better support for Gluster in containers and for running your storage on the same servers as your hypervisors. Additionally, we’ve focused on integrating with other projects in the open source ecosystem. …Read more
Important happenings for Gluster this month: A great Gluster Developer Summit this month, thanks to all who participated. Find all of our recorded talks with slides at: https://www.youtube.com/user/GlusterCommunity/playlists http://www.slideshare.net/GlusterCommunity/ Changes to the Community Meeting: We’re trying out something new for a few weeks; we’ve removed our updates from the Community Meeting and made it …Read more
Gluster can have trouble delivering good performance for small-file workloads. This problem is acute for features such as tiering and RDMA, which employ expensive hardware such as SSDs or InfiniBand. In such workloads the hardware’s benefits go unrealized, so there is little return on the investment. A major contributing factor to this problem has …Read more
Gluster and Ceph are delighted to be hosting a Software Defined Storage devroom at FOSDEM 2017. Important dates: Nov 16: deadline for submissions; Dec 1: speakers notified of acceptance; Dec 5: schedule published. This year, we’re looking for conversations about open source software defined storage, use cases in the real world, and where the future …Read more
Back in February of this year Martin Kletzander gave a talk at DevConf.cz on GCC plug-ins. It would seem that GCC plug-ins are a feature that has gone largely overlooked for many years. I came back from DevConf inspired to try it out. A quick search showed me I was not alone – a colleague …Read more
Important happenings for Gluster this month: GlusterFS-3.9rc1 is out for testing! Gluster 3.8.4 is released; users are advised to update: http://blog.gluster.org/2016/09/glusterfs-3-8-4-is-available-gluster-users-are-advised-to-update/ Gluster Developer Summit: Next week, Gluster Developer Summit happens in Berlin, October 6 through 7. Our schedule: https://www.gluster.org/events/schedule2016/ We will be recording scheduled talks and posting them to our YouTube channel! Gluster-users: Amudhan …Read more
Even though the last 3.8 release was just two weeks ago, we’re sticking to the release schedule and have 3.8.4 ready for all our current and future users. As with all updates, we advise users of previous versions to upgrade to the latest and greatest. …
So let’s look at how this is done.
[Slice]
CPUQuota=200%
# cd /etc/systemd/system
# mkdir glusterd.service.d
# echo -e "[Service]\nCPUAccounting=true\nSlice=glusterfs.slice" > glusterd.service.d/override.conf
# systemctl daemon-reload
# systemctl stop glusterd
# killall glusterfsd && killall glusterfs
# systemctl daemon-reload
# systemctl start glusterd
# systemctl show glusterd | grep slice
Slice=glusterfs.slice
ControlGroup=/glusterfs.slice/glusterd.service
Wants=glusterfs.slice
After=rpcbind.service glusterfs.slice systemd-journald.socket network.target basic.target
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 19
├─glusterfs.slice
│ └─glusterd.service
│   ├─ 867 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
│   ├─1231 /usr/sbin/glusterfsd -s server-1 --volfile-id repl.server-1.bricks-brick-repl -p /var/lib/glusterd/vols/repl/run/server-1-bricks-brick-repl.pid
│   └─1305 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log
├─user.slice
│ └─user-0.slice
│   └─session-1.scope
│     ├─2075 sshd: root@pts/0
│     ├─2078 -bash
│     ├─2146 systemd-cgls
│     └─2147 less
└─system.slice
# systemctl set-property glusterfs.slice CPUQuota=350% |
[Unit]
Description=CPU soak task

[Service]

[Install]
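As a minimal sketch of a complete soak unit, assuming any CPU-bound command will do and that it should run inside glusterfs.slice so the quota above constrains it (the ExecStart command and install target here are illustrative assumptions, not necessarily the original's):

[Unit]
Description=CPU soak task

[Service]
# Hypothetical CPU-bound command, chosen purely for illustration
ExecStart=/usr/bin/sha1sum /dev/zero
# Assumption: place the task in the slice so the CPUQuota applies to it
Slice=glusterfs.slice

[Install]
WantedBy=multi-user.target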
With that said…happy hacking 🙂
Gluster Eventing is a new feature developed as part of the Gluster.Next
initiatives. It provides close to real-time notifications and alerts for
Gluster cluster state changes.
WebSocket APIs to consume events will be added later. For now, we emit
events via another popular mechanism called "Webhooks". (Many popular
products provide notifications via Webhooks: GitHub, Atlassian,
Dropbox, and many more.)
Webhooks are similar to callbacks (over HTTP): on each event, Gluster
calls the configured Webhook URL via POST. A Webhook is a web server
that listens on a URL, and it can be deployed outside of the
cluster; Gluster nodes only need to be able to reach the Webhook server
on the configured port. We will discuss adding and testing Webhooks
later.
An example Webhook written in Python:
from flask import Flask, request

app = Flask(__name__)

@app.route("/listen", methods=["POST"])
def events_listener():
    gluster_event = request.json
    if gluster_event is None:
        # No event to process; may be a test call
        return "OK"

    # Process gluster_event
    # {
    #     "nodeid": NODEID,
    #     "ts": EVENT_TIMESTAMP,
    #     "event": EVENT_TYPE,
    #     "message": EVENT_DATA
    # }
    return "OK"

app.run(host="0.0.0.0", port=9000)
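To exercise the listener before wiring it up to Gluster, you can hand-deliver a payload shaped like the structure documented in the comments above. This is a minimal sketch: the event name "VOLUME_CREATE" is assumed from the event list further below, and all field values are made up for illustration.

import requests

# Simulated event in the documented shape; all values are illustrative only
event = {
    "nodeid": "6a9a5e4a-1111-2222-3333-444444444444",
    "ts": 1466000000,
    "event": "VOLUME_CREATE",
    "message": {"name": "gv1"},
}

resp = requests.post("http://192.168.122.188:9000/listen", json=event)
print("%s %s" % (resp.status_code, resp.text))  # expect: 200 OK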
The Eventing feature is not yet available in any release; the patch is
under review in upstream master (http://review.gluster.org/14248). Anybody
interested in trying it out can cherry-pick the patch from review.gluster.org:
git clone http://review.gluster.org/glusterfs
cd glusterfs
git fetch http://review.gluster.org/glusterfs refs/changes/48/14248/5
git checkout FETCH_HEAD
git checkout -b <YOUR_BRANCH_NAME>
./autogen.sh
./configure
make
make install
Start Eventing using:
gluster-eventing start
Other commands available are stop, restart, reload and
status. Run gluster-eventing --help for more details.
Now Gluster can send out notifications via Webhooks. Set up a web
server that listens for POST requests and register its URL with Gluster
Eventing. That's all.
gluster-eventing webhook-add <MY_WEB_SERVER_URL>
For example, if my web server is running at http://192.168.122.188:9000/listen,
then register it using:
gluster-eventing webhook-add http://192.168.122.188:9000/listen
We can also test whether the web server is accessible from all Gluster nodes
using the webhook-test subcommand:
gluster-eventing webhook-test http://192.168.122.188:9000/listen
With the initial patch only basic events are covered; I will add more
events once the patch is merged. The following events are available
now:
Volume Create
Volume Delete
Volume Start
Volume Stop
Peer Attach
Peer Detach
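Inside the Webhook, the listener can branch on the event type to react differently to each of these. A minimal sketch, assuming the "event" field carries names such as "VOLUME_CREATE" and "PEER_ATTACH" (the exact strings may differ in the final patch):

# Hypothetical dispatch table keyed on the "event" field of the payload
def log_event(label):
    def handler(msg):
        print("%s: %s" % (label, msg))
    return handler

HANDLERS = {
    "VOLUME_CREATE": log_event("volume created"),
    "VOLUME_STOP": log_event("volume stopped"),
    "PEER_ATTACH": log_event("peer attached"),
}

def handle_event(gluster_event):
    handler = HANDLERS.get(gluster_event.get("event"))
    if handler is not None:
        handler(gluster_event.get("message"))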
I created a small demo to show this eventing feature; it uses the web server
that is included with the patch for testing. (My laptop hostname is sonne.)
/usr/share/glusterfs/scripts/eventsdash.py --port 8080
Log in to a Gluster node and start Eventing:
gluster-eventing start
gluster-eventing webhook-add http://sonne:8080/listen
Then log in to a VM and run Gluster commands to probe/detach a peer,
create and start volumes, and so on, and watch the real-time notifications
appear where eventsdash is running.
For example:
ssh root@fvm1
gluster peer probe fvm2
gluster volume create gv1 fvm1:/bricks/b1 fvm2:/bricks/b2 force
gluster volume start gv1
gluster volume stop gv1
gluster volume delete gv1
gluster peer detach fvm2
The demo also includes a web UI that refreshes automatically when
something changes in the cluster. (I am still fine-tuning this UI; it is not yet
available with the patch, but it will soon be available as a separate repo
on my GitHub.)
Will this feature be available in the 3.8 release?
Sadly, no. I couldn't get it merged before the 3.8 feature freeze 🙁
Is it possible to create a simple Gluster dashboard outside the
cluster?
It is possible. Along with the events, we also need REST APIs to get
more information from the cluster or to perform actions in the cluster.
(WIP REST APIs are available here.)
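For the read-only half of that idea, the Webhook itself can feed a dashboard. Below is a minimal self-contained sketch, assuming an in-memory buffer is acceptable; the /events endpoint and the 100-event limit are invented for illustration.

from collections import deque
from flask import Flask, jsonify, request

app = Flask(__name__)
recent_events = deque(maxlen=100)  # keep only the last 100 events in memory

@app.route("/listen", methods=["POST"])
def events_listener():
    gluster_event = request.json
    if gluster_event is not None:
        recent_events.append(gluster_event)
    return "OK"

@app.route("/events")
def events_feed():
    # A dashboard outside the cluster can poll this endpoint
    return jsonify(events=list(recent_events))

app.run(host="0.0.0.0", port=9000)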
Is it possible to filter only alerts or critical notifications?
Thanks to Kotresh for the suggestion. Yes, it is possible to add
event_type and event_group information to the dict so that events can
be filtered easily. (This is not available yet, but I will add it once
the patch is merged in master.)
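As a minimal sketch of what such filtering could look like, assuming a future event_group field whose name and values are hypothetical here:

# Hypothetical filter: forward only events whose assumed event_group is "alert"
def is_alert(gluster_event):
    return gluster_event.get("event_group") == "alert"

def process(gluster_event):
    if is_alert(gluster_event):
        print("ALERT: %s %s" % (gluster_event.get("event"),
                                gluster_event.get("message")))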
Is documentation available to learn more about the eventing design and
internals?
The design spec is available here
(it discusses WebSockets, which are not supported
yet). Usage documentation is available in the commit
message of the patch (http://review.gluster.org/14248).
Comments and Suggestions Welcome.
Recently I got a chance to consolidate the demo videos covering how GlusterFS can be used in Docker, Kubernetes, and OpenShift. I have placed everything in a single channel for better tracking. The demo videos are analogous to previously published blog entries in this space. Here is a brief description…
Tiering is a powerful feature in Gluster. It divides the available storage into two parts: the hot tier, populated by small fast storage devices like SSDs or a RAM disk, and the cold tier, populated by large slow devices like mechanical HDDs. By placing the most recently accessed files in the hot tier, Gluster can quickly process …Read more
Now that we know how to create GlusterFS snapshots, it will be handy to know how to delete them as well. Right now I have a cluster with two volumes at my disposal. As can be seen below, each volume has 1 brick. # gluster volume info Volume Name: test_v…
In a previous blog, I explained a method to run systemd Gluster containers using oci-systemd-hook in a locked-down mode. Today we will discuss how to run Gluster systemd containers without ‘privileged’ mode!! Awesome, isn’t it? I owe this blog to a few people, the latest being…
Important happenings for Gluster this month: 3.7.14 released 3.8.3 released CFP for Gluster Developer Summit open until August 31st gluster-users: [Gluster-users] release-3.6 end of life http://www.gluster.org/pipermail/gluster-users/2016-August/028078.html – Joe requests a review of the 3.6 EOL proposal [Gluster-users] The out-of-order GlusterFS 3.8.3 release addresses a usability regression http://www.gluster.org/pipermail/gluster-users/2016-August/028155.html Niels de Vos announces 3.8.3 [Gluster-users] GlusterFS-3.7.14 released …Read more
In previous blog posts we discussed how to use GlusterFS as persistent storage in Kubernetes and OpenShift. In a nutshell, GlusterFS can be deployed/used in a Kubernetes/OpenShift environment as: *) containerized GlusterFS (Pod) *) GlusterFS as an OpenShift Service and Endpoint *) GlusterFS volume…
In this context I am talking about the dynamic provisioning capability of the ‘glusterfs’ plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present, there are no network storage provisioners in Kubernetes, even though there are provisioners for cloud providers. The idea here is…
Important happenings for Gluster this month: The first stable update for 3.8 is available: GlusterFS 3.8.1 fixes several bugs. Gluster Developer Summit: October 6, 7, 2016, directly following LinuxCon Berlin. This is an invite-only event, but you can apply for an invitation; the deadline for applications is July 31, 2016. Apply for an invitation: http://goo.gl/forms/JOEzoimW9qVV4jdz1 Gluster-users: Aravinda …Read more
This feature is about having a WORM-based compliance/archiving solution in GlusterFS. It mainly focuses on the following: Compliance: laws and regulations for accessing and storing intellectual property and confidential information. WORM/Retention: storing data in a tamper-proof and secure way & data accessibility policies. Archive: storing data effectively and efficiently & a disaster-recovery solution. WORM …Read more
Announcing 3.8! As of June 14, 3.8 is released for general use. The 3.8 release focuses on: containers with the inclusion of Heketi, hyperconvergence, ecosystem integration, and protocol improvements with NFS Ganesha. http://blog.gluster.org/2016/06/glusterfs-3-8-released/ Please note that this release also marks the end of updates for Gluster 3.5. Upcoming Events: Red Hat Summit, June 28-30. Gluster related talks: …Read more
Gluster.org announces the release of 3.8 on June 14, 2016, marking a decade of active development. The 3.8 release focuses on: containers with the inclusion of Heketi, hyperconvergence, ecosystem integration, and protocol improvements with NFS Ganesha. Contributed features are marked with the supporting organizations. Automatic conflict resolution, self-healing improvements (Facebook). Synchronous Replication receives a …Read more