Server & DevOps · October 21, 2024 · 10 min read

How to Set Up GlusterFS on Ubuntu

A complete guide to setting up a distributed, replicated GlusterFS filesystem across multiple Ubuntu 22.04 nodes, including installation, volume creation, client mounting, maintenance, and troubleshooting.


Setting Up GlusterFS on Ubuntu 22.04

GlusterFS is a scalable, distributed filesystem that ensures high availability and fault tolerance. This guide walks through configuring GlusterFS on three Ubuntu 22.04 servers, enabling a replicated filesystem across nodes. We also cover maintenance tasks, performance optimizations, and troubleshooting techniques for production-ready setups.

Overview

We will configure GlusterFS across three servers:

  • web: 10.0.0.2
  • web-node1: 10.0.0.5
  • web-node2: 10.0.0.6

By replicating data across these nodes, GlusterFS ensures fault tolerance and consistent access to data, even if a node fails.

Prerequisites

  • Three Ubuntu 22.04 servers with sudo privileges.
  • Network connectivity between the servers.
  • Update the /etc/hosts file on each server with the following entries:
10.0.0.2 web
10.0.0.5 web-node1
10.0.0.6 web-node2

Installation Steps

Step 1: Install GlusterFS on All Nodes

Run the following commands on each server:

sudo apt update
sudo apt install glusterfs-server -y
sudo systemctl start glusterd
sudo systemctl enable glusterd

Step 2: Configure the GlusterFS Trusted Pool

From the primary node (web), probe the other nodes:

sudo gluster peer probe web-node1
sudo gluster peer probe web-node2

Tip: Run gluster peer status on any node to verify that all peers are connected.
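If you are scripting the setup, you can wait for the pool to form before moving on. A minimal sketch, assuming the two peer hostnames used in this guide, that polls gluster peer status until both peers report as connected:

```shell
#!/bin/sh
# Poll the trusted pool until both probed peers show "Connected".
# Assumes the hostnames web-node1 and web-node2 from this guide.
until [ "$(sudo gluster peer status | grep -c 'Connected')" -ge 2 ]; do
    echo "Waiting for peers to connect..."
    sleep 2
done
echo "All peers connected."
```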

Step 3: Create Brick Directories

Create brick directories on all nodes to store GlusterFS data:

sudo mkdir -p /glusterfs/brick1/gv0
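In production, Gluster's documentation recommends placing bricks on a dedicated filesystem rather than the root partition. A hedged sketch, assuming a spare, unused disk at /dev/sdb (a hypothetical device name on your hardware), formatted as XFS:

```shell
# ASSUMPTION: /dev/sdb is an empty, dedicated disk. Formatting destroys its data.
sudo mkfs.xfs -i size=512 /dev/sdb
sudo mkdir -p /glusterfs/brick1
# Mount it at the brick parent directory and persist the mount.
echo '/dev/sdb /glusterfs/brick1 xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /glusterfs/brick1
sudo mkdir -p /glusterfs/brick1/gv0
```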

Step 4: Create and Start the Volume

On the primary node, create a replicated GlusterFS volume. Note that if a brick directory sits on the root partition, Gluster will refuse to create the volume unless you append force to the command:

sudo gluster volume create gv0 replica 3 web:/glusterfs/brick1/gv0 web-node1:/glusterfs/brick1/gv0 web-node2:/glusterfs/brick1/gv0
sudo gluster volume start gv0
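Before mounting, confirm the volume came up as expected:

```shell
# Show the volume's configuration: type, replica count, brick list, options.
sudo gluster volume info gv0
# Show runtime status: whether each brick process and self-heal daemon is online.
sudo gluster volume status gv0
```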

Step 5: Mount the GlusterFS Volume

Install the GlusterFS client and mount the volume on each node:

sudo apt install glusterfs-client -y
sudo mkdir -p /mnt/glusterfs
sudo mount -t glusterfs web:/gv0 /mnt/glusterfs

To auto-mount at boot, add the following line to /etc/fstab:

web:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0

Testing the Setup

Create a test file on one node to verify data replication:

sudo touch /mnt/glusterfs/testfile.txt

Check the mount point on the other nodes; the file should appear on all of them, confirming replication.
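Because every brick in a replica-3 volume holds a full copy, the test file should also be visible directly under the brick directory on each node. (Inspect the brick read-only for verification; all normal reads and writes should go through the mount point.)

```shell
# On each node: a file written through the mount appears in the local brick.
ls -l /glusterfs/brick1/gv0/testfile.txt
```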

Adding a New Node

Step 1: Prepare the New Node

On the new node, install GlusterFS as in Step 1 of the installation. Then add its IP to /etc/hosts on all servers:

10.0.0.7 web-node3

Step 2: Add the Node to the Trusted Pool

sudo gluster peer probe web-node3
sudo gluster peer status
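Probing only adds the node to the trusted pool; it will not hold any data until its brick is added to the volume. For this replicated volume, that means raising the replica count:

```shell
# Create the brick directory on web-node3 first, then from any existing
# node extend the replica set from 3 to 4 copies.
sudo gluster volume add-brick gv0 replica 4 web-node3:/glusterfs/brick1/gv0
```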

Network Optimization

Enhance performance by tuning TCP settings:

sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'
sudo sysctl -w net.core.netdev_max_backlog=250000
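Settings applied with sysctl -w are lost on reboot. To persist them, place the same values in a sysctl configuration file (the filename 90-glusterfs.conf is an arbitrary choice):

```
# /etc/sysctl.d/90-glusterfs.conf -- applied at boot, or immediately with:
#   sudo sysctl --system
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000
```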

GlusterFS Maintenance

Rebalancing Volumes

Rebalance the volume after adding or removing bricks:

sudo gluster volume rebalance gv0 start
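Rebalancing runs asynchronously in the background; you can track its progress or stop it if it generates too much load:

```shell
# Report files scanned, rebalanced, and failures per node.
sudo gluster volume rebalance gv0 status
# Abort a rebalance that is impacting production traffic.
sudo gluster volume rebalance gv0 stop
```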

Checking Volume Health

Monitor volume health and heal inconsistencies:

sudo gluster volume heal gv0 info
sudo gluster volume heal gv0

Troubleshooting

Peer Not Connected

If a peer shows as disconnected, verify:

  • Network connectivity between nodes.
  • The required ports are open: 24007-24008 for the management daemon, plus one port per brick allocated from 49152 upward.
  • The glusterd service is running on all nodes:

sudo systemctl status glusterd
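If a host firewall is in play, open Gluster's ports explicitly. With ufw on Ubuntu that looks like the following (the 49152-49251 range allows up to 100 brick ports; widen it if you run more bricks per node):

```shell
# Management daemon ports.
sudo ufw allow 24007:24008/tcp
# Brick ports, allocated one per brick starting at 49152.
sudo ufw allow 49152:49251/tcp
```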

Conclusion

By following this guide, we have set up a distributed and replicated GlusterFS filesystem across multiple nodes. This configuration ensures high availability and fault tolerance, making it suitable for demanding environments. We recommend regularly monitoring performance, maintaining volume health, and following best practices to ensure long-term reliability.
