Overview:
This guide covers setting up GlusterFS, a scalable network filesystem, on three Ubuntu 22.04 servers (web, web-node1, and web-node2). GlusterFS allows you to create a distributed filesystem that replicates data across multiple nodes for high availability and fault tolerance.
Prerequisites:
Three Ubuntu 22.04 servers:
web (10.0.0.2)
web-node1 (10.0.0.5)
web-node2 (10.0.0.6)
Add the following entries to the end of the /etc/hosts file on each server:
10.0.0.2 web
10.0.0.5 web-node1
10.0.0.6 web-node2
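If you prefer to script this step, the entries can be appended non-interactively; a minimal sketch, assuming the hosts are not already present in the file:
sudo tee -a /etc/hosts <<'EOF'
10.0.0.2 web
10.0.0.5 web-node1
10.0.0.6 web-node2
EOF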
Installation Steps:
1. Install GlusterFS on All Nodes:
sudo apt-get update
sudo apt-get install glusterfs-server -y
sudo systemctl start glusterd
sudo systemctl enable glusterd
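As an optional sanity check, confirm on each node that the daemon is running and note the installed version:
sudo systemctl status glusterd --no-pager
gluster --version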
2. Configure the GlusterFS Trusted Pool: On web, add the other nodes:
sudo gluster peer probe web-node1
sudo gluster peer probe web-node2
sudo gluster peer status
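As an additional check, the full pool membership (including the local node) can be listed with:
sudo gluster pool list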
3. Create Brick Directories: On all nodes, create the directory for GlusterFS bricks:
sudo mkdir -p /glusterfs/brick1/gv0
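In production, bricks are usually placed on a dedicated disk rather than on the root filesystem. A minimal sketch, assuming a spare disk at /dev/sdb (adjust the device name to your environment):
sudo mkfs.xfs -i size=512 /dev/sdb
sudo mkdir -p /glusterfs/brick1
echo '/dev/sdb /glusterfs/brick1 xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /glusterfs/brick1
sudo mkdir -p /glusterfs/brick1/gv0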
4. Create and Start the Volume: On web, create the volume with replica configuration:
sudo gluster volume create gv0 replica 3 web:/glusterfs/brick1/gv0 web-node1:/glusterfs/brick1/gv0 web-node2:/glusterfs/brick1/gv0
sudo gluster volume start gv0
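To verify that the volume was created and started correctly:
sudo gluster volume info gv0
sudo gluster volume status gv0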
5. Mount the GlusterFS Volume: On each node, install the GlusterFS client and mount the volume:
sudo apt-get install glusterfs-client -y
sudo mkdir -p /mnt/glusterfs
sudo mount -t glusterfs web:/gv0 /mnt/glusterfs
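You can confirm the mount with:
df -h /mnt/glusterfs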
6. To mount the volume automatically at boot, add the following line to /etc/fstab:
web:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0
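To test the fstab entry without rebooting:
sudo umount /mnt/glusterfs
sudo mount -a
df -h /mnt/glusterfs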
Testing the Setup:
Create a Test File: On one of the nodes, create a file on the mounted volume:
sudo touch /mnt/glusterfs/testfile
Verify Replication: Check that the file is present on all nodes:
ls /mnt/glusterfs/testfile
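Because gv0 is a replicated volume, the file should also appear directly inside the brick directory on every node (inspect the brick only for verification; normal reads and writes should always go through the mount point):
ls /glusterfs/brick1/gv0/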
Explanation:
GlusterFS Installation: Installs the GlusterFS server software to manage the distributed filesystem.
Trusted Pool Configuration: Sets up a trusted relationship between nodes to allow data replication.
Brick Directory Creation: Creates directories that serve as bricks (storage units) for GlusterFS.
Volume Creation and Start: Creates a replicated volume that ensures data redundancy and starts it.
Mounting the Volume: Attaches the distributed volume to a mount point, allowing file operations to be performed on it.
Testing: Verifies that data written to one node is correctly replicated across all nodes.
This setup ensures high availability and fault tolerance, making it ideal for environments requiring robust data replication and access.
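For ongoing monitoring of a replicated volume, the self-heal status is also worth checking; run this on any node as a quick optional health check:
sudo gluster volume heal gv0 info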
Adding a New Node to GlusterFS
If you want to add a new node (web-node3) to the existing GlusterFS setup, follow these steps:
1. Prepare the New Node:
Ensure the new node (web-node3) has Ubuntu 22.04 installed.
2. Add the new node’s IP and hostname to /etc/hosts on all existing nodes (web, web-node1, web-node2):
10.0.0.7 web-node3
3. Install GlusterFS on web-node3:
sudo apt-get update
sudo apt-get install glusterfs-server -y
sudo systemctl start glusterd
sudo systemctl enable glusterd
4. Add web-node3 to the Trusted Pool: On web:
sudo gluster peer probe web-node3
sudo gluster peer status
5. Create the Brick Directory on web-node3:
sudo mkdir -p /glusterfs/brick1/gv0
6. Add the New Brick to the Existing Volume: On web:
sudo gluster volume add-brick gv0 replica 4 web-node3:/glusterfs/brick1/gv0
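If the brick was added successfully, the volume information should now report four bricks; you can confirm with:
sudo gluster volume info gv0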
7. Rebalance the Volume: On web:
sudo gluster volume rebalance gv0 start
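The rebalance runs in the background; you can monitor its progress with:
sudo gluster volume rebalance gv0 status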
8. Mount the Volume on web-node3:
sudo apt-get install glusterfs-client -y
sudo mkdir -p /mnt/glusterfs
sudo mount -t glusterfs web:/gv0 /mnt/glusterfs
9. To auto-mount, add to /etc/fstab on web-node3:
web:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0
Tuning GlusterFS involves optimizing performance and ensuring stability.
Network Optimization
1. Increase TCP Buffer Size:
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'
sudo sysctl -w net.core.netdev_max_backlog=250000
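These sysctl changes are lost on reboot. To make them persistent, they can be written to a drop-in file; a minimal sketch (the file name /etc/sysctl.d/90-glusterfs.conf is an arbitrary choice):
sudo tee /etc/sysctl.d/90-glusterfs.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000
EOF
sudo sysctl --system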
Example GlusterFS Volume Options
Enable Performance Translators:
sudo gluster volume set gv0 performance.cache-size 512MB
sudo gluster volume set gv0 performance.write-behind-window-size 4MB
sudo gluster volume set gv0 performance.read-ahead on
sudo gluster volume set gv0 performance.io-cache on
sudo gluster volume set gv0 performance.quick-read on
sudo gluster volume set gv0 performance.stat-prefetch on
Tuning I/O Threads:
sudo gluster volume set gv0 performance.io-thread-count 16
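To confirm the current value of any option (or to review all options at once), you can query the volume:
sudo gluster volume get gv0 performance.io-thread-count
sudo gluster volume get gv0 all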
By following these steps, you can expand and tune your GlusterFS cluster, ensuring continued scalability and reliability.