GlusterFS
Version of 25 October 2012, 09:01, by 192.168.241.1 (page newly created)
Original link: http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-ubuntu-11.10-automatic-file-replication-across-two-storage-servers-p2
Install the GlusterFS client and mount the volume
apt-get install glusterfs-client
mkdir /mnt/glusterfs
mount -t glusterfs server1.example.com:/testvol /mnt/glusterfs
mount
- the output of mount should look like this:
root@client1:~# mount
/dev/mapper/server3-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
root@client1:~#
root@client1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/server3-root
29G 1.1G 27G 4% /
udev 238M 4.0K 238M 1% /dev
tmpfs 99M 212K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 247M 0 247M 0% /run/shm
/dev/sda1 228M 24M 193M 11% /boot
server1.example.com:/testvol
29G 1.1G 27G 4% /mnt/glusterfs
root@client1:~#
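If you script against the share, it helps to confirm the mount is actually active before writing to it. A minimal sketch, using the util-linux mountpoint tool; the check_mounted helper is ours, and "/" stands in for /mnt/glusterfs so the snippet runs anywhere:

```shell
# Sketch: verify that a path is an active mountpoint before using it.
# check_mounted is a hypothetical helper, not part of the original howto.
check_mounted() {
    if mountpoint -q "$1"; then
        echo "$1 is mounted"
    else
        echo "$1 is NOT mounted"
    fi
}

# "/" is always a mountpoint; on the client you would pass /mnt/glusterfs
check_mounted /
```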
- Instead of mounting the GlusterFS share manually on the client, you can modify /etc/fstab
- so that the share is mounted automatically when the client boots.
- Open /etc/fstab and append the following line:
vi /etc/fstab
- add this line:
server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0
- reboot the client, then verify that the share was mounted automatically:
reboot
df -h
mount
- create some test files on the GlusterFS share:
touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2
- they should now show up in the /data directory on the storage servers:
ls -l /data
server1.example.com:
- Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.
shutdown -h now
client1.example.com:
touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2
server2.example.com:
- The changes should be visible in the /data directory on server2.example.com:
ls -l /data
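To check programmatically that two replica bricks hold the same set of files, you could diff their directory listings. This is only a sketch: mock temporary directories stand in for the /data bricks on the two servers, so it runs without the cluster:

```shell
# Mock brick directories standing in for /data on server1 and server2
b1=$(mktemp -d)
b2=$(mktemp -d)
touch "$b1/test1" "$b1/test3" "$b2/test1" "$b2/test3"

# Compare sorted file listings of the two bricks
l1=$(mktemp)
l2=$(mktemp)
ls "$b1" | sort > "$l1"
ls "$b2" | sort > "$l2"
if diff "$l1" "$l2" > /dev/null; then
    status="bricks in sync"
else
    status="bricks differ"
fi
echo "$status"
```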
server1.example.com:
ls -l /data
- As you see, server1.example.com hasn't noticed the changes that happened while it was down.
- This is easy to fix: all we need to do is invoke a read command
- on the GlusterFS share on client1.example.com, e.g.:
client1.example.com:
ls -l /mnt/glusterfs/
- Now take a look at the /data directory on server1.example.com again,
- and you should see that the changes have been replicated to that node:
server1.example.com:
ls -l /data
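GlusterFS 3.2.x has no self-heal daemon, so a read only heals the files it actually touches. To re-sync everything after a node rejoins, one common approach is to walk the whole client mount and stat each file, since each stat is a read that makes the client re-check the replicas. A sketch, with a mock temporary directory standing in for /mnt/glusterfs so it runs anywhere:

```shell
# Mock mount path standing in for /mnt/glusterfs
MOUNT=$(mktemp -d)
touch "$MOUNT/test1" "$MOUNT/test3" "$MOUNT/test4"

# stat-ing each file is a read operation, triggering self-heal per file
find "$MOUNT" -type f -exec stat {} + > /dev/null && echo "walked all files"
```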