I couldn't do such a thing without 10 Gigabit Ethernet to power the systems I use with Gluster and GFS; they both run pretty smoothly in such an environment. My application is a large iSCSI domain.

For a very small network I would think it wouldn't be a problem. How small, though, you would have to try and see; Gluster, GFS and their ilk are not very forgiving on frugal systems.
On Thu, Aug 11, 2022, 3:49 AM Philip Rhoades <firstname.lastname@example.org> wrote:
For many years (after some nasty experiences) I have been in the habit
of using a Fedora Work Station (which stays on but gets rebooted fairly
frequently) and a separate Fedora server (email-MTA, some Web sites etc)
which stays on for very long periods of time and only gets rebooted
infrequently. I have other systems that get booted occasionally. My
habit has been to backup / rsync important / critical data between the
regular WS and the main server as appropriate - this has allowed me on a
number of occasions to recover happily when hardware has failed, to get
going temporarily again on one or the other machine while I sort out the
failed hardware.
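For anyone curious what that habit looks like in practice, a minimal sketch of a workstation-to-server rsync (the host names and paths below are hypothetical, not Philip's actual setup):

```shell
# Mirror critical workstation data to the server.
# "server" is a placeholder host name; adjust paths to taste.
# -a archive mode, -H preserve hard links, -A ACLs, -X extended attrs.
rsync -aHAX --delete --exclude='.cache/' \
    /home/user/important/ server:/backup/ws/important/

# And the reverse direction, for server-side data worth keeping on the WS:
rsync -aHAX --delete server:/var/mail/ /backup/server/mail/
```

The --delete flag makes the copy a true mirror, so a stray deletion propagates on the next run; dropping it trades some disk space for safety.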
More recently I have been thinking about building an ARM cluster with
Gluster that would hold all the data for both the WS and the main server
and any other WSs or servers I might need to run from time to time. I
would still make use of an off-site backup anyway but I like the idea of
just being able to add another ARM device + SATA drive to the cluster to
create more data space. I also thought I would go back to making use of
WSs that were basically just X-servers that booted from some sort of USB
stick but got the OS image from the cluster - this might be complicated
by the fact I now use Sway rather than X - but one problem at a time ...
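The "just add another ARM device + SATA drive" growth path maps onto Gluster's add-brick workflow. Roughly, assuming a distributed volume (the volume and host names here are made up for illustration):

```shell
# Probe the new RPi into the trusted storage pool, then add its brick
# to an existing volume and rebalance existing data onto it.
gluster peer probe rpi5
gluster volume add-brick gvol0 rpi5:/data/brick1
gluster volume rebalance gvol0 start
gluster volume rebalance gvol0 status
```

One caveat: on a replicated (rather than purely distributed) volume, bricks have to be added in multiples of the replica count, so "one more RPi" may really mean "two or three more" depending on how the volume was created.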
Has anyone here set up a cluster something like this? Have people any
suggestions about specific Fedora web pages / docs to look at?
I have been tracking Ceph for some time and while it looks interesting,
it seems overkill for what I am thinking of doing and more technically
difficult to support / debug etc. I am thinking of starting the
exercise with 4 RPis, each with an 8TB SATA drive.
PO Box 896
Cowra NSW 2794
arm mailing list -- email@example.com
To unsubscribe send an email to firstname.lastname@example.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://email@example.com
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue