[NCLUG] help finding documentation for unique cluster setup

Sean Reifschneider jafo at tummy.com
Tue Jul 18 15:53:18 MDT 2006


On Mon, Jul 17, 2006 at 08:09:40PM -0600, Mike Jensen wrote:
>having issues with is I want them to share disk space. Sorta like having a
>raid 0 between two machines. That way I can connect to the main node and
>have one of its mount points be a combination of a local drive, and an NFS
>drive on the other node. I assume NFS is what I will be using, if this is

You may be able to do something like this with InterMezzo or Coda or one of
the other "clustered file systems".  Expect quite a bit of pain...  I've
never had luck with these, but haven't tried in years.

You could just NFS mount the remote file-system and then use UnionFS to
make the local and remote file-systems appear to be a single file-system.
I've had problems with it panicking in the past, but it's probably worth
trying now.
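Roughly, that would look like the sketch below (host names and mount
points are made up, and the UnionFS branch-option syntax has varied
between releases, so check the version you install):

```shell
# On the main node: mount the other node's export over NFS
mount -t nfs othernode:/export/media /mnt/remote

# Overlay the local directory and the NFS mount into one view.
# With the 1.x UnionFS module the branch list looks like this;
# the first (rw) branch is where writes land.
mount -t unionfs -o dirs=/srv/media=rw:/mnt/remote=ro none /mnt/combined
```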

>communication to the other node. I have not been able to find much info
>about how to setup a cluster like this, so any documentation and help would
>be nice.

In short, depending on what you want to do, this isn't really what I'd call
a common setup.  Looking for clustered or distributed file-system stuff is
probably where you want to start.  But, as they say, the pioneers get the
arrows.

Compute clusters and high-availability clusters are rather mature, and
load-balancing certain types of things is mature, but load-balancing a
file-system is not.  Even distributing one is not.

If I understand what you're trying to do, using NFS and UnionFS is probably
the way you want to go without going insane.

>1) Learn more about cluster computing, and get some hands on experience

For that goal, nobody is really doing what you're trying to do, so I'm not
sure it would be safe to say you're getting hands-on experience.  Setting
up an HA or HPC cluster is probably something that's more applicable as
experience.

>2) Have a large amount of file storage available for media (data integrity
>is not a concern, it would not matter if the data is lost, although if
>possible it would be _nice_ to only lose some data should a node fail)

How about this for getting some hands-on experience: set up one box as an
iSCSI target, exposing its discs to the other system.  On the other
system put OpenSolaris and use its local discs, along with the remote
discs via iSCSI, to set up a RAID-Z array with ZFS.  ZFS checksums all data
written to disc, so it assures data integrity even if one of your
discs suddenly starts returning garbage without errors.  It's good stuff,
though definitely not quite mature yet.  That would be a learning
experience, though.
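On the Solaris side, the plumbing would be something like this (the
discovery address and device names are placeholders, and the exact
commands depend on the OpenSolaris build you're running):

```shell
# Point the Solaris iSCSI initiator at the Linux box exporting its discs
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable

# Build a RAID-Z pool mixing local discs and the remote iSCSI LUNs
zpool create mediapool raidz c0t0d0 c0t1d0 c2t0d0 c2t1d0
zpool status mediapool
```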

>3) although not needed, it would be nice to set this up to be load balancing
>between the two machines, just for the experience

Define "load balancing".  NFS is almost never CPU-bound, so trying to
spread requests between boxes probably isn't really going to help.  Usually
you would load-balance things that are CPU-intensive or are hit quite hard,
and usually it involves setting up a load-balancer in front of the service.
For this sort of thing you would normally need three machines to configure,
but it is a well-defined problem.  See the Red Hat Cluster documentation
(AKA "piranha"), Ultra Monkey, LVS (Linux Virtual Server) and similar
projects.
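For illustration, a bare-bones LVS setup with ipvsadm looks something like
this (the addresses are placeholders; a real deployment would add a
health-checking layer such as ldirectord or piranha on top):

```shell
# On the director: create a virtual service on the public IP,
# using round-robin scheduling
ipvsadm -A -t 192.168.1.100:80 -s rr

# Add the two real servers behind it, forwarding via NAT (masquerade)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -m
```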

Thanks,
Sean
-- 
 Sucking all the marrow out of life doesn't mean choking on the bone.
                 -- _Dead_Poet's_Society_
Sean Reifschneider, Member of Technical Staff <jafo at tummy.com>
tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability



