[NCLUG] A *nix DFS alternative?
Bob Proulx
bob at proulx.com
Thu Feb 18 16:47:59 MST 2010
grant at amadensor.com wrote:
> I have an absolutely insane idea that will take care of large file
> transfers, recovery from errors, intermittent connections, and everything,
> except updating a file (each update would be in essence a new file).
>
> This would also let you expand to an almost limitless number of locations
> with minimal pain:
It is a pretty cool idea! Let me poke at it...
> Use a version control system (SVN or CVS) for a central point of control,
> but the only files you control are torrent files. You can remove or add
> them as needed. The remote machines will then update their list of
> torrents, and begin downloading. Everyone gets every file, and fault
> tolerance is there. The bandwidth is only used until the files are
> everywhere, then it settles to nothing. All you need to work out is how
> to do access controls on the tracker so that not everyone in the world can
> grab your pictures.
So far it seems really cool.
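On the publishing side I imagine a small helper along these lines.  The
tracker URL, the /srv paths, and the choice of mktorrent are only
placeholders for whatever you actually run; this is just a sketch of
"make a .torrent, check it in, let the sites pick it up":

    import os
    import subprocess
    import sys

    WORKING_COPY = "/srv/torrents"                   # svn checkout of .torrent files
    TRACKER = "http://tracker.example.com/announce"  # your access-controlled tracker

    def publish(payload):
        """Make a .torrent for one payload file and check it in to svn."""
        name = os.path.basename(payload) + ".torrent"
        torrent = os.path.join(WORKING_COPY, name)
        subprocess.run(["mktorrent", "-a", TRACKER, "-o", torrent, payload],
                       check=True)
        subprocess.run(["svn", "add", torrent], check=True)
        subprocess.run(["svn", "commit", "-m", "add " + name, torrent],
                       check=True)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            publish(path)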
> In a cron job, stop all of the torrent jobs. Run an SVN or CVS update.
> Then spin through all of the torrents in a shell, starting a download on
> each. The main server it started on will be the seeder, and everything
> is already designed to handle the outage and recover.
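The cron job itself could be a little script along these lines.  I am
assuming transmission-remote as the client's command line tool, so the
exact commands will differ with another client:

    import glob
    import os
    import subprocess

    WORKING_COPY = "/srv/torrents"   # svn checkout of the .torrent files
    DOWNLOAD_DIR = "/srv/data"       # where the payload files live

    def run(*cmd, check=True):
        subprocess.run(cmd, check=check)

    # 1. Stop all of the torrent jobs.
    run("transmission-remote", "--torrent", "all", "--stop")

    # 2. Pick up added and removed torrents from version control.
    run("svn", "update", WORKING_COPY)

    # 3. Spin through the torrents, starting a download on each.
    #    Re-adding a torrent the client already knows about makes it
    #    complain, which is harmless here, so don't treat that as fatal.
    for torrent in sorted(glob.glob(os.path.join(WORKING_COPY, "*.torrent"))):
        run("transmission-remote", "--add", torrent,
            "--download-dir", DOWNLOAD_DIR, check=False)

    # 4. Start everything again.
    run("transmission-remote", "--torrent", "all", "--start")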
Doesn't the bittorrent seeder need to read the files at start time and
verify all of the piece hashes? If you had a large collection of large
files that would be a problem. At (re)start time things would melt down.
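Just to show where the time goes, a full re-check boils down to a loop
like this one (piece size and file names invented for the example):
every byte of every payload file gets read and hashed before the client
will seed anything.

    import hashlib

    PIECE_LENGTH = 256 * 1024        # a typical .torrent piece size

    def check_pieces(path):
        """Read a payload file and hash it piece by piece, as a re-check does.
        (A real multi-file torrent lets pieces span file boundaries; this is
        only meant to show the amount of reading and hashing involved.)"""
        pieces = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(PIECE_LENGTH)
                if not chunk:
                    break
                hashlib.sha1(chunk).hexdigest()  # compared against the piece table
                pieces += 1
        return pieces

    # With hundreds of GB of pictures this loop is the whole restart problem.
    for payload in ["/srv/data/pictures-2009.tar", "/srv/data/pictures-2010.tar"]:
        print(payload, check_pieces(payload), "pieces checked")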
But there are many bittorrent clients, and the behavior of the ones I
am using may be unrepresentative and simply poor. A good bittorrent
client would solve this problem. Or perhaps you can tell me that my
observations are completely incorrect, which in this case would be
welcome news.
Bob