<div dir="auto">Thanks Aaron for a great presentation on Ceph.<div dir="auto"><br></div><div dir="auto">So many good lessons to learn.</div><div dir="auto"><br></div><div dir="auto">Evelyn </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Apr 9, 2024, 8:10 PM Bob Proulx <<a href="mailto:bob@proulx.com">bob@proulx.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">j dewitt wrote:<br>
> What: Tuesday April 9th, 2024 NCLUG Meeting

All Your Bits Are Belong To Ceph

Tonight we actually had a scheduled topic! Aaron works with Ceph
professionally as part of his day job. He had been chatting with people
and there was interest in hearing about Ceph. Tonight was the night!
Of course a couple of the folks who had expressed interest were not
here tonight. Oh well. They will have to read through the slides later.

Ceph is used by very large corporations. It is known to host
extremely large amounts of data. It scales from small to extremely
large. It provides redundancy for reliability. It can recover from
data corruption or data loss due to datacenter-level failures. Ceph
can also provide great performance; deployments with bandwidth of
1+ TiB/s (tebibytes per second) have been documented.

Aaron started the presentation with a general overview of Ceph. Then
he moved into some great details and examples. It was a compressed
introductory course to Ceph. I am motivated to set up a Ceph cluster
in my basement, er, underground facility now.

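For anyone who wants to poke at a cluster once it is up, here is a
minimal sketch using the librados Python bindings (the python3-rados
package). The conffile path and the pool name "testpool" below are my
own placeholder assumptions; substitute whatever your cluster actually
uses.

    # Minimal librados sketch: connect, store one object, read it back.
    # Assumes a running cluster, a readable ceph.conf plus keyring, and
    # an existing pool named "testpool".
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    print("connected to cluster", cluster.get_fsid())

    ioctx = cluster.open_ioctx('testpool')        # pool must already exist
    ioctx.write_full('hello', b'all your bits')   # store an object
    print(ioctx.read('hello'))                    # read it back

    ioctx.close()
    cluster.shutdown()

Nothing fancy, but it shows how thin the raw object interface is
underneath the block (RBD) and file (CephFS) layers built on top of it.
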
Along the way Aaron said that the Paxos "Part-Time Parliament" paper by
Leslie Lamport (the famous computer scientist and mathematician), which
is the basis for how the "monitor" daemons form a quorum, was a fun read.

<a href="https://en.wikipedia.org/wiki/Paxos_(computer_science)" rel="noreferrer noreferrer" target="_blank">https://en.wikipedia.org/wiki/Paxos_(computer_science)</a><br>
<a href="https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf" rel="noreferrer noreferrer" target="_blank">https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf</a><br>
<br>
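For the curious, here is a toy in-memory sketch of the single-decree
Paxos idea from that paper. It is emphatically not the monitors' actual
implementation, just the prepare/promise and accept/accepted phases
boiled down to a few lines of Python:

    # Toy single-decree Paxos: three acceptors standing in for monitors.
    class Acceptor:
        def __init__(self):
            self.promised_n = 0          # highest proposal number promised
            self.accepted_n = 0          # number of the proposal accepted so far
            self.accepted_value = None   # value accepted so far, if any

        def prepare(self, n):
            # Phase 1b: promise to ignore proposals numbered below n and
            # report any value already accepted.
            if n > self.promised_n:
                self.promised_n = n
                return True, self.accepted_n, self.accepted_value
            return False, None, None

        def accept(self, n, value):
            # Phase 2b: accept unless a higher-numbered promise was made.
            if n >= self.promised_n:
                self.promised_n = self.accepted_n = n
                self.accepted_value = value
                return True
            return False

    def propose(acceptors, n, value):
        # Phase 1a: collect promises from a majority.
        promises = [a.prepare(n) for a in acceptors]
        granted = [(an, av) for ok, an, av in promises if ok]
        if len(granted) <= len(acceptors) // 2:
            return None                  # no majority, give up
        # If any acceptor already accepted a value, we must keep it.
        prior_n, prior_value = max(granted, key=lambda p: p[0])
        chosen = prior_value if prior_value is not None else value
        # Phase 2a: ask the acceptors to accept the chosen value.
        accepted = sum(a.accept(n, chosen) for a in acceptors)
        return chosen if accepted > len(acceptors) // 2 else None

    monitors = [Acceptor() for _ in range(3)]
    print(propose(monitors, 1, "map-epoch-42"))  # -> map-epoch-42
    print(propose(monitors, 2, "map-epoch-43"))  # -> map-epoch-42, already chosen

The real monitors use this kind of agreement to keep the cluster maps
consistent even while some of the monitors are down.
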
Some alternatives to Ceph can be assembled modularly from individual
parts: the ZFS file system, the AGPL-licensed MinIO S3-compatible
object store, and LINSTOR from LINBIT, the people behind the DRBD
clustered block storage.

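To illustrate the "S3 compatible" part, a MinIO server speaks the same
API that clients such as boto3 expect, so a rough sketch like this talks
to a local MinIO the same way it would talk to Amazon S3 (the endpoint
and the stock minioadmin test credentials below are assumptions about a
default local install):

    # Rough sketch: use boto3 against a local MinIO server's S3 API.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://localhost:9000',
        aws_access_key_id='minioadmin',
        aws_secret_access_key='minioadmin',
    )

    s3.create_bucket(Bucket='demo')
    s3.put_object(Bucket='demo', Key='hello.txt', Body=b'all your bits')
    print(s3.get_object(Bucket='demo', Key='hello.txt')['Body'].read())
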
Aaron's slides from the presentation!

<a href="https://home.fnord.greeley.co.us/~adj/nclug/2024-04/" rel="noreferrer noreferrer" target="_blank">https://home.fnord.greeley.co.us/~adj/nclug/2024-04/</a><br>
</blockquote></div>