
And so it looms...

Again, the aesthetic leaves a lot to be desired, but I'm feeling pretty good about the early results from both the Ceph and Gluster clusters.

All I was able to complete this weekend was the cabling and getting them powered up for testing. Benching and racking will come later this week, then expansions are to be purchased to finish the conversion :)

This little guy showed up yesterday. It's a $60 USD Chunzehui fused power block I'll be using along with the bench PSU. I wanted a means to easily replace a bad cable and modularize the setup, plus keep all open power connectors secure in the event of little fingers or kitty curiosity.

I'll let you know how this pans out, but the thought is the bench PSU as DC in, then 8 of the ODROID HC2s wired in with 5 mm barrel jacks.

I'm pretty happy overall with how the Ceph cluster is turning out on these ODROID HC2s. Luminous will EOL soon, so it's not a viable long-term option, but as some lab storage it's been a treat so far.

I hope Hardkernel updates the HC flavor of boards with some arm64 variants.
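
For anyone poking at something similar, the stock Ceph commands are plenty to sanity-check a small cluster like this (nothing HC2-specific about them):

$ ceph -s          # overall health, mon quorum, OSD counts, PG states
$ ceph osd tree    # which OSD lives on which host, up/down status and weights
$ ceph df          # raw capacity and per-pool usage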

8m 34s to go from cold-booting 2 NanoPi M4s and 6 Pi 3Bs to a functional k8s cluster. :flan_hacker:
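
The post-boot sanity check is nothing fancy, just standard kubectl poking from wherever the kubeconfig lands:

$ kubectl get nodes -o wide            # all 8 boards should report Ready
$ kubectl get pods -n kube-system      # core bits (dns, proxy, etc.) running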

Bingo, finally got the CVE services enabled, scanning, and generating reports.

Here you have it, folks. This replaces Threat Stack, Nessus, and a few other things once I figure out how to configure all the bells and whistles. There's some added benefit if you're monitoring for PCI DSS or GDPR, or if you just want some general tips on how to harden your system per the guidelines.

Pretty happy I looked at this and took a deep dive.

Capturing events on the host now too, which is pretty nice. Did you want to know when a shell was popped on a host? You've got a paper trail now :D
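
For the curious: that kind of trail can come from plain auditd watch rules on the shell binaries, which Wazuh can then pick up from the audit log. A rough sketch of the idea, not the exact ruleset:

$ auditctl -w /bin/bash -p x -k shell_exec    # log every execution of bash (add other shells the same way)
$ ausearch -k shell_exec -i                   # review the trail, with uids/syscalls interpreted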

Yo peeps, any opinions on OSSEC and its fork Wazuh?

I just started looking into Wazuh, and its Kibana integration is pretty nice from a management UI perspective. You also get the benefit of Kibana :P

Here's a little screengrab of some cluster log output as the nodes normalize. I have no agents deployed yet; this is all self-sanity output :D

Perhaps some folks would be interested in vulnerability observability?

I wonder how well this is going to go... Decided to kick over an RKE deployment instead of k3s, just to see if I could do it before I start recompiling k3s w/ some plugins that were dropped from the official distro.

Over/under on it blowing up?
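
For anyone following along, RKE drives the whole thing from a cluster.yml plus a single rke up; a minimal sketch, with placeholder addresses and users:

$ cat <<'EOF' > cluster.yml
nodes:
  - address: 192.168.1.50              # placeholder IP for a controlplane/etcd board
    user: ubuntu
    role: [controlplane, etcd]
  - address: 192.168.1.51              # placeholder IP for a worker board
    user: ubuntu
    role: [worker]
EOF
$ rke up --config cluster.yml          # provisions kubernetes over ssh onto the listed nodes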

It lives! again...

This particular deployment probably won't make it past tomorrow evening, but at least it's been proven that it can happen on armhf, and hoo boy, it's kinda stale looking.

Luminous is an older (but supported) release; I'm going to have to keep an eye out for arm64 boards to replace these HC2s later in the year / early next. I don't want to replace them with NanoPis unless I *have* to. 32-bit is fine and all, but I've gotta have some of those feature sets in Nautilus.

They finally showed up, after a call to Amazon and a pain-free replacement (which never happens in my prior experience, so kudos).

I couldn't sleep dedicating those thicc spinning rusts to lab storage. So hopefully these are zippy enough until I make the leap to M.2 NVMe disks for the lab.

And those thicc rusty disks will be relegated to cold storage.

If you're doing any "fun" weekend projects for personal ops, and would like to share with the masses, is a great hashtag to follow and toot under.

Be as descriptive or vague as you like; who knows, maybe you'll inspire someone else to explore the wild and woolly world of libreops - operations in the open.

Ideas:

- Building an SBC-powered home appliance
- Drudgery work like syncing backups
- Patching systems (you are patching them, aren't you?)
- Poking a remote unit that's gone AWOL

It cleaned up so nicely

One thing I will say is that the reduction in micro-USB cable length was a mistake. Pulling these out for replacement and maintenance may prove tricky. Time will tell.

The rack is here and assembled. Time to start unplugging and rearranging the rackmount setup

Showing off the Ceph dashboard that ships with Rook. This is the "hot storage" leg of my setup. It provides floating storage as either:

- Raw block devices (RBD)
- Shared Multi-Tenant Filesystems (similar to NFS, via CephFS)

This grants me the flexibility of giving workloads their own disk, or sharing disk paths between several pods across several deployments that need to share volumes of data as normal operating procedure.
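
Concretely, a workload just files a PVC against the Rook storage class. A rough sketch, assuming the stock rook-ceph-block example class and a made-up claim name:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data                     # hypothetical claim name
spec:
  storageClassName: rook-ceph-block    # assumes the example block storage class from rook
  accessModes: [ReadWriteOnce]         # RBD: one pod owns the disk; CephFS would use ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF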

Restic backups drop snapshots into cold storage for restore / disaster recovery.
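
The restic side is pleasantly boring; roughly (repo path and source dir are placeholders):

$ restic -r /mnt/coldstore/restic-repo init              # one-time repository setup
$ restic -r /mnt/coldstore/restic-repo backup /srv/lab   # push a snapshot of the lab data
$ restic -r /mnt/coldstore/restic-repo snapshots         # list what's restorable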

There we go, now it's getting interesting.

Mixed architecture in the readout, and some good labeling on the nodes' metadata for nodeSelectors.

Now the stateful stuff can start to live.
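
The nodeSelector bit is just a label match. A sketch of the idea, with a made-up workload name and a stand-in image:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm64-only-example             # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels: {app: arm64-only-example}
  template:
    metadata:
      labels: {app: arm64-only-example}
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64      # may be beta.kubernetes.io/arch on older clusters
      containers:
        - name: app
          image: busybox               # stand-in image
          command: ["sleep", "3600"]
EOF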

Even running gitea/memcached/postgres on the same host, at idle it's less than half consumed - plenty of capacity to grow and burst through the workloads' needs.
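
(If metrics-server or heapster is running in the cluster, kubectl top gives the same at-a-glance picture:)

$ kubectl top nodes                    # per-node cpu/memory usage vs allocatable
$ kubectl top pods --all-namespaces    # per-pod usage across every namespace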


Functional workloads:

- An ingress controller
- Reverse proxying to an nginx static site container, courtesy of the hypriot project *hattip*
- All incoming requests are load balanced across 3 backing replica pods (rough sketch below)
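
Roughly the shape of it, assuming hypriot's rpi-busybox-httpd image and an nginx ingress controller already in place; the hostname and names are placeholders:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-site
spec:
  replicas: 3                          # the 3 backing pods the ingress balances across
  selector:
    matchLabels: {app: static-site}
  template:
    metadata:
      labels: {app: static-site}
    spec:
      containers:
        - name: httpd
          image: hypriot/rpi-busybox-httpd   # assumed arm-friendly static site image
          ports: [{containerPort: 80}]
---
apiVersion: v1
kind: Service
metadata:
  name: static-site
spec:
  selector: {app: static-site}
  ports: [{port: 80, targetPort: 80}]
---
apiVersion: extensions/v1beta1         # ingress API group for clusters of this vintage
kind: Ingress
metadata:
  name: static-site
spec:
  rules:
    - host: lab.example.com            # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: static-site
              servicePort: 80
EOF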

This is coming together really quickly. The Rancher folks did a good job here.

Bootstrapping helm took some doing though - here's a multiarch tiller image:

$ helm init --tiller-image=jessestuart/tiller:v2.9.1 --service-account tiller
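
(Before that, the tiller service account needs to exist with RBAC bound to it, otherwise tiller never comes up; roughly:)

$ kubectl create serviceaccount tiller -n kube-system
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller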

And that's a happy little cluster. All 6 nodes enlisted, ready for action.

OS: Ubuntu 18.04
Running k3s - a Rancher offering of a stripped-down, bare-bones, ARM-targeted Kubernetes.

Disk: 32 GB SanDisk Cruzer nubbin
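
For reference, getting k3s onto a board is about as minimal as it sounds (the server IP below is a placeholder):

$ curl -sfL https://get.k3s.io | sh -    # server node: installs and starts k3s
$ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.50:6443 K3S_TOKEN=<node-token> sh -    # agents join the server; the token lives at /var/lib/rancher/k3s/server/node-token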
