And so it looms...
Again, the aesthetic leaves a lot to be desired, but I'm feeling pretty good about the early results from both the Ceph and Gluster clusters.
All I was able to complete this weekend was the cabling and getting them powered up for testing. Benching and racking will come later this week, then expansions will be purchased to finish the conversion :)
This little guy showed up yesterday. It's a $60 USD Chunzehui fused power block I'll be using along with the bench PSU. I wanted a means to easily replace a bad cable and modularize the setup, plus keep all open power connectors secure in the event of little fingers or kitty curiosity.
I'll let you know how this pans out, but the thought is: bench PSU as DC in, then 8 of the ODROID HC2s wired in with 5 mm barrel jacks.
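Back-of-napkin power budget, assuming the HC2's rated 12 V / 2 A input (worth double-checking against the spec sheet, that figure is from memory):

8 nodes x 2 A = 16 A peak at 12 V, so roughly 192 W. Both the bench PSU and each fused leg on the block want comfortable headroom over that, especially for the HDD spin-up surge.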
Bingo, finally got the CVE services enabled, scanning, and generating reports.
Here you have it, folks. This replaces Threat Stack, Nessus, and a few other things once I figure out how to configure all the bells and whistles. There's some added benefit if you're monitoring for PCI DSS or GDPR, or if you just want some general tips on how to harden your system per guidelines.
Pretty happy I looked at this and took a deep dive.
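For anyone wanting to replicate this: the piece that turns on CVE scanning lives in /var/ossec/etc/ossec.conf on the manager. On the 3.x series I'm running it's a wodle shaped roughly like this; treat it as a sketch, since tag names and supported OS versions have shifted between releases:

<wodle name="vulnerability-detector">
  <disabled>no</disabled>
  <interval>1d</interval>
  <run_on_start>yes</run_on_start>
  <update_ubuntu_oval interval="60m" version="16,18">yes</update_ubuntu_oval>
</wodle>

Restart the manager afterwards ($ sudo systemctl restart wazuh-manager) and the findings show up as alerts you can slice in Kibana.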
Yo #infosec peeps, any opinions on OSSEC and its fork Wazuh?
I just started looking into Wazuh, and its Kibana integration is pretty nice from a management UI perspective. You also get the benefit of Kibana :P
Here's a little screengrab of some cluster log output as the nodes normalize. I have no agents deployed yet; this is all self sanity-check output :D
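If you're poking at the same thing, the manager ships a helper for eyeballing cluster membership (path assumes a default install, and that your version carries it):

$ /var/ossec/bin/cluster_control -l

It lists the nodes the master currently knows about, which pairs nicely with tailing the logs.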
Perhaps some #libreops folks would be interested in vulnerability observability?
I wonder how well this is going to go... Decided to kick over an RKE deployment instead of k3s, just to see if I could do it before I start recompiling k3s with some plugins that were dropped from the official distro.
Over/under on it blowing up?
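For context, RKE drives the whole deployment from a cluster.yml. A minimal sketch, with made-up hostnames and SSH user:

nodes:
  - address: node1.local   # hypothetical hostname
    user: ubuntu           # SSH user RKE connects as
    role: [controlplane, etcd, worker]
  - address: node2.local
    user: ubuntu
    role: [worker]

$ rke up --config cluster.yml

RKE SSHes into each node and stands everything up as containers from there.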
It lives! Again...
This particular deployment probably won't make it past tomorrow evening, but at least it's been proven that it can happen on armhf, and hoo boy, it's kinda stale looking.
Luminous is an older (but supported) release, so I'm going to have to keep an eye out for arm64 boards to replace these HC2s later in the year / early next. I don't want to replace them with NanoPis unless I *have* to. 32-bit is fine and all, but I gotta have some of those feature sets in Nautilus.
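A quick way to confirm which release each daemon is actually on (the ceph versions subcommand conveniently landed in Luminous):

$ ceph versions

It breaks mons, OSDs, and the rest down by running version, which is handy mid-upgrade.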
They finally showed up, after a call to Amazon and a pain-free replacement (which never happens in my prior experience, so kudos).
I couldn't sleep after dedicating those thicc spinning-rust drives to lab storage. So hopefully these are zippy enough until I make the leap to M.2 NVMe disks for the lab.
And those thicc rusty disks will be relegated to cold storage.
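If you want a lazy read-only gut check that the new disks are actually zippy (device name is just an example):

$ sudo hdparm -t /dev/sda

fio is the better tool for anything resembling a real workload, but hdparm answers "is this thing fast?" in about ten seconds.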
If you're doing any "fun" weekend projects for personal ops and would like to share with the masses, #libreops is a great hashtag to follow and toot under.
Be as descriptive or vague as you like; who knows, maybe you'll inspire someone else to explore the wild and woolly world of libreops: operations in the open.
Ideas:
- Building an SBC-powered home appliance
- Drudgery work like syncing backups
- Patching systems (you are patching them, aren't you? sketch below)
- Poking a remote unit that's gone AWOL
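For the patching item above, a minimal sketch on Debian/Ubuntu boxes:

$ sudo apt update && sudo apt upgrade -y

Or let unattended-upgrades take the drudgery off your plate entirely:

$ sudo apt install unattended-upgrades
$ sudo dpkg-reconfigure --priority=low unattended-upgrades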
It cleaned up so nicely #libreops
One thing I will say is that the reduction in micro-USB cable length was a mistake. Pulling these out for replacement and maintenance may prove tricky. Time will tell.
Showing off the Ceph dashboard that ships with Rook. This is the "hot storage" leg of my setup. It provides floating storage as either:
- Raw block devices (RBD)
- Shared multi-tenant filesystems (similar to NFS, via CephFS)
This grants me the flexibility of giving workloads their own disk, or sharing disk paths between several pods across several deployments that need to share volumes of data as normal operating procedure.
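In PVC terms that choice boils down to the access mode you request. A sketch, with a made-up claim name and a storage class name that depends on how your Rook install is configured:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data              # hypothetical name
spec:
  accessModes:
    - ReadWriteMany              # shared CephFS-style mount; ReadWriteOnce for a private RBD disk
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-cephfs  # whatever your Rook storage class is actually called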
Restic backups drop snapshots into cold storage for restore / disaster recovery.
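The restic side is pleasantly boring. Assuming a repo already initialized at a made-up cold storage path:

$ restic -r /mnt/cold/restic backup /srv/data
$ restic -r /mnt/cold/restic snapshots

The first drops a new snapshot, the second lists what's available to restore from.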
There we go, now it's getting interesting.
Mixed architectures in the readout, and some good labeling in the node metadata for nodeSelectors.
Now the stateful stuff can start to live.
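Pinning a workload to an architecture is just a nodeSelector against those labels. On clusters of this vintage the key is beta.kubernetes.io/arch (newer ones use kubernetes.io/arch):

spec:
  nodeSelector:
    beta.kubernetes.io/arch: arm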
Functional workloads:
- An ingress controller
- Reverse proxying to an nginx static site container, courtesy of the Hypriot project *hat tip*
- All incoming requests are load-balanced against 3 backing replica pods (rough shape below)
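The Ingress behind that is roughly this shape on the extensions/v1beta1 API of this era; the host and service name are made up for illustration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-site
spec:
  rules:
    - host: site.example.com
      http:
        paths:
          - backend:
              serviceName: static-site   # Service selecting the 3 nginx replicas
              servicePort: 80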
This is coming together really quickly. The Rancher folks did a good job here.
Bootstrapping Helm took some doing, though. Here's a multiarch Tiller image:
$ helm init --tiller-image=jessestuart/tiller:v2.9.1 --service-account tiller
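If the tiller service account doesn't exist yet, that init will stall out. The standard two-step setup first:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Then the helm init above goes through cleanly.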
And that's a happy little cluster. All 6 nodes enlisted, ready for action.
OS: Ubuntu 18.04
Running k3s, a Rancher offering of a stripped-down, bare-bones, ARM-targeted Kubernetes (node join one-liner below).
Disk: 32 GB SanDisk Cruzer nubbin
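For reference, enlisting a node into a k3s cluster is a one-liner (server URL and token are placeholders):

$ curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<node-token> sh -

The token lives at /var/lib/rancher/k3s/server/node-token on the server.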
Admin of linuxlab.sh - Infosec-curious, automation enthusiast. Former Ubuntu and Kubernetes contributor. #nobot
Random follow requests with no context (e.g. never interacted before) will be rejected.