Advice on creating a Home Lab with MicroCloud

I have a Home Lab consisting of four servers. I used to run Windows/Hyper-V on them, arranged as one Domain Controller, two hypervisors, and a storage server.

Only the storage server really has any disk space; the others have some local disk, but mostly just to hold the OS. I exposed the storage server over iSCSI to the two hypervisors.

I’ve tried installing OpenStack on them and couldn’t get it working, so now I’m thinking of MicroCloud as a simpler option, but I’m not sure how to arrange the servers. The old Domain Controller was intended as a management server, so it is probably a little underpowered for a hypervisor and may not have quite enough NICs.

Similarly, I’m not sure about the storage server. It seems like Ceph would be the way to go: I could combine all the disks with ZFS and then make it a Ceph server, but the MicroCloud docs suggest that MicroCeph wants three servers, each with its own disks, which I don’t have. Maybe I could serve disks over iSCSI from the storage server to the other machines and then run MicroCeph on top of those?

So I’m looking for some advice about how best to slice and dice the 4 servers.

As a secondary question, I’d like to use the lab to get my head around Terraform as well. There is no provider for MicroCloud, but it runs on LXD, so could I use the LXD provider?

Hey, let’s start from the bottom.

but it runs on LXD, so could I use the LXD provider?

This is exactly right. MicroCloud bootstraps LXD, so you can then use LXD’s Terraform provider to deploy workloads.
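
For a concrete starting point, here is a minimal sketch of deploying a container onto the MicroCloud-managed LXD cluster with the community terraform-lxd/lxd provider. The instance name, image alias, and exact resource attributes are illustrative assumptions; check the provider docs for the schema of your provider version.

```
# Minimal sketch, assuming the terraform-lxd/lxd provider and a
# locally reachable LXD; names and attributes are illustrative.
cat > main.tf <<'EOF'
terraform {
  required_providers {
    lxd = {
      source = "terraform-lxd/lxd"
    }
  }
}

# Defaults to the local LXD socket/remote.
provider "lxd" {}

resource "lxd_instance" "demo" {
  name  = "demo"
  image = "ubuntu:24.04"
  type  = "container"
}
EOF

terraform init
terraform apply
```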

So I’m looking for some advice about how best to slice and dice the 4 servers.

I would not serve a disk to each of the machines from the single storage server, as the storage would still come from a single machine: if that one (your storage server) fails, it will bring down the entire MicroCeph cluster.

As you mentioned, a local disk on each of the servers (at least three of them) is required for a redundant MicroCeph setup.
You now have multiple options (rough setup sketches follow the list):

  1. Run MicroCeph across all of the servers, with a single OSD (disk), or several, on your storage server. Whilst this gives you a redundant MicroCeph control plane, the storage itself isn’t redundant: all the OSDs are on one server, so a total failure occurs if the storage server goes down. However, with multiple OSDs on that single server you might be able to recover from individual disk failures, as the data can be spread between the OSDs on that server (see the MicroCeph sketch below).
  2. Run without MicroCeph. This requires a disk on each of the servers so that MicroCloud can configure local storage. In the end you will have a cluster-wide local storage pool which uses an individual disk per server (no failover).
  3. Run only with MicroCeph. This requires a disk on each of the servers so that MicroCloud can configure an OSD per server. In the end you will have a cluster-wide remote storage pool which uses an individual disk per server (with failover).
  4. Combine 2) and 3), which requires two disks per server.
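
Whichever option you pick, MicroCloud’s interactive setup is where the disks get assigned. A rough sketch of that flow and how to verify the result; the pool names are MicroCloud’s defaults, so adjust to whatever init actually creates:

```
# On one of the servers, start the interactive cluster setup.
# MicroCloud discovers the other machines and asks which disks to
# use for local (ZFS) and remote (Ceph) storage.
sudo microcloud init

# Afterwards, verify which storage pools were created. With option 2
# you should see a "local" (ZFS) pool, with option 3 a "remote"
# (Ceph) pool, and with option 4 both.
lxc storage list
lxc storage info remote
```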

Depending on whether or not you can add additional disks, you might only be able to go with option 1.
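
If option 1 is the route, a rough sketch of adding the storage server’s spare disks as OSDs and letting Ceph replicate across OSDs on that one host instead of across hosts. The device paths, rule name, and pool name are placeholders, and the CRUSH change deliberately trades host-level redundancy for disk-level redundancy:

```
# Add each spare disk on the storage server as its own OSD
# (device paths are placeholders; --wipe destroys existing data).
sudo microceph disk add /dev/sdb --wipe
sudo microceph disk add /dev/sdc --wipe
sudo microceph disk add /dev/sdd --wipe

# By default Ceph places replicas on different hosts, which a
# single-server OSD layout cannot satisfy. A replicated rule with an
# "osd" failure domain lets replicas land on different disks of the
# same host instead (rule and pool names are illustrative).
sudo microceph.ceph osd crush rule create-replicated single-host default osd
sudo microceph.ceph osd pool set <pool-name> crush_rule single-host
```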

Thanks for your reply, Julian.

I’ll have a look at the disks I’ve got in the machines. I think I do have an additional disk in each machine, although it is maybe only 1-2 TB in size whereas the storage server has significantly more; I’ll have a look at Ceph and see whether it can cope with that.

Your point about resiliency is valid. I think I’m OK with a lack of resilience since this is for home; if I were building a dev or production environment at work, I would definitely want to make everything fully resilient.

I’ll mark this as solved for now, but I’ll try to remember to come back and update this with what I went with and how it worked out.
