Thanks for sharing your brilliant ideas with us. As you can see from the numbers, you are a shining star among homelab content creators. Keep up the good work.
Really looking forward to the config video!
This is becoming a paramount config to follow :)
Thanks for the demo and info, have a great day
For OPNsense on two or more MS-01s, you might consider running VRRP with keepalived, presenting a single virtual IP that is active on only one of the OPNsense instances at a time. I currently use that in my lab for my home DNS failover, and also for my default gateway in my route-failover playground.
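If anyone wants to try that, a minimal keepalived sketch looks roughly like the below; the interface name, router ID, password and VIP are placeholders for illustration, not the setup from the video.

  # /etc/keepalived/keepalived.conf on the primary box (placeholder values)
  vrrp_instance HOME_GW {
      state MASTER              # set BACKUP on the second box
      interface vmbr0           # hypothetical LAN-facing interface
      virtual_router_id 51
      priority 150              # use a lower priority (e.g. 100) on the backup
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass changeme
      }
      virtual_ipaddress {
          192.168.1.1/24        # the shared gateway/DNS VIP clients point at
      }
  }

Clients only ever point at the VIP, so whichever box currently holds MASTER answers.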
Glad you're still doing these videos
Hi Jimm, me again lol. After a massive learning curve, I have now ordered the USW-Agg you have. I had a layer 2 MikroTik switch, but obviously I can't make the most of the 10Gb as it doesn't have aggregation. I ordered the 3x MS-01 before seeing your videos; now I've seen them, I've set everything up easily and I'm rocking some serious homelab. I'm holding off on the 48-port Pro as I have a 48-port HP switch that's running great.
I always explain link aggregation with a highway-lane analogy: one NIC with a certain bandwidth is one highway lane with a certain speed limit, and all cars on that lane (packets, if you will) drive close to that speed limit. A second, structurally identical NIC added as a LAGG is an additional lane with exactly the same speed limit, and the cars on it drive at exactly the same speed as on the first lane. So the speed is the same on both lanes, yet more data (cars) can pass through at once, effectively roughly doubling the bandwidth. Two NICs on their own = a separate one-lane highway for each NIC; a LAGG = one highway with two lanes shared by both NICs, so to speak.
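To stretch the analogy onto Linux: a quick LACP bond with iproute2 might look like the sketch below (the NIC names are placeholders). Worth noting that a single TCP flow gets hashed onto one lane, so an individual transfer still tops out at the single-link speed; it is the aggregate across many flows that roughly doubles.

  # hypothetical NIC names; creates an 802.3ad (LACP) bond, i.e. the second "lane"
  ip link add bond0 type bond mode 802.3ad miimon 100 xmit_hash_policy layer3+4
  ip link set enp2s0f0 down; ip link set enp2s0f0 master bond0
  ip link set enp2s0f1 down; ip link set enp2s0f1 master bond0
  ip link set bond0 up     # the switch side needs a matching LACP/LAG group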
Thank you for the update. I was thinking the MS-01 had more than enough networking, then I watched your video :) Just goes to show not every home lab is the same. I have the 9-port Sodola version of that switch running without issues for the past 6 months, but I have just purchased a new switch to simplify my home lab (a managed 24-port 2.5GbE with 4 SFP+ ports).
Put 56G ConnectX cards in all nodes; with dual-port cards no switch is needed. Separate out OPNsense and use discrete boxes with HA for redundancy. You can run 40GbE with Cat8 on the 56G cards, and you will like 4x the bandwidth on the cluster nodes: better speed, lower latency. The dual-port cards on eBay are like 50 bucks. With this approach you can use 10G for the management network. Use dual NAS boxes and add a faster NIC plus SSD cache. All these upgrades are affordable and performant. Overall this upgrade is better than the original, but you could still do better for not much money and unleash the true power of the cluster by eliminating bottlenecks. Good content. Explore other network filesystems like OCFS2/NFS/SSHFS/Gluster and do some benchmarks comparing them to Ceph.
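If you do benchmark Ceph against NFS/Gluster/etc., running the same fio job against each mount point gives a rough apples-to-apples number; the paths and sizes below are placeholders, not anything from the video.

  # repeat with --directory pointed at each storage (e.g. /mnt/pve/cephfs, an NFS mount, ...)
  fio --name=seqwrite --directory=/mnt/pve/cephfs --rw=write --bs=1M --size=4G \
      --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1 --group_reporting
  fio --name=randread --directory=/mnt/pve/cephfs --rw=randread --bs=4k --size=4G \
      --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1 --group_reporting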
Another great video. I think it's worth testing a standard bridge. My Palo is virtual on Proxmox and is used for all layer 3 (and layer 2 filtering). I can push multiple Gbps through the Palo no problem; Proxmox is not the limit there, and that's on 10th-gen Intel with a 2x10Gbps LAG to the Proxmox bridge. Then you can do a powered-on migration between hosts if you want. Your virtual firewall will talk layer 2 to the MAC of the SFP ONT via the dedicated switch. There is no such thing as a network loop in Proxmox, so no worries there.
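For reference, that kind of LAG-into-a-bridge setup in Proxmox's /etc/network/interfaces looks roughly like the sketch below; the NIC names and addresses are placeholders, not the poster's actual config.

  # /etc/network/interfaces on a Proxmox host (placeholder names/addresses)
  auto bond0
  iface bond0 inet manual
      bond-slaves enp1s0f0 enp1s0f1
      bond-mode 802.3ad                 # LACP towards the switch
      bond-miimon 100
      bond-xmit-hash-policy layer3+4

  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.10/24
      gateway 192.168.1.1
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094
  # the firewall VM's vNIC attaches to vmbr0 and talks layer 2 out through the LAG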
I know others have said it, but you can pipe your Internet into the switch on a VLAN. This then allows you to put that ISP connection anywhere in your network. Then you can set up your router VM in HA and it just spins up on another node. This is how I do it in my own home lab. We also do this at work with hundreds of ISP connections.
Why not have the SFP of the ISP in the aggregation switch? Tag its VLAN to all the nodes, have 2 OPNsense VMs on any of the 3 nodes and give each a tagged interface in that VLAN. That way both VMs can see the SFP and you are HA, with the option of any node failing and the VM that failed migrating over while the other one keeps running. The extra switch is not needed and the extra cabling is also obsolete; this slims down everything massively. It also simplifies setup as you can push the same config/same setup to all nodes. Edit: also no extra NIC needed, as you can use the existing 2 ports in LACP to the Agg switch. I would also suggest getting maybe 2 switches from the likes of Mikrotik to allow for MLAG, which allows LACP across switches for a higher redundancy state and more ports.
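As a concrete illustration of the WAN-on-a-VLAN idea suggested here and in the other comments: assuming a VLAN-aware vmbr0 on each node and VLAN 100 as a made-up WAN VLAN, the Proxmox side is just a tagged vNIC, e.g.

  # give the OPNsense VM (VM ID 101 here, purely an example) a WAN vNIC tagged into VLAN 100
  qm set 101 --net1 virtio,bridge=vmbr0,tag=100
  # switch side, conceptually: ISP SFP port untagged/PVID 100, node uplinks carry VLAN 100 tagged

OPNsense then uses that vNIC as its WAN interface, and because every node's uplink carries the VLAN, the VM can run or migrate on any node.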
Looks cool. Would be pretty handy for running a Peertube instance. You can have runners (ffmpeg transcoding farm agents) using all that GPU grunt.
Looks good. However the SODOLA's reviews mentioned that they tend to crap out after a few weeks or months. Hopefully you won the lottery and don't have that happen. If you do, I suggest a quad-port x710 card. Also like others have mentioned, check out VRRP and HA (CARP) Mode. I'd love to see you tackle that project.
I accomplished this by passing my ISP into a separate VLAN that I define on the vNIC, with a second one for my LAN traffic. My connection goes into my UniFi switch directly. This allows me to live-migrate the pfSense VM with no interruptions.
Part of the benefit of aggregation is added redundancy (not just throughput). I have definitely added aggregation where I don’t need the added bandwidth, but do want the link redundancy.
One of the reasons I haven't got into mini PCs is how cheap it is to grab some Mellanox 10Gb SFP+ NICs and chuck them in spare PCIe slots (along with great-value MikroTik SFP+ switches).
Nice video as always. If possible, can you explain how you created that Thunderbolt mesh between those MS-01s? I would be happy to know how you did it, thank you.