A new NAS is born: a repurposed QNAP TS-412 case (Kirkwood, single-core ARM with 256 MB of memory).
The nice thing is: this is actually a Mini-ITX case. So if you have a low-profile board, it might just fit. The only downside is the backplane: QNAP put the SATA controller on the actual backplane. I investigated a bit and found someone who got it working via a regular PCI-e slot, but that didn't sound very appealing since the SATA controller is pretty antique.
Instead I opted to replace the board with an Odroid H4 Ultra board (N305 CPU, 8 cores) with 32 GB of DDR5 memory.
The reasons for choosing this board:
- It's a lot more modern than that old Kirkwood
- The GPU is pretty capable for Jellyfin (AV1 decoding support)
- It has onboard SATA (4 ports)
- The M.2 slot can be bifurcated into 4 separate lanes!
- The BIOS allows for IBECC (in-band ECC: using part of the memory for ECC without needing special ECC memory). This is a lot like regular ECC, but instead of using a separate memory chip on the DIMM, the ECC data is stored alongside the regular content. Performance is a bit worse, but it uses normal DIMMs as opposed to special ECC SODIMMs.
- It has a watchdog/tco and can redirect the console to a serial port (perfect for going headless).
So here is the NAS from the outside:
Just a regular NAS so far... A photo from the back reveals a bit more:
After opening up the NAS and taking careful measurements, I found that above the hotswap bays there is room for a 5.25" bay! Perfect. I was planning on adding an NVMe drive, but this is even better: there is room for 6 SSDs in one of those 6-in-1 hotswap bays.
The larger white part is a (3D-printed) backplate for the Odroid H4 motherboard. It's quite a bit smaller than an ITX board, which leaves more room for routing cables.
On the bottom there are 4 extra network ports, and on the left side (with the cable hanging out) there is another network port. I'll come back to both of these later.
Opening up the case (just 3 screws) shows the Odroid mainboard:
This is the backside. More information on the board can be found at the
hardkernel site.
People who know this board might see I used the
M.2 4x1 card. This board gives you 4 M.2 slots, each with 1 lane of PCI-e 3.0 speed (about 950 megabytes per second). I've added two Lexar NVMe drives here and 2 special cards:
- An m.2 to PCI-e extender (on the left)
- An m.2 to m.2 extender (on the right)
The m.2 to m.2 extender holds an ASM1166 controller, giving 6 extra SATA ports (ideal for the 6-in-1 hotswap extension). You can see the 6 SATA cables on the right in the case.
When looking at the other side of the case we get a nice little surprise:
The red cable is connected to a little single-board-computer (SBC). Specifically a
Luckfox Pico. It's a small ARM-based machine (using a Rockchip CPU) which has:
- Ethernet (100Mbit)
- 256 megabyte onboard bootable flash
- An SDK
- More pins than you can shake a stick at
This board is connected to the Odroid's UART pins to allow remote control of the Odroid in case of problems: the Odroid is configured for a serial console during boot, and GRUB and Linux on the Odroid also have their console on ttyS0. This means that even if the Odroid is having a hard time, login is still possible. Combined with the Odroid's built-in hardware watchdog, this gives almost the same options as a full IPMI solution (I only use the remote control and power-cycle options of the machines that have IPMI over here; a full IPMI can do more, but this solution covers the majority of my use cases).
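The serial console side of this is roughly the following GRUB configuration. This is a sketch: the UART unit number and 115200 baud rate are assumptions, so check the H4's UART header documentation for the actual values.

```shell
# /etc/default/grub -- send both GRUB and the kernel console to ttyS0,
# while keeping the local VGA console (tty0) working as well
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

# afterwards, regenerate the GRUB config:
#   update-grub
```

With `console=ttyS0` on the kernel command line, systemd will also spawn a login getty on the serial port, so a login prompt appears over the UART without further configuration.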
The only thing missing is controlling the Odroid's power button via a GPIO on the Luckfox (still working on that bit).
The Luckfox is powered via the Odroid's always-on 5V pin (which supplies about 500mA, roughly 8 times more than this board uses).
That leaves the 4-port Ethernet at the bottom:
I found that below the hot-swap bays there is just a bit of room for a PCI-e card when no heatsink is mounted. I was able to put an Intel I340 quad-port NIC in there, connected to the m.2 to PCI-e adapter (the card is PCI-e 2.0 only, so the single lane tops out at about 500 megabytes per second; just about enough for 4 ports at full speed).
As no proper mounting is possible here, electric tape and some plastic is used to insulate the card from the case.
I printed a special bracket that keeps the card in place during normal use, so it's fully usable in my setup.
You can also see the power cabling for the harddrives: I lost the ability to hotswap drives, which would have been nice but isn't absolutely required.
Software
The machine is running Proxmox on top of Debian. The NICs are bridged in Linux and the guests get a virtio NIC. This should alleviate the PPPoE performance hit of FreeBSD (tested previously, not yet on this machine).
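The bridge setup follows Proxmox's usual Debian-style networking. A minimal sketch, assuming the physical interface is called `enp1s0` and the addresses are examples (your interface names and subnet will differ):

```shell
# /etc/network/interfaces -- one Linux bridge per physical NIC;
# guests attach their virtio NICs to the bridge
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
```

For the PPPoE case the guest (OPNsense here) does the PPPoE termination itself; the bridge just passes the raw Ethernet frames through to the virtio NIC.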
The Luckfox board is running Buildroot Linux with minicom, so I can connect to the Odroid (although SSH directly to the Odroid is much faster and more convenient).
After testing the Proxmox iTCO watchdog, I found out that the initial BIOS did not support the watchdog functionality of the hardware. After a quick forum request, Hardkernel released a patched BIOS within a week or two that allows you to enable the watchdog in the BIOS (it's not iTCO_wdt but wdat_wdt). I configured Proxmox to use this watchdog as well.
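Pointing Proxmox's watchdog multiplexer at the hardware watchdog is a small config change. A sketch of what that looks like (double-check against the Proxmox HA documentation for your version):

```shell
# verify the WDAT watchdog driver actually bound to the hardware
lsmod | grep wdat_wdt
ls -l /dev/watchdog

# /etc/default/pve-ha-manager -- tell watchdog-mux to use the hardware
# watchdog module instead of the softdog fallback
WATCHDOG_MODULE=wdat_wdt
```

After changing this, a reboot (or restarting the watchdog-mux service) makes Proxmox arm the hardware watchdog, so a hung host gets reset even if the kernel is wedged.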
Furthermore, I use powertop and
hd-idle to save power. When the machine is powered on but doing nothing (all HDDs powered down) I get about 25W as measured by my power meter (an old ELRO, not sure about the accuracy). This is partly caused by the SSDs in the hot-swap bay: I've got some older enterprise drives in there (Intel S3700). I measured a naked board using my power meter at about 12W.
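The power-saving setup boils down to two things: applying powertop's tunables at boot and letting hd-idle spin the disks down after a period of inactivity. A sketch, assuming the Debian hd-idle package (the 600-second timeout is an example, not my exact value):

```shell
# apply all of powertop's "good" tunables in one go; typically run at
# boot from a small systemd service or cron @reboot entry
powertop --auto-tune

# /etc/default/hd-idle -- spin down all disks after 10 minutes of
# inactivity and log the spin-down events
START_HD_IDLE=true
HD_IDLE_OPTS="-i 600 -l /var/log/hd-idle.log"
```

Note that anything touching the disks (monitoring, SMART polling, an unfortunate mount option) will keep them awake, so it pays to check the hd-idle log to confirm they actually spin down.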
When spinning up all HDDs and doing some work I see about 50W on the meter (spin-up peak at about 90W).
Proxmox is installed on an eMMC module on the board, which is fast enough. Swap is configured on one of the older enterprise drives. This allows hd-idle to spin down the actual 18 TB disks.
It's been a nice little project. It required my 3D printer, some elbow grease and a Dremel to get it up and running, and many, many reboots/tries/tests to make sure everything kept working when abusing it. Cooling is an issue in such a small case: it's about 45 degrees Celsius when idle, but ramps up to about 65 degrees when doing some more work, and when stress testing the machine it starts to throttle the CPU. In my case not really an issue, as the main things it will run are
- Jellyfin
- Opnsense
- Gitea
- Nextcloud
8 cores is plenty for all this, so there is still room for growing.
On the storage side: it's running ZFS raid-z1 on 4x18 TB drives with 2 SSDs in a mirror as a special device (this really helps, as small files and directory information are stored on the SSDs, so the drives only spin up when actual data is read from the HDDs).
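That pool layout can be sketched as follows. The pool name and device paths are placeholders (in practice you'd use stable `/dev/disk/by-id/` names), and the `special_small_blocks` setting is optional:

```shell
# raidz1 over the four 18 TB disks, plus a mirrored special vdev on the
# two SSDs for metadata (and optionally small file blocks)
zpool create tank \
    raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    special mirror /dev/sde /dev/sdf

# optionally also store file blocks up to 64K on the special vdev, so
# small files never wake the spinning disks at all
zfs set special_small_blocks=64K tank
```

One caveat worth knowing: the special vdev is not a cache; losing it loses the pool, which is why it has to be a mirror here.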
This is my new 24x7 server, allowing an older Xeon to be powered off. That machine will be repurposed to run Proxmox Backup Server or rsync for backups (and will not have to be powered on 24x7).