Home Server vs. NAS – Proxmox Installation (English version)

Home server vs. NAS, or: how do I set up my own intranet?

 

 

Part 1 – Introduction

 

Hello and welcome!

Almost two years ago my old NAS began to show some issues: from time to time it stopped working altogether, the web frontend became unreachable, and on top of that the disks were pretty full and I needed more capacity.

It was time to start thinking about a new system. But what? A new NAS? I was not that impressed by my old NAS: too slow, configuration far from trivial, and no (or almost no) extensions. I started looking for a new solution; I did not want to exclude NAS appliances outright, but they were definitely no longer my main interest. Looking at current NAS systems, I found that the affordable ones lack real performance or extension possibilities, and if you pick one of the systems that can do everything you want, with every possible extension, you pay the price of a high-end gaming PC. So I ended up with three possibilities based on new computer hardware, i.e. no off-the-shelf NAS system, and started to experiment with an old notebook I had.

I kept three different solutions for my first tests:

  • FreeNAS

  • VMware ESXi

  • Proxmox

 

FreeNAS fell out of the running quickly: why should I limit myself to FreeNAS if I could just as well install it inside one of the virtualization platforms?

That left VMware or Proxmox.

Both are virtualization platforms: once installed and configured, you can create virtual machines or containers and use them like physical servers.

VMware is a really powerful platform, but its free version offers few functions and is very restrictive. Don't be mistaken: for an enterprise solution I would definitely go that way and buy the VMware licenses. Proxmox, however, is free from the start; there is only an optional support subscription that can be purchased from Proxmox. It is not mandatory; if you don't take it, you will just be reminded about it when opening the web frontend.

I will show you now how to install and configure this solution. But first, let me show you which hardware I chose.

After a bit of research I settled on the hardware described below.

I chose the case because of its size and disk capacity: it is very small given the number of disks it can host, all together eight hot-swap 3.5″ HDDs plus four internal 2.5″ drives. The mainboard was mostly the best compromise between performance and connectivity: four SATA 3 ports and two M.2 slots, as well as 2.5 Gb Ethernet.

The CPU has 6 cores, i.e. 12 threads with hyperthreading, and that is fully sufficient: in the almost two years I have been using it, the CPU load has never gone above 20-25%.

I chose 64 GB of RAM (the maximum supported by the mainboard), and so far usage has never gone above 85%, and that only when everything is started and in use; otherwise I see an average of 50-60%.

And now the decision that will probably generate the most comments: I chose to use non-ECC RAM together with a ZFS filesystem.

Yes! I dare! It does not make any difference: with any journaling filesystem you can reach the point where a memory error produces an inconsistent or irrecoverable state and a file is irreparably destroyed. The only real difference with ZFS is that it checksums your data and verifies that checksum before a read or write completes. If it encounters an error, the file will not be opened at all; it is there but unreachable, and the only option is to delete it. On another filesystem there is no such check: the system lets the calling program open the file, and the program then discovers that the file is corrupt and cannot be used. The end result is the same, the file is corrupt, and only ECC RAM could have avoided the corruption in the first place. The problem is that ECC RAM would have cost more than three times the price of the RAM I bought, and in almost two years I have never seen an error like that.
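If you want to see this self-checking in action, you can start a full verification pass (a scrub) by hand and inspect the result; a minimal sketch, assuming your pool is named data as it will be later in this article:

zpool scrub data
zpool status -v data

zpool status -v shows the scrub progress and lists any files with unrecoverable checksum errors.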

 

But enough discussion. We now need:

  • a terminal session (just look for Terminal on MacOS or Linux; on Windows you can use PuTTY or MobaXterm, both free),

  • a USB stick with at least 8 GB,

  • and your new computer, where Proxmox will be installed.

 

Part 2 – Proxmox Installation

 

The first step is downloading the operating system: in a browser open https://www.proxmox.com, click on Downloads, then Proxmox Virtual Environment, ISO Images, and download the latest version listed (7.2-1 at the time of writing, November 2022). No matter which OS you are using, the file should end up in your Downloads directory.

You can now flash the ISO file onto your USB stick: you can use Etcher, a free tool downloadable from the Internet, or use the command line under Linux or MacOS and the command "dd".

How do I start using Etcher?

Download Etcher

Step 1: Select the image you want to flash. IMG, ISO, ZIP, DISK, GZ, RAW, and some other formats are supported.

Step 2: Select the USB drive or SD card to which you want to write the image.

Step 3: Click the "Flash!" button to start the process. Etcher takes the image you selected and writes it to the disk you selected. When it is done, it checks that everything went fine and safely ejects the USB drive. It then asks whether you want to flash the same image again or a different one.

Without Etcher?

 

Put your USB stick in a USB port on your notebook/computer and open a terminal (under MacOS or Linux; under Windows use PuTTY or MobaXterm):

MacOS ==> diskutil list

Linux ==> lsblk

Write down the device name, e.g. /dev/sdb in our example, and verify that you chose the right disk: no matter which program you use, once started it will overwrite the destination disk without any confirmation.
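If you are unsure which device is the stick, a quick sanity check is to compare size and model first; assuming the stick shows up as /dev/sdb (Linux) or /dev/disk4 (MacOS) as in the examples here:

Linux ==> lsblk -o NAME,SIZE,MODEL /dev/sdb

MacOS ==> diskutil info /dev/disk4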

First check whether any partition of the stick is mounted on your computer; if so, unmount it:

Linux ==> umount /dev/sdb4

MacOS ==> diskutil unmount /Volumes/mydisk

 

Now we are ready to flash the image onto the USB stick.

 

Linux ==> sudo dd if=proxmox-ve_7.2-1.iso of=/dev/sdb bs=1M status=progress

MacOS ==> sudo dd if=proxmox-ve_7.2-1.iso of=/dev/rdisk4 bs=1m

The "r" in the device name stands for the raw device, which is faster on MacOS.
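Incidentally, before flashing it is worth verifying the downloaded ISO against the checksum published on the Proxmox download page; both commands below are standard tools:

Linux ==> sha256sum proxmox-ve_7.2-1.iso

MacOS ==> shasum -a 256 proxmox-ve_7.2-1.iso

Compare the printed hash with the one listed next to the download.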

 

As soon as the stick is flashed, move to your new server, put the stick in a USB slot, and have a static IP address ready: you should not set up a server over DHCP. Then boot your new server from the stick.

 

Part 3 – Proxmox Configuration

 

Our new Proxmox server is now installed. It can run as is, but our storage is not configured yet, and we want to use our disks. We can do that directly from the web frontend, or from the command line, where we have more configuration possibilities.

Now is the time for a few thoughts about our storage; I will show you here the configuration I chose for my server.

I chose ZFS as the filesystem. It has almost no capacity restrictions: a pool can be expanded up to 2^128 bytes, which is more than we will ever need, at least for our purpose.

My server has eight WD Red 4 TB 3.5″ HDDs, i.e. drives made for NAS use, plus two 2.5″ SSDs.

For the installation I had only four HDDs configured in a pool, as my other disks were still in use in my old NAS:

config:

        NAME
        data
          raidz1-0
            sda
            sdb
            sdc
            sdd

This is similar to a RAID 5 configuration with another filesystem and/or a RAID controller. By the way, ZFS cannot use disks that are already configured in a RAID; you must give it individual HDDs (no RAID configuration). In my case I get a capacity of about 12 TB out of 16 TB: practically one HDD is reserved for parity, even if in reality this parity is spread across all disks.
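The arithmetic is simply (number of disks - 1) x disk size, so (4 - 1) x 4 TB = about 12 TB usable. Once the pool exists (we will create it below), you can check the real numbers with two standard ZFS commands:

zpool list data
zfs list data

zpool list shows the raw pool size including the parity share, while zfs list shows the usable capacity as the filesystems see it.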

First we need to check our disks' configuration. For that, go back to your terminal session and connect to your new server:

ssh root@192.168.0.248 (of course you should use your IP address here)

Enter your password at the prompt and off you go:

sudo lsblk -o NAME,MOUNTPOINT,PHY-SEC
NAME                  MOUNTPOINT PHY-SEC
sda                                 4096
sdb                                 4096
sdc                                 4096
sdd                                 4096
nvme0n1                              512
├─nvme0n1p1                          512
├─nvme0n1p2                          512
└─nvme0n1p3                          512
  ├─pve-swap          [SWAP]         512
  ├─pve-root          /              512
  ├─pve-data_tmeta                   512
  │ └─pve-data                       512
  └─pve-data_tdata                   512
    └─pve-data                       512

 

The ashift parameter for ZFS must be set as follows:

Physical sector    ashift
512                 9
4096                12
8192                13

 

In my example the HDDs all report a physical sector size of 4096, so following the table we need ashift=12, and we can create our ZFS pool like this:

zpool create -f -o ashift=12 -o autoexpand=on data raidz /dev/sdd /dev/sde /dev/sdf /dev/sdg

 

Translated: create a new ZFS pool whose block size matches the 4096-byte sectors; the pool should automatically grow if bigger disks are built into it (autoexpand); we want RAIDZ1, meaning the data is striped across the disks and one disk's worth of parity is created; and finally the names of the disks we want to use.

We could set up the same thing from the web frontend.

Our ZFS pool is now set up; we can lose at most one disk without losing any data, and for that we have spent the capacity of one disk on redundancy.
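To double-check that the pool came up with the requested settings, current OpenZFS versions expose ashift as a pool property you can query:

zpool get ashift,autoexpand data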

As we will later store data in our pool, we should create a few datasets, one for each kind of data.

In the terminal again, enter the following commands:

 

zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/vm data/vm
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/videos data/videos
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/videos-private data/videos-private
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/serien data/serien
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/musik data/musik
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/bilder data/bilder
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/dvr data/dvr

zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/backup data/backup
zfs create -o xattr=sa -o compression=lz4 -o atime=off -o recordsize=16k -o mountpoint=/data/timemachine data/timemachine

 

Here are a few explanations:

 

xattr: extended attributes; needed for Windows (SMB) shares

compression: the data on disk is compressed

atime: no access time is written on reads

recordsize: the size of a block within this dataset

mountpoint: where the dataset is mounted
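You can verify these properties on any dataset afterwards, for example:

zfs get xattr,compression,atime,recordsize,mountpoint data/vm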

 

ZFS can store its filesystem log (journal) on separate disks, which speeds up the way ZFS stores data: if you put the log on faster disks (SSD or even NVMe), the service can acknowledge the OS write request as soon as the data is in the log, even though the data will only be written to the HDDs later in the background.

To separate those logs from the HDDs we have to set this up explicitly, adding disks or partitions to our already defined pool:

zpool add data log mirror /dev/sdb /dev/sdc

We can check that with:

root@pve:~# zpool status
  pool: data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
        logs
          mirror-1  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors
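If you want to watch whether the log mirror is actually being used, zpool iostat can break the I/O statistics down per device, here refreshing every five seconds:

zpool iostat -v data 5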

 

We can also define quotas on our datasets, meaning we artificially restrict their capacity.

For example:

 

zfs set quota=2T data/videos
zfs set quota=500G data/videos-private
zfs set quota=1T data/serien
zfs set quota=250G data/musik
zfs set quota=250G data/bilder
zfs set quota=4T data/timemachine
zfs set quota=500G data/backup
zfs set quota=500G data/dvr
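To review all quotas and the current usage in one view:

zfs list -o name,used,avail,quota -r data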

 

So! Our server is now almost ready to start its service; we just have to define our storage. Open the web frontend again, go to Datacenter, Storage, and define a new storage backed by our dataset data/vm.
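The same can be done from the terminal with Proxmox's storage manager; a minimal sketch, assuming we register the dataset's mountpoint as a directory storage for VM images and container volumes (the storage name data-vm is my choice, pick your own):

pvesm add dir data-vm --path /data/vm --content images,rootdir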

Part 4 – Updates

 

Our system is ready, but one step is still missing: to keep the system at the latest version, with bugs and errors fixed, we have to set up the repositories for the periodic updates from Proxmox. Out of the box the server is configured for the enterprise repository; as we don't have a subscription with Proxmox, we have to switch from the enterprise to the open-source (no-subscription) repository. We go back to our terminal and enter the following:

echo "" >> /etc/apt/sources.list

echo "# Proxmox updates with no subscription" >> /etc/apt/sources.list

echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" >> /etc/apt/sources.list

echo "#deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list

The last command overwrites the enterprise repository file with a commented-out entry, which disables it.


With that, the repository configuration is updated.

 

We can run the update from the web frontend or from the terminal:

apt update && apt dist-upgrade -y

(Proxmox recommends dist-upgrade rather than plain upgrade, so that updates can pull in new dependencies.)

 

Once finished you will probably need to reboot your server.
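A quick way to confirm afterwards which version is running:

pveversion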

And that's it: our server is running without any issues. In the next video/article we will see how to back up our server and its configuration.

 
