I have an idea to set up a home NAS on FreeBSD.
For this purpose, I bought a Lenovo ThinkCentre M720s SFF – it is quiet, compact, and offers the possibility of installing 2 SATA III SSDs and a separate M.2 slot for an NVMe SSD.
What is planned:
- On NVMe SSD: UFS and FreeBSD
- On SATA SSDs: ZFS with RAID1
While we wait for the drive to arrive, let’s test how it all works on a virtual machine.
We will install FreeBSD 14.3, although version 15 is already out; it brings some interesting changes that I will cover separately.
Of course, I could have gone with TrueNAS, which is based on FreeBSD – but I want “vanilla” FreeBSD to do everything manually.
All posts in this blog series:
- (Current) FreeBSD: Home NAS, Part 1 – Configure ZFS Mirror (RAID1)
- FreeBSD: Home NAS, Part 2 – Introduction to Packet Filter (PF) Firewall
- FreeBSD: Home NAS, Part 3 – WireGuard VPN, Linux Peers, and Routing
- FreeBSD: Home NAS, Part 4 – Local DNS with Unbound
- FreeBSD: Home NAS, Part 5 – ZFS Pools, Datasets, Snapshots, and ZFS Monitoring
- FreeBSD: Home NAS, Part 6 – Samba Server and Client Connections
- … to be continued
We will do the installation over SSH with bsdinstall: boot the system in LiveCD mode, enable SSH, and then proceed with the installation from a workstation laptop.
The virtual machine has three disks – mirroring a future ThinkCentre setup:

Select Live System:

Log in as root.

Bring up the network:
# ifconfig em0 up
# dhclient em0

Configuring SSH on the FreeBSD LiveCD
For SSH, we need to set a root password and edit /etc/ssh/sshd_config. But right now this doesn’t work, because the system is mounted read-only:

Check the current mounts:

And apply a “dirty hack”:
- mount a new tmpfs (a file system in RAM) at /mnt
- copy the contents of /etc from the LiveCD there
- mount tmpfs over /etc (overlaying the read-only directory from the ISO)
- copy the prepared files from /mnt back to the new /etc
To perform:
# mount -t tmpfs tmpfs /mnt
# cp -a /etc/* /mnt/
# mount -t tmpfs tmpfs /etc
# cp -a /mnt/* /etc/
The tmpfs mount syntax is mount -t tmpfs tmpfs /mnt – mount requires a source argument, so we specify tmpfs again.
Now set the root password with passwd and start sshd using onestart:
# passwd
# service sshd onestart
However, SSH will still deny access, because root login is disabled by default:
$ ssh [email protected]
([email protected]) Password for root@:
([email protected]) Password for root@:
([email protected]) Password for root@:
Set PermitRootLogin yes in /etc/ssh/sshd_config and restart sshd:
# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
# service sshd onerestart
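Appending works on a fresh LiveCD, but if the config already contains a (possibly commented-out) PermitRootLogin line, it is cleaner to replace it so there is only one authoritative setting. A small sketch of my own, operating on a temporary copy with made-up contents, so it is safe to try anywhere:

```shell
#!/bin/sh
# Work on a temporary copy instead of the real /etc/ssh/sshd_config.
cfg=$(mktemp)
printf '#PermitRootLogin no\nPasswordAuthentication yes\n' > "$cfg"

# Drop any existing (possibly commented-out) PermitRootLogin line...
grep -iv '^#*PermitRootLogin' "$cfg" > "$cfg.new"
# ...and append exactly one authoritative setting.
echo 'PermitRootLogin yes' >> "$cfg.new"
mv "$cfg.new" "$cfg"

cat "$cfg"
```

On the LiveCD, the same grep-and-append would target /etc/ssh/sshd_config, followed by service sshd onerestart.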

Now we can log in:
$ ssh [email protected]
([email protected]) Password for root@:
Last login: Sun Dec  7 12:19:25 2025
FreeBSD 14.3-RELEASE (GENERIC) releng/14.3-n271432-8c9ce319fef7

Welcome to FreeBSD!
...
root@:~ #
Run bsdinstall:
# bsdinstall
Select the components to add to the system – ports is necessary, src is optional but definitely worth it for a real NAS:
Disk partitioning
We will do minimal disk partitioning, so select Manual:

We will install the system on ada0 – select it and click Create:

Next, choose a partitioning scheme. GPT is the standard in 2025:

Confirm the changes, and we now have a new partition table on the system drive ada0:


The freebsd-boot partition
Now we need to create the partitions themselves.
Choose ada0 again, click Create, and create a partition of type freebsd-boot.
This is for virtual machines only; on the actual ThinkCentre, we would use the efi type with a size of around 200-500 MB.
For now, set:
- Type: freebsd-boot
- Size: 512K
- Mountpoint: empty
- Label: empty

Confirm and proceed to the next partition.
The freebsd-swap partition
Click Create to add swap.
Given that on ThinkCentre we will have:
- 8 – 16 GB RAM
- no sleep/hibernate
- UFS and ZFS
2 gigabytes will be enough.
Set:
- Type: freebsd-swap
- Size: 2GB
- Mountpoint: empty
- Label: empty

Root partition with UFS
The main system will be on UFS: it is very stable, does not require much RAM, mounts quickly, is easy to recover, and lacks complex caching mechanisms. (UPD: however, after getting to know ZFS and its capabilities better, I decided to use it for the system disk as well.)
Set:
- Type: freebsd-ufs
- Size: 14GB
- Mount point: /
- Label: rootfs – just a name for us

We will configure the remaining disks later; for now, choose Finish and Commit:

Finishing the installation
Wait for the copy to complete:

Configure Network:
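The result of this dialog ends up as ordinary lines in /etc/rc.conf; for DHCP on the em0 interface it looks roughly like this (a sketch – the interface name depends on the hardware):

```
ifconfig_em0="DHCP"
```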


Select timezone:

In System Configuration, select sshd, skip moused (no mouse on a NAS), and enable ntpd and powerd:

System hardening – considering that this will be a home NAS, and that I may open external access (even if behind a firewall), it makes sense to tighten security a little:
- read_msgbuf: allow dmesg access only for root
- proc_debug: allow ptrace only for root
- random_pid: randomize PID numbers
- clear_tmp: clear /tmp on reboot
- secure_console: require the root password for login from the physical console
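For reference, these checkboxes translate into ordinary settings in /etc/sysctl.conf, /etc/rc.conf, and /etc/ttys. The mapping below is my reconstruction from the option names – verify against the files bsdinstall actually writes on the installed system:

```
# /etc/sysctl.conf
security.bsd.unprivileged_read_msgbuf=0   # read_msgbuf
security.bsd.unprivileged_proc_debug=0    # proc_debug
kern.randompid=1                          # random_pid

# /etc/rc.conf
clear_tmp_enable="YES"                    # clear_tmp

# secure_console: in /etc/ttys, the console entry is marked "insecure",
# which makes single-user mode ask for the root password
```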

Add a user:

Everything is ready – reboot the machine:

Log in as a regular user:
$ ssh [email protected]
...
FreeBSD 14.3-RELEASE (GENERIC) releng/14.3-n271432-8c9ce319fef7

Welcome to FreeBSD!
...
setevoy@test-nas-1:~ $
Install vim:
# pkg install vim
Check our disks: geom disk list shows physical device information, and gpart show displays the partitions on a disk.
Check the disks – there are three:
root@test-nas-1:/home/setevoy # geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Mode: r2w2e3
   descr: VBOX HARDDISK
   ident: VB262b53f7-adc5cd2c
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: VBOX HARDDISK
   ident: VB059f9d08-4b0e1f56
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

Geom name: ada2
Providers:
1. Name: ada2
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: VBOX HARDDISK
   ident: VB3941028c-3ea0d485
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16
And with gpart – the current state of ada0, where the system was installed:
root@test-nas-1:/home/setevoy # gpart show
=> 40 33554352 ada0 GPT (16G)
40 1024 1 freebsd-boot (512K)
1064 4194304 2 freebsd-swap (2.0G)
4195368 29359024 3 freebsd-ufs (14G)
Disks ada1 and ada2 will be used for the ZFS mirror (RAID1).
If there was anything on them – erase it:
root@test-nas-1:/home/setevoy # gpart destroy -F ada1
gpart: arg0 'ada1': Invalid argument
root@test-nas-1:/home/setevoy # gpart destroy -F ada2
gpart: arg0 'ada2': Invalid argument
Since this is a VM and the disks are empty, “Invalid argument” is expected and fine.
Create GPT partition tables on ada1 and ada2:
root@test-nas-1:/home/setevoy # gpart create -s gpt ada1
ada1 created
root@test-nas-1:/home/setevoy # gpart create -s gpt ada2
ada2 created
Check:
root@test-nas-1:/home/setevoy # gpart show ada1
=> 40 33554352 ada1 GPT (16G)
40 33554352 - free - (16G)
Create the partitions for ZFS:
root@test-nas-1:/home/setevoy # gpart add -t freebsd-zfs ada1
ada1p1 added
root@test-nas-1:/home/setevoy # gpart add -t freebsd-zfs ada2
ada2p1 added
Check again:
root@test-nas-1:/home/setevoy # gpart show ada1
=> 40 33554352 ada1 GPT (16G)
40 33554352 1 freebsd-zfs (16G)
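The per-disk steps above (destroy, create, add) can be collected into one small loop. This is my own sketch, not part of the original setup; the DRY_RUN switch only prints the gpart commands, so the sequence can be reviewed before touching real disks:

```shell
#!/bin/sh
# Prepare disks for a ZFS mirror: wipe, create GPT, add a freebsd-zfs partition.
# With DRY_RUN=1 the gpart commands are printed instead of executed.
prepare_zfs_disks() {
  for disk in "$@"; do
    for cmd in \
      "gpart destroy -F $disk" \
      "gpart create -s gpt $disk" \
      "gpart add -t freebsd-zfs $disk"
    do
      if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
      else
        $cmd
      fi
    done
  done
}

DRY_RUN=1
prepare_zfs_disks ada1 ada2
```

With DRY_RUN=1 the script prints the six gpart commands; running it with DRY_RUN unset on FreeBSD would execute them for real.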
Creating a ZFS mirror with zpool
The “magic” of ZFS is that everything works “out of the box” – you don’t need a separate LVM with its layers, and you don’t need mdadm for RAID.
The main utility for disk management in ZFS is zpool, and for managing data (datasets, file systems, snapshots) it is zfs.
To combine one or more disks into a single logical storage, ZFS uses a pool – the equivalent of a volume group in Linux LVM.
Create the mirror:
root@test-nas-1:/home/setevoy # zpool create tank mirror ada1p1 ada2p1
Here, tank is the name of the pool, mirror specifies that this will be RAID1, and then we provide the list of partitions included in this pool.
check:
root@test-nas-1:/home/setevoy # zpool status
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1p1 ONLINE 0 0 0
ada2p1 ONLINE 0 0 0
errors: No known data errors
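As an aside, the state line of this output is easy to extract from a script, which comes in handy later for monitoring. The pool_state helper below is my own sketch, fed here with sample text in the same format as the zpool status output above; on the NAS you would pipe the real command into it:

```shell
#!/bin/sh
# Extract the "state:" value from zpool-status-style output.
pool_state() {
  awk '/^[[:space:]]*state:/ { print $2 }'
}

# Sample text in the zpool status format.
# Real usage on the NAS: zpool status tank | pool_state
sample='  pool: tank
 state: ONLINE
config:'

echo "$sample" | pool_state
```

Anything other than ONLINE (e.g. DEGRADED after a disk failure) could then trigger an alert from a cron job.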
ZFS immediately mounts this pool at /tank:
root@test-nas-1:/home/setevoy # mount
/dev/ada0p3 on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
tank on /tank (zfs, local, nfsv4acls)
Check the partitions now:
root@test-nas-1:/home/setevoy # gpart show
=> 40 33554352 ada0 GPT (16G)
40 1024 1 freebsd-boot (512K)
1064 4194304 2 freebsd-swap (2.0G)
4195368 29359024 3 freebsd-ufs (14G)
=> 40 33554352 ada1 GPT (16G)
40 33554352 1 freebsd-zfs (16G)
=> 40 33554352 ada2 GPT (16G)
40 33554352 1 freebsd-zfs (16G)
If we want to change the mountpoint – run zfs set mountpoint:
root@test-nas-1:/home/setevoy # zfs set mountpoint=/data tank
And it gets mounted to the new directory immediately:
root@test-nas-1:/home/setevoy # mount
/dev/ada0p3 on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
tank on /data (zfs, local, nfsv4acls)
Enable data compression – useful for a NAS; see the ZFS documentation on compression.
lz4 is the current default algorithm – let’s enable it explicitly:
root@test-nas-1:/home/setevoy # zfs set compression=lz4 tank
Since we installed the system on UFS, we need to add some parameters to autostart for ZFS to work.
Configure the bootloader in /boot/loader.conf to load the ZFS kernel module:
zfs_load="YES"
Or, to avoid manual editing, use sysrc with the -f flag:
root@test-nas-1:/home/setevoy # sysrc -f /boot/loader.conf zfs_load="YES"
Also add zfs_enable to /etc/rc.conf so that the zfsd daemon is started and the file systems are mounted at boot:
root@test-nas-1:/home/setevoy # sysrc zfs_enable="YES" zfs_enable: NO -> YES
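After both commands, the two files should contain the following lines (shown here just for clarity):

```
# /boot/loader.conf
zfs_load="YES"

# /etc/rc.conf
zfs_enable="YES"
```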
Reboot and check:
root@test-nas-1:/home/setevoy # zpool status
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1p1 ONLINE 0 0 0
ada2p1 ONLINE 0 0 0
Everything is in place.
You can now proceed with further tuning – configuring individual datasets, snapshots, etc.
For web UI, you can try Seafile or FileBrowser.