Disk configuration for 'charisma'
nvme0n1 (1TB) - serial PNY21242106180100095
    nvme0n1p1 (1G)    /boot/efi
    nvme0n1p2 (2G)    md0
    nvme0n1p3 (10G)   swap
    nvme0n1p4 (150G)  md1
    nvme0n1p5 (700G)  data:cache
nvme2n1 (1TB) - serial PNY21242106180100092
    nvme2n1p1 (1G)    /boot/efi
    nvme2n1p2 (2G)    md0
    nvme2n1p3 (10G)   swap
    nvme2n1p4 (150G)  md1
    nvme2n1p5 (700G)  data:cache
nvme1n1 (2TB) - serial S6Z2NJ0W215164J
    fast
nvme4n1 (2TB) - serial S6Z2NJ0W215171W
    fast
sda (6TB) - serial ZR14L0LE
    data
sdb (6TB) - serial WSB076SN
    data
sdc (6TB) - serial ZA16N4ZH
    data
Kioxia (256GB) - serial 135C11MCEMNK
    data:cache
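This layout can be cross-checked against the live system. A minimal sketch (assuming the kernel device names above are still current; the serials come straight from the hardware):

    # print the device tree with sizes, serials and mount points
    lsblk -o NAME,SIZE,TYPE,SERIAL,MOUNTPOINT

    # confirm the GPT layout of one of the PNY system disks
    sgdisk --print /dev/nvme0n1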
MD RAID
Note: all MD RAID devices are used with a single partition.
Device    | RAID  | Components | Capacity
----------+-------+------------+---------
/dev/md0  | RAID1 | 2x 2GB     | 2GB
/dev/md1  | RAID1 | 2x 150GB   | 150GB
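The arrays were assembled during installation (per the fstab comments below), but for reference, here is a sketch of how equivalent mirrors could be created by hand. The partition names follow the layout above; these are not the exact commands that were run:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme2n1p2
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p4 /dev/nvme2n1p4

    # persist the arrays across reboots (Debian-style paths)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u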
Mounts
Partition  | Capacity | File-system | Mount point | Mount options
-----------+----------+-------------+-------------+----------------------------------------------
/dev/md0p1 | 2GB      | ext4        | /boot       | discard,noatime,nodiratime
/dev/md1p1 | 150GB    | ext4        | /           | discard,noatime,nodiratime,errors=remount-ro
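Each array carries a single partition (md0p1, md1p1), so recreating a file system would look roughly like this sketch for /boot (illustrative only, not the commands actually used):

    sgdisk --new=1:0:0 /dev/md0    # one partition spanning the array
    mkfs.ext4 /dev/md0p1
    mount -o noatime /dev/md0p1 /boot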
ZFS zpools
Pool | RAID           | Components | Capacity
-----+----------------+------------+---------
fast | mirror (RAID1) | 2x 2TB     | 2TB
data | RAIDZ1         | 3x 6TB     | 12TB
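Pool geometry and usable capacity can be verified at any time with standard zpool columns:

    zpool list -o name,size,alloc,free,dedupratio,health
    zpool status -v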
ZFS datasets
These are the datasets we create on our zpools.
Dataset           | Mount            | Compression | Dedup
------------------+------------------+-------------+------
fast              | /fast            | lz4         | on
fast/download     | /fast/download   | off         | off
fast/home         | /home            | lz4         | on
fast/home/jj5     | /home/jj5        | lz4         | on
fast/mysql        | /var/lib/mysql   | lz4         | on
fast/session      | /var/log/session | zstd        | off
fast/state        | /var/state       | lz4         | on
fast/vbox         | /fast/vbox       | zstd        | on
fast/virt         | /fast/virt       | zstd        | on
data              | /data            | zstd        | on
data/backup       | /data/backup     | zstd        | on
data/image        | /data/image      | off         | off
data/staging      | /data/staging    | zstd        | on
data/temp         | /temp            | zstd        | on
data/temp/extract | /temp/extract    | zstd        | on
data/temp/rubbish | /temp/rubbish    | zstd        | on
data/vbox         | /data/vbox       | zstd        | on
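Per-dataset properties override the pool defaults set in the creation scripts below, so each dataset gets its properties explicitly. A sketch using fast/session as the example (the exact creation commands were not recorded):

    zfs create -o compression=zstd -o dedup=off -o mountpoint=/var/log/session fast/session

    # or, to adjust an existing dataset:
    zfs set compression=zstd fast/session
    zfs set dedup=off fast/session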
Commands
-------------------
Sun Aug 13 23:46:57 [bash:5.2.15 jobs:0 error:0 time:27]
root@charisma:/root
# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
#
# / was on /dev/md1 during installation
UUID=0e251b13-9d33-4714-84f9-02b6869eb580 / ext4 discard,noatime,nodiratime,errors=remount-ro 0 1
# /boot was on /dev/md0 during installation
UUID=f486e8ac-85bc-4e06-8365-1a5dcd91d875 /boot ext4 discard,noatime,nodiratime 0 2
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=AD69-3D4B /boot/efi vfat umask=0077 0 1
# swap was on /dev/nvme0n1p3 during installation
UUID=b0ba6a7d-5dc2-4f06-967a-115816799d40 none swap sw 0 0
# swap was on /dev/nvme2n1p3 during installation
UUID=7d14a24c-126f-4fb5-accd-b6575d88277f none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
longing:/data/share /host/longing/data/share nfs noatime 0 2
longing:/data/archive /host/longing/data/archive nfs noatime 0 2
#//WINDOWS_MACHINE_IP/SHARE_NAME /mnt/windows_share cifs username=WINDOWS_USERNAME,password=WINDOWS_PASSWORD,iocharset=utf8,sec=ntlm 0 0
//wonder/video /host/wonder/video cifs credentials=/root/credentials/wonder.smb,uid=1000,gid=1000,file_mode=0660,dir_mode=0770,noauto 0 2
-------------------
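As the fstab header notes, systemd generates mount units from this file, so after editing it:

    systemctl daemon-reload
    mount -a    # attempt to mount everything not yet mounted, to test the new entries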
Sun Aug 13 23:54:14 [bash:5.2.15 jobs:0 error:0 time:464]
root@charisma:/root
# cat setup-zfs-best.sh
#!/bin/bash
set -euo pipefail;
BEST_1=/dev/disk/by-id/nvme-PNY_CS3140_1TB_SSD_PNY21242106180100095-part5
BEST_2=/dev/disk/by-id/nvme-PNY_CS3140_1TB_SSD_PNY21242106180100092-part5
zpool create -f \
-o ashift=12 -o autotrim=on \
-O acltype=posixacl -O compression=off \
-O dnodesize=auto -O normalization=formD -O atime=off -O dedup=off \
-O xattr=sa \
best ${BEST_1} ${BEST_2}
zfs create best/download
chown jj5:jj5 /best/download
#zfs create best/vbox
#chown jj5:jj5 /best/vbox
-------------------
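The 'best' pool evidently no longer exists: per the zpool status output further down, the two -part5 partitions now serve as L2ARC cache for the data pool. A sketch of how cache devices are attached (same by-id paths as in the script above):

    zpool add data cache \
        /dev/disk/by-id/nvme-PNY_CS3140_1TB_SSD_PNY21242106180100095-part5 \
        /dev/disk/by-id/nvme-PNY_CS3140_1TB_SSD_PNY21242106180100092-part5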
Sun Aug 13 23:54:17 [bash:5.2.15 jobs:0 error:0 time:467]
root@charisma:/root
# cat setup-zfs-fast.sh
#!/bin/bash
set -euo pipefail;
FAST_1=/dev/disk/by-id/nvme-Samsung_SSD_990_PRO_2TB_S6Z2NJ0W215164J
FAST_2=/dev/disk/by-id/nvme-Samsung_SSD_990_PRO_2TB_S6Z2NJ0W215171W
zpool create -f \
-o ashift=12 -o autotrim=on \
-O acltype=posixacl -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O atime=off -O dedup=on \
-O xattr=sa \
fast mirror ${FAST_1} ${FAST_2}
-------------------
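The fast pool enables dedup pool-wide, which costs RAM for the dedup table (DDT). Its effectiveness can be checked with standard commands:

    zpool list -o name,dedupratio fast
    zpool status -D fast    # includes DDT statistics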
Sun Aug 13 23:54:19 [bash:5.2.15 jobs:0 error:0 time:469]
root@charisma:/root
# cat setup-zfs-data.sh
#!/bin/bash
set -euo pipefail;
DISK1=/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0D5506W
DISK2=/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0D8E3C9
DISK3=/dev/disk/by-id/ata-WDC_WD30EFZX-68AWUN0_WD-WX42D51P1TD3
zpool create -f \
-O acltype=posixacl -O compression=zstd \
-O dnodesize=auto -O normalization=formD -O atime=off -O dedup=on \
-O xattr=sa \
data raidz ${DISK1} ${DISK2} ${DISK3}
-------------------
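Note that the by-id paths in this script refer to three WD 3TB disks, whereas the pool currently contains the three Seagate 6TB drives listed at the top (and confirmed by zpool status below), so the script predates the current members. Before re-running anything like it, confirm the links:

    ls -l /dev/disk/by-id/ | grep -E 'ata-ST6000'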
Mon Nov 04 05:07:29 [bash:5.2.15 jobs:0 error:0 time:23]
root@charisma:/home/jj5
# zpool status
pool: data
state: ONLINE
scan: scrub repaired 0B in 1 days 02:47:39 with 0 errors on Mon Oct 14 03:11:40 2024
config:
NAME STATE READ WRITE CKSUM
data ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-ST6000VN0041-2EL11C_ZA16N4ZH ONLINE 0 0 0
ata-ST6000VN001-2BB186_ZR14L0LE ONLINE 0 0 0
ata-ST6000DM003-2CY186_WSB076SN ONLINE 0 0 0
cache
nvme-KBG50ZNV256G_KIOXIA_135C11MCEMNK ONLINE 0 0 0
nvme-PNY_CS3140_1TB_SSD_PNY21242106180100092-part5 ONLINE 0 0 0
nvme-PNY_CS3140_1TB_SSD_PNY21242106180100095-part5 ONLINE 0 0 0
errors: No known data errors
pool: fast
state: ONLINE
scan: scrub repaired 0B in 00:16:18 with 0 errors on Sun Oct 13 00:40:21 2024
config:
NAME STATE READ WRITE CKSUM
fast ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
nvme-Samsung_SSD_990_PRO_2TB_S6Z2NJ0W215164J ONLINE 0 0 0
nvme-Samsung_SSD_990_PRO_2TB_S6Z2NJ0W215171W ONLINE 0 0 0
errors: No known data errors
-------------------
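Both pools show recent clean scrubs. A scrub can be kicked off or checked manually:

    zpool scrub data
    zpool status data | grep scan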
Mon Nov 04 05:10:38 [bash:5.2.15 jobs:0 error:0 time:212]
root@charisma:/home/jj5
# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 nvme0n1p4[0] nvme2n1p4[1]
146352128 blocks super 1.2 [2/2] [UU]
bitmap: 1/2 pages [4KB], 65536KB chunk
md0 : active raid1 nvme0n1p2[0] nvme2n1p2[1]
1950720 blocks super 1.2 [2/2] [UU]
unused devices: <none>
-------------------
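For more detail on the arrays than /proc/mdstat gives:

    mdadm --detail /dev/md0 /dev/md1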