16 minutes ago

Nice, #NixOS already supports the new #zfs #docker storage backend. Good opportunity to replace my one docker-compose file with `virtualisation.oci-containers.containers`.

Screenshot showing `zfs list` output with a bunch of rpool/docker/... mountpoints, one for each layer

$ zfs list
NAME                                                                                 USED  AVAIL  REFER  MOUNTPOINT
rpool                                                                                849G   933G    96K  /
rpool/docker                                                                        1.13G   933G  9.00M  /var/lib/docker
rpool/docker/05f94823c9182f25ef468a267d9e798e2cf885e0f5746b876b399c5f526dbb2f       37.8M   933G  41.4M  legacy
rpool/docker/08d8508375e71ed8ffb81d2ccbe62aa0f4bfd4c76f95a9b30174185c883f6c00        984K   933G   171M  legacy
rpool/docker/24566ee72a131cc685f46eb855a476fdad79154f3f78c7084351f2164384cadf        172K   933G   171M  legacy
rpool/nixos                                                                          848G   933G   192K  none
rpool/nixos/home                                                                     648G   933G   609G  /home
rpool/nixos/root                                                                     169G   933G   168G  /
rpool/nixos/var                                                                     31.1G   933G   192K  /var
rpool/nixos/var/lib                                                                 30.5G   933G  18.1G  /var/lib
rpool/nixos/var/log                                                                  615M   933G   117M  /var/log
8 hours ago

@cslinuxboy This leaves you with the choice between #ZFS or #btrfs

15 hours ago

Weird behaviour smb.conf create mask #permissions #2204 #samba #zfs #samba4

Stéphane Graber
23 hours ago

Updated my #ZFS package repository for ZFS 2.2.2, update to avoid potential data loss!

Harry Sintonen
1 day ago

#ZFS 2.2.2 and ZFS 2.1.14 have been released with the fix to the data corruption bug.

1 day ago

OpenZFS 2.2.2 is out now to fix a data corruption bug and other issues. Get it at

#ZFS #Linux #OpenSource

And thus begins the prep for moving my #NAS from EXT4 to #zfs. I still have a lot of work to do. I picked up 4 more 18TB drives, for a total of 6 to back up my system. Those drives are now being stress tested with badblocks. I have also documented the file structure and am making sure I back up everything. I'm going to leave about 1,000GB free on each drive, which means I need to manually map out what gets copied to what drive. Hoping to start the migration after Christmas but before New Year's Eve. But obviously I can start the backups before that; thinking maybe around Dec 15. The stress test of these 4 new drives in parallel is going to take about 2 weeks. (4 passes with different patterns + a verify.)
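A destructive write-mode badblocks run matches the description above (four write patterns, each followed by a read-back verify). A sketch, with the device name as a placeholder for one of the new drives:

```shell
# Destructive write-mode test: four passes (0xaa, 0x55, 0xff, 0x00),
# each followed by a read-back verify. THIS WIPES THE DRIVE.
# -b 4096 is needed for large (>2TB) drives; /dev/sdX is a placeholder.
sudo badblocks -wsv -b 4096 -o sdX-badblocks.log /dev/sdX
```

Running one instance per drive in parallel is what stretches this to roughly two weeks on 18TB disks.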

Spreadsheet screenshot 1 that documents storage use, whether a directory is backed up, and where it is being restored.
Spreadsheet screenshot 2 of my storage use.
Spreadsheet screenshot 3 of my storage use.
4x Seagate Exos 18TB drives.
Filip Chabik 👻
3 days ago

I’m linking to this comment as I think it’s a good final note in the subject: tl;dr version of it is that if you are not running any build system (or #Gentoo #Linux) on top of #ZFS, it’s super unlikely (but not impossible) to hit this bug and end up with inconsistent data. Uff 😮‍💨

Graham Perrin
4 days ago

@emaste incidentally:

I know that BR for bug report defies FreeBSD tradition, however — with forges such as GitHub and Codeberg so widely used, nowadays — commonplace parlance probably equates PR more often with "pull request" than with "problem report".

So, I'm pushing the boundaries at times such as this (issues, pull requests and bug reports closely and rapidly intertwined).

CC @Codeberg

#Codeberg #GitHub #PR #FreeBSD #ZFS #OpenZFS

Graham Perrin
4 days ago

@emaste FYI


Will FreeBSD BR 275308, for the errata notice, broaden?

To have a single EN for both:

a) what's already merged to FreeBSD src

b) openzfs/zfs PR 15602 for 2.2.2.


No need to reply here. Food for thought. Thanks.

#FreeBSD #ZFS #OpenZFS

Harry Sintonen
4 days ago

Apparently #ZFS had a data corruption bug for quite some time. The frequency of corruption can be reduced (but not fully eliminated) by issuing:

echo 0 | sudo tee /sys/module/zfs/parameters/zfs_dmu_offset_next_sync

The issue triggers by far the most easily with the ZFS 2.2 series, but earlier ZFS versions are apparently also affected.

4 days ago

This zfs raidz shit is a rabbit hole and a half.. huh! This is going to be like trying to learn ipv6 cause all the cool kids are doing it. ahh fuck me!

#zfs #homelab #storage

Justine Smithies
5 days ago

Cracked it at last! I now have #VoidLinux running #ZFS on my test laptop. Now to make an installer script to make it even easier to install, and to test it out before deciding if I'm going to install it onto my ThinkPad P14s AMD Ryzen 7 Pro.

Michal 🇨🇿
5 days ago

Data-destroying defect found in OpenZFS 2.2.0

Check if file block cloning is on and disable it – or upgrade to 2.2.1 ASAP

#Freebsd #freebsd140release #zfs

6 days ago

about the #freebsd #zfs data corruption issue, what are the recommended actions for now? I see setting vfs.zfs.dmu_offset_next_sync=0 - is there anything else that can help? if I'm on 13.2, is it a good idea to upgrade to 14.0 over this?
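For reference, the sysctl mentioned can be applied at runtime and persisted across reboots on FreeBSD; a minimal sketch (run as root):

```shell
# Apply the mitigation immediately (runtime tunable):
sysctl vfs.zfs.dmu_offset_next_sync=0

# Persist it across reboots:
echo 'vfs.zfs.dmu_offset_next_sync=0' >> /etc/sysctl.conf
```

This reduces the window for the bug but, per the discussion elsewhere in this thread, does not fully eliminate it.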

Justine Smithies
6 days ago

Been playing with the hrmpf rescue system, built on #VoidLinux, on my test laptop to try and get Void on #ZFS working, but so far no joy. I've tried various install scripts and failed, but it's not going to beat me as I will crack it yet.

Dan McDonald
6 days ago

Seeking advice: a good matched/mirrorable pair of small(ish) M.2 #nvme drives to serve as slog for a 14TB #ZFS pool?

Being plugged into PCIe 2.0 slots FWIW.

Graham Perrin
6 days ago

@0x0177b11f hello (and thank you).

A run such as this completes in less than one second:

zfs-issue-15526-check-file --path /usr/home

Please, am I doing something wrong?



#ZFS #zfs15526 #zfsissue15526 #FreeBSD

What's the best practical end-user guide for #zfs? ZFS Mastery by @mwl ?

I want to learn how to use it properly - where it shines and what the real-life use cases are.

Stefano Marinelli
1 week ago

Even ZFS seems to be playing quite a significant trick on us, and it appears that all of this has been going on for over a decade. Ensuring long-term data stability is always a complex process, to be studied and implemented with care and caution.

#ZFS #DataStability #LongTermStorage #DataManagement

Seems the #freebsd jail guide has been updated with thin jails. They propose creating a #zfs snapshot of a template and cloning it for containers. This makes creating new jails quite a bit faster, but is there any benefit from running and managing a jail that is a zfs dataset?

Asking for a friend, "Michal the zfs noob".
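The thin-jail workflow boils down to two commands; a sketch with hypothetical dataset names (the handbook's layout may differ):

```shell
# Snapshot a prepared base template once:
zfs snapshot zroot/jails/templates/14.0-RELEASE@base

# Each new thin jail is a near-instant clone that initially
# consumes almost no space (blocks are shared with the template):
zfs clone zroot/jails/templates/14.0-RELEASE@base zroot/jails/containers/webjail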

Mike Gerdts
1 week ago

@vermaden it seems this should be a simple follow-on of the "bp rewrite" project.

#zfs #snark #smop

1 week ago

Proton Drive releases an app for macOS, but only for the most recent versions, and there are no plans for a #Linux version, much less for *BSD. Well, when a Linux version exists, maybe I'll consider no longer building my storage with #RaspberryPi and #openvpn . I'm looking into doing it with #FreeBSD and #ZFS.

1 week ago

If you are using #ZFS: a long-standing, roughly 18-month bug has been discovered; a workaround is provided to avoid data corruption.

Graham Perrin
1 week ago

@hadret given the length of time that e.g. 2.1.4 has been in the main branch of FreeBSD src, with many users of FreeBSD-CURRENT (from main): I'm not sure that 'on fire' is descriptive.

Now pinned, for the #FreeBSD community:


– linked from <>.

@emaste FYI

#ZFS #OpenZFS #FreeBSD

Graham Perrin
1 week ago

@hadret thanks! Fast-moving discussions in Reddit and GitHub.

> … For the last 18 months or so …

From the most recent comment (not authoritative):

> … If this is right, then the short explainer is that the "is dnode dirty?" check has been wrong for years (at least since 2013, maybe back to old ZFS; I'll need to do more research). …

My non-expert thought, based on that comment: whilst vfs.zfs.dmu_offset_next_sync=0 does seem prudent, if the bug has existed for a decade or more then (I guess) it's obscure enough for a majority of users to be not immediately alarmed.

That's not to downplay the potential impact, just to begin putting things in perspective.


Thanks again. I'll flag this for /u/perciva in /r/freebsd under <>.

#Linux #FreeBSD #ZFS #OpenZFS

Filip Chabik 👻
1 week ago

Seems like #ZFS, the last reliable filesystem, has fallen. For the last 18 months or so, silent data corruption has been present: #Linux folks should probably follow instructions from that Reddit thread. #FreeBSD folks may want to set vfs.zfs.dmu_offset_next_sync=0 until a proper fix lands…

@jaxu @Anachron hell, I’ve never had a drive fail on me, but #zfs still saved my bacon with a restore from a recent snapshot when I deleted (the otherwise only copy of) a bunch of work with an errant command

OpenZFS 2.2.1 Released to Fix a File Corruption Bug. Update Now.

#linux #bsd #opensource #zfs

amy bones
1 week ago

Oops. I forgot to finish the fnvlist fix for #zfs. I ran into some kind of segfault in some circumstances that were a pain to troubleshoot in a VM and then 2023 continued to 2023 at me and I got sidetracked.

On the other hand, I'm working more on the rust interface again. I think to make it really robust I'm going to need to make other patches to upstream because I'm currently having to copy the definitions of a ton of enums and tables since they all come from internal headers and bindgen barfs on some stuff.

Stéphane Graber
1 week ago

Updated my #ZFS package repository for the 2.2.1 bugfix release, you're going to want to update immediately to avoid potential data corruption!

Gert van Dijk
1 week ago

"OpenZFS [filesystem] could allow unintended access to network services." 🤨

I just frowned, but it's in the NFS feature... 😅 #zfs #security

1 week ago

OpenZFS 2.2.1 Released Due To A Block Cloning Bug Causing Data Corruption


#zfs #openzfs

Justine Smithies
1 week ago

Tempted to explore #ZFS on #VoidLinux after spinning up a #FreeBSD test laptop. The setup for an encrypted system seems straightforward, and boot-up should be a tad quicker when unlocking compared to grub being slow. Also, snapshots are a thing, as is being able to recover accidentally deleted files. Any other Void users have experience with ZFS?

Stefano Marinelli
1 week ago

I'll never stop emphasizing how underrated mfsBSD is and how convenient it can be.
This morning, I had to set up a FreeBSD server on an OVH machine. The remote console wasn't cooperating when trying to attach an ISO, and it was performing poorly (I'll open a ticket about it when I find the time). Thanks to mfsBSD, I swiftly installed FreeBSD 13.2, but when I attempted to upgrade to 14.0 and update ZFS, it wouldn't boot anymore.
In just five minutes, I prepared an mfsBSD image with FreeBSD 14.0 (not yet available on the official site) and got everything up and running.
Such a handy tool that has saved me from unpleasant situations countless times.

#mfsBSD #FreeBSD #ZFS #ServerSetup #SysAdmin

2 weeks ago

OpenZFS 2.2.1 is out now with support for #Linux kernel 6.6

#ZFS #OpenSource

Felix Palmen 📯
2 weeks ago

So #FreeBSD 14 is finally announced 🥳 – I already made up my mind not to jump on it immediately, because I couldn't see any "killer feature" for me while 13.2 is working just fine. Upgrading to 13.0-RELEASE back then, I ran into several surprising issues. I could find workarounds for all of them, but it was still a bit annoying...

But now, looking at the official announcement, this bullet point caught my attention:

"ZFS has been upgraded to OpenZFS release 2.2, providing significant performance improvements."

Performance of my #ZFS pool degrades badly under heavy I/O-load (a parallel poudriere build with lots of smaller ports and lots of ccache hits). The pool is backed by 4 spinning disks in a #raidz configuration.

Could I expect 14.0 to improve performance in that specific scenario? 🤔

Blabla Linux 🇧🇪♻️💻🐧🇫🇷
2 weeks ago

#Proxmox VE - a VM's #RAW format disk coming from #ZFS storage gets sent to "Directory"-type storage ❗
Damn 😲
You lose the "Snapshots" feature (#snapshot) 😡
No need to panic 😉
You then convert the RAW format to #QCOW2 by performing a disk move 👊

Janne Moren
2 weeks ago

#ZFS question (and I'm deliberately keeping it a little vague):

If someone gives you a few disks with a zfs snapshot on it, and you want the data but don't have a zfs filesystem;

1. Do you need an actual zfs filesystem (my guess is yes)?

2. Do you need enough free space to copy the entire snapshot?

3. Do you need to mount all the drives at once?
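One hedged answer to question 1: if the disks carry an actual ZFS pool (rather than a raw `zfs send` stream dumped to files), any machine with ZFS support can import it read-only and copy the data off with ordinary tools, with no second pool and no extra space beyond the destination. Pool and path names below are made up:

```shell
# Attach the given disks, then import their pool without writing to it:
zpool import -o readonly=on tank

# Copy what you need with any normal tool; the destination
# does not have to be ZFS:
rsync -a /tank/dataset/ /destination/

# Detach cleanly when done:
zpool export tank
```

Whether all drives must be attached at once depends on the pool layout: a striped or raidz pool generally needs every (or enough) member present to import.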

Peter Mount
3 weeks ago

Had a power cut about 1.5 hours ago and it's taken me until now to get the NAS back up 😢

ZFS had a real issue this time, causing TrueNAS trouble restarting.

Turned out the main pool wouldn't come back online until I manually cleared errors; then it reappeared.

I'm now having to do a scrub on a 24-disk array to see if that will fix the remaining issues

#zfs #homeLab #powerFailure

Stefano Marinelli
3 weeks ago

OpenZFS Lands Exciting RAIDZ Expansion Feature

#zfs #FreeBSD #Linux

3 weeks ago

Entity 1: Hey #hetzner StorageBox, would you please store these 500 GiB of data?

Entity 2: Sure, data stored. But by my reckoning it's only 260 GiB.

Me: I like filesystems that compress data as they store it. 😃 #zfs #btrfs

Big news on the #ZFS front. RAIDZ expansion has been merged.

"This feature will be available in the OpenZFS 2.3 release, which is probably about a year out."

Dan McDonald
3 weeks ago

I've an oddly specific HW question. Please boost if you've HW-savvy followers?

Does anyone who has an ASRock Rack X470U motherboard use both M.2 slots with NVMe? If so, what drives can/should I use? I'm contemplating filling both with a matched-size pair for the ZFS log devices. Mostly it's about decreasing latency vs. the slice-of-SATA-SSD I'm using today.

For example, how about a pair of these for slog (in the X470U mobo's two slots)?

#zfs #illumos #amd #nvme

Stefano Marinelli
4 weeks ago

OpenZFS Lands Sync Parallelism To Drive Big Gains For Write Performance Scalability

#zfs #linux #freebsd

David Cantrell 🏏
4 weeks ago

I need a #book #recommendation. I want to muck about with #ZFS, on all kinds of platforms including #MacOS, #IllumOS, #FreeBSD and #Linux. Is the "FreeBSD Mastery: ZFS" book (and its more advanced follow-up) *very* FreeBSD-specific or is it pretty much generic aside from things like booting off zfs? Does it do a good job of explaining the underlying structures and philosophy? Does it make a good why-to as well as a how-to book?

PSA: if you are on #NixOS unstable and using syncoid to replicate your #zfs datasets and have updated recently (ie you are on zfs 2.2.0) your jobs might be failing!

The workaround to fix this is to put the following in your configuration:

services.syncoid.service.serviceConfig.PrivateUsers = lib.mkForce false;

@kurth well, #ZFS exists so nope...

1 month ago

The #bcachefs filesystem has quietly slipped into #Linux 6.7. I'm a bit hyped because I've been following its development for years and there are a lot of good ideas in it. I hope it won't become a second #btrfs, but rather a better #zfs. Right now I have the urge to set up a machine to play with 😇

Stefano Marinelli
1 month ago

Halloween for me starts with an "rm -Rf" in the wrong directory. A moment of panic.
Then, in a flash, I remember that the FreeBSD jail in question is on ZFS, and I have zfs-autobackup running every 15 minutes on the entire machine. Unable to roll back the entire jail (there were other data in motion), a "zfs clone" of the snapshot and an rsync from the snapshot to the running jail did the trick. I was back on track in 30 seconds.

Love ZFS, love FreeBSD. 🎃👻

#Halloween #ZFS #FreeBSD #DataRecovery #TechRescue
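The recovery described above (clone the snapshot, rsync just the deleted tree back into the live jail) might look roughly like this, with invented dataset and snapshot names:

```shell
# Make the pre-deletion snapshot browsable as a writable clone:
zfs clone tank/jails/myjail@auto-2023-10-31-1245 tank/restore

# Copy only the deleted tree back into the running jail,
# leaving the rest of the in-motion data untouched:
rsync -a /tank/restore/path/to/deleted/ /tank/jails/myjail/path/to/deleted/

# Clean up the clone afterwards:
zfs destroy tank/restore
```

A clone is instant and space-free until modified, which is why this beats a full rollback when other data has changed since the snapshot.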

1 month ago

Does anyone have a handy script for zfs backups (zfs snapshot|zfs send|zfs receive)?
I've now put a plan into action to move my workstation, running NixOS, onto a Linux #zfs pool, and I'd like to regularly push a few filesystems (home etc.) incrementally, without much fiddling, over to the #truenas FreeBSD zfs pool for backup.
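The snapshot|send|receive pipeline asked about reduces to a few commands; a minimal sketch with placeholder pool, dataset, and host names (tools like syncoid/sanoid or zfs-autobackup automate exactly this, plus snapshot pruning):

```shell
# Initial full replication of a dataset to the backup box:
zfs snapshot rpool/home@backup-1
zfs send rpool/home@backup-1 | ssh truenas zfs receive -u backup/home

# Later runs only ship the delta between two snapshots:
zfs snapshot rpool/home@backup-2
zfs send -i rpool/home@backup-1 rpool/home@backup-2 | ssh truenas zfs receive -u backup/home
```

`-u` keeps the received dataset unmounted on the target; the common snapshot (`backup-1`) must remain on both sides for the next incremental to work.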

amy bones
1 month ago

My testing in rust indicates that my quick fix to #ZFS resolved the ioctl issue. I'm running the full test suite now, but it's already exercised both new and legacy ioctl code paths, so I'm reasonably confident that nothing is broken.

I guess it's time to rebase the change onto the latest from upstream and make a PR while I'm waiting for the test suite to complete.

Stefano Marinelli
1 month ago

One of today's meetings started with a client asking me to implement Linux and Docker, without knowing why, just because they heard "that's the way it's done today." I responded with, "Why not FreeBSD and jails?" They had never heard of it. I spent about 20 minutes showing them BastilleBSD, ZFS, and some use cases.

Needless to say, tomorrow I'll begin implementing the first server, and if all goes well, it will be just the first of many. :freebsd:

#FreeBSD #Docker #BastilleBSD #ZFS #ServerImplementation

amy bones
1 month ago

Wow yeah, making this #ZFS fix really is going to be easy. Aside from the part where I need to set up a VM with my custom build in order to test it because, uh, yeah, not gonna test it on my daily driver laptop lmao

amy bones
1 month ago

Okay so I've been writing a pure[1] Rust #ZFS interface library after conclusively determining that the existing ones were written by people who have almost no experience writing userspace code, which, I guess sorta makes sense. (Edit: HI ZFS DEVS I LOVE YOUR WORK EVEN IF I'M BEING CRITICAL ABOUT THE STATE OF THE USERSPACE LIBRARIES!)

Anyway, I'd been debugging this problem, which was a huge blocker for doing anything interesting, where some commands I sent would crash the receiving kernel thread. Note, I don't mean returned an error code, I mean I was seeing stack traces in dmesg along with warnings about stuck kernel tasks.

After two full days of debugging I finally figured out what was going on, and it's bananas.

One of the core structures used everywhere in ZFS is this thing called an nvlist. It's basically just a dictionary but with a couple of unique properties that make it weird to work with. One of the features it has is a native serialization format. This is how you exchange data with the kernel, mostly anyway: you create nvlists, serialize them, and send them to the kernel with an ioctl. Another is that it has a couple of flags that control its behavior, like whether or not it can behave as a multi-map (which is also its default behavior). These flags are propagated in the serialized form.

When the kernel receives an nvlist it has some validation functions that iterate over the key value pairs in the list to make sure it has the necessary keys with the right types in it for the command you sent.

Now, the flag that makes it behave like a regular dictionary is called NVLIST_UNIQUE_NAMES. When you set this, the nvlist will replace any value with a new value if you add the same key twice. Of course, like a normal dictionary, there are functions for doing lookups that are named things like nvlist_lookup_int32, one for each supported data type.

Now, I had been constructing my nvlists without the unique names flag, because it didn't seem overly significant. I wasn't adding multiple keys so why would it matter? Well, it turns out that without that flag, nvlist lookup functions return an error code.

And the kernel asserts that the error code from these lookup functions is zero. Why didn't it error in the validation functions? Well, because it wasn't using the lookup interface, it was using the iteration interface which doesn't return errors because when you iterate it treats the dictionary like an array.

This is not documented anywhere at all, I had to read the source of the nvlist code really, really carefully to discover this.

Fucking. God. Damnit. Anyway, my Rust interface can now make snapshots and ostensibly do anything else the kernel supports. I'm now just in the process of writing a nice ergonomic interface instead of fighting with mysterious crashes. There's some legacy commands that don't use the nvlist interface, but I think those shouldn't be a problem because I've got the full command structure implemented, I'm just not using every field in it yet.

[1] the only thing I'm linking with is the nvlist library because they have a really specialized structure and serialization format that I don't want to re-implement in Rust right now.

Andreas Braukmann
1 month ago

Orr. Shit happens. 2 disk zfs mirror. One device died. The other device had a data error in /lib/

(And yes, this unattended running host never executed the regular scrub job, because I never ran the initial scrub. yuk.)
#FreeBSD #zfs #pebcak

Graham Perrin
1 month ago

@alexr @fluxwatcher @dvl I was only vaguely aware of it.

From <>:

"A periodic script for zfs scrub has been added. For more details, see the periodic.conf(5) manual page."


#freebsd #zfs

Stefano Marinelli
1 month ago

FreeBSD 14.0-RC2 Pulls In OpenZFS 2.2, OpenSSH 9.5p1


Æva Winterschön
1 month ago

4:32am feels like a good time to relax and watch this system rebuild kernel 6.5, zfs 2.2, and vim9... oh it's already complete. :ablobfoxbongo:

#linux #gentoo #vim #zfs #ibm #ppc

POWER9 system rebuilding the Linux kernel, zfs, and vim.
Garrett Wollman
2 months ago

One thing that #ZFS is manifestly very not good at is deleting huge directory trees. (Tens or hundreds of millions of files.) One user has hundreds if not thousands of directories with 150,000 9KiB files each. If it were in its own filesystem, as I try to nudge users toward, I could just `zfs destroy` and that would be it. Instead I have to wait for `rm -rf` to complete because it's one big cesspit.
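The nudge toward one-filesystem-per-tree works because destroying a dataset is a bulk metadata operation, not a per-file walk like `rm -rf`. A sketch with hypothetical names:

```shell
# One dataset per user/tree makes cleanup cheap:
zfs create tank/scratch/bigjob
# ... millions of small files accumulate under /tank/scratch/bigjob ...

# The entire tree goes away without iterating over the files:
zfs destroy tank/scratch/bigjob
```

The trade-off is administrative: quotas, mountpoints, and share config multiply with the dataset count.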

M. Hamzah Khan
2 months ago

Question for the #ZFS people here. I have an ancient #TrueNAS SCALE machine which I am renting from my hosting provider. It has a Xeon W3520 (4C/8T @ 2.93GHz), 8GB RAM, 4x3TB spinning rust, in a single pool with two mirrored vdevs.

I'm using this machine as iSCSI backend for my virtualization hosts. It's slow AF. I get extremely poor IOPS. I'm pretty sure it's the disks that are the bottleneck. ... (continued)

#devops #sysadmin

2 months ago

@itsfoss what I've never understood (ignorance warning...) about #zfs is the huge RAM requirements, like a 4GB recommended minimum (I'm sure I read that somewhere). What if you need your RAM for other things? It's cheating to say it's fast if it's just using RAM!

Evert Pot
2 months ago

#btrfs is neat! Added an extra disk to my pool and rebalanced for RAID1 in 2-3 commands. Even supports raid1 on different sized disks.

I _almost_ went for #ZFS but got warnings and errors on my #raspberrypi. Had no idea btrfs supports these features.

Would really like to see stable RAID5 though; with RAID1 you lose a whole disk's worth of capacity.

#linux #nas

Screenshot of the output of sudo btrfs, showing 4 2TB disks in a RAID1 setup
2 months ago

Looks like its time for a new #introduction :blobfoxwave:​

I am an engineer living near San Francisco with my partner #migrating to this instance from As a gay/queer furry (adjacent?) engineer, I am excited to be joining this community!

My projects and interests cover all sorts of things, from tech (#fpga #eink #cpp #python #arduino #selfhosting #zfs #VintageComputing) to #travel (and #transit) and creative outlets (#photography #DigitalArt #music #gardening #movies #cooking).

(NB: I'm just kicking off the migration process. If you see follow requests from this account over the next few days, it's likely I was following you from

I am aware my sites are down, which are and This in turn affects my kickass #ZFS administration guide and my kickass #password generator.

It's hosted in an ATX case with a single PSU in a datacenter in SLC. I cannot get to it this week due to our annual SOC-2 audit. I should be able to get to it next week however.

Sorry for the inconvenience.

Stefano Marinelli
3 months ago

Client (a bit clumsy but positive and honest) calls: "Help! Just came back from lunch break and accidentally deleted all files on the file server!"
Me, unfazed: "Alright, besides you, who else worked on the file server during lunch break?"
Client: "No one, we were all away and I'm the first one back. Others will be back by 15:00"
Me, looking at the clock and noticing it's 14:30: "Okay, what time did you go to lunch?"
Client: "At 13:30. How long will it take to restore from the backup? Do you think we'll be able to work tomorrow?"
Me, without flinching as I type "zfs rollback *dataset-13:45-snapshot": "Done"
Client: "All the files reappeared!"
Me: "Thank #ZFS, #FreeBSD, and whoever set up automatic snapshots every 15 minutes."

#ITSupport #DataRecovery #BackupHeroes #SysAdmin #Snapshots #BSD

Rob Thomas
3 months ago

This was awesome, and I knew I was now on the downhill stretch. I imported the volume, and everything instantly sprung back to life! #zfs is #bestfs

Well. That's what I THOUGHT. As part of my EARLIER debugging, I had messed with a lot of networking things, because I assUmeD that it was a networking issue, again. It wasn't, obviously.

This meant that I had manually fiddled with a pile of things that slightly broke OTHER things, and they all needed to be put back to what they should have been. For the nerds - I had disabled jumbo frames partially, in a bunch of places, trying to see if there was a MTU issue somewhere in the path.

So there was then MORE time fiddling with all that, and putting everything back.


OK, according to `top -mio`, smbd (the Samba daemon) is using most of the I/O it can get. I wonder if there's something up with my #ZFS pool. If I remember correctly, when I optimised the I/O #performance, the block size of the zpool was supposed to be based on the physical sector size of the hard drive...

It's obviously not the right time to performance-test the disks, so it looks like I'll just have to wait until the backup is fully restored. 💤
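Checking whether the pool's block alignment matches the drive's physical sector is quick; the pool name below is a placeholder:

```shell
# ashift is the pool's sector-size exponent: 9 = 512B, 12 = 4KiB.
# (0 means it was auto-detected at creation time.)
zpool get ashift tank

# Compare against what the disk itself reports (Linux):
cat /sys/block/sda/queue/physical_block_size
```

A 4KiB-sector drive in a pool created with ashift=9 causes read-modify-write amplification, which would show up exactly as poor I/O under load.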

@chromakode #ZFS is just awesome!

And this is why everyone should use it!

Max Goodhart
3 months ago

Yesterday morning, I pulled open my laptop to send a quick email. It had a frozen black screen, so I rebooted it, and… oh crap.

My 2-year-old SSD had unceremoniously died.

This was a gut punch, but I had an ace in the hole. I'm typing this from my restored system on a brand new drive.

In total, I lost about 10 minutes of data. Here's how. (Spoilers: #zfs #zrepl)

A laptop with a blue BIOS screen reading:

Default Boot Device Missing or Boot Failed

The photographer is making a silly surprised face in the screen reflection.
4 months ago

Hey #ZFS fans, you might want to subscribe to

For a Mastodon #bot that updates the latest topics on the Discourse server.

Question for all the #sysadmin peeps out there:

What do you use to track historical failures of hardware, such as drives?

We recently had several drives all fail simultaneously in a #ZFS pool that I can only figure out must be a bad backplane or some electronics getting tripped up due to heat.

We've had enough drive failures in this chassis that having historical notes would be nice. A spreadsheet could work, but curious what else is out there.


#lazyweb #Linux

@tubetime it's pre-#Y2K compliant and AFAIK #SunOS that low doesn't even support #ZFS...

Stefano Marinelli
4 months ago

Good morning, friends of the #BSDcafe and #fediverse
I'd like to share some details on the infrastructure of with you all.

Currently, it's quite simple (we're not many and the load isn't high), but I've structured it to be scalable. It's based on #FreeBSD, connected in both ipv4 and ipv6, and split into jails:

* A dedicated jail with nginx acting as a reverse proxy - managing certificates and directing traffic
* A jail with a small #opensmtpd server - handling email dispatch - didn't want to rely on external services
* A jail with #redis - the heart of the communication between #Mastodon services - the nervous system of BSDcafe
* A jail with #postgresql - the database, the memory of BSDcafe
* A jail for media storage. The 'multimedia memory' of BSDcafe. This jail is on an external server with rotating disks, behind #cloudflare. Aim is georeplicated caching of multimedia data to reduce bandwidth usage.
* A jail with Mastodon itself - #sidekiq, #puma, #streaming. Here is where all processing and connection management takes place.

All communicate through a private LAN (in bridge) and is set up for VPN connection to external machines - in case I want to move some services, replicate or add them. The VPN connection can occur via #zerotier or #wireguard, and I've also set up a bridge between machines through a #vxlan interface over #wireguard.

Backups are constantly done via #zfs snapshots and external replication on two different machines, in two different datacenters (and different from the production VPS datacenter).

#sysadmin #tech #servers #ITinfrastructure #BSD

Sam Weston
4 months ago

I'm still waiting on the SATA card for my home server (re-)build so I can't connect all my disks. However, because I felt like it, I've connected the two SSDs and installed #NixOS with #ZFS on root.

Arguably this is complete overkill but I've wanted to do it for ages and I had two matching 64GB SSDs hanging around. I followed this excellent guide that also conveniently uses flakes, which I've wanted to learn pretty much since I picked up NixOS a few months ago:

David Cantrell 🏏
4 months ago

I want to buy a pre-built and tested PC for running #TrueNAS (specifically #TrueNAScore) with sufficient CPU and memory to do several TB of #ZFS well, with an internal SSD for the OS, an internal SSD or similar for cache, at least 10 front-facing hot-swappable SATA bays, a whatever-the-hell-it-is port so I can add an external box with more drives in the future, from a UK seller. Can any of you recommend anyone?

My old #debian desktop computer is still going strong after ~14 years; needed a new SSD and I put five HDDs in to make it a file server with #ZFS.

Some time ago, one memory bank died, bringing me down to 2GB RAM, which OOMs #gitannex on large repo clones.
Decided to upgrade the RAM to 16GB and... The core i5 750 takes DDR3-1333 at max. That RAM is so old, it's _more_ expensive than the quicker RAM like DDR3-1600.

Here's to the next ten years of my trusty little server.

Antonio J. Delgado
5 months ago

@Mossop @gabrielesvelto If your pool consists of full disks, it might be tricky to test the disks' speed outside of #ZFS, but you can do it with the dd command. If you use partitions of the disks and a partition is available to write data to, then you can test on that partition with `dd if=/dev/zero of=/dev/partition_to_test bs=10M status=progress`

Oops I never made an #introduction post.

Hi my name is Rynn! I'm a 29-year-old #furry #trans woman located in the eastern United States. I'm also #polyamorous, #asexual, and working with a neuro-psychologist for undiagnosed #autism.

I left my corporate IT job at the end of May 2023 and somehow landed a client that needed server help. Now I run my own IT Consulting LLC!

I've been doing IT for a decade now and regularly post about #linux #zfs #informationsecurity #cybersecurity and other tech topics. I have also been cooking and drawing since I was a pre-teen!

My Twitch Channel just made Affiliate! Right now my schedule consists of #minecraft, NES Roulette, and #pokemon Crystal PLUS. You can find me at - live Fri, Sat, Sun at 8PM EDT.

Lastly I have a lot of creative hobbies! I'm a digital artist who sketches but rarely finishes pieces, I make #perler pokemon sprites, and I've been the primary home cook in my house since I was a pre-teen!

5 months ago

It has been over a week since the #redditMigration began - lots of people still cutting ties with #reddit.

We return this week with another big update - including migrating communities with discussions surrounding the #zfs filesystem, #grapheneos, the #raspberrypi pico, #childfree, #adobe #illustrator, #girls #gaming, the #blind community and many more.

Please share if this helped you migrate to communities on #lemmy #kbin or elsewhere on the web!

#ProxMox8 is out for all you #HomeLab'ers!
#Proxmox VE 8.0 released based on the great #Debian 12 "Bookworm"
Major Additions:
#Debian12, but using a newer #Linux kernel 6.2
#QEMU 8.0.2
#LXC 5.0.2
#ZFS 2.1.12
#Ceph Quincy

Jim Salter
5 months ago

It's official: in accordance with overwhelming community consensus and support, is now your one stop shop for discussion of and memes about, Zalando Fulfillment Solutions, and zinc formaldehyde sulfate.

Don't like it? Blame CEO Steve Huffman.

#redditblackout #zfs