#ZFS
![screenshot of `zfs list` output, with an rpool/docker/... mountpoint for each image layer](https://s3.eu-central-2.wasabisys.com/mastodonworld/cache/media_attachments/files/111/511/308/112/436/077/small/0cc5d58ec6e22f2c.png)

$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 849G 933G 96K /
rpool/docker 1.13G 933G 9.00M /var/lib/docker
rpool/docker/05f94823c9182f25ef468a267d9e798e2cf885e0f5746b876b399c5f526dbb2f 37.8M 933G 41.4M legacy
rpool/docker/08d8508375e71ed8ffb81d2ccbe62aa0f4bfd4c76f95a9b30174185c883f6c00 984K 933G 171M legacy
rpool/docker/24566ee72a131cc685f46eb855a476fdad79154f3f78c7084351f2164384cadf 172K 933G 171M legacy
[..]
rpool/nixos 848G 933G 192K none
rpool/nixos/home 648G 933G 609G /home
rpool/nixos/root 169G 933G 168G /
rpool/nixos/var 31.1G 933G 192K /var
rpool/nixos/var/lib 30.5G 933G 18.1G /var/lib
rpool/nixos/var/log 615M 933G 117M /var/log
@cslinuxboy This leaves you with the choice between #ZFS and #btrfs
Weird behaviour with smb.conf create mask #permissions #2204 #samba #zfs #samba4
Updated my #ZFS package repository for ZFS 2.2.2, update to avoid potential data loss!
https://github.com/zabbly/zfs
#ZFS 2.2.2 and ZFS 2.1.14 have been released with the fix to the data corruption bug.
- https://github.com/openzfs/zfs/releases/tag/zfs-2.2.2
- https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14
OpenZFS 2.2.2 is out now to fix a data corruption bug and other issues. Get it at https://github.com/openzfs/zfs/releases/tag/zfs-2.2.2
#ProxmoxBackupServer 3.1 has been released (#Proxmox / #ProxmoxBS / #BackupServer / #ZFS / #OpenZFS / #Linux / #Debian / #Bookworm / #DebianBookworm) https://proxmox.com/
And thus begins the prep for moving my #NAS from EXT4 to #zfs. I still have a lot of work to do. I picked up 4 more 18TB drives, for a total of 6, to back up my system. Those drives are now being stress tested with badblocks. I have also documented the file structure and am making sure I back up everything. I'm going to leave about 1,000GB free on each drive, which means I need to manually map out what gets copied to which drive. Hoping to start the migration after Christmas but before New Year's Eve, but obviously I can start the backups before that. Thinking maybe around Dec 15. The stress test of these 4 new drives in parallel is going to take about 2 weeks (4 passes with different patterns + a verify).

I’m linking to this comment as I think it’s a good final note on the subject: https://github.com/openzfs/zfs/issues/15526#issuecomment-1826287826 The tl;dr version of it is that if you are not running any build system (or #Gentoo #Linux) on top of #ZFS, it’s super unlikely (but not impossible) to hit this bug and end up with inconsistent data. Uff 😮💨
@emaste incidentally:
I know that BR for bug report defies FreeBSD tradition, however — with forges such as GitHub and Codeberg so widely used, nowadays — commonplace parlance probably equates PR more often with "pull request" than with "problem report".
So, I'm pushing the boundaries at times such as this (issues, pull requests and bug reports closely and rapidly intertwined).
CC @Codeberg
@emaste FYI
<https://github.com/openzfs/zfs/pull/15602>
Will FreeBSD BR 275308, for the errata notice, broaden?
To have a single EN for both:
a) what's already merged to FreeBSD src
b) openzfs/zfs PR 15602 for 2.2.2.
<https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275308>
No need to reply here. Food for thought. Thanks.
Apparently #ZFS had a data corruption bug for quite some time. The frequency of corruption can be reduced (but not fully eliminated) by issuing:
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
The issue is by far the easiest to trigger with the ZFS 2.2 series, but earlier ZFS versions are apparently also affected.
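To make that setting stick across reboots on Linux, a minimal sketch (assuming the standard /etc/modprobe.d mechanism; depending on the distro you may also need to regenerate the initramfs afterwards):

```sh
# Sketch only: persist the workaround as a ZFS module option on Linux.
echo "options zfs zfs_dmu_offset_next_sync=0" | sudo tee -a /etc/modprobe.d/zfs.conf
```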
#OpenZFS 2.2.1 has been released (#ZFSonLinux / #ZFS / #ZettabyteFileSystem / #Linux / #Kernel / #FreeBSD / #BSD) https://openzfs.org/
#ProxmoxVE 8.1 has been released (#Proxmox / #VirtualEnvironment / #Virtualization / #VirtualMachine / #VM / #Linux / #Debian / #Bookworm / #DebianBookworm / #QEMU / #LXC / #KVM / #ZFS / #OpenZFS / #Ceph) https://proxmox.com/
Cracked it at last! I now have #VoidLinux running #ZFS on my test laptop. Now to write an installer script to make it even easier to install, and to test it out before deciding whether I'm going to install it onto my ThinkPad P14s AMD Ryzen 7 Pro.
Data-destroying defect found in OpenZFS 2.2.0
Check if file block cloning is on and disable it – or upgrade to 2.2.1 ASAP
Been playing with the hrmpf rescue system, built on #VoidLinux, on my test laptop to try and get Void on #ZFS working, but so far no joy. I've tried various install scripts and failed, but it's not going to beat me; I will crack it yet.
@0x0177b11f hello (and thank you).
A run such as this completes in less than one second:
zfs-issue-15526-check-file --path /usr/home
Please, am I doing something wrong?
<http://paste.purplehat.org/view/d61d1a2b>
...so even ZFS seems to be playing quite a significant trick on us, and it appears that all of this has been going on for over a decade. Ensuring long-term data stability is always a complex process, to be studied and implemented with care and caution.
https://old.reddit.com/r/freebsd/comments/182pgki/freebsd_sysctl_vfszfsdmu_offset_next_sync_and/
Seems the #freebsd jail guide has been updated with thin jails. They propose creating a #zfs snapshot of a template and cloning it for containers. This makes creating new jails quite a bit faster, but is there any benefit to running and managing a jail that is a zfs dataset?
Asking for a friend, „Michal the zfs noob”.
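The snapshot-and-clone step described there is roughly this; a sketch with hypothetical dataset names, following the thin-jail idea:

```sh
# Hypothetical dataset names: one template, cloned per jail.
zfs snapshot zroot/jails/templates/14.0-RELEASE@base
zfs clone zroot/jails/templates/14.0-RELEASE@base zroot/jails/containers/webjail
```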
Cannot enter correct ZFS password #keyboard #password #encryption #zfs
Proton Drive has released an app for Mac OS, but only for the most recent versions, and there are no plans for a #Linux version, let alone *BSD. Well, when a Linux version does exist, maybe I'll consider not building my storage with #RaspberryPi and #openvpn. I'm looking into doing it with #FreeBSD and #ZFS.
If you are using #ZFS: a long-standing bug, around 18 months old, has been discovered, and a workaround is provided to avoid data corruption. https://www.reddit.com/r/zfs/s/WxY9EMqvaM
@hadret given the length of time that e.g. 2.1.4 has been in the main branch of FreeBSD src, with many users of FreeBSD-CURRENT (from main): I'm not sure that 'on fire' is descriptive.
Now pinned, for the #FreeBSD community:
<https://old.reddit.com/r/freebsd/comments/182pgki/freebsd_sysctl_vfszfsdmu_offset_next_sync_and/>
– linked from <https://forums.freebsd.org/threads/freebsd-sysctl-vfs-zfs-dmu_offset_next_sync-and-openzfs-zfs-issue-15526.91136/>.
@emaste FYI
@hadret thanks! Fast-moving discussions in Reddit and GitHub.
> … For the last 18 months or so …
From the most recent comment (not authoritative):
> … If this is right, then the short explainer is that the "is dnode dirty?" check has been wrong for years (at least since 2013, maybe back to old ZFS; I'll need to do more research). …
My non-expert thought, based on that comment: whilst vfs.zfs.dmu_offset_next_sync=0 does seem prudent, if the bug has existed for a decade or more then (I guess) it's obscure enough for a majority of users to be not immediately alarmed.
That's not to downplay the potential impact, just to begin putting things in perspective.
<https://github.com/openzfs/zfs/issues/15526#issuecomment-1825181463>
Thanks again. I'll flag this for /u/perciva in /r/freebsd under <https://old.reddit.com/r/freebsd/comments/180nzh8/-/>.
Seems like #ZFS, the last reliable filesystem, has fallen. For the last 18 months or so, silent data corruption has been present: https://www.reddit.com/r/zfs/comments/1826lgs/psa_its_not_block_cloning_its_a_data_corruption #Linux folks should probably follow the instructions from that Reddit thread. #FreeBSD folks may want to set vfs.zfs.dmu_offset_next_sync=0 until a proper fix lands…
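For the #FreeBSD side, applying that setting looks roughly like this (runtime plus across reboots):

```sh
# Apply the mitigation now:
sysctl vfs.zfs.dmu_offset_next_sync=0
# Keep it after a reboot:
echo 'vfs.zfs.dmu_offset_next_sync=0' >> /etc/sysctl.conf
```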
OpenZFS 2.2.1 Released to Fix a File Corruption Bug. Update Now.
https://debugpointnews.com/openzfs-2-2-1/

Our Experience with #PostgreSQL on #ZFS https://lackofimagination.org/2022/04/our-experience-with-postgresql-on-zfs/
Oops. I forgot to finish the fnvlist fix for #zfs. I ran into some kind of segfault in some circumstances that were a pain to troubleshoot in a VM and then 2023 continued to 2023 at me and I got sidetracked.
On the other hand, I'm working more on the rust interface again. I think to make it really robust I'm going to need to make other patches to upstream because I'm currently having to copy the definitions of a ton of enums and tables since they all come from internal headers and bindgen barfs on some stuff.
Updated my #ZFS package repository for the 2.2.1 bugfix release, you're going to want to update immediately to avoid potential data corruption!
https://github.com/zabbly/zfs

OpenZFS 2.2.1 Released Due To A Block Cloning Bug Causing Data Corruption
Woops...
Tempted to explore #ZFS on #VoidLinux after spinning up a #FreeBSD test laptop. The setup for an encrypted system seems straightforward, and boot-up should be a tad quicker when unlocking compared to grub being slow. Also, snapshots are a thing, as is being able to recover accidentally deleted files. Any other Void users have experience with ZFS?
https://docs.zfsbootmenu.org/en/latest/guides/void-linux/uefi.html#zfs-pool-creation
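On the undelete point: recovering a file from a snapshot is usually just a copy out of the hidden .zfs directory. A sketch with hypothetical dataset, snapshot, and file names:

```sh
# Hypothetical names throughout.
zfs list -t snapshot -d 1 zroot/home            # find a snapshot from before the deletion
cp /home/.zfs/snapshot/2023-11-20/me/notes.txt /home/me/notes.txt
```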
I'll never stop emphasizing how underrated mfsBSD is and how convenient it can be.
This morning, I had to set up a FreeBSD server on an OVH machine. The remote console wasn't cooperating when trying to attach an ISO, and it was performing poorly (I'll open a ticket about it when I find the time). Thanks to mfsBSD, I swiftly installed FreeBSD 13.2, but when I attempted to upgrade to 14.0 and update ZFS, it wouldn't boot anymore.
In just five minutes, I prepared an mfsBSD image with FreeBSD 14.0 (not yet available on the official site) and got everything up and running.
Such a handy tool that has saved me from unpleasant situations countless times.
OpenZFS 2.2.1 is out now with support for #Linux kernel 6.6 https://github.com/openzfs/zfs/releases/tag/zfs-2.2.1
So #FreeBSD 14 is finally announced 🥳 – I've already made up my mind not to jump on it immediately, because I couldn't see any "killer feature" for me while 13.2 is working just fine. Upgrading to 13.0-RELEASE back then, I ran into several surprising issues. I could find workarounds for all of them, but it was still a bit annoying...
But now, looking at the official announcement, this bullet point caught my attention:
"ZFS has been upgraded to OpenZFS release 2.2, providing significant performance improvements."
Performance of my #ZFS pool degrades badly under heavy I/O-load (a parallel poudriere build with lots of smaller ports and lots of ccache hits). The pool is backed by 4 spinning disks in a #raidz configuration.
Could I expect 14.0 to improve performance in that specific scenario? 🤔

#ZFS question (and I'm deliberately keeping it a little vague):
If someone gives you a few disks with a zfs snapshot on them, and you want the data but don't have a zfs filesystem:
1. Do you need an actual zfs filesystem (my guess is yes)?
2. Do you need enough free space to copy the entire snapshot?
3. Do you need to mount all the drives at once?
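Not an authoritative answer, but one possible approach, assuming the disks hold an exported pool (name hypothetical) rather than a raw send stream: the pool itself is the ZFS filesystem, typically all member disks need to be attached for the import, and you only need destination space for what you copy out.

```sh
# Sketch: import read-only, find the snapshot, copy the data to any other filesystem.
zpool import -o readonly=on -R /mnt tank
zfs list -t snapshot -r tank
rsync -a /mnt/tank/data/.zfs/snapshot/backup/ /destination/
zpool export tank
```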
Had a power cut about 1.5 hours ago and it's taken me until now to get the NAS back up 😢
ZFS had a real issue this time, causing TrueNAS trouble restarting.
Turned out the main pool wouldn't come back online until I manually cleared the errors; then it reappeared.
I'm now having to do a scrub on a 24-disk array to see if that will fix the remaining issues.
OpenZFS Lands Exciting RAIDZ Expansion Feature
Big news on the #ZFS front. RAIDZ expansion has been merged.
"This feature will be available in the OpenZFS 2.3 release, which is probably about a year out."
I've an oddly specific HW question. Please boost if you've HW-savvy followers?
Does anyone who has an ASRock Rack X470U motherboard use both M.2 slots with NVMe? If so, what drives can/should I use? I'm contemplating filling both with a matched-size pair for the ZFS log devices. Mostly it's about decreasing latency vs. the slice-of-SATA-SSD I'm using today.
For example, how about a pair of these for slog (in the X470U mobo's two slots)?
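For what it's worth, once a matched pair is in place, adding it as a mirrored log vdev is a one-liner; a sketch with hypothetical pool and device names:

```sh
# Hypothetical names: "tank" pool, two NVMe drives as a mirrored SLOG.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```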
OpenZFS Lands Sync Parallelism To Drive Big Gains For Write Performance Scalability
I need a #book #recommendation. I want to muck about with #ZFS, on all kinds of platforms including #MacOS, #IllumOS, #FreeBSD and #Linux. Is the "FreeBSD Mastery: ZFS" book (and its more advanced follow-up) *very* FreeBSD-specific or is it pretty much generic aside from things like booting off zfs? Does it do a good job of explaining the underlying structures and philosophy? Does it make a good why-to as well as a how-to book?
PSA: if you are on #NixOS unstable and using syncoid to replicate your #zfs datasets and have updated recently (ie you are on zfs 2.2.0) your jobs might be failing!
The workaround to fix this is to put the following in your configuration:
services.syncoid.service.serviceConfig.PrivateUsers = lib.mkForce false;
https://github.com/NixOS/nixpkgs/issues/264071#issuecomment-1793583098
The #bcachefs filesystem has quietly made its way into #Linux 6.7. I'm a bit hyped, because I've been following its development for years and it contains a lot of good ideas. I hope it won't become a second #btrfs, but rather a better #zfs. Right now I have the urge to set up a machine just to play with it 😇
Halloween for me starts with an "rm -Rf" in the wrong directory. A moment of panic.
Then, in a flash, I remember that the FreeBSD jail in question is on ZFS, and I have zfs-autobackup running every 15 minutes on the entire machine. Unable to roll back the entire jail (there were other data in motion), a "zfs clone" of the snapshot and an rsync from the snapshot to the running jail did the trick. I was back on track in 30 seconds.
Love ZFS, love FreeBSD. 🎃👻
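For reference, the clone-and-rsync recovery described above boils down to something like this (dataset and path names hypothetical):

```sh
# Hypothetical dataset/snapshot names.
zfs clone tank/jails/web@auto-15min-latest tank/recovered   # writable view of the snapshot
rsync -a /tank/recovered/var/www/ /tank/jails/web/var/www/  # copy back only what was lost
zfs destroy tank/recovered                                  # drop the clone when done
```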
Does anyone have a low-effort script for zfs backups (zfs snapshot|zfs send|zfs receive) at hand?
I've now gone ahead with the plan to move my workstation, running NixOS, onto a Linux #zfs pool, and I'd like to regularly push a few filesystems (home etc.) incrementally over to the #truenas FreeBSD zfs pool for backup, without much fiddling.
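A minimal sketch of such a snapshot | send | receive round trip, with hypothetical dataset and host names; it assumes the first snapshot has already been sent in full, and it is no substitute for tools like syncoid or zrepl:

```sh
#!/bin/sh
# Minimal sketch only; names are hypothetical and there is no error handling.
SRC="rpool/home"                          # local dataset to back up
DSTHOST="truenas"                         # backup host reachable over ssh
DSTFS="tank/backup/home"                  # dataset on the backup host
NOW="$(date +%Y-%m-%d-%H%M)"

zfs snapshot "${SRC}@${NOW}"
# Previous snapshot of this dataset (second newest, now that @${NOW} exists):
PREV="$(zfs list -H -t snapshot -o name -s creation -d 1 "${SRC}" | tail -n 2 | head -n 1 | cut -d@ -f2)"
# Send everything between the previous and the new snapshot, receive unmounted:
zfs send -I "@${PREV}" "${SRC}@${NOW}" | ssh "${DSTHOST}" zfs receive -u "${DSTFS}"
```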
My testing in rust indicates that my quick fix to #ZFS resolved the ioctl issue. I'm running the full test suite now, but it's already exercised both new and legacy ioctl code paths, so I'm reasonably confident that nothing is broken.
I guess it's time to rebase the change onto the latest from upstream and make a PR while I'm waiting for the test suite to complete.
One of today's meetings started with a client asking me to implement Linux and Docker, without knowing why, just because they heard "that's the way it's done today." I responded with, "Why not FreeBSD and jails?" They had never heard of it. I spent about 20 minutes showing them BastilleBSD, ZFS, and some use cases.
Needless to say, tomorrow I'll begin implementing the first server, and if all goes well, it will be just the first of many. :freebsd:
Wow yeah, making this #ZFS fix really is going to be easy. Aside from the part where I need to set up a VM with my custom build in order to test it because, uh, yeah, not gonna test it on my daily driver laptop lmao
Okay so I've been writing a pure[1] Rust #ZFS interface library after conclusively determining that the existing ones were written by people who have almost no experience writing userspace code, which, I guess sorta makes sense. (Edit: HI ZFS DEVS I LOVE YOUR WORK EVEN IF I'M BEING CRITICAL ABOUT THE STATE OF THE USERSPACE LIBRARIES!)
Anyway, I'd been debugging this problem, which was a huge blocker for doing anything interesting, where some commands I sent would crash the receiving kernel thread. Note, I don't mean returned an error code, I mean I was seeing stack traces in dmesg along with warnings about stuck kernel tasks.
After two full days of debugging I finally figured out what was going on, and it's bananas.
One of the core structures used everywhere in ZFS is this thing called an nvlist. It's basically just a dictionary, but with a couple of unique properties that make it weird to work with. One of the features it has is a native serialization format. This is how you exchange data with the kernel, mostly anyway: you create nvlists, serialize them, and send them to the kernel with an ioctl. Another is that it has a couple of flags that control its behavior, like whether or not it can behave as a multi-map (which is also its default behavior). These flags are propagated in the serialized form.
When the kernel receives an nvlist it has some validation functions that iterate over the key value pairs in the list to make sure it has the necessary keys with the right types in it for the command you sent.
Now, the flag that makes it behave like a regular dictionary is called NVLIST_UNIQUE_NAMES. When you set this, the nvlist will replace any existing value with the new value if you add the same key twice. Of course, like a normal dictionary, there are functions for doing lookups, named things like nvlist_lookup_int32, one for each supported data type.
Now, I had been constructing my nvlists without the unique names flag, because it didn't seem overly significant. I wasn't adding multiple keys so why would it matter? Well, it turns out that without that flag, nvlist lookup functions return an error code.
And the kernel asserts that the error code from these lookup functions is zero. Why didn't it error in the validation functions? Well, because those weren't using the lookup interface; they were using the iteration interface, which doesn't return errors, because when you iterate it treats the dictionary like an array.
This is not documented anywhere at all, I had to read the source of the nvlist code really, really carefully to discover this.
Fucking. God. Damnit. Anyway, my Rust interface can now make snapshots and ostensibly do anything else the kernel supports. I'm now just in the process of writing a nice ergonomic interface instead of fighting with mysterious crashes. There's some legacy commands that don't use the nvlist interface, but I think those shouldn't be a problem because I've got the full command structure implemented, I'm just not using every field in it yet.
[1] the only thing I'm linking with is the nvlist library because they have a really specialized structure and serialization format that I don't want to re-implement in Rust right now.
@alexr @fluxwatcher @dvl I was only vaguely aware of it.
From <https://www.freebsd.org/releases/8.2R/relnotes/>:
"A periodic script for zfs scrub has been added. For more details, see the periodic.conf(5) manual page."
<https://man.freebsd.org/cgi/man.cgi?query=periodic.conf&sektion=5&manpath=freebsd-release>
FreeBSD 14.0-RC2 Pulls In OpenZFS 2.2, OpenSSH 9.5p1
One thing that #ZFS is manifestly very not good at is deleting huge directory trees. (Tens or hundreds of millions of files.) One user has hundreds if not thousands of directories with 150,000 9KiB files each. If it were in its own filesystem, as I try to nudge users toward, I could just `zfs destroy` and that would be it. Instead I have to wait for `rm -rf` to complete because it's one big cesspit.
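The dataset-per-tree approach hinted at above looks roughly like this (names hypothetical); destroying the dataset frees the space without walking the tree from userspace:

```sh
# Hypothetical names: give the disposable tree its own filesystem up front.
zfs create tank/scratch/run-42
# ... job writes its millions of small files under /tank/scratch/run-42 ...
zfs destroy tank/scratch/run-42     # near-instant, unlike rm -rf
```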
Question for the #ZFS people here. I have an ancient #TrueNAS SCALE machine which I am renting from my hosting provider. It has a Xeon W3520 (4C/8T @ 2.93GHz), 8GB RAM, 4x3TB spinning rust, in a single pool with two mirrored vdevs.
I'm using this machine as iSCSI backend for my virtualization hosts. It's slow AF. I get extremely poor IOPS. I'm pretty sure it's the disks that are the bottleneck. ... (continued)
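One quick way to confirm whether the disks really are the bottleneck (pool name hypothetical) is to watch per-vdev I/O statistics while the load is running:

```sh
# Print per-vdev bandwidth and operations every 5 seconds; "tank" is a placeholder.
zpool iostat -v tank 5
```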
#btrfs is neat! Added an extra disk to my pool and rebalanced for RAID1 in 2-3 commands. Even supports raid1 on different sized disks.
I _almost_ went for #ZFS but got warnings and errors on my #raspberrypi. Had no idea btrfs supports these features.
Would really like to see stable RAID5 though; with RAID1 you lose a whole disk's worth of capacity.
Looks like its time for a new #introduction :blobfoxwave:
I am an engineer living near San Francisco with my partner #migrating to this instance from @greg@pettingzoo.co. As a gay/queer furry (adjacent?) engineer, I am excited to be joining this community!
My projects and interests cover all sorts of things, from tech (#fpga #eink #cpp #python #arduino #selfhosting #zfs #VintageComputing) to #travel (and #transit) and creative outlets (#photography #DigitalArt #music #gardening #movies #cooking).
(NB: I'm just kicking off the migration process. If you see follow requests from this account over the next few days, it's likely I was following you from @greg@pettingzoo.co)
I am aware my sites are down, which are pthree.org and ae7.st. This in turn affects my kickass #ZFS administration guide and my kickass #password generator.
It's hosted in an ATX case with a single PSU in a datacenter in SLC. I cannot get to it this week due to our annual SOC-2 audit. I should be able to get to it next week however.
Sorry for the inconvenience.
Client (a bit clumsy but positive and honest) calls: "Help! Just came back from lunch break and accidentally deleted all files on the file server!"
Me, unfazed: "Alright, besides you, who else worked on the file server during lunch break?"
Client: "No one, we were all away and I'm the first one back. Others will be back by 15:00"
Me, looking at the clock and noticing it's 14:30: "Okay, what time did you go to lunch?"
Client: "At 13:30. How long will it take to restore from the backup? Do you think we'll be able to work tomorrow?"
Me, without flinching as I type "zfs rollback *dataset-13:45-snapshot": "Done"
Client: "All the files reappeared!"
Me: "Thank #ZFS, #FreeBSD, and whoever set up automatic snapshots every 15 minutes."
#ITSupport #DataRecovery #BackupHeroes #SysAdmin #Snapshots #BSD
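For the curious, the moving parts of a recovery like that are roughly as follows (names hypothetical; -r is needed when newer snapshots exist on top of the one being rolled back to):

```sh
# Hypothetical dataset/snapshot names.
zfs list -t snapshot -d 1 tank/fileserver       # find the pre-lunch snapshot
zfs rollback -r tank/fileserver@auto-1345       # -r also discards any newer snapshots
```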
This was awesome, and I knew I was now on the downhill stretch. I imported the volume, and everything instantly sprung back to life! #zfs is #bestfs
Well. That's what I THOUGHT. As part of my EARLIER debugging, I had messed with a lot of networking things, because I assUmeD that it was a networking issue, again. It wasn't, obviously.
This meant that I had manually fiddled with a pile of things that slightly broke OTHER things, and they all needed to be put back to what they should have been. For the nerds - I had disabled jumbo frames partially, in a bunch of places, trying to see if there was a MTU issue somewhere in the path.
So there was then MORE time fiddling with all that, and putting everything back.
(Continued)
OK, according to `top -mio`, smbd (the Samba daemon) is using most of the IO it could get. I wonder if there's something up with my #ZFS pool. If I remember correctly, I've optimised the I/O #performance; the block size of the zpool should be based on the physical sector size of the hard drive...
It's obviously not the right time to performance test the disks, it looks like I'll just have to wait until the backup is fully restored. 💤
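When it is time to check, a few read-only commands cover what the post describes (pool/dataset names hypothetical):

```sh
# Hypothetical names; none of these change anything.
zfs get recordsize tank/share       # record size of the dataset Samba serves
zdb -C tank | grep ashift           # ashift actually in use by the vdevs
```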
@chromakode #ZFS is just awesome!
And this is why everyone should use it!
Yesterday morning, I pulled open my laptop to send a quick email. It had a frozen black screen, so I rebooted it, and… oh crap.
My 2-year-old SSD had unceremoniously died.
This was a gut punch, but I had an ace in the hole. I'm typing this from my restored system on a brand new drive.
In total, I lost about 10 minutes of data. Here's how. (Spoilers: #zfs #zrepl)
Hey #ZFS fans, you might want to subscribe to
https://mastodon.social/@PracticalZfs@feedsin.space
For a Mastodon #bot that updates the latest topics on the Discourse server.
Question for all the #sysadmin peeps out there:
What do you use to track historical failures of hardware, such as drives?
We recently had several drives all fail simultaneously in a #ZFS pool, which I can only figure must be down to a bad backplane or some electronics getting tripped up by heat.
We've had enough drive failures in this chassis that having historical notes would be nice. A spreadsheet could work, but curious what else is out there.
Thanks.
Good morning, friends of the #BSDcafe and #fediverse
I'd like to share some details on the infrastructure of BSD.cafe with you all.
Currently, it's quite simple (we're not many and the load isn't high), but I've structured it to be scalable. It's based on #FreeBSD, connected in both ipv4 and ipv6, and split into jails:
* A dedicated jail with nginx acting as a reverse proxy - managing certificates and directing traffic
* A jail with a small #opensmtpd server - handling email dispatch - didn't want to rely on external services
* A jail with #redis - the heart of the communication between #Mastodon services - the nervous system of BSDcafe
* A jail with #postgresql - the database, the memory of BSDcafe
* A jail for media storage. The 'multimedia memory' of BSDcafe. This jail is on an external server with rotating disks, behind #cloudflare. Aim is georeplicated caching of multimedia data to reduce bandwidth usage.
* A jail with Mastodon itself - #sidekiq, #puma, #streaming. Here is where all processing and connection management takes place.
Everything communicates through a private LAN (bridged) and is set up for VPN connections to external machines - in case I want to move some services, replicate them, or add more. The VPN connection can occur via #zerotier or #wireguard, and I've also set up a bridge between machines through a #vxlan interface over #wireguard.
Backups are constantly done via #zfs snapshots and external replication on two different machines, in two different datacenters (and different from the production VPS datacenter).
I'm still waiting on the SATA card for my home server (re-)build, so I can't connect all my disks. However, because I felt like it, I've connected the two SSDs and installed #NixOS with #ZFS on root.
Arguably this is complete overkill but I've wanted to do it for ages and I had two matching 64GB SSDs hanging around. I followed this excellent guide that also conveniently uses flakes, which I've wanted to learn pretty much since I picked up NixOS a few months ago: https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/Root%20on%20ZFS.html
We have embarked on a complex, continuous, and not always linear operation—to migrate, where possible, the majority of our servers from #Linux to #FreeBSD. Here's why.
https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/
#SysAdmin #OSS #ItNotes #OperatingSystems #SysAdmin #ServerMigration #Stability #JailsVsContainers #FileSystems #ZFS #BootProcedure #NetworkStack #Performance #SystemAnalysis #BhyveVsKVM #OpenSource #Bhyve #KVM
I want to buy a pre-built and tested PC for running #TrueNAS (specifically #TrueNAScore) with sufficient CPU and memory to do several TB of #ZFS well, with an internal SSD for the OS, an internal SSD or similar for cache, at least 10 front-facing hot-swappable SATA bays, a whatever-the-hell-it-is port so I can add an external box with more drives in the future, from a UK seller. Can any of you recommend anyone?
My old #debian desktop computer is still going strong after ~14 years; needed a new SSD and I put five HDDs in to make it a file server with #ZFS.
Some time ago, one memory bank died, bringing me down to 2GB RAM, which OOMs #gitannex on large repo clones.
Decided to upgrade the RAM to 16GB and... The core i5 750 takes DDR3-1333 at max. That RAM is so old, it's _more_ expensive than the quicker RAM like DDR3-1600.
Here's to the next ten years of my trusty little server.
@Mossop @gabrielesvelto If your pool consists of full disks, then it might be tricky to test the disks' speeds outside of #ZFS, but you can do it with the dd command. If you use partitions of the disks and there is a spare partition available to write data to, then you can test on that partition with `dd if=/dev/zero of=/dev/partition_to_test bs=10M status=progress`
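If there is no spare partition to sacrifice, a read-only variant of the same idea (device name is a placeholder) avoids destroying any data:

```sh
# Read test only; replace /dev/sdX with the actual disk.
dd if=/dev/sdX of=/dev/null bs=10M count=500 status=progress
```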
Oops I never made an #introduction post.
Hi, my name is Rynn! I'm a 29-year-old #furry #trans woman located in the eastern United States. I'm also #polyamorous, #asexual, and working with a neuro-psychologist for undiagnosed #autism.
I left my corporate IT job at the end of May 2023 and somehow landed a client that needed server help. Now I run my own IT Consulting LLC!
I've been doing IT for a decade now and regularly post about #linux #zfs #informationsecurity #cybersecurity and other tech topics. I have also been cooking and drawing since I was a pre-teen!
My Twitch Channel just made Affiliate! Right now my schedule consists of #minecraft, NES Roulette, and #pokemon Crystal PLUS. You can find me at https://twitch.tv/lycanmatriarch - live Fri, Sat, Sun at 8PM EDT.
Lastly I have a lot of creative hobbies! I'm a digital artist who sketches but rarely finishes pieces, I make #perler pokemon sprites, and I've been the primary home cook in my house since I was a pre-teen!
It has been over a week since the #redditMigration began - lots of people still cutting ties with #reddit.
We return this week with another big update - including migrating communities with discussions surrounding the #zfs filesystem, #grapheneos, the #raspberrypi pico, #childfree, #adobe #illustrator, #girls #gaming, the #blind community and many more.
Please share if this helped you migrate to communities on #lemmy #kbin or elsewhere on the web!
https://www.quippd.com/writing/2023/06/15/unofficial-subreddit-migration-list-lemmy-kbin-etc.html
It's official: in accordance with overwhelming community consensus and support, reddit.com/r/zfs is now your one stop shop for discussion of and memes about zfsboutique.com, Zalando Fulfillment Solutions, and zinc formaldehyde sulfate.
Don't like it? Blame CEO Steve Huffman.