r/bcachefs

eBPF LSM runtime security agent for synchronous file/network denial — looking for technical feedback

I’m working on Aegis-BPF, an open-source Linux runtime security project built around eBPF LSM.

The goal is narrow: explore enforcement-first runtime security, where selected file and network operations can be denied before syscall completion, rather than only emitting post-event telemetry.

Current scope:

- BPF-LSM based file/network policy decisions

- cgroup-scoped policy

- OverlayFS/copy-up handling

- audit-mode fallback when enforcement is unavailable (see the availability check after this list)

- Prometheus metrics

- Kubernetes/Helm deployment path
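
Since enforcement depends on the bpf LSM actually being active, the audit-mode fallback starts from a preflight check roughly like this (the sysfs path and boot parameter are standard kernel interfaces; the fallback decision itself is the agent's, and this is only a sketch of it):

    # check whether the BPF LSM is in the active LSM list; if not,
    # synchronous denial is impossible and the agent drops to audit mode
    grep -qw bpf /sys/kernel/security/lsm \
        && echo "bpf LSM active: enforcement available" \
        || echo "bpf LSM inactive: audit-mode fallback"
    # enabling it typically requires CONFIG_BPF_LSM=y and booting with lsm=...,bpf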

I’m not claiming it is a production-ready replacement for Falco, Tetragon, or KubeArmor. I’m treating it as a focused enforcement model project and looking for criticism from people who understand eBPF, Linux security, or container runtime edge cases.

Main feedback I’m looking for:

- Are the hook choices reasonable?

- What enforcement edge cases am I probably missing?

- What would make the failure-mode model more trustworthy?

- What tests would you expect before taking this seriously?

- Are there obvious problems with cgroup-scoped policy or OverlayFS handling?

Repo:

https://github.com/ErenAri/Aegis-BPF

Technical criticism is more useful than general encouragement.

u/EreNN_42 — 4 days ago

I've tried mount with UUID, /dev/sda:/dev/sdb, /dev/sda, and /dev/sdb.

fstab and mount -t bcachefs don't work with any of them and I always get the same error: mount: "/dev/sda:/dev/sdb": No such file or directory

However, bcachefs mount and mount.bcachefs work with all of those.

mount.bcachefs is a link to bcachefs. How and why would mount -t bcachefs fail and mount.bcachefs work? Any ideas on how to get fstab to work?
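
For reference, the fstab line I'm trying is essentially this (the mount point and options here are placeholders; the UUID= and single-device variants fail the same way):

    # /etc/fstab entry that mount -t bcachefs rejects but mount.bcachefs accepts
    /dev/sda:/dev/sdb  /mnt/pool  bcachefs  defaults  0  0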

u/RalekBasa — 11 days ago

NASty v0.0.6

NASty is a NAS operating system built on NixOS and bcachefs. It turns commodity hardware into a storage appliance serving NFS, SMB, iSCSI, and NVMe-oF — managed from a single web UI, updated atomically, and rolled back when things go sideways.

Highlights of 0.0.6:

  • OIDC / Single Sign-On — Log in with Google, Authentik, Keycloak, or any OIDC provider. Configure from Access Control → Identity Providers.

  • Security hardening pass — Browser sessions moved to httpOnly cookies, per-IP login rate-limit with persisted lockouts, WebSocket origin validation, gated WS endpoints, legacy ?token= URL auth removed, tightened HTTP security headers, {@html} XSS sinks killed, compose deploys sandboxed, NFS exports hardened, secret files locked down.

  • Network bridges — Linux bridges as a virtual switch for VMs (and apps), composable with bonds and VLANs (closes #27).

  • MTU configuration — Configurable MTU on physical interfaces, bonds, bridges, and VLANs from the WebUI — including jumbo frames (9000) for SMB / NFS workloads (closes #62).

  • Filesystem wizard upgrades — Drive model / serial / vendor / transport on the disk picker, usable-capacity estimate that matches the filesystems list, and a summary line on the filesystem card showing device count, erasure coding, and encryption.

  • Apps allow_unsafe escape hatch — Deploy compose stacks (or simple apps) that need privileged options with explicit user opt-in, surfaced in the deploy form and app list. Internal port now editable on Apps.

  • Background alert evaluation — Alerts fire from the engine's background notifier instead of waiting for a browser to be connected.

  • Test & CI footprint — fmt / clippy / svelte-check / test gates in CI, pinned Rust toolchain, integration nixosTest that drives JSON-RPC over the appliance, bcachefs smoke test, and unit tests across JSON-RPC framing, alert evaluation, sharing config, storage parsers, update rollback, the WebSocket client, the toast queue, and IO history.

  • Dependency refresh — rusqlite 0.34 → 0.39, openidconnect 3 → 4, vitest 3 → 4, plus major bumps to sha2 / rand / x509-parser / bollard / reqwest, nixpkgs to 549bd84 (2026-05-05), and bcachefs-tools to v1.38.2.

  • Smaller polish — SSH banner is now dismissible and renamed to "Configure SSH", banner buttons actually navigate, VM-detect loop fix, audit log rotation fix, dead nft -f - spawn removed.

u/bfenski — 5 days ago

Starting to focus more on performance and cranking through performance issues - this release fixes all the ones I've seen concrete data on:

  • cycle detector/SIX locks improvements - these fix most of the "srcu lock held too long" warnings we've been seeing, and make performance under heavy contention much more robust

  • btree write buffer multithreading: after the journal pipelining improvements that landed recently, this was the next big bottleneck on the really big systems, and it's fixed thoroughly now

  • btree node merge attempts thrashing the btree cache are fixed, and 3:2 btree node merging and other improvements are coming in .3

Full changelog - https://evilpiepirate.org/git/bcachefs-tools.git/tree/Changelog.mdwn

So now, if you've got a workload where we're slower than btrfs/zfs, let me know :)
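
If you do, a reproducible job description plus kernel and bcachefs-tools versions makes it much easier to act on; a generic fio sketch (just an example, tune it to resemble your actual workload):

    # random-read comparison point; run the same job on both filesystems
    fio --name=randread --directory=/mnt/test --ioengine=io_uring --direct=1 \
        --rw=randread --bs=4k --size=4G --numjobs=4 --iodepth=32 \
        --runtime=60 --time_based --group_reporting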

u/koverstreet — 11 days ago

This is a follow-up on https://www.reddit.com/r/bcachefs/comments/1swjqhw/my_first_bcachefs_array_very_impressed/

My general impressions of the FS are extremely positive: all the software running on these 2x slower old disks performs much better, as it is heavy on indexing and accessing millions of random cache files in a deep directory hierarchy.

Also, scrub and reconcile are done in LBA order. While a scrub on ZFS takes 8h (just checked with zpool status), it takes less than 2h here, without the drives seeking like crazy. It's as if the system was designed to behave correctly on HDDs 😄

After having my 2 drives set up (750GB), I bought a new SSD (1TB) and added it to the pool without any specific parameters. All devices have durability=1. Here are the first findings:

  • data_replicas and metadata_replicas were ignored when echoing '2' to them via sysfs. I had to use the bcachefs set-fs-option command to set them. Weird.
  • Reconcile immediately scanned the drives and replicated the data and metadata. It was extremely smooth (unlike ZFS cross-FS operations).
  • However, I was expecting newly allocated data to be biased towards the emptiest drive. Instead, it spread the second copy across all devices equally.

After a few days of flawless operation, I decided to follow this paragraph of the P.O.O. (Principles of Operation):

>2.2.4.1 Tiered storage with replication
>Replication interacts with targets naturally: the filesystem satisfies the configured data_replicas count using durability-weighted copies, regardless of which target they are on. For example, with data_replicas=2, foreground_target=ssd, and background_target=hdd:
>- Data is initially written to the SSD target
>- The background mover copies it to the HDD target
>- Since both SSD and HDD copies have durability=1 by default, each counts as one replica; the two copies together satisfy the replica requirement
>- To use the SSDs as a pure write cache (data evictable once on HDD), set durability=0 on the SSD devices; the allocator will then write two durable copies on the HDD target
>See Device management in subsystem details for writeback vs. writearound caching configurations and device lifecycle.

Findings:

  • Setting foreground_target=ssd and background_target=hdd using set-fs-option worked
  • I was expecting "Since both SSD and HDD copies have durability=1 by default, each counts as one replica, the two copies together satisfy the replica requirement", but it started migrating everything to HDD as if the SSD had durability=0 (which it didn't)
  • Trying to stop it by resetting the foreground and background targets didn't work; set-fs-option was ignoring the empty values. Fortunately I was able to echo '' > background_target via sysfs before reconcile ran out of space (see the sketch after this list)
  • Now, because the drives are >86% full, copygc is recovering space, and, unfortunately, this is not as smooth as scrub or reconcile (understandably)
  • Trying to get back to the original situation, I set replicas to 1. The reported Used data has decreased, but the utilization percentage is still the same as before setting replicas to 1, and the original HDDs are heavily fragmented while the SSD is underused
  • I will evacuate the SSD, reformat it, copy all data to it, then add the two HDDs to the SSD FS. Still thinking about this.
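
For anyone else poking at this: the runtime options live under sysfs, and that path is what saved me when set-fs-option refused the empty value (the UUID path below is a placeholder for your filesystem's):

    # per-filesystem options are exposed under /sys/fs/bcachefs/<fs-uuid>/options/
    ls /sys/fs/bcachefs/<fs-uuid>/options/
    # clearing background_target via sysfs worked where set-fs-option did not
    echo '' > /sys/fs/bcachefs/<fs-uuid>/options/background_target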

Suggested improvements:

  • Reconcile and documentation still need to be reconciled
    • Better explain how data is allocated/biased with respect to different drive sizes and free space
    • Document all the combinations of the foreground/background/promote target options
    • Update bcachefs-tools to block users from combinations that don't make sense (this hasn't happened to me, but there are reports in the bugtracker of user errors that could have been prevented)
    • Rewrite the P.O.O. paragraph that says that with durability=1 on SSD and HDD the SSD copy counts towards the replicas total; either it doesn't, or the wording is misleading
    • Explain what happens when replicas is reduced from 2 to 1, and why space is freed but the use% is not updated
  • set-fs-option should be reviewed to accept empty arguments for resetting the background/foreground targets (maybe this applies to other options too)
  • While reconcile updates are perfectly traceable in fs top, it's not so clear which actions are triggered by copygc, or what copygc's status is

Conclusions:

  • Would I recommend bcachefs for critical data? Not yet (I'm using it for cached data, of which I have copies)
  • There is not yet enough experience online about recommended configurations or troubleshooting guides; it'll eventually get there if the FS gets more usage
  • bcachefs has lots of potential, with a great design that overcomes the worst limitations of ZFS and btrfs, but it's not there yet
  • I wish bcachefs had the large commercial involvement that other FSs have had, like ZFS, btrfs, or ReiserFS (SuSE invested a lot in fixing it)

Having said that, I was indeed expecting some bumps in the road; my goal was to experiment with the current state of the FS, and all the things I've seen are minor stuff. The design and the coding are solid and robust.

And I still firmly believe that for the FS to succeed, it'll have to be brought back into the mainline kernel, no matter how easy it is to use DKMS.

u/awesomegayguy — 10 days ago

NASty is a NAS operating system built on NixOS and bcachefs. It turns commodity hardware into a storage appliance serving NFS, SMB, iSCSI, and NVMe-oF — managed from a single web UI, updated atomically, and rolled back when things go sideways.

Highlights of 0.0.5:

  • Backup system — Deduplicating, encrypted backups via rustic_core library. Local, S3, SFTP, REST, and B2 targets. Scheduled backups with retention policies. Backup Server (restic REST) as a managed service for NASty-to-NASty backups.

  • Sidebar reorganization — 15 flat menu items collapsed into groups (Storage, Sharing, Protection, Compute, System) with collapsible sections and a search bar for quick navigation.

  • Log viewer — Dedicated Logs page with real-time streaming (follow mode), server-side grep, and client-side search/filter.

  • Notifications — SMTP, Telegram, Webhook, ntfy, and Signal notification channels with test-before-save.

  • Networking — Multi-interface support with per-interface IPv4/IPv6, dynamic nftables firewall with per-service source restrictions, bonds and VLANs.

  • SMB Groups — Group-based share permissions via @groupname, inline user/group creation in share wizard.

  • Services Page — Unified page with per-service Configure panels: NFS, SMB, iSCSI, NVMe-oF, UPS, SSH, Docker, Backup Server.

  • Boot Reliability — Device wait with udevadm settle before mounting, critical alerts on mount failure.

  • ARM Support - ISO for aarch64 is now included.

And obviously a bunch of bugfixes, plus some refactors to make future development easier. Also, that's probably the last BIG reorg release; things should now start stabilizing.

Enjoy!

u/bfenski — 12 days ago