u/segdy

▲ 4 r/solar

Has anyone claimed performance warranty?

If yes, I'd love to hear experiences.

Is this warranty just marketing fluff, or has anyone successfully claimed it?

My 40 Panasonic EVPV400H panels are supposed to lose 2% in the first year and 0.25%/year thereafter, and this is backed by their performance warranty.

But no matter how I look at the data (*), the degradation of many panels in the array is much higher. I did a very thorough analysis using the full per-panel dataset from Enphase.

I went through my installer, but they don't see an issue because the yearly output is still higher than their estimate (which was ultra conservative at ~6000 kWh, whereas first-year output was ~9000 kWh). That's fair, but it is independent of the panel performance warranty.

Eventually they referred me to Panasonic, and this is going back and forth; they are trying hard to shut this down. They hinted I would be responsible for hiring a third party to do STC measurements, but at that point a performance warranty becomes absolutely worthless, especially when the detailed Enphase data is crisp and clear.

(*) Yes, I considered the trivial caveats: I only compare peak power on the same cloud-free days of the year and normalize output power with NREL + SolCast irradiation data. There is no shading, and professional cleaning a few times did not change much.
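For the curious, the normalization boils down to something like the sketch below. This is illustrative only (the function name and the synthetic numbers are made up, not my real panel data); the idea is to scale each clear-sky peak reading to a 1000 W/m² reference and fit a linear trend:

```python
import numpy as np

def degradation_pct_per_year(years, peak_w, irradiance_wm2):
    """Estimate annual degradation from clear-sky peak readings.

    years:          fractional years since commissioning, one per reading
    peak_w:         measured per-panel peak power (W) on cloud-free days
    irradiance_wm2: plane-of-array irradiance (W/m^2) at the same instants
    Returns degradation in %/year (positive = losing output).
    """
    years = np.asarray(years, dtype=float)
    # Normalize each reading to a 1000 W/m^2 (STC-like) reference irradiance.
    norm = np.asarray(peak_w, dtype=float) / (np.asarray(irradiance_wm2) / 1000.0)
    # Linear fit of normalized power vs. time; slope relative to the
    # fitted year-0 output gives %/year.
    slope, intercept = np.polyfit(years, norm, 1)
    return -100.0 * slope / intercept

# Synthetic sanity check: a 400 W panel losing exactly 0.5 %/year.
yrs = np.array([0.0, 1.0, 2.0, 3.0])
irr = np.array([950.0, 1000.0, 980.0, 1010.0])
true_w = 400.0 * (1 - 0.005 * yrs)                      # linear 0.5 %/yr decline
measured = true_w * irr / 1000.0                         # what the meter would see
print(round(degradation_pct_per_year(yrs, measured, irr), 2))  # → 0.5
```

The real analysis additionally has to pick comparable days of the year so the sun angle is similar, which is why I restrict to the same cloud-free calendar days.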

reddit.com
u/segdy — 10 hours ago
▲ 7 r/zfs

Pool takes forever to mount (making the system unusable) and no further errors

I had a disaster last night on my Proxmox server. Fortunately I had daily replication and lost just half a day.

The system started to act weird: it was very unresponsive, and processes (incl. KVM VMs) showed as stopped but were still running and couldn't even be killed with -9.

The reboot took forever, and the next boot revealed the culprit was my ZFS pool:

https://preview.redd.it/us1dxpo600ug1.png?width=2114&format=png&auto=webp&s=7b94edda1c13a7445c08dce6589f99e656b2cf00

After about 30 minutes it managed to boot, with some services timing out during startup. From then on some parts of the system worked and some were extremely laggy. Access to all ZFS data worked flawlessly.

No issues are shown with zpool or zfs. No suspicious dmesg messages. Nothing.

It just seems that accessing the pool has become so slow that the system basically no longer operates properly.

The pool is a mirror between an NVMe and a 2.5" SATA SSD.

What options do I have to figure out what the heck is going on, and how do I recover?
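For reference, this is the kind of first diagnostic pass I have in mind (standard OpenZFS/Linux tools; the pool name is mine, and the device node for smartctl needs to be filled in for the actual disk). Run as root while the lag is happening:

```shell
# Per-vdev I/O and latency stats every 5 s: if one mirror leg's wait
# times are orders of magnitude above the other, that leg is the problem.
zpool iostat -v -l rpool 5

# Kernel-level view of the same thing: watch %util and await per device.
iostat -x 5

# Full SMART dump for the suspect disk (substitute the real device node).
smartctl -a /dev/sdX

# Watch live for ATA link resets / timeouts while reproducing the lag.
dmesg -w
```

If the SATA leg shows huge `await` while the NVMe stays quiet, that would point at the BX500 (or its cable/port) rather than the pool itself.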

EDIT: This is concerning, as the issue seems intermittent and not perfectly reproducible. When I rebooted again this morning, everything worked as expected. But after a short while the same issues (whole system lagging) re-appeared.

EDIT2: Nothing in SMART looks suspicious to me. Short and long self-tests pass for the internal SSD (Crucial BX500 4TB, 6 months old). The NVMe does not support self-tests. SMART outputs here for reference: https://pastebin.com/ceLyB5DK, https://pastebin.com/y6U1T9c8

EDIT3: All of a sudden I see a small number of failed writes:

# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 02:27:11 with 0 errors on Sun Mar  8 03:51:12 2026
config:

        NAME                                                  STATE     READ WRITE CKSUM
        rpool                                                 ONLINE       0     0     0
          mirror-0                                            ONLINE       0     0     0
            nvme-HP_SSD_EX900_Plus_2TB_HBSE54170100735-part3  ONLINE       0     0     0
            ata-CT4000BX500SSD1_2529E9C69BDE-part3            ONLINE       0     8     0

Maybe this evening I'll try removing and re-attaching it to see whether it's the connector or the disk?

And if it's the disk, does it make sense to purposefully degrade my array (remove the SSD) and confirm the issue disappears? I am aware that until I replace + resilver, I'd be living dangerously without redundancy ...
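If I go that route, my understanding is it would look like this (device names taken from the zpool status above; the replacement path is a placeholder, not a real disk I have):

```shell
# Drop the suspect SATA leg from the mirror (pool keeps running on the NVMe,
# but with no redundancy from this point on).
zpool detach rpool ata-CT4000BX500SSD1_2529E9C69BDE-part3

# ...observe for a day; if the lag is gone, the SSD or its link was the cause...

# Re-add a (new) disk to the surviving leg and let it resilver.
# <new-disk-by-id> is a placeholder for the replacement's /dev/disk/by-id path.
zpool attach rpool \
    nvme-HP_SSD_EX900_Plus_2TB_HBSE54170100735-part3 \
    /dev/disk/by-id/<new-disk-by-id>

# Watch resilver progress.
zpool status rpool
```

Happy to be corrected if detach/attach isn't the right way to test this.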

reddit.com
u/segdy — 15 days ago