Handling Current Hogging in a 7p77s Pack: How to scale SOC estimation from a single-cell simulation to pack-level?
Hi everyone,
I'm developing the BMS algorithm for our FSAE EV team. We are using LG M50LT cells in a 7p77s topology. Due to MCU compute limitations, we are currently relying on a conventional, open-loop Coulomb Counting (CC) algorithm for SOC estimation.
I’ve hit a massive wall regarding SOC drift and premature cut-offs, and I'd love to hear how experienced folks handle this.
The Architecture & The Problem:
Our BMS hardware only measures the total pack current Ipack and the voltages of the 77 series blocks V1 to V77.
Because we use standard Coulomb Counting, the algorithm assumes the pack current splits perfectly evenly across the 7 parallel cells, i.e. it integrates Icell = Ipack/7 over time for its SOC calculation.
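For concreteness, here is a minimal sketch of the open-loop CC update as described above. The function and constant names are mine, and the 5 Ah figure is the nominal M50LT capacity, used purely for illustration:

```python
CELL_CAPACITY_AH = 5.0  # nominal LG M50LT capacity (~5 Ah), illustrative
N_PARALLEL = 7

def cc_update(soc, i_pack_a, dt_s):
    """One open-loop Coulomb Counting step. Positive current = discharge.

    Blindly assumes an even current split: i_cell = i_pack / 7.
    Returns SOC as a 0..1 fraction.
    """
    i_cell = i_pack_a / N_PARALLEL          # the blind even-split assumption
    dq_ah = i_cell * dt_s / 3600.0          # charge removed this step, in Ah
    return soc - dq_ah / CELL_CAPACITY_AH
```

E.g. a constant 35 A pack draw for 30 minutes is booked as 2.5 Ah per cell, so the estimate drops from 100% to 50% no matter what the cells are actually doing.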
However, this makes the algorithm completely "blind" to current hogging. In reality, due to internal resistance variations and thermal gradients, the cells in the parallel block don't share the load equally.
- The cell with the lowest R0 pulls significantly more current than the Ipack/7 assumption.
- Its actual SOC depletes much faster, but our CC algorithm has no feedback loop to "see" this happening.
- The Result (Sudden Death): This overworked cell hits the 3.0V hardware Undervoltage (UV) cut-off limit far earlier than the average SOC would suggest. The BMS hardware does its job and trips the main contactor to save the cell. The car dies instantly on the track, while our dashboard, driven by the blind CC algorithm, still confidently displays 25% SOC.
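To illustrate the hogging mechanism: if each cell is modeled as an OCV source in series with R0, all 7 cells in a block share one terminal voltage, so with equal OCVs the pack current divides in proportion to each cell's conductance 1/R0. A sketch with made-up resistance values:

```python
def current_split(i_pack_a, r0_ohm):
    """Per-cell currents for parallel cells sharing one terminal node.

    Pure-R0 model with equal open-circuit voltages: each cell's share is
    proportional to its conductance 1/R0.
    """
    conductances = [1.0 / r for r in r0_ohm]
    g_total = sum(conductances)
    return [i_pack_a * g / g_total for g in conductances]

# Hypothetical 7p block: one cell with 20% lower R0 than its six siblings
r0 = [0.024] + [0.030] * 6   # ohms, illustrative values only
split = current_split(70.0, r0)
# The low-R0 cell pulls ~12 A instead of the assumed 70/7 = 10 A,
# so it depletes ~20% faster than the CC estimate believes.
```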
My Questions:
- For teams/systems strictly running pure Coulomb Counting on parallel packs, how do you patch this algorithmic blindness?
- Are heuristic rules the standard workaround here? (e.g., hard-resetting the SOC to 0% the moment any block hits 3.0V, or forcing OCV-based resets during sleep/idle states).
- If we use voltage-based hard resets, how do you handle the SOC display "jumping" suddenly (e.g., from 25% dropping instantly to 0%) so it doesn't confuse the driver?
- Is strict cell-binning (matching R from the factory) and aggressive top-balancing the only way to make pure CC viable, or do we absolutely have to upgrade to a closed-loop algorithm like an Extended Kalman Filter (EKF) to survive this?
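To make the last two questions concrete, here is one possible shape of the heuristic patch I'm imagining, combining a voltage-triggered SOC override with a slew-rate-limited display so the reset doesn't jump. All thresholds and names here are hypothetical, not a tested design:

```python
UV_SOFT_V = 3.2          # start forcing SOC down before the 3.0 V hard trip
UV_HARD_V = 3.0          # hardware cut-off
MAX_DISPLAY_STEP = 0.02  # max displayed-SOC change per update tick (2%)

def corrected_soc(cc_soc, min_block_v):
    """Override the CC estimate when the weakest block says we're nearly empty.

    Below UV_SOFT_V, linearly ramp the estimate toward 0% at UV_HARD_V,
    never raising it above what CC reports.
    """
    if min_block_v <= UV_SOFT_V:
        frac = max(0.0, (min_block_v - UV_HARD_V) / (UV_SOFT_V - UV_HARD_V))
        return min(cc_soc, cc_soc * frac)
    return cc_soc

def display_soc(prev_shown, target):
    """Slew-rate-limit the dash value so a reset reads as a fast ramp,
    not an instant 25% -> 0% step."""
    step = max(-MAX_DISPLAY_STEP, min(MAX_DISPLAY_STEP, target - prev_shown))
    return prev_shown + step
```

With this shape, a block sagging to 3.1 V would halve a 25% estimate to 12.5%, and the dash would walk down 2% per tick rather than snapping. Whether that's better than just biting the bullet and running an EKF is exactly what I'm asking.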
Any insights, reality checks, or shared experiences would be massively appreciated. Thanks!