Because you have a 15/4 EC policy, we say each partition has 19 "replicas" (15 data + 4 parity fragments). And since rebalance will move at most one "replica" of any given partition per rebalance: up to 100% of your partitions may have one replica assignment moved.
That means, after you push out this ring, 100% of your object GET requests may find at most one "replica" out of place. But that's ok! In a 15/4 policy you only need 15 EC fragments to respond successfully, and you still have 18 fragments that did NOT get reassigned.
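If it helps, here's that arithmetic as a tiny Python sketch (the 15/4 numbers just mirror the example above - this is illustrative, not ring-builder output):

    # Back-of-the-envelope check for a 15/4 EC policy (illustrative numbers only)
    ec_ndata, ec_nparity = 15, 4
    replicas = ec_ndata + ec_nparity             # 19 fragment "replicas" per partition
    moved_per_rebalance = 1                      # rebalance moves at most one per partition
    in_place = replicas - moved_per_rebalance    # 18 fragments untouched
    assert in_place >= ec_ndata                  # 18 >= 15, so GETs still succeed
    print(f"{in_place} of {replicas} fragments still in place; only {ec_ndata} needed")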
It's unfortunate the language is a little ambiguous, but it is talking about the % of *partitions* that had a replica moved. Since each object resides in a single partition - the % of partitions affected most directly communicates the % of client objects affected by the rebalance. We do NOT display the % of *partition-replicas* moved because, while that number would be smaller, it could never be 100% due to the restriction that only one "replica" of each partition may move.
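To see why the two metrics differ so much, here's a quick comparison for a made-up ring (the part power and policy are assumptions purely for illustration):

    # Contrast "% of partitions" vs "% of partition-replicas" for a hypothetical
    # ring with part_power=10 and a 15/4 policy (19 replicas per partition).
    partitions = 2 ** 10
    replicas = 19
    # Worst case: every partition has exactly one replica reassigned.
    moved = partitions
    pct_partitions = 100.0 * moved / partitions                       # 100.0%
    pct_partition_replicas = 100.0 * moved / (partitions * replicas)  # ~5.3%
    print(f"partitions affected: {pct_partitions:.1f}%")
    print(f"partition-replicas moved: {pct_partition_replicas:.1f}%")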
When doing a large topology change - particularly with EC - more than one replica of each partition may ultimately need to move (imagine doubling your capacity into a second zone on an 8+4 ring), so it'll take a few cranks. Eventually you'll want to have moved 6 replicas of each partition (6 in z1 and 6 in z2), but if we allowed you to move six replicas of 100% of your partitions at once, you'd be left with only 6 of the 8 fragments required to service reads!
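Here's that capacity-doubling example worked out in Python (a rough model - it ignores min_part_hours, dispersion, and so on):

    # Doubling capacity into a second zone on an 8+4 ring (illustrative model)
    ec_ndata, ec_nparity = 8, 4
    replicas = ec_ndata + ec_nparity         # 12 fragment "replicas" per partition
    target_moves = replicas // 2             # 6 replicas need to end up in z2
    # Only one replica of a partition may move per rebalance...
    min_cranks = target_moves                # ...so this takes at least 6 cranks.
    # If all 6 were allowed to move at once, reads would break in the meantime:
    left_in_place = replicas - target_moves  # 6
    assert left_in_place < ec_ndata          # 6 < 8 fragments needed for a GET
    print(f"need at least {min_cranks} rebalances, with min_part_hours between them")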
Protip: when you push out the new ring you can turn on handoffs_only mode for the reconstructor for a little while to get things rebalanced MUCH more quickly - just don't forget to turn it off!
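For reference, that knob lives in the reconstructor config - something like the snippet below (exact file and section layout depends on your deployment, so treat this as a sketch and check the docs for your Swift version):

    # object-server.conf (or wherever your reconstructor config lives)
    [object-reconstructor]
    # Only revert fragments off handoff locations while the big rebalance settles;
    # don't forget to flip this back to false afterwards!
    handoffs_only = true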