Hi,

I have a pet issue with heterogeneous btrfs raids, i.e. those that use devices of different sizes: they often run out of space early. This affects filesystems that use raid0 or raid10, or that combine raid5/6 with another profile. Btrfs tries to maximise stripe width by always striping across the largest possible number of drives, but in doing so it basically paints itself into a corner on differently sized drives.

Take for example raid0 on 2, 2 and 4 TB drives. The first 6 TB are striped across all 3 drives. After that no further allocation is possible, because raid0 requires a minimum of 2 drives but only 1 still has empty space. The remaining 2 TB are wasted. (For more complex examples the calculator at http://carfax.org.uk/btrfs-usage/ can be used.) Here it would be possible to use all 8 TB by e.g. limiting the stripe width to 2; the fact that btrfs prefers to allocate space on the emptiest drives then takes care of distributing the chunks properly. Note that overly wide allocations in the data profile can also push the metadata profile into this problem.

Imho the only reason to add a bigger drive to an array is that you want to use the additional space. Otherwise you could have gone with a cheaper, smaller drive, or partitioned it and used the remainder for something else. I therefore place a higher value on usable space than on stripe width. If someone really wants the old behaviour, there is always the possibility of setting the dev size manually.

The question is how to calculate the best limit for the stripe width. Basically

  Total_Space = Chunk_Size * Stripe_Width * Stripe_Count

which transforms to

  Stripe_Width = Total_Space / (Chunk_Size * Stripe_Count)

Assuming Total_Space and Chunk_Size to be constant, maximising Stripe_Width means minimising Stripe_Count. The largest drive provides a lower bound for this: every stripe can use only one chunk from it, yet all of its chunks need to be used. Thus the minimum Stripe_Count is the number of chunks that fit on this drive, i.e. LargestDrive_Space / Chunk_Size. Inserting and simplifying then yields

  Stripe_Width = Total_Space / LargestDrive_Space

as the basic goal.

A fixed stripe width could of course be calculated and set at mkfs time, but it is better to recalculate it at every allocation. It can then be computed from the free space only, which accounts for past allocations made with a different width (rounding, different algorithms, ...). Additionally this is a neat way to account for the addition or removal of drives.

The stripe width of course needs to be rounded in some way. I prefer to round toward the closest possible value. For roughly equally sized drives this comes out as the number of drives, so no special case is needed for them.

Generally speaking the affected filesystems need at least some stripes of limited width, but limiting all stripes to an average value as above is far from the only possibility. Consider a 2, 2, 2, 4 TB raid0:

  current btrfs:             8T@4 (2 TB wasted)
  fixed width:               10T@2
  limit as late as possible: 4T@4 + 6T@2
  limit to average:          6T@3 + 4T@2 (i.e. width 2.5, or 50% width 3
                             stripes + 50% width 2 stripes)

Imho limiting to the average offers the best compromise between average speed, minimum speed, variation in speed and implementation difficulty. I have been running a PoC implementation of this on and off since 3.0.
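To make the comparison concrete, here is a toy user-space simulation of raid0 chunk allocation on the 2, 2, 2, 4 TB example. It is only a sketch under my own assumptions (greedy allocation in 1 GB chunks, emptiest drives first); the names and structure are mine, not the patch's:

/* Toy simulation of raid0 chunk allocation on 2,2,2,4 TB (in GB,
 * one 1 GB chunk per stripe member).  Compares the current
 * "always widest" policy, a fixed width of 2, and limit-to-average. */
#include <stdio.h>
#include <stdlib.h>

#define NDEVS    4
#define MIN_DEVS 2                      /* raid0 needs >= 2 members */

enum policy { CURRENT, FIXED_2, AVERAGE };

static int cmp_desc(const void *a, const void *b)
{
        long long d = *(const long long *)b - *(const long long *)a;
        return (d > 0) - (d < 0);
}

static long long simulate(enum policy p)
{
        long long avail[NDEVS] = { 2000, 2000, 2000, 4000 };
        long long usable = 0;

        for (;;) {
                long long total = 0, largest;
                int devs_free = 0, width, i;

                /* emptiest devices first, as btrfs prefers them */
                qsort(avail, NDEVS, sizeof(avail[0]), cmp_desc);
                for (i = 0; i < NDEVS; i++) {
                        total += avail[i];
                        if (avail[i] > 0)
                                devs_free++;
                }
                largest = avail[0];
                if (devs_free < MIN_DEVS)
                        break;          /* the rest is wasted */

                switch (p) {
                case CURRENT:           /* as wide as possible */
                        width = devs_free;
                        break;
                case FIXED_2:
                        width = 2;
                        break;
                default:                /* AVERAGE: round(total/largest) */
                        width = (int)((total + largest / 2) / largest);
                        if (width < MIN_DEVS)
                                width = MIN_DEVS;
                        if (width > devs_free)
                                width = devs_free;
                        break;
                }

                for (i = 0; i < width; i++)
                        avail[i] -= 1;  /* one chunk from each member */
                usable += width;
        }
        return usable;
}

int main(void)
{
        printf("current:  %lld GB\n", simulate(CURRENT));   /* 8000 */
        printf("fixed@2:  %lld GB\n", simulate(FIXED_2));   /* 10000 */
        printf("average:  %lld GB\n", simulate(AVERAGE));   /* 10000 */
        return 0;
}

On this layout it prints 8000, 10000 and 10000 GB respectively, matching the table above.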
Basically the PoC calculates the optimal Stripe_Width as Total_Space / LargestDrive_Space over the free space, rounds it to the closest multiple of the drive increment, and clamps it between the minimum number of drives for the profile and the number of drives with free space. The principle behind rounding to the nearest step is to add half a step and then round down to the next step boundary; the integer division combined with the existing rounding takes care of this.

www.unix-ag.uni-kl.de/~t_schmid/btrfs/balanced-allocate.patch

I would like to hear what you all think of this.
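PS: In case it helps when reading the patch, the calculation above boils down to something like the following sketch. I am reading "drive increment" as the step the width must be a multiple of (e.g. sub_stripes for raid10); the names here are mine, not necessarily the patch's:

typedef unsigned long long u64;

static u64 limited_stripe_width(u64 total_free, u64 largest_free,
                                u64 step, u64 devs_min, u64 devs_free)
{
        u64 width;

        if (largest_free == 0 || step == 0)
                return 0;
        /* integer division rounds down; adding half of
         * (step * largest_free) first turns that into
         * round-to-nearest in units of step */
        width = (total_free + step * largest_free / 2) /
                (step * largest_free);
        width *= step;

        /* never narrower than the profile allows, never wider
         * than the number of devices that still have free space */
        if (width < devs_min)
                width = devs_min;
        if (width > devs_free)
                width = devs_free;
        return width;
}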