At CES, over dinner, I met a group of veteran Intel engineers who let me in on the questions that have maddened every USB device developer.
To explain some of USB's most questionable design moments, they started with how seemingly well-meaning efforts to keep USB 3 backward compatible with USB 2 devices ended up repeating a story that dates back to the earliest version of USB: extreme care taken to preserve backward compatibility ends up degrading backward compatibility in real life.
USB 1.0 provided for two speeds, selected by voltage levels alone, with no extra electronics at all, but higher speeds were left possible for the future through reserved bits in the link configuration. The goal was to keep the electronics very simple and dumb.
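For the curious, here is a minimal sketch of that detection idea as I understand it: a low-speed device pulls D- high through a resistor, a full-speed device pulls D+ high, and the host just looks at which line sits high at idle. The read_dp()/read_dm() helpers are hypothetical stand-ins for whatever line-state register a real PHY exposes.

    /* Minimal sketch of USB 1.x attach/speed detection: the host checks
     * which data line a device's pull-up resistor holds high at idle.
     * read_dp()/read_dm() are hypothetical platform stubs. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { NO_DEVICE, LOW_SPEED, FULL_SPEED } usb1_speed;

    /* Hypothetical hooks: return true if the line is above the high threshold. */
    static bool read_dp(void) { return true;  /* stub */ }
    static bool read_dm(void) { return false; /* stub */ }

    static usb1_speed detect_speed(void)
    {
        bool dp = read_dp(), dm = read_dm();
        if (dp && !dm) return FULL_SPEED;  /* pull-up on D+ */
        if (dm && !dp) return LOW_SPEED;   /* pull-up on D- */
        return NO_DEVICE;                  /* both low: nothing attached */
    }

    int main(void)
    {
        printf("port speed: %d\n", (int)detect_speed());
        return 0;
    }

No packets, no negotiation: just resistors and a voltage check, which is exactly the "simple and dumb" spirit they were after.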
USB 2.0 used differential signalling with completely different electrical properties on the link, so the reserved bits were never needed or touched; speed detection still relied purely on voltage sensing.
For USB 3.0, the Intel engineers really wanted to preserve the same physical connector AND backward compatibility, at any cost. They faced conflicting design requirements: backward compatibility WITHOUT extra chips, and forward compatibility without any chance of upsetting old devices by sending them unknown configuration packets.
The problem they feared most was that, over a decade of USB 2.0, some device manufacturers might have started to "hardwire" USB 2.0 behaviour without supporting the full link negotiation, and that new devices might in some unfathomable way interfere with existing ones if new setup packets were added.
They really hoped to keep the old cabling and connectors. There was a proposal to basically replicate Power-over-Ethernet: DC-bias the data lines and form an extra differential pair out of the power conductors. But they realised that the impedance of the power wires could not be guaranteed, and that a fool-proof fallback was hard to implement without ICs (which they were going to need anyway to support the 3.0 signal).
So in the end they decided on the crudest and most obvious solution: prevent 2.0 and 3.0 signals from ever crossing. New connectors and new cables, and no need for an elaborate auto-configuration mechanism.
The backward-compatibility mechanism moved from the software level to the mechanical level: a USB 3.0 device can have a completely separate set of chips connected to its USB 2.0 pin pair, and switching can be done with a single MOSFET powering either one set of chips or the other. As a bonus, even backward compatibility of USB hubs came for "free" that way.
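To make the trick concrete, here is a tiny sketch of the selection logic that scheme implies; ss_link_trained() and mosfet_select() are made-up placeholders for the actual hardware hooks, not anyone's real firmware.

    /* Sketch of the "power one chipset or the other" idea described above.
     * Both function names are hypothetical placeholders. */
    #include <stdbool.h>

    typedef enum { CHIPSET_USB2, CHIPSET_USB3 } chipset;

    static bool ss_link_trained(void) { return false; /* platform stub */ }
    static void mosfet_select(chipset c) { (void)c;   /* platform stub */ }

    void select_chipset(void)
    {
        /* If the far end answers on the new SuperSpeed pairs, power the 3.0
         * chipset; otherwise fall back to the legacy 2.0 chipset wired to
         * the old D+/D- pins. No packets, no negotiation, no shared pins. */
        mosfet_select(ss_link_trained() ? CHIPSET_USB3 : CHIPSET_USB2);
    }
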
However, yet again, Intel did not use the auto-configuration mechanism it had itself previously put into the USB standard to provide forward compatibility. The same thing would repeat with USB 3.1 and 3.2, which use their own link negotiation mechanisms.
And now, here comes the era of the Type-C connector.
Initially, they were led by the well-meaning intention to make USB Type-C the only physical connector a computer would need, encapsulating every common interface inside it: all modern interfaces are remarkably similar electrically (differential pairs), use similar line coding schemes, and are packet-based. So at first glance it looked like a good idea.
One early idée fixe was to make Type-C essentially a purely mechanical, pin-compatible replacement for the USB Type-A connector, which went over very well with device manufacturers. Another was to make orientation detection purely mechanical as well, since differential pairs don't care much about polarity; they wanted it to need no new ICs and to work with all existing ones. And the third was to provide significant power over the same connector by incorporating the already existing USB PD 1.0 standard.
The first casualty of that idealism was the realisation that vendors actually do care about the polarity of differential pairs, because they were already doing tons of non-compliant, proprietary signalling over those lines. Polarity had to be explicitly detected and the signals rerouted accordingly, and this is how the need for the first pair of extra pins arose. Only that way could they get the 2.0 pins to connect first and make their polarity unambiguous.
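Here is a minimal sketch of how that detection-pin idea plays out in practice (the CC lines, in the final spec), assuming a hypothetical adc_read_cc() helper; the voltage thresholds are illustrative, not the spec's tables. Only one of the two pins is wired through the cable, so whichever side sees a termination tells you which way the plug went in.

    /* Sketch of CC-based plug orientation detection. adc_read_cc() is a
     * hypothetical helper returning the CC pin voltage in millivolts;
     * the thresholds are illustrative only. */
    #include <stdio.h>

    typedef enum { CC1 = 1, CC2 = 2 } cc_pin;
    typedef enum { UNATTACHED, NORMAL, FLIPPED } orientation;

    static int adc_read_cc(cc_pin pin) { return pin == CC1 ? 900 : 1700; /* stub */ }

    /* True if the voltage looks like the far end's pull-down working
     * against our pull-up, i.e. a device is terminating this CC pin. */
    static int looks_terminated(int mv) { return mv > 200 && mv < 1200; }

    static orientation detect_orientation(void)
    {
        int cc1 = adc_read_cc(CC1), cc2 = adc_read_cc(CC2);
        if (looks_terminated(cc1) && !looks_terminated(cc2)) return NORMAL;  /* route one way */
        if (looks_terminated(cc2) && !looks_terminated(cc1)) return FLIPPED; /* route the other */
        return UNATTACHED;
    }

    int main(void)
    {
        printf("orientation: %d\n", (int)detect_orientation());
        return 0;
    }

So the "purely mechanical" orientation plan quietly turned into an analog measurement plus a signal mux, which is already more than zero new electronics.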
The second casualty was the desire to incorporate other interfaces, like DisplayPort. The data rate of any video signal meant that ICs "tunnelling" other protocols inside the USB interface could never be cheap and simple, so it was decided to just reroute those signals electrically, as-is, over the USB pins. That way a Type-C monitor would ideally work without any new ICs at all, satisfying the "no new ICs" camp. And this is how Type-C got its SBU pins, which nevertheless ended up requiring their own ICs to handle the different SBU signals and switch between them.
And the last blow was the attempt to incorporate the USB Power Delivery proposal, which had already been standardised for quite a few years. There was just one last problem: to follow the "no new ICs" principle, they had to reuse the existing, and rare, USB PD 1.0 governor chips. To add insult to injury, USB PD optionally required an extra pin, so to be completely pin-compatible with the existing USB PD they settled on one more pair of pins, in case somebody decided to connect a USB PD 1.0 device through a Type-C cable via an adaptor.
In the end, the only way it all works together is with dedicated Type-C controller ICs, which orchestrate the complex sequences of handshakes and speed and power negotiations across four or more completely separate and different signalling systems going into a single Type-C connector. They ended up needing a ton of new ICs to make it work properly.
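To give a feel for what "orchestrate" means in practice, here is a rough, hypothetical sketch of the kind of connection sequence such a controller walks through on plug-in; every function name here is made up, and the real Type-C and PD state machines are far more involved (timeouts, retries, cable e-markers, role swaps).

    /* Hypothetical sketch of a Type-C port controller's plug-in sequence.
     * All functions are made-up placeholders standing in for platform code. */
    #include <stdbool.h>

    typedef enum { ORIENT_UNKNOWN, ORIENT_NORMAL, ORIENT_FLIPPED } orient_t;

    static orient_t cc_detect_orientation(void)       { return ORIENT_NORMAL; }
    static void     mux_route_highspeed(orient_t o)   { (void)o; }
    static bool     pd_negotiate_contract(void)       { return true; }
    static bool     pd_enter_displayport_altmode(void){ return false; }
    static void     mux_route_sbu_and_lanes(void)     { }
    static void     usb_start_link_training(void)     { }

    void on_cable_attach(void)
    {
        /* 1. Work out which way the plug went in, from the CC pins. */
        orient_t o = cc_detect_orientation();

        /* 2. Point the high-speed lanes at the right side of the connector. */
        mux_route_highspeed(o);

        /* 3. Negotiate a power contract over the CC wire (USB PD). */
        if (!pd_negotiate_contract())
            return; /* stay at the default 5 V if no contract is reached */

        /* 4. Optionally hand some lanes (and the SBU pins) to an alt mode. */
        if (pd_enter_displayport_altmode())
            mux_route_sbu_and_lanes();

        /* 5. Finally bring up the USB data link itself. */
        usb_start_link_training();
    }

Each of those steps talks to a different signalling system, which is exactly why the "no new ICs" goal collapsed into a dedicated controller chip per port.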
Now they regret it, and they say it would have been a hundred times simpler to drop backward compatibility and redesign it from scratch than to accommodate all of that.