You're kind of saying "look over here!" but I'm not that easily distracted.
You said "Which will also become a historical artifact as new protocols are made to use little endian". It's never going to become a historical artifact in our lifetimes. As the peer poster pointed out, QUIC itself has big-endian header fields. IPv4/IPv6 both use big-endian at layer 3.
The OSI layer model is extremely relevant to the Cisco network engineers running the network edges that connect the large FAANG companies, hyperscalers, etc. to the internet.
I was wrong about QUIC; for some reason I was sure I'd read it was little-endian.
I'm just pointing out that UDP is an extremely thin wrapper over IP and the preferred way of implementing new protocols. It seems likely we'll eventually replace at least some of our protocols and deprecate old ones, and I was under the impression the new ones tended to be little endian.
Foolishly, QUIC is not little-endian [1]. The headers are defined to be big-endian. Though, obviously, none of UDP, TCP, or QUIC defines the endianness of its payload, so you can at least kill it at that layer.
Modern-ish CPUs have instructions to load big-endian data without having to switch into a special 'big-endian mode', and compilers can optimize into those instructions, so languages don't need to add special intrinsics.
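A shift-based load like the sketch below (mine, just for illustration) is the pattern they pick up on; recent GCC and Clang at -O2 will typically turn it into a single bswap/movbe-style load on x86-64:

    /* Portable big-endian 32-bit load written with plain shifts.
       The compiler recognizes the pattern and emits a byte-swapping
       load; no intrinsic or #ifdef on host endianness is needed. */
    #include <stdint.h>

    uint32_t load_be32(const unsigned char *p)
    {
        return ((uint32_t)p[0] << 24) |
               ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |
               (uint32_t)p[3];
    }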
It's still one more thing you need to keep in mind when writing code, at least in languages that don't have a separate data type for different-endian fields.
There's one area I wish we did differently, which I think is a hangover from big-endian: the order of bytes when we write out hex dumps of memory.
!tfel-ot-thgir eb dluow ,redro dleif sa llew sa ,sgnirts lla neht tuB
It could be argued that little endian is the more natural way to write numbers anyway, for both humans and computers. The positional numbering system came to the West via Arabic, after all, and Arabic is read right-to-left, so in reading order the least significant digit comes first.
Most of the confusion when reading hex dumps seems to arise from the two nibbles of each byte being in the familiar left-to-right order, which clashes with the order of bytes in a larger number. A little-endian dump of 0x1234 comes out as "34 12"; swap the nibbles within each byte and you get "43 21", which would be almost as easy to read as "12 34".
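To make that concrete, here's a sketch (mine, assuming a little-endian host) of a dump routine that prints the low nibble of each byte first:

    /* Dump bytes with the nibbles of each byte swapped, so a
       little-endian uint16_t 0x1234 (stored as 34 12) prints as
       "43 21" and can be read right-to-left as 1-2-3-4. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static void nibble_swapped_dump(const void *buf, size_t len)
    {
        const unsigned char *p = buf;
        for (size_t i = 0; i < len; i++)
            printf("%x%x ", p[i] & 0x0f, p[i] >> 4);  /* low nibble first */
        putchar('\n');
    }

    int main(void)
    {
        uint16_t v = 0x1234;
        unsigned char bytes[sizeof v];
        memcpy(bytes, &v, sizeof v);               /* 34 12 on a little-endian host */
        nibble_swapped_dump(bytes, sizeof bytes);  /* prints "43 21" */
        return 0;
    }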
Yep. We even have a free bit when writing hex numbers like 0x1234. Just flip that 0x to a 1x to indicate you are writing in little-endian and you get nice numbers like 1x4321 that are totally unambiguous little-endian hex representations.
You can apply that same formatting to little-endian bit representations by using 1b instead of 0b and you could even do decimal representations by prefixing with 1d.
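A quick sketch of that hypothetical notation (the "1x" prefix is just the proposal above, not an existing convention) would emit the digits least-significant first:

    #include <stdio.h>
    #include <stdint.h>

    /* Print v in the hypothetical "1x" little-endian hex notation,
       e.g. 0x1234 comes out as 1x4321. */
    static void print_1x(uint32_t v)
    {
        printf("1x");
        do {
            printf("%x", v & 0xf);  /* least significant nibble first */
            v >>= 4;
        } while (v != 0);
        putchar('\n');
    }

    int main(void)
    {
        print_1x(0x1234);  /* 1x4321 */
        return 0;
    }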
For me I think the issue is the way you think of memory.
You can think of memory as a store of register-sized values. Big endian sort of makes some sense when you think of it that way.
Or you can think of it as arbitrarily sized data. If it's arbitrary data, then big endian is just a pain in the ass. And code written to handle both big and little endian is obnoxious.
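To illustrate the contrast, here's a sketch of my own (HOST_IS_BIG_ENDIAN is a placeholder, not a real macro): the memcpy-and-maybe-swap style has to know the host byte order, while the shift style just treats memory as bytes and works the same everywhere:

    #include <stdint.h>
    #include <string.h>

    /* The dual-endian style: copy a register-sized value, then swap
       depending on host byte order. */
    uint32_t read_u32_le_swap(const unsigned char *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
    #if defined(HOST_IS_BIG_ENDIAN)
        v = __builtin_bswap32(v);  /* GCC/Clang builtin */
    #endif
        return v;
    }

    /* The arbitrary-bytes style: shifts give the same result on any host. */
    uint32_t read_u32_le(const unsigned char *p)
    {
        return (uint32_t)p[0]         |
               ((uint32_t)p[1] << 8)  |
               ((uint32_t)p[2] << 16) |
               ((uint32_t)p[3] << 24);
    }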