The question of endianness, whether we eat the big end or the little end of the integer first, is, common wisdom tells us, fundamentally arbitrary. The choice between big and little endian is usually made today to conform to standards that calcified in the past out of whim, convenience, or some slight technical advantage. But a modern machine could just as easily be built big or little endian, by whoever it is that builds these things. So the question remains: setting aside compatibility with extant computer systems, what is the correct endianness, if any?

With no further ado, I tell you it is big endian. The reason is that the convention of big endianness was established in our writing system hundreds of years ago and has been trained into each of us all our lives. That convention too seems largely arbitrary, but since we are unlikely to abandon it, we might as well have computers work in harmony with it.

I do not understand a man who, having written down a sensible integer constant like 0x12345678, should like to look into memory and see his bytes transposed into the ghastly 78 56 34 12. Nor do I have anything but pity for the man whose job occasionally calls for him to add written hex constants in his head, and also to add bytes laid out in little-endian memory in his head. I have done this myself, and the effort of suddenly reversing years of experience with addition produces a sort of numerical seasickness. Indeed, it's even worse than all that, because EACH BYTE IS STILL BIG ENDIAN! 78 56 34 12 is not so much "little endian" as it is "little big little big little big little big endian". I believe this is merely a matter of written convention for bytes, but it still baffles me that anyone could be satisfied with this state of affairs.
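
If you care to witness the ghastliness for yourself, the little C program below is a minimal sketch (assuming nothing beyond an ordinary C compiler) that stores 0x12345678 in a 32-bit integer and prints its bytes in the order they actually sit in memory. On a little-endian machine such as x86 it prints 78 56 34 12; on a big-endian machine it prints 12 34 56 78.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t value = 0x12345678;  /* the sensible constant from the text */
        const unsigned char *bytes = (const unsigned char *)&value;

        /* Walk the object byte by byte, lowest address first. */
        for (size_t i = 0; i < sizeof value; i++)
            printf("%02x ", (unsigned)bytes[i]);
        printf("\n");

        return 0;
    }

Reading an object through an unsigned char pointer is the standard-sanctioned way to peek at its bytes, so the program reports whatever order the hardware genuinely uses rather than anything the compiler has dressed up for you.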