Article:
  How an Accident of Hardware Design Encouraged Open Source
Subject:   Endianness, Arabic numbers, etc.
Date:   2007-02-27 15:31:51
From:   Mark_Rosenthal
Response to: endianness


Taniwha wrote: "there were both little-endian and big endian machines before the '11, most of the word oriented evil back then probably had more to do with not having a power of 2 bytes in a word, if I remember '11 longs (or was it floats) were actually stored partially bigendian, etc etc"



Since endianness only applies to how bytes are numbered within a word, the issue only arises on architectures that are byte-addressable and have a word size larger than one byte. So the constraint is not whether the architecture has a power-of-2 number of bytes in a word, but whether the word size is a whole multiple of the byte size and whether each byte within the word is assigned its own address.
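To make the byte numbering concrete, here's a minimal C sketch (my own illustration, not anything from the original discussion) that looks at a word's individual bytes through a char pointer. Which byte value shows up at the lowest address tells you the machine's byte order:

    #include <stdio.h>

    int main(void)
    {
        unsigned int word = 0x01020304;             /* a 4-byte word */
        unsigned char *bytes = (unsigned char *)&word;

        /* On a byte-addressable machine, each byte of the word has its
           own address; the order they appear in reveals the endianness. */
        if (bytes[0] == 0x04)
            printf("little-endian: low-order byte at lowest address\n");
        else if (bytes[0] == 0x01)
            printf("big-endian: high-order byte at lowest address\n");

        return 0;
    }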



Regardless of whether other manufacturers produced byte-addressable machines of either byte order, the PDP-11 is of critical importance because it's the architecture that the first ubiquitous portable operating system (Unix) grew up on. The 360 is important for the same reason that Microsoft Word is important today. For good or ill, its manufacturer was the behemoth that drove the industry.



You are correct about the 11's representation of multi-word data. The idea of giving low-order bytes lower addresses than high-order bytes was only maintained within a single word. High-order words within floats and doubles were stored before low-order words. It seems likely that the hardware implementation of these data types was designed by a different hardware engineer, and nobody noticed the inconsistency until it was too late.
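For the curious, here's a sketch (again, just my illustration, using 0x01020304 as an arbitrary sample value) of how a 32-bit value lands in memory under the three orderings: pure little-endian, pure big-endian, and the 11's mixed layout, where the high-order 16-bit word comes first but the bytes within each word are little-endian:

    #include <stdio.h>

    /* The 32-bit value 0x01020304 laid out byte by byte under three
       orderings.  Addresses increase left to right. */
    int main(void)
    {
        unsigned char little[4] = { 0x04, 0x03, 0x02, 0x01 }; /* low byte first */
        unsigned char big[4]    = { 0x01, 0x02, 0x03, 0x04 }; /* high byte first */
        unsigned char pdp11[4]  = { 0x02, 0x01, 0x04, 0x03 }; /* high word first,
                                                                 little-endian bytes
                                                                 within each word */
        const unsigned char *layouts[] = { little, big, pdp11 };
        const char *names[] = { "little-endian", "big-endian   ", "PDP-11 mixed " };

        for (int i = 0; i < 3; i++) {
            printf("%s:", names[i]);
            for (int j = 0; j < 4; j++)
                printf(" %02x", layouts[i][j]);
            printf("\n");
        }
        return 0;
    }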



Taniwha wrote: "Mostly though I suspect the 'storing stuff as ascii' idea was something that couldn't catch on until disk storage got cheap enough."



Disk capacities have always been way ahead of memory capacities. While disk storage may have had a small effect, the real limiting factors were the things I mentioned in the article. Manufacturing core memory was so expensive that no machine had a lot of memory until the industry switched from core to semiconductor RAM. And CPU speeds were slow enough that when you considered storing numeric data as ASCII, the first thought that popped into your mind was, "How many instruction cycles will it take to convert the data to a format I can use for computing?"
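To give a feel for that cost, here's roughly what the conversion loop looks like in C (a sketch of the general technique, not code from the period): every digit costs a compare, a subtract, a multiply, and an add before you can do any arithmetic on the number at all.

    #include <stdio.h>

    /* Convert an ASCII decimal string to a binary integer, digit by
       digit.  Each character costs a compare, a subtract, a multiply,
       and an add -- real work on a machine running well under a MIPS. */
    long ascii_to_long(const char *s)
    {
        long value = 0;
        while (*s >= '0' && *s <= '9') {
            value = value * 10 + (*s - '0');  /* one multiply + add per digit */
            s++;
        }
        return value;
    }

    int main(void)
    {
        printf("%ld\n", ascii_to_long("31415"));  /* prints 31415 */
        return 0;
    }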



Taniwha wrote: "The other big change Unix brought in - simple unstructured text files - needed us to break away from that record based 'every thing is a card image' mindset."



The break from the record-based 'every thing is a card image' mindset really didn't apply in the DEC world -- at least not in the Small Systems Group, which was responsible for OS-8 on the PDP-8 and RT-11 on the PDP-11. Unlike the mainframe world, where the basic assumption was that input came from a card reader and output went to a line printer, our basic assumption in the Small Systems Group was that input came from and output went to an ASR-33 Teletype or similar device. Input from paper tape or the keyboard was assumed to consist of variable-length ASCII records delimited by \r\n.
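By way of illustration, here's a small C sketch of that model: reading variable-length records delimited by \r\n rather than fixed 80-column card images. (The 256-byte buffer is an arbitrary assumption of mine, and real Teletype-era code obviously didn't look like this.)

    #include <stdio.h>
    #include <string.h>

    /* Read variable-length ASCII records delimited by \r\n from
       standard input.  Unlike an 80-column card image, a record may
       be any length (up to the buffer size, in this sketch). */
    int main(void)
    {
        char record[256];

        while (fgets(record, sizeof record, stdin) != NULL) {
            record[strcspn(record, "\r\n")] = '\0';  /* strip the delimiter */
            printf("record: \"%s\" (%zu bytes)\n", record, strlen(record));
        }
        return 0;
    }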



Taniwha wrote: "apparently we got Arabic notation from Spanish monks working their way through the libraries the Moors left in Spain"



I'm not terribly familiar with the history of Spain, so I don't know what influence the Spanish monks had, but http://en.wikipedia.org/wiki/Arabic_numerals reports that our numbering system was originally called Hindu numerals, and that a 9th-century treatise by a Persian scientist, entitled "On the Calculation with Hindu Numerals," was translated into Latin in the 12th century. It also says that Fibonacci, an Italian, promoted the system in Europe after learning it in Algeria. So the system may have entered Europe through multiple avenues, and what was originally called "Hindu numerals" later came to be known as "Arabic numerals."



Taniwha wrote: "they picked them up and brought them into western languages but weren't smart enough (I blame them for our current endianess malaise) to turn the digits around when they were included in a right-to-left writing system."



This is a very interesting observation. Apparently the inconsistencies in modern hardware design are nothing new.



Taniwha wrote: "(and BTW remember even in our modern little endian computers we still tend to number the bits within bytes in a big endian way ...)"



Since neither the 360's nor the 11's instruction set included any instruction that identified bits by their bit address, the bit numbering was really a paper exercise. But our modern little-endian Intel-architecture computers have instructions like BSF and BSR (bit scan forward and reverse), which return the bit index of the lowest (BSF) or highest (BSR) bit that's turned on in the source operand. And the bit numbering used is little-endian, not big-endian: bit 0 is the least significant bit.
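If you want to see this for yourself, here's a short C sketch using the GCC/Clang builtins that typically compile down to BSF/BSR (or their TZCNT/LZCNT successors) on x86. The indices they report count up from the least significant bit:

    #include <stdio.h>

    /* __builtin_ctz counts trailing zeros (index of the lowest set bit,
       like BSF); __builtin_clz counts leading zeros, so 31 minus it gives
       the index of the highest set bit (like BSR).  Bit 0 is the least
       significant bit -- little-endian bit numbering. */
    int main(void)
    {
        unsigned int x = 0x48;                  /* bits 3 and 6 set */

        printf("lowest set bit:  %d\n", __builtin_ctz(x));       /* prints 3 */
        printf("highest set bit: %d\n", 31 - __builtin_clz(x));  /* prints 6 */
        return 0;
    }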