First published in Electronics World June 2003 pp47-51
In 1962, main memory was either delay line or magnetic core. Logic was a completely different technology, comprising circuits made by wiring together individual transistors and resistors. A limited communication channel between memory and processing, later called 'the Von Neumann Bottleneck', resulted inevitably from the use of two disparate technologies.
Following on from an article in the January issue of Electronics World by Nigel
Cook, Ivor Catt looks at what might have been, had anybody listened.
At R&D at Ferranti Ltd., West Gorton, Manchester, where we designed and built computers, there was an in-house one-day conference in late 1962 to discuss the significance for computer design of the newly arriving integrated circuit, which contained more than one component fabricated together on a single chip. Semiconductor technology looked set to take over the two previously separate roles: memory and processing.
Our brightest engineer, K. C. Johnson (Ken Johnson), drew a transistor bistable (= one memory bit, now called SRAM) on the board and pointed out that, with (a totem pole of) one further transistor per bit, a full column of bistables could be searched in one cycle for the presence of one or more 'ones', stored as shown in Fig. 1. (DRAMs had not been thought of then.) This had a traumatic effect on me. It meant that the historic reason for separating processing from memory (via the 'Von Neumann Bottleneck') had disappeared.
Fig. 1: SRAM upgraded to CAM.
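Johnson's point can be sketched in software. The sketch below, with illustrative names not taken from the article, models the extra transistor per cell as each cell pulling a shared match line when it stores a one, so the whole column reduces to a single wired-OR available in one cycle rather than a bit-by-bit scan.

```python
# Hedged sketch of the Fig. 1 idea: one extra transistor per bistable
# lets every cell in a column drive a shared match line, so the column
# can be searched for any stored 'one' in a single cycle.

def column_has_one(column_bits):
    """Wired-OR of every cell in the column.

    The hardware evaluates all cells simultaneously; any() here stands
    in for that parallel OR, not for a sequential scan.
    """
    return any(column_bits)

print(column_has_one([0, 0, 1, 0]))  # True: a 'one' is present
print(column_has_one([0, 0, 0, 0]))  # False: column is all zeros
```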
The first technically achievable stage in complexity would be for a single instruction to cause every word with (say) 1101 as its most significant four bits to exit memory one at a time. That was called a 'Content Addressable Memory', or CAM; in England it was called an 'Associative Memory'. That was the next thing to fabricate after we had mastered the fabrication of semiconductor RAM.
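The CAM behaviour described above can be sketched as follows. This is an illustrative model, not a hardware design: it assumes 8-bit words and uses a key-and-mask pair to stand for the single instruction that selects every word whose top four bits are 1101, with matches then exiting one at a time.

```python
# Illustrative content-addressable read-out: one search "instruction"
# (key + mask) selects all matching words; they then exit one by one.

def cam_read(memory, key, mask):
    """Yield each stored word whose masked bits equal the key."""
    for word in memory:
        if word & mask == key:
            yield word

memory = [0b11010011, 0b00101100, 0b11011111, 0b01101001]

# Select every word with 1101 as its most significant four bits.
for w in cam_read(memory, key=0b11010000, mask=0b11110000):
    print(f"{w:08b}")
# prints 11010011 then 11011111
```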
These were comparatively primitive times. Note that even some years later, in 1966, the semiconductor house where I worked, Sperry Semiconductor, earned its spurs by successfully fabricating a copy of the Honeywell-Transitron 16-bit memory: a chip containing all of 16 bits of RAM, stored as 16 transistor bistables. This compares with today's 16Mbit RAM, a million times more on a single chip.

The next step in sophistication, after causing the reading of words out of memory addressed by their content, would be to cause all words with a certain content to have a specified operation done on them in situ in memory, all in parallel.
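That further step can be sketched in the same illustrative style as before (the names and word width are assumptions, not from the article): a single instruction applies an operation, where the data sits, to every word matching the search key at once.

```python
# Sketch of in-situ associative processing: one instruction operates,
# in parallel, on every word whose masked bits equal the key.

def cam_update(memory, key, mask, op):
    """Apply op to each matching word in place; leave the rest untouched.

    The list comprehension stands in for the hardware doing every
    matching update simultaneously.
    """
    return [op(w) if w & mask == key else w for w in memory]

memory = [0b11010010, 0b00101100, 0b11011110]

# e.g. set the least significant bit of every word starting 1101
memory = cam_update(memory, key=0b11010000, mask=0b11110000,
                    op=lambda w: w | 1)
```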
Back to the story: after success (i.e. a good yield) with the 16-bit memory chip, the first content-addressable memory should have been built, in around 1968. Not only was it not built then; it has still not been built today, a quarter of a century later. The first step towards distributed processing, or array processing, has not been made. We remain stuck
in 1966. My article on the problem, 'Dinosaur among the Data?', was published in New Scientist, 6 March 1969, pp501-502, but only because the sub-editor, whom I visited, thought my article discussed the fashionable question…