Thoughts on Arrays

The array, as I understand it, is a data structure with the following characteristics: every element has the same type, and any element can be accessed in the same way as any other. Records are composite data structures with named elements, whereas arrays are composite data structures whose elements are named by an ordered type such as the non-negative integers; any type isomorphic to a range of integers will work, however.

All languages I've seen limit the size of arrays in some way, beyond mere concerns of memory exhaustion. In Ada, which is most suited to finite-state machines in any case, an array must be declared over a discrete index type, such as an enumeration or an integer range, and such types are necessarily finite. Even in Common Lisp, arrays are indexed by FIXNUMs, also finite, whereas LIST-LENGTH can return a BIGNUM given a sufficiently large list. It's clear to me that using arrays for problems of indefinite size is wishful thinking.

I recall reading an old article about the wastefulness of working memories, how an entire line would be read out only for most of it to be discarded; this waste has since been addressed by exposing the size of these lines and delivering the whole line as one unit, lowering the abstraction for reasons of efficiency. At the physical level, it seems obvious that logarithmic time complexity is the best one can achieve with problems of arbitrary size; if nothing else, selecting one of N cells requires decoding an address of log N bits. For a language like APL, the illusion of an array could perhaps remain, only because every element of an array is typically affected in a uniform way. If the illusion of a single working memory can be eliminated successfully, then the abstraction can be kept.

Constant-time operations make perfect sense in suitably-limited contexts, such as insertion into a linked list, and preventing arrays from growing with the problem's size maintains this property for arrays as well; a sketch of such a domain-bounded array follows at the end. The trie, in some of its varieties, shows this quality well: it's a linked data structure with a node, possibly holding a domain-bounded array, for each element of the input sequence, so the cost of a lookup grows with the length of the key rather than with the size of the collection; a sketch of this also follows.

Time complexity analysis is certainly muddied by assumptions about hardware which don't seem to hold in practice. One would probably be served better by imagining hardware for specific problems to see the data flow, rather than drowning in equations whose assumptions will almost certainly go ignored. Undoubtedly, for many problems, none of this could possibly matter in practice, but it's good to know.
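
As an afterthought, here is a rough sketch of a domain-bounded array, in C rather than Ada for lack of a better shared notation; the enumeration and its names are invented purely for illustration. The index type is fixed by the domain being modelled, so the array can never grow with the problem, and indexing it remains a constant-time affair.

    #include <stdio.h>

    /* The index type is an enumeration, so the array's extent is fixed
       by the domain being modelled, never by the problem's size. */
    enum day { MON, TUE, WED, THU, FRI, SAT, SUN, DAY_COUNT };

    int main(void)
    {
        unsigned visits[DAY_COUNT] = {0}; /* one counter per day */

        visits[WED] += 3;
        visits[SUN] += 1;

        for (int d = MON; d < DAY_COUNT; ++d)
            printf("day %d: %u\n", d, visits[d]);
        return 0;
    }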
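
And here is a rough sketch of a trie over keys drawn from the lowercase Latin alphabet, also in C and also purely illustrative: each node carries a domain-bounded array of twenty-six children, and the cost of an insertion or lookup is bounded by the key's length, never by how many keys have been stored.

    #include <stdio.h>
    #include <stdlib.h>

    /* Each node holds a domain-bounded array of children: one slot per
       letter of the alphabet, regardless of how many keys are stored. */
    enum { ALPHABET = 26 };

    struct node {
        struct node *child[ALPHABET];
        int terminal;            /* non-zero if a key ends at this node */
    };

    static struct node *node_new(void)
    {
        return calloc(1, sizeof(struct node)); /* a sketch; no error handling */
    }

    /* Insertion walks one node per character, so the work is bounded by
       the key's length alone. */
    static void trie_insert(struct node *root, const char *key)
    {
        for (; *key != '\0'; ++key) {
            int i = *key - 'a';
            if (root->child[i] == NULL)
                root->child[i] = node_new();
            root = root->child[i];
        }
        root->terminal = 1;
    }

    static int trie_member(const struct node *root, const char *key)
    {
        for (; *key != '\0'; ++key) {
            root = root->child[*key - 'a'];
            if (root == NULL)
                return 0;
        }
        return root->terminal;
    }

    int main(void)
    {
        struct node *root = node_new();
        trie_insert(root, "array");
        trie_insert(root, "trie");
        printf("%d %d\n", trie_member(root, "trie"), trie_member(root, "list"));
        return 0;
    }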