
APL and me

I've been involved with the APL programming language since the late 1960s. APL was quite advanced for its time and influenced many later languages such as Matlab. This is the story of how APL affected my development as a programmer and continues to do so to this day.

Kenneth Iverson was working for IBM in the early 1960s when he invented APL, not as an actual programming language, but as a notation for describing the semantics of the internal operations of computers, such as the IBM System/360. His 1962 book, A Programming Language, described the notation. Several others worked up some implementations of parts of the notation, and eventually it became an official product. The Wikipedia page on APL is a good summary of the history.


A chance encounter with a strange book

From 1966 through 1970 I was a student at New Mexico Tech majoring in computer science. I worked for the Tech Computer Center all those years, starting as a keypunch “girl” (my Y chromosome notwithstanding) and machine operator and moving on to systems programming.

Right next door to the Computer Center, in the old Workman Center building, was the Research Library. Unlike the main college library, this room was open at all hours, and stocked with things like patent gazettes and back issues of Scientific American. I kept completely random hours as a student, so I often wandered into this room to find something interesting to read.

By chance I picked up a copy of Iverson's A Programming Language. It was pretty challenging and stimulating, especially compared to my background in FORTRAN and assembly. One of the major ideas of this work was the specification of operations on entire vectors and arrays. When you say “A ← B + C” in APL, B and C might be just scalar numbers, or they might both be 9 x 2 x 5000 three-dimensional arrays; the notation is the same.
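
That one-expression-fits-all-shapes style survives today in array libraries such as NumPy, so a rough modern analogue of the idea, sketched in Python rather than APL, looks like this:

    import numpy as np

    # The same expression works whether the operands are scalars...
    B, C = 2.5, 4.0
    A = B + C                    # A is the scalar 6.5

    # ...or whole multidimensional arrays of matching shape.
    B = np.zeros((9, 2, 5000))   # a 9 x 2 x 5000 array
    C = np.ones((9, 2, 5000))
    A = B + C                    # elementwise sum, written exactly the same way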

The Tech Computer Center had developed a number of significant projects. Friends of mine in later days implemented COBOL, Pascal, and SNOBOL. I managed to convince the director, Dr. Tom Nartker, to let me implement APL. Our processor, a 360/44, was sort of an orphan; it did not run the same software as the rest of the System/360 line. IBM had put together this processor as a cut-down scientific box to repel competition from Control Data, but the instructions they removed meant it had to run a separate cut-down operating system called PS (Programming System). Not long after the machine was installed, IBM dropped support for it, so the few customers who got stuck with those boxes, including a number of universities, continued to maintain and develop its software base. So I was hoping to build an APL implementation for those of us in this orphan community.

My advisor, John Slimick, got me started in the specialty of compilers and other software tools. I was quite interested in interpreted languages and storage allocators. My first project was an implementation of a little experimental language from Bell Labs called L6 (Bell Labs Low-Level Linked List Language), which is so obscure even Wikipedia has never heard of it.

Sadly, my APL implementation was never completed. I finished my bachelor's in computer science in December 1970, in the middle of a recession, and continued working on APL until the following May, when I went to work for Hewlett-Packard in the original HP/3000 software lab. At one point the implementation could add two numbers, but I had stalled on generalizing it to all operators on arrays of every shape.


The HP Labs implementation: Streaker and APLGOL

Around 1974 I moved from the HP/3000 lab at Hewlett-Packard to the ill-fated Streaker project. This group's charter was to build a single-user workstation combining a processor, hard disk, and display in a single package. The target market was engineers who could do their work without relying on an in-house timesharing system. If that group had completed their charter, I think it would have been a successful product.

Unfortunately Streaker was merged with the Amigo group, whose charter was a guarantee of failure: a computer line that would be all things to all people, from tiny little process control stations to large timesharing systems. The Wikipedia page for the HP 300 describes the eventual outcome and failure of this product.

My group was to implement APL for the Streaker. Meanwhile, two powerful developers had just joined HP Laboratories, and were working on an advanced implementation of APL based on "An APL Machine", a 1970 Stanford dissertation by Philip S. Abrams. Dr. John Walters was in his mid-forties then, and brought his phenomenal young protégé Rob Kelly, who was around 20. They came from IBM where they had worked on an alternate processor microcode for the 360/168 that allowed it to execute APL with extremely high performance.

My job was to track their experimental implementation and determine whether it was suitable for use in the Streaker APL implementation. My last act at HP was to insist on the use of their technique, because a “naive interpreter” would have been unusably slow on that architecture.

Working around the brilliant John and Rob was pretty inspiring. Among their innovations was a language called APLGOL. They were both followers of structured programming and were quite appalled by APL's branch syntax. APL did not actually have conditional branch constructs like 'if...then'! An APL function is a numbered set of lines, so “→ 5” is an unconditional branch to line 5. To perform a conditional branch, you had to write an expression to the right of the “→” that evaluates to a line number in one case, or to an empty vector in the other; in APL, a branch to an empty vector means “fall through to the next line”. So APLGOL combined Algol branch constructs like “if” and “while” with APL's expression syntax.
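
To make the branching rule concrete, here is a small Python sketch of my own (not anything from an actual APL implementation) of what an interpreter has to decide after evaluating the expression to the right of the “→”:

    def next_line(current_line, branch_value):
        """Decide where control goes after a '→ expression' statement.
        branch_value is the evaluated expression: either a number or a
        (possibly empty) vector of numbers."""
        if isinstance(branch_value, (int, float)):
            branch_value = [branch_value]
        if len(branch_value) == 0:       # empty vector: fall through
            return current_line + 1
        return int(branch_value[0])      # otherwise jump to that line number

    print(next_line(3, 5))    # unconditional branch: 5
    print(next_line(3, []))   # branch to an empty vector: falls through to 4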

Although I left the company in 1975 before the HP 300 implementation was very far along, I was sporadically in touch with the developers who completed it. I heard that it had excellent performance for that hardware, and I believe it supported APLGOL.

A lot of the innovations in John and Rob's testbed are routine nowadays, such as just-in-time compilation. When an APL function started execution, its internal representation was a sequence of empty placeholders, one for each statement of the function. The first time each line was executed, it was compiled and the resulting code inserted into the function's framework. Lines that were never executed were never compiled, so their compilation cost was never paid.
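
A minimal Python sketch of that framework, with invented names and none of the real testbed's sophistication, might look like this:

    class AplFunction:
        """A function is a numbered list of source lines, each with a
        placeholder that is filled with compiled code on first execution."""
        def __init__(self, source_lines):
            self.source = source_lines
            self.compiled = [None] * len(source_lines)   # one empty slot per line

        def run_line(self, n, env):
            if self.compiled[n] is None:                 # first time this line runs:
                self.compiled[n] = compile_line(self.source[n])
            return self.compiled[n](env)                 # lines never reached stay uncompiled

    def compile_line(text):
        # Stand-in for the real compiler: build a callable for this one line.
        return lambda env: eval(text, {}, env)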

Consider a statement like “A ← B + C - D”. An unoptimized or “naive” interpreter would break this into two steps:

  1. Allocate a temporary array T1 and compute C - D there.
  2. Allocate space for A and compute B + T1 there.

However, storage allocator calls can be expensive, so the obvious optimization is to write one loop that computes A[i] ← B[i] + C[i] - D[i].
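
In plain Python terms, the contrast between the two strategies is roughly this (my sketch, not the testbed's actual code):

    def naive(B, C, D):
        # Step 1: allocate a temporary T1 and compute C - D into it.
        T1 = [c - d for c, d in zip(C, D)]
        # Step 2: allocate A and compute B + T1 into it.
        return [b + t for b, t in zip(B, T1)]

    def fused(B, C, D):
        # One allocation and one loop: A[i] = B[i] + C[i] - D[i]
        A = [0] * len(B)
        for i in range(len(B)):
            A[i] = B[i] + C[i] - D[i]
        return A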

So just-in-time compilation can look at what shapes B, C, and D have when the line is first compiled, and generate pretty efficient code. For example, if all three operands are 50-element vectors, the compiled code allocates a 50-element result and then runs a loop 50 times to fill it in.

But what happens when the shapes or types of the operands change in a future execution of that line of the function? John and Rob addressed this in two ways.

Considerable experimentation went into this process. If, for example, the three operands are 51-element vectors on the second execution, the new preamble would check that all three operands are still vectors, but use their current length as the loop count instead of the constant 50.

Suppose that on a subsequent execution B is a scalar and C and D are multidimensional arrays. The recompiled code would be slower and more general. This generality tends to penalize only programs in which the value of some variable undergoes major changes of shape or type, and rational programmers, who don't tend to do such things, would not be affected much.
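
A rough Python sketch of that scheme (the names and structure are my own invention): the specialized code carries a preamble that re-checks its assumptions, and an operand whose shape or type no longer fits triggers recompilation into more general code.

    class NeedsRecompile(Exception):
        """Raised when a line must be recompiled with more general code."""

    def is_vector(x):
        return isinstance(x, list)

    def specialized(B, C, D):
        """Code compiled for 'A ← B + C - D' under the assumption that all
        three operands are vectors of equal length."""
        # Preamble: verify the assumptions on every execution.
        if not (is_vector(B) and is_vector(C) and is_vector(D)
                and len(B) == len(C) == len(D)):
            raise NeedsRecompile()           # e.g. B became a scalar
        n = len(B)                           # current length, not a baked-in constant
        return [B[i] + C[i] - D[i] for i in range(n)]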

John and Rob and I mined a lot of gold from Abrams's dissertation. One key concept was the idea of an iterator. An iterator is a description of how to visit the elements of an array. For example, assuming origin-0 addressing, an iterator for a 20-element vector visits elements 0, 1, 2, ..., 19 in order.

Some APL operations can be performed just by modifying the iterator. For example, there is a “reverse” operator that operates on a vector and produces a new vector with the elements reversed end to end. If the representation of that vector consists of a block of storage and an accompanying iterator, we don't need to physically rearrange the elements: we just modify the iterator to visit them in the order 19, 18, ..., 1, 0.
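
A toy version of that idea in Python (my own representation, not Abrams's): an iterator records a starting index, a stride, and a count, so reversing a vector only changes those three numbers.

    class Iterator:
        """A description of how to visit the elements of a stored vector."""
        def __init__(self, start, stride, count):
            self.start, self.stride, self.count = start, stride, count

        def indices(self):
            i = self.start
            for _ in range(self.count):
                yield i
                i += self.stride

    storage = list(range(100, 120))      # a 20-element vector
    forward = Iterator(0, 1, 20)         # visits elements 0, 1, ..., 19
    reverse = Iterator(19, -1, 20)       # same storage, visited 19, 18, ..., 0

    print([storage[i] for i in reverse.indices()])   # reversed without moving any data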

In his dissertation, Abrams developed two new concepts called “dragalong” and “beating.” The idea of dragalong was that instead of performing each operation of an expression in turn, the interpreter analyzes the whole expression; beating then means rewriting that expression so that it has the same semantics but is more efficient.

For example, if the expression specifies the first element of the sum of two arrays, that can be rewritten as the sum of the first elements of the two arrays. A naive interpreter might form a new array, sum the two arrays into it, and then discard all but the first element.
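
Here is the flavor of such a rewrite in Python, using a toy expression-tree representation of my own rather than Abrams's notation:

    # ('add', x, y) sums two arrays; ('index', e, i) takes element i of e's result.
    def beat(expr):
        """Rewrite 'one element of a sum' as 'the sum of one element of each'."""
        if expr[0] == 'index' and expr[1][0] == 'add':
            _, (_, left, right), i = expr
            return ('add', ('index', left, i), ('index', right, i))
        return expr

    before = ('index', ('add', 'B', 'C'), 0)   # (B + C)[0]
    after = beat(before)                       # becomes B[0] + C[0]
    print(after)   # ('add', ('index', 'B', 0), ('index', 'C', 0))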

I located Dr. Abrams in 2012 and wrote to him about how his work had influenced this project. He replied that he had made a couple of trips to HP Labs to consult with John and Rob at the time. He thought it was a shame that all this work never made it into the mainstream.


Lessons for today

By 2016 I will have been programming for a living for fifty years. All these years I have tried to stave off obsolescence by tracking the progress of programming languages. By now I've internalized the lessons of structured programming and object-oriented programming. Thanks to my continued association with the bright and creative computer science students at New Mexico Tech, and especially my coworker Daniel Lyons, I've become aware that functional programming is the next major step in my development.

For someone like me, who started with FORTRAN, these transitions have not been easy. It took me a couple of years to understand objects. I've looked at Haskell a bit, but I will probably have to rewire my brain completely to get it.

Recently a project I did for my current employer, the National Radio Astronomy Observatory, led me to an important insight about functional programming that relates to my long history with APL.

Although almost all my work since 1995 has been in Python, I learned XSLT because it is necessary to customize the appearance of documents written in DocBook.

XSLT is a pure functional language: each component of a program computes a single result, with no side effects. The goal of an XSLT program is to build a tree, which may represent an XML document. For example, you might write an XSLT function that takes an author node and a title node, creates a new book node, attaches the author and title nodes as its children, and returns the new book node.
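
Since my own working language is Python, here is that example restated as a pure Python function over XML elements, using the standard ElementTree module rather than XSLT; it builds a new node from its inputs and returns it without modifying anything else:

    import copy
    import xml.etree.ElementTree as ET

    def make_book(author, title):
        """Pure function: return a new <book> element whose children are
        copies of the given <author> and <title> elements."""
        book = ET.Element("book")
        book.append(copy.deepcopy(author))   # copy, so the inputs are untouched
        book.append(copy.deepcopy(title))
        return book

    author = ET.fromstring("<author>Philip S. Abrams</author>")
    title = ET.fromstring("<title>An APL Machine</title>")
    print(ET.tostring(make_book(author, title)).decode())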

My insight relates to the idea of materialization: when a temporary computational result is assigned a storage location and computed and stored there. Much of Abrams's work was designed to avoid materialization of temporary results through dragalong and beating. From his dissertation:

In effect, APL programs can be considered as descriptions of their results rather than as recipes for obtaining them.

It occurred to me that, as an XSLT program executes, it doesn't matter to me as a programmer whether it is building a physical materialization of the tree step by step, or instead building a representation of the operations that would be required to build the tree.

At some point the result of the XSLT program must be materialized and serialized as XML. I don't actually know whether XSLT materializes the tree piece by piece, or builds the XML from a complete description of how to produce it, and I don't really care, as long as the process is correct and doesn't take overly long.

So, to summarize, this has opened a door into functional programming for me. The act of programming becomes the concatenation of pieces of description of the result to form a complete description. Its materialization as a single structure in memory or a single file can be delayed until the description is complete. The other benefits of functional programming come into play here as well, such as the automatic parallelization made possible by stateless code with no side effects.
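
Python's generators give a small taste of the same separation between describing a result and materializing it; the following sketch concatenates two descriptions and only builds the final text at the last step:

    import itertools

    # Descriptions of two pieces of the result; nothing is computed or stored yet.
    header = (line.upper() for line in ["title", "author"])
    body = (str(n) for n in range(3))

    # Concatenating descriptions just yields a larger description.
    document = itertools.chain(header, body)

    # Materialize only at the very end, when the whole result is serialized.
    print("\n".join(document))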

It may be a while before I can write proper Haskell. But programming, like all crafts, is a cumulative process, and I'm thankful for the APL background that is helping me get there.


Next: Industrial period I: 1971-1983
See also: Shipman's memoirs
Previous: Life in Hobbs, NM, 1956-1966
Site map
John W. Shipman, john@nmt.edu
Last updated: 2014/12/26 22:46:16
URL: http://www.nmt.edu/~shipman/write/memoirs/apl.html