Happy 61st Birthday to the Intergalactic Computer Network — Now Let's Finish Building It

April 23rd, 2024

On April 23, 1963, J.C.R. Licklider wrote a memo addressed to the “Members and Affiliates of the Intergalactic Computer Network”, widely considered to be the genesis of ARPANET and, by extension, the modern Internet. Except the memo describes a lot more than just “internetworking”. Licklider lays out a remarkably clear vision of an online, collaborative, malleable substrate for computing:

I want to retrieve a set of experimental data that is on a tape called Listening Test. The data are called “experiment 3.” These data are basically percentages for various signal-to-noise ratios. There are many such empirical functions. The experiment had a matrix design, with several listeners, several modes of presentation, several signal frequencies, and several durations. I want, first, to fit some “theoretical” curves to the measured data. I want to do this in a preliminary way to find out what basic function I want to choose for the theoretical relation between percentage and signal-to-noise ratio. On another tape, called “Curve Fitting,” I have some routines that fit straight lines, power functions, and cumulative normal curves. But, I want to try some others, also. Let me try, at the beginning, the functions for which I have programs. The trouble is, I do not have a good grid-plotting program. I want to borrow one. Simple, rectangular coordinates will do, but I would like to specify how many divisions of each scale there should be and what the labels should be. I want to put that information in through my typewriter. Is there a suitable grid-plotting program anywhere in the system? Using prevailing network doctrine, I interrogate first the local facility, and then other centers. Let us suppose that I am working at SDC, and that I find a program that looks suitable on a disc file in Berkeley. My programs were written in JOVIAL.

The programs I have located through the system were written in FORTRAN. I would like to bring them in as relocatable binary programs and, using them as subroutines, form my curve-fitting programs, either at “bring-in time” or at “run-time.”

Supposing that I am able to accomplish the steps just described, let us proceed. I find that straight lines, cubics, quintics, etc., do not provide good fits to the data. The best fits look bad when I view them on the oscilloscope.

The fits of the measured data to the cumulative normal curve are not prohibitively bad. I am more interested in finding a basic function that I can control appropriately with a few parameters than I am in making contact with any particular theory about the detection process, so I want to find out merely whether anyone in the system has any curve-fitting programs that will accept functions supplied by the user or that happen to have built-in functions roughly like the cumulative normal curve, but asymmetrical. Let us suppose that I interrogate the various files, or perhaps interrogate a master-integrated, network file, and find out that no such programs exist. I decide, therefore, to go along with the normal curve.

At this point, I have to do some programming. I want to hold on to my data, to the programs for normal curve fitting, and to the display programs that I borrowed. What I want to do is to fit cumulative normal curves to my various sub-sets of data, constraining the mean and the variance to change slowly as I proceed along any of the ordinal or ratio-scale dimensions of my experiment, and permitting slightly different sets of parameters for the various subjects. So, what I want to do next is to create a kind of master program to set parameter values for the curve-fitting routines, and to display both the graphical fits and the numerical measures of goodness of fit as, with light-pen and graphics of parameters versus independent variables on the oscilloscope screen, I set up and try out various (to me) reasonable configurations. Let us say that I try to program repeatedly on my actual data, with the subordinate programs already mentioned, until I get the thing to work.

Let us suppose that I finally do succeed, that I get some reasonable results, photograph the graphs showing both the empirical data and the “theoretical” curves, and retain for future use the new programs. I want to make a system of the whole set of programs and store it away under the name “Constrained-parameter Normal-curve-fitting System.”

...

In the foregoing, I must have exercised several network features. I engaged in information retrieval through some kind of system that looked for programs to meet certain requirements I had in mind. Presumably, this was a system based upon descriptors, or reasonable facsimiles thereof, and not, in the near future, upon computer appreciation of natural language. However, it would be pleasant to use some of the capabilities of avant-garde linguistics. In using the borrowed programs, I effected some linkages between my programs and the borrowed ones. Hopefully, I did this without much effort – hopefully, the linkages were set up – or the basis for making them was set up – when the programs were brought into the part of the system that I was using. I did not borrow any data, but that was only because I was working on experimental data of my own. If I had been trying to test some kind of a theory, I would have wanted to borrow data as well as programs.

When the computer operated the programs for me, I suppose that the activity took place in the computer at SDC, which is where we have been assuming I was. However, I would just as soon leave that on the level of inference. With a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data. I have no great objection to making that decision, for a while at any rate, but, in principle, it seems better for the computer, or the network, somehow, to do that.
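To make the workflow concrete in today’s terms: the statistical core of the example, fitting a cumulative-normal (“probit”) curve to percent-correct scores at several signal-to-noise ratios, is a few lines of Python now. A minimal sketch, with invented data and off-the-shelf NumPy/SciPy/matplotlib standing in for the routines on the “Curve Fitting” tape:

```python
# A sketch of the statistical core of Licklider's example: fit a cumulative
# normal ("probit") curve to percent-correct scores at several signal-to-noise
# ratios. The data points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

snr_db = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])        # signal-to-noise ratios
pct_correct = np.array([0.08, 0.21, 0.45, 0.70, 0.88, 0.97])  # measured proportions

def cumulative_normal(x, mean, sd):
    """The 'theoretical' relation between proportion correct and SNR."""
    return norm.cdf(x, loc=mean, scale=sd)

# Fit the two parameters Licklider wants to control: the mean and the spread.
(mean_hat, sd_hat), _ = curve_fit(cumulative_normal, snr_db, pct_correct, p0=[-5.0, 3.0])
print(f"fitted mean = {mean_hat:.2f} dB, fitted sd = {sd_hat:.2f} dB")

# The "grid-plotting program" he wants to borrow is one import away today.
import matplotlib.pyplot as plt
xs = np.linspace(-14.0, 5.0, 200)
plt.plot(snr_db, pct_correct, "o", label="experiment 3")
plt.plot(xs, cumulative_normal(xs, mean_hat, sd_hat), label="fitted cumulative normal")
plt.xlabel("signal-to-noise ratio (dB)")
plt.ylabel("proportion correct")
plt.legend()
plt.grid(True)
plt.show()
```

Those few lines were never the hard part, though. The hard part is everything Licklider asks for around them: discovering the routines, linking them across languages and machines, and publishing the result back into the system under a name others can find.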

I find this a deeply compelling vision, even 61 years later! Online access to data and applications, much like the Web/cloud model we have today, but where the applications (server-side as well as client-side) can be dynamically modified to take advantage of new capabilities, no matter what language they were originally built in. One where computation moves between a remote service and your local machine, depending on which is more suitable. And one where modifications can be easily published for others to discover and reuse.
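In principle, the placement question at the end of the passage (should the data travel to the program, or the program to the data?) is just a cost decision, even if no system today makes it for you. Here is a toy sketch, with an invented and deliberately crude cost model based only on transfer size:

```python
# A toy sketch of the placement decision Licklider wants the network to make
# for him: bring the program to the data, or send the data to the program?
# The cost model is invented and deliberately crude (byte counts only).

def choose_placement(data_bytes: int, program_bytes: int,
                     local_can_run: bool, remote_can_run: bool) -> str:
    if local_can_run and not remote_can_run:
        return "fetch the program, run locally"
    if remote_can_run and not local_can_run:
        return "send the data, run remotely"
    # Both sides could run it: move whichever is cheaper to transfer.
    if program_bytes < data_bytes:
        return "fetch the program, run locally"
    return "send the data, run remotely"

print(choose_placement(data_bytes=2_000_000, program_bytes=40_000,
                       local_can_run=True, remote_can_run=True))
# -> fetch the program, run locally
```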

Pieces of it certainly exist today, but not in a deep, systematic way. In fact, while all the core technology was around in the 70s, to the best of my knowledge no one has ever built a demo of this workflow. (I’d love to hear if anyone knows otherwise.)
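Even the program-discovery step, interrogating “first the local facility, and then other centers,” has no general equivalent today; the closest things are language-specific package indexes and code search. A hypothetical sketch of what a descriptor-based query might look like (the registry, facilities, and descriptors are all invented for illustration):

```python
# A hypothetical sketch of Licklider's descriptor-based interrogation:
# "Is there a suitable grid-plotting program anywhere in the system?"
# The registry, facilities, and descriptors below are invented; no shared,
# systematic service like this exists, which is rather the point.
from dataclasses import dataclass

@dataclass
class ProgramRecord:
    name: str
    facility: str
    language: str
    descriptors: frozenset

REGISTRY = [
    ProgramRecord("gridplot", "SDC", "JOVIAL", frozenset({"plotting", "polar-coordinates"})),
    ProgramRecord("plotxy", "Berkeley", "FORTRAN", frozenset({"plotting", "rectangular-coordinates", "labels"})),
    ProgramRecord("normfit", "Lincoln Lab", "FORTRAN", frozenset({"curve-fitting", "cumulative-normal"})),
]

def interrogate(wanted: set, local_facility: str):
    """Interrogate first the local facility, and then the other centers."""
    matches = [p for p in REGISTRY if wanted <= p.descriptors]
    return sorted(matches, key=lambda p: p.facility != local_facility)

for program in interrogate({"plotting", "rectangular-coordinates"}, local_facility="SDC"):
    print(f"{program.name} at {program.facility} ({program.language})")
# -> plotxy at Berkeley (FORTRAN)
```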

HyperMap is a new spec for REST APIs that is self-descriptive and supports client-side code execution, two key capabilities that I think make it one of the foundational rails for the Intergalactic Computer Network. In the next few weeks I’ll release a HyperMap-powered demo of Licklider’s workflow.
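I won’t try to compress the spec into a blog post, but to give a flavor of those two capabilities, here is a deliberately simplified sketch of a client reading a self-descriptive document and choosing to run, locally, a piece of code the document advertises. The document format below is invented for illustration and is not HyperMap’s actual format.

```python
# Purely illustrative: a client consuming a hypothetical self-descriptive
# document that advertises both its data and a piece of code the client may
# choose to execute locally. This is NOT HyperMap's actual format; it only
# shows the shape of "self-description + client-side code execution".
import json

document = json.loads("""
{
  "title": "Listening Test / experiment 3",
  "data": {"snr_db": [-12, -9, -6, -3, 0, 3],
           "pct_correct": [0.08, 0.21, 0.45, 0.70, 0.88, 0.97]},
  "code": {"describes": "return the SNR closest to 75% correct",
           "language": "python",
           "source": "def threshold(snr, pct):\\n    return min(zip(pct, snr), key=lambda p: abs(p[0] - 0.75))[1]"}
}
""")

# A self-descriptive document tells the client what it holds...
print(document["title"], "-", document["code"]["describes"])

# ...and can carry code the client decides to run locally rather than remotely.
namespace = {}
exec(document["code"]["source"], namespace)   # a real client would sandbox this
print(namespace["threshold"](document["data"]["snr_db"], document["data"]["pct_correct"]))
# -> -3
```

If you’d like to stay up to date, follow along on Twitter or in the Discord.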