Java was still brand spanking new in ’96, and RMI wasn’t inflicted on us until ’97. But half the stuff we use was invented in the ’70s and explored in the ’80s anyway.
The classes in college that I have used the most by far are the series that covered logic and set theory, and the one on distributed computing. The latter does take some of the fun out of people rediscovering things though.
IIRC, Berkeley had an OS with swarm computing and process migration around ’87. It wasn’t until 20 years later that the relative speeds of disk and network showed a similar imbalance again. And of course the mainframe guys must constantly think the rest of us are idiots.
The idea of creating a program and sending it across an abstraction boundary for execution, to avoid the cost of repeatedly crossing that boundary for each step of the program, is reasonably trivial. It was an approach I used in my first job for optimizing access to data in serialized object heaps: rather than deserialize the whole session object heap for each web request (costly), I'd navigate the object graph in the binary blob directly, which required following offsets, array headers, etc. The idea of a 'data path' program for fetching data occurred almost immediately.
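A minimal sketch of what such a 'data path' program could look like. The heap layout here (4-byte slots, absolute offsets, the `DataPath`/`fetch` names) is my own toy assumption for illustration, not the original serialization format:

```java
import java.nio.ByteBuffer;

public class DataPath {
    // Follow a path of slot indices through the blob without
    // materializing any objects. Each object occupies consecutive
    // 4-byte slots; a slot holds either the byte offset of a child
    // object or, at the leaf, the value itself.
    static int fetch(ByteBuffer heap, int rootOffset, int[] path) {
        int offset = rootOffset;
        for (int i = 0; i < path.length - 1; i++) {
            // Intermediate steps hop from object to object via offsets.
            offset = heap.getInt(offset + path[i] * 4);
        }
        // The last step reads the leaf value in place.
        return heap.getInt(offset + path[path.length - 1] * 4);
    }

    public static void main(String[] args) {
        // Build a tiny two-object heap by hand:
        //   object A at offset 0:  slot 0 -> offset of B (12), slot 1 -> 7
        //   object B at offset 12: slot 0 -> 42
        ByteBuffer heap = ByteBuffer.allocate(16)
                .putInt(0, 12)   // A.slot0 = pointer to B
                .putInt(4, 7)    // A.slot1 = 7
                .putInt(12, 42); // B.slot0 = 42

        System.out.println(fetch(heap, 0, new int[]{1}));    // A.slot1 -> 7
        System.out.println(fetch(heap, 0, new int[]{0, 0})); // A.slot0.slot0 -> 42
    }
}
```

The point is that the `int[]` path is data, so it can be built once and shipped across the boundary, and each fetch touches only the bytes along the path rather than the whole heap.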
In the end a different approach turned out to be more usable: very short-lived objects that were little more than wrappers around offsets into the heap, combined with type info. With generational GC it was plenty fast too; the abstraction stack was thin, and the main cost avoided was materializing a whole heap.
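The cursor-style wrapper might look something like this. Again the slot layout and the names (`Cursor`, `intAt`, `refAt`) are illustrative assumptions on my part, not the original code:

```java
import java.nio.ByteBuffer;

public class Cursor {
    private final ByteBuffer heap; // the shared serialized blob
    private final int offset;      // this "object's" position in it

    Cursor(ByteBuffer heap, int offset) {
        this.heap = heap;
        this.offset = offset;
    }

    // Read a primitive field straight out of the blob.
    int intAt(int slot) {
        return heap.getInt(offset + slot * 4);
    }

    // Following a reference allocates only another tiny wrapper,
    // never a deserialized subtree; generational GC reclaims these
    // short-lived cursors cheaply.
    Cursor refAt(int slot) {
        return new Cursor(heap, heap.getInt(offset + slot * 4));
    }

    public static void main(String[] args) {
        // Same toy heap as before: A at 0 points to B at 12.
        ByteBuffer heap = ByteBuffer.allocate(16)
                .putInt(0, 12).putInt(4, 7).putInt(12, 42);
        Cursor session = new Cursor(heap, 0);
        System.out.println(session.intAt(1));          // 7
        System.out.println(session.refAt(0).intAt(0)); // 42
    }
}
```

Compared with the path-program approach, the cursor reads naturally at call sites, and the only per-access cost is one small allocation that dies young.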