“There are three things that we need to focus on as a growing organization: scalability, scalability, and scalability”. If you’re like me, you’ve heard that line at least once. The pace of modern social and technological change is staggering. Most of us struggle just to maintain balance and not be swept away by the tsunami. This is clearly visible in the nurture and guidance of a commercial startup, a new initiative in government, or really any group effort at expansion and/or adaptation. Perhaps the biggest challenge is to grow smoothly without exploding or imploding — put simply, to scale.
The trick is that we’re not honey bees. Brute-force scaling is a very lossy process. A forced increase in scale can directly precipitate a reduction in scope: perspective is lost, diversity is lost, opportunity is lost, horizons shrink, silos are erected. Not good. Our technology is also vastly more complex and complicated than a bee colony’s. A small error that might safely be ‘rounded off’ in a hive could bring a human-scale system crashing down. Fault tolerance and error correction must be ubiquitous and automatic.
everything fails all the time
– Werner Vogels
There are two main modes of strategic thinking: deductive and inductive. Deductive thought employs logic and rationality to understand, or even predict, trends and events. Inductive thought observes the emergence of the complex from the simple to do the same. Both can draw on a mix of mathematical/statistical/historical/computational analysis, although the specific mix is often quite different for the two modes. Both have the aim of making better, more informed decisions. Somewhere in between the two, like the overlapping area of a Venn diagram, is the concept of syntonicity. This is the ability to at least temporarily move one’s mindset to another place/time/perspective. It requires imagination, which is definitely, though perhaps not exclusively, a human capability.
My own area is mostly computational analysis. The vast seas of data available today long ago swamped human capabilities and now require the mechanical tools of automation. The timeline is roughly: counting to writing to gears and rotors (e.g. the Antikythera mechanism, the Pascal calculator) to the Jacquard machine (punch cards) to digital computers (vacuum tubes to transistors to Large Scale Integration to distributed computation) to quantum computers and beyond. Along the way, formal concepts were developed such as algorithms, objects, feedback (e.g. cybernetics), and artificial intelligence. Many tools and languages have been developed over recent decades (then mostly outgrown), with ages and fashions passing by like scenes from an old Time Machine movie. Those of us who have enough years, enough curiosity, and enough patience have remained engaged in the movie over the long haul. Simultaneously futurists and dinosaurs, I guess. The red plastic case of my childhood trusty pocket radio proudly boasted of its “3 Transistor” innards. Like most others, I now carry a smartphone that has a billion times that many. That’s modern life — we must scale by orders of magnitude between cradle and grave.
How can the human mind grapple with this much scaling? We evolved to find food and avoid predators in grasslands, not to hop among sub-atomic particles, swim through protoplasm, or wander intergalactic space-time. How can we explore and comprehend reality all the way from quantum mechanics to femtosecond biomolecular reactions to bacteriophages to cellular biology to physiology to populations to geology to astronomy to cosmology?
Whither scope?
Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen.
– Richard P. Feynman
Most programming languages are quite horrible at scaling. Scaling down is nigh-on impossible because languages have evolved to be ever bigger. Even those that claim to be lean and elegant usually require vast libraries and modules to do anything useful. Scaling up is normally accomplished by bolting on new data types, functions, and capabilities: objects, functional programming, even exotic techniques such as machine learning. Each addition makes the language ever bigger and often more opaque. Such standardization and doctrine come at a heavy price. Much architecture and machinery is present and required ‘right out of the box’. Methodology is either implicit or actually enforced through templates and frameworks. This provides tremendous leverage when working at the human scale of activity. It squelches innovation, though, when one moves too far down or up in scale.
One of the computer languages I learned and used early on was Forth. Although I have discarded it several times over the last nearly five (!) decades, I keep coming back to it. It is a very natural, almost biological language. I have also found it to be a very syntonic one. A crude definition of syntonicity is ‘fitting in’ or ‘when in Rome, do as the Romans do’. This is the key to scaling the applicability of human thought.
At its heart, Forth is incredibly tiny. It’s essentially a method for calling subroutines simply by naming them. It has a simple model for storage management and sharing: the stack. A stack is one of the oldest computational structures, perhaps going back to ancient times (the abacus, for example). However, it is brilliantly elegant. It combines elemental simplicity with tremendous functionality, a key to high scalability. The entire interpreter and compiler can be implemented in several hundred bytes. Perhaps most importantly, the language can be learned, remembered, and used without a bookshelf full of manuals and references. Scaling up is unlimited and quite self-consistent; one basically bootstraps the Forth system like a growing embryo, not like a Rube Goldberg machine. Using this process, Forth can actually become fairly large and capable; see gForth, for example. Note that scaling Forth and the underlying scale of the environment are orthogonal. The real power and utility of Forth comes from its simplicity. For example, with today’s many-core CPUs, it is possible to implement many separate, even heterogeneous, Forth engines in one computer, fully independent yet still communicating. Try that with a behemoth language or even a hefty virtual machine.
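To give a flavor of that embryo-like growth, here is a minimal sketch, runnable in gForth (the word names are my own, chosen purely for illustration): each new word is defined simply by naming existing ones, and all data passes through the stack.

    \ duplicate the top of the stack and multiply it by itself
    : square  ( n -- n*n )  dup * ;

    \ build on the word just defined: square both arguments, then add
    : sum-of-squares  ( a b -- a*a+b*b )  square swap square + ;

    3 4 sum-of-squares .   \ prints 25

Each new definition becomes as much a part of the language as the words it was built from, which is exactly the self-consistent scaling-up described above.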
Thus armed with the smallest possible computational toolkit, I find the freedom to think is restored. Researcher-programmer meetings can be cancelled. The horse can be put back in front of the cart. One can focus on grokking (grasping syntonically) the environment, physics, and inhabitants of the new scale (and thus horizons broaden again).
Of course, I’m not advocating Forth for things like massive data manipulation, replacing tools like SQL, NoSQL, and beyond. Concurrency, seamless replication, automated inferencing, and vast interoperability are somewhat beyond Forth’s capability (though not entirely, surprisingly). Such tasks are usually team efforts, and elementary Forth is not a team language. It’s more suited to individual thought and exploration. Isaac Asimov once mused about the benefits of isolation, at least in the early stages. Again, we’re not honey bees.
Learning Forth is best done by using it – it’s tiny and simple to start with. If you’re more the reading type, one of the first, and best, books on Forth is Starting FORTH (1981) by Leo Brodie.
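To show what that first hands-on encounter looks like, here is a toy session in gForth (my own sketch, not taken from Brodie’s book):

    2 3 + .        \ push 2 and 3, add them, print the result: 5
    1 2 3 .s       \ display the whole stack non-destructively: <3> 1 2 3
    : greet  ." Hello, Forth!" cr ;   \ define a new word on the spot
    greet          \ prints Hello, Forth!

A few minutes of this kind of experimentation teaches more than any amount of reading about it.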