March 26, 2016

Documentation and Portability

The whole civilized world has become dependent on the internet.

When you find yourself without network connectivity, many of today's applications (especially Android applications) simply won't work: they display a message telling you to connect to the network, and that's that.  If you need your application in order to get something done, you're out of luck until your network connection and the remote server both become available again.

Even if your particular application is not inextricably bound to the existence of a remote server to perform its function, there is an increasing trend toward placing all documentation on some website that's almost guaranteed to be down when you most need to look something up.

If an application is to be totally portable, it cannot be bound to the internet for the performance of its function; if it is bound to the internet, it is only portable to those locations where internet connectivity is available.

If the network is available, fine; but if it is not available, a portable application should still be able to perform its full functionality, including the presentation of whatever documentation is necessary.

While this seems to call into question the portability of any specialized client having no function other than to interface with a specific remote server, a portable client's user interface is always the same no matter where it's running, and indicating whether or not its associated server is available should always be part of such a client application's function.

Ideally the end-user should never have to RTFM, but when it's necessary, it should be possible, and the documentation immediately available should correspond exactly to the application version installed on your media.  If you are at a wifi hotspot and download a new application onto your Android phone, it should work later when you have no network connectivity, and it should work identically to the copy you downloaded or copied onto your Mac or Windows or linux system, documentation and all.

Nobody needs documentation to use a hammer; a hammer's use is so obvious in its implementation that a monkey can use one without instruction.  Computer programs are just specialized hammers made for beating data into shape, and their operation can be just as obvious as a hammer's if they're done right; they seldom are, so we end up needing documentation if we're to use the program, whether or not the network is available.

February 29, 2016

Addresses - What Good Are They?

In order to construct applications that are truly portable, we need to address all the requirements for true portability.

If you can't move your application, and its associated data, from your Windows or Mac or linux system that's running on an Intel processor, to your Android tablet that's running an ARM processor, or your BlackBerry phone running OS-10 on top of a who-knows-what processor, it isn't truly portable.  If your application is running on a little-endian system and it has to be modified in any way to run on a big-endian system, it isn't truly portable.

I'm not talking about some conversion program that claims to convert things for you, and might, or might not, do it quite right, and I'm not talking about rebuilding your applications to run on a different processor architecture: I'm talking about a binary file copy operation.

If you can copy your application's executable and associated data files from wherever you last used them onto a USB stick, and then copy them from that USB stick onto the next system you use, regardless of processor architecture, and your applications run identically, they are portable; otherwise, they are not portable.

It's that simple, that's my definition of portability: same program, same binary datastream, same operation, on any system.

From this idea, that an application should not be bound to an instruction set or to an architecture-defined wordsize, it becomes clear that addresses, and in fact all forms of word-based arithmetic, are issues that must be addressed if Totally Portable Software is to be more than some distant ideal.

In early days, when applications were measured in kilobytes instead of megabytes, and data was counted in bytes rather than gigabytes, we used addresses.  We assembled our code into machine instructions, and we counted the bytes of each instruction we wrote, and gave each label used in our program the address of the instruction or data it represented.

The only purpose an address has ever served, in any computer, regardless of architecture, is to locate some code or some data in memory.

The labels, names we assigned to control-points or data items, were always for humans, and the addresses were always for the computer; programming proceeds from semantics to syntax, meaning to implementation.  The fact that we have compilers and linkers to take care of the details of address-tracking does not make the addresses somehow more important than the meanings of the labels that such transient calculated addresses represent.

Totally portable application code cannot include binary memory addresses because of variations in wordsize and endianism, but binary addresses are mere binding details with no inherent semantic value; the label "returnPoint" has a meaning of its own, but the instruction address 000137F4 does not; all of its semantic content derives exclusively from the fact that it represents "returnPoint".

We can cut out the "address" middleman and access all control-points and data-items by name.  In fact one could say that the only reason for ever having used binary addresses at all is that they could be implemented with the hardware of early-generation computers.  We have enough to work with now that binary addresses are no longer necessary; we can address control-points and data-items by name.
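
To make that concrete, here is a minimal sketch in Python (an interpretive language chosen purely for illustration; every name in it is invented) of what addressing by name looks like: control-points and data-items live in associative stores keyed by their labels, and no binary address, wordsize, or byte order is involved anywhere.

    data_items = {}          # data-items, addressed by name
    control_points = {}      # control-points, addressed by name

    def define(label, handler):
        """Register a control-point under a human-readable label."""
        control_points[label] = handler

    def goto(label):
        """Transfer control by name; the label itself is the only 'address'."""
        return control_points[label]()

    define("returnPoint", lambda: "back at returnPoint")
    data_items["greeting"] = "hello, portable world"

    print(data_items["greeting"])
    print(goto("returnPoint"))

The label is the address, which is the whole point.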

Likewise binary, word-oriented, numeric representations are problematic.  For portability we do not want to impose constraints related to endianism.

However, as with addresses, the binary representations of numbers are not the numbers themselves.  We use the character representations of the numbers when entering them as data and when printing them on reports.  The question of which form can be stored more compactly becomes relatively unimportant when the main storage of systems one can actually purchase today begins at 1 gigabyte and goes up from there, and only USB sticks come in capacities of less than hundreds of gigabytes.

Performance becomes a potential issue with numbers stored in their character representations.  The question of whether decimal arithmetic based on a character string is faster or slower than converting the same string to binary form, doing binary arithmetic of fixed and "non-portable" precision, and converting the result back to a character string is mostly answered by how much actual arithmetic is being done on the binary form.  And there are still such things as "math coprocessor" components that can be called upon at need.
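
As an illustration of arithmetic done on character-form numbers, here is a small Python sketch using the standard decimal module; the values and the chosen precision are made up for the example, and the point is only that nothing about wordsize or endianism enters into it.

    # Illustrative only: arithmetic on numbers kept in character form, using
    # Python's standard decimal module rather than fixed-width binary words.
    from decimal import Decimal, getcontext

    getcontext().prec = 50          # precision is chosen, not imposed by wordsize

    subtotal = Decimal("19.95") + Decimal("4.05")   # character form in, exact decimal math
    tax = subtotal * Decimal("0.0825")

    # Back to character form for storage or display; no endianness to worry about.
    print(str(subtotal))    # 24.00
    print(str(tax))         # 1.980000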

Totally Portable Software does not require either numeric instruction addresses or binary arithmetic; in fact it requires that neither be supported.

This does, however, make it clear that Totally Portable Software must be written in a fully interpretive language, which in turn requires an interpreter that (1) runs on as many different system configurations as possible, and (2) runs efficiently enough to make it worth the trouble.

Having written massively complex applications in interpretive languages that support associative-storage (VM/Rexx and linux PHP), having observed the huge increase in processor-speeds and storage-per-dollar over the past few decades, seeing the trend from desktop to laptop to tablet, and looking at the way tablets and smartphones are set up, I am convinced not only that Totally Portable Software is possible today, but also that it is past time to get started.

February 20, 2016

What's wrong with linux?

The title of this post might lead readers with a Windows or Mac background to expect a bitch-session denigrating linux; such readers will probably be disappointed, unless they recognize the fact that all operating systems are flawed, usually in different (though no-less-annoying) ways.

At the outer, most user-centric level, many people, even those who use a linux distro on a daily basis, think there is a linux operating-system.  In my view, there isn't.  An operating-system has a single well-defined programming interface, available to all applications.  Linux has no such thing: it has not yet evolved one.

Linux, technically, is the linux kernel.  A linux distro (distribution) consists of the linux kernel, the core-command set, some standard and non-standard "daemons" ('daemon' is Unix, thus linux, terminology for a local service process), a whole slew of libraries that attempt to provide the glue between the "operating-system" and its applications, and an eclectic assortment of additional "non-core" commands, daemons, and applications, whose developers have managed to bring them to a "working" state on top of a furiously-shifting environment.

The linux kernel is not "done" yet (software is never really "done", the part not yet done is called "the next release"), and new hardware and new ideas continue to result in further kernel improvements.

However, when an application's best access to system functionality is the textual output of one of the core-commands, bringing that application to a state acceptable as "working" is no small feat, and one built on shifting sand at that; textual output has to be parsed, and if the output changes in either layout or content, the previous parsing must be manually updated.

That's just how it is, and this is no invention new to linux; Unix has worked that way for decades, even though at about the same time that Unix was being developed, they were teaching us in undergraduate Computer Science classes that we should never, ever, parse the textual output of a command and then use it as input; that puts your programs at the mercy of whoever is coding the message output.  Hopefully, it also inhibits that person from improving the message text, because changing it daily would incite riots within the ranks of the software development community.  Instead, we were taught to use the defined operating-system interfaces, which did not require textual parsing and could be expected to "hold still" for a much longer period of time.
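
The contrast is easy to show.  Here is a little Python sketch: the first half parses whatever 'df -k /' happens to print today (the column position of "Available" is an assumption about the current layout), while the second half asks a defined interface, os.statvfs, for the same information in structured form.

    import os
    import subprocess

    # Fragile: parse whatever 'df -k /' happens to print today.  If the column
    # layout or headings change, this breaks and must be re-parsed by hand.
    out = subprocess.run(["df", "-k", "/"], capture_output=True, text=True).stdout
    fields = out.splitlines()[1].split()
    free_kb_parsed = int(fields[3])          # assumes "Available" is still column 4

    # Stable: ask a defined interface for structured values instead of text.
    st = os.statvfs("/")
    free_kb_api = st.f_bavail * st.f_frsize // 1024

    print(free_kb_parsed, free_kb_api)

The first half breaks the day someone rearranges the columns; the second half keeps working.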

Apparently those who initially developed Unix felt that this was a non-problem, but I strongly disagree; I've had my utility code jerked around by new versions of "the operating system" about once a year, on average.

When the closest thing a system has to a defined application-programming interface *is* the message output of commands, the application programmer is between a rock and a hard place.  On one hand, the system only provides the information in the form it supports; on the other hand, your applications (and subsequently you, if you use what you write, and if you don't use what you write, you shouldn't be the one writing it) end up getting jerked around like a puppet whenever some core command changes.  Between that, and laptops reaching what appears to be their planned-obsolescence limit of 2 years, and trying to get a clean distro install on top of whatever firmware "the industry" has decided the market must have, it can be tough to keep a consistent forward momentum on application development projects.

In addition to the issue of applications that are forced to try to remain current while the libraries and line-mode commands they use change, there is also the configuration issue, which inevitably turns up, in some unpredictably different way, with every fresh install.

There are at least dozens, probably hundreds, of different configuration files that have to be manually updated in order to get things working usably on a Linux Desktop, which of course is not linux, but only a desktop-environment-application that runs on top of the linux kernel and a lot of libraries. 

Each DE (Desktop Environment) attempts to offer something in the area of system configuration, but there is little consistency in config-file format, and less integration between the various system-level programs involved. 

Each distro includes the configuration defaults for every Desktop Environment package installed, as part of its DE packages.  An end-user attempting to control the computing tool from a given GUI environment can easily become confused about which GUI configuration tools are related to which of the config files that actually determine how the system operates.

It's a mess.  It isn't bad, just messy.  Windows advocates will try to explain how Windows is better in this way or that.  Having used various versions of Windows between 1993 and 2013, having developed custom controls for Windows from the frame up, having had the user interfaces to the tools I needed to use jerked around with basically every release of every product, I will tell you that Windows is no better than linux in any way that I ever ran into.  The Win32 interface may have been a unified operating system interface, but in my opinion it was as bad or worse than anything I've seen to date in linux.

As messy and confusing as linux might be, there's no other operating system I know of that meets my "min spec".  Of course we all have different views of what "min spec" might comprise.

Approaching software development scientifically requires that we be able to exactly reproduce the starting conditions for our tests, which includes the testing we do as a normal part of the development process.  To understand how something has changed, you need both the before and after states.  If you can't reproduce the before state, exactly, you're guessing.  If you can't restore your data onto a fresh system drive, including the supporting operating system, from a last-known-good snapshot, that operating system doesn't meet my min-spec.

Linux meets my min-spec: you can conveniently back up the system that supports your code, and restore it to a fresh drive, and having restored it (and adjusted /etc/fstab to load from the intended kernel image), your code will run as it did before.  In fact you can even restore it onto a different hardware configuration, as long as the processors are compatible and your root filesystem contains the necessary drivers, and it will boot and run your code, which will run more or less identically, depending on how hardware-dependent your code might be.

The main problem I see with linux, other than its having been grossly oversold as what it is not (it is not yet a desktop operating system), and the fact that development is proceeding at a pace users may find difficult to keep up with, is that it is a copy of Unix, which was designed as a multi-user system.

That is not to say that there is anything wrong with designing a system to have the necessary flexibility and security features to support multiple users attached to a single processor.  However, the vast flexibility linux provides is difficult for most end-users to configure; the swiss-army-knife has too many blades of uncertain function for the average user to deal with. 

This will almost certainly solve itself over time, as the linux toolkit evolves into something more fully integrated, with configuration aligned to its corresponding functionality and stored in a central location and format.  No enforcement of standards is necessary for this to occur; it can be expected to simply happen, as better configuration methodology is developed and application developers continue to use the best and easiest methods available, because if application developers were not inherently lazy, they wouldn't be in the business of automating the work to be done; they'd be doing it by hand.

There is however a characteristic of multi-user system designs that I find problematic.  When a processor with limited resources is supporting dozens, or hundreds of users, who are logged on at once, it makes sense to reduce the overall system footprint by sharing one copy of each application among all of its users.  However, most of the systems linux is currently running on actually support only one user at a time; count the number of Chromebooks and Android phones and tablets out there to see just how quickly the server population has been surpassed by the consumer population, and how outmoded the multi-user-system paradigm has become.

The paradigm of sharing an application, and in most cases its base configuration, among multiple users, breaks down whenever a new version of an application is released, because applications may update the schema under which they store their data.  When this happens the end-user is irrevocably committed to the new application version, at the whim of the sysadm who updates the shared copy of the application.  Even when the end-user chooses the time of update, as is the case when they are performing a general system update "as administrator", the change to a new version of an application may be irrevocable; even when you can regress the application's executable, your data remains stored in the application's latest format, and you can expect previous versions of the application to choke on it.
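
A sketch in Python of why the regression fails; the file name, field names, and version numbers are all invented for the example.  Once the stored data records a newer schema version than the executable understands, the older executable can do nothing better than refuse it.

    import json

    APP_SCHEMA_VERSION = 2          # what this (older) executable understands

    def load_settings(path="settings.json"):
        """Refuse data that was written by a newer version of the application."""
        with open(path) as f:
            data = json.load(f)
        stored = data.get("schema_version", 1)
        if stored > APP_SCHEMA_VERSION:
            raise RuntimeError(
                "data is schema v%d but this build only knows v%d; "
                "regressing the executable does not regress the data"
                % (stored, APP_SCHEMA_VERSION))
        return data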

What is most necessary, it seems to me, is both a physical and logical level of separation between "operating system" (which in most linux distros amounts to the entire root partition with the exception of '/home'), and "application", along with a homogeneous application interface to the operating system.

Each user should have his own copy of each application used.  It should be possible for the user to update any application to a newer version at any time, modify it, or regress application+data (including application settings) to any previous version, or to a last-known-good snapshot.
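
One way a per-user layout could support that, sketched in Python; the directory names are purely illustrative.  The application, its data, and its settings all live under the user's own home directory, and a dated copy of the whole tree is taken before any update, so the entire thing can be regressed later.

    import shutil
    from datetime import datetime
    from pathlib import Path

    APP_DIR = Path.home() / "apps" / "myeditor"      # executable + data + settings, one user's copy
    SNAP_DIR = Path.home() / "apps" / ".snapshots"

    def snapshot():
        """Set the whole application tree aside before an update."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        dest = SNAP_DIR / ("myeditor-" + stamp)
        shutil.copytree(APP_DIR, dest)
        return dest

    def regress(snapshot_path):
        """Throw away the current tree and restore a last-known-good copy."""
        shutil.rmtree(APP_DIR)
        shutil.copytree(snapshot_path, APP_DIR)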

Of course this brings the dependence of applications on shared libraries to the surface as another problem.  When an application relies on an external library to perform its functions, the process of updating the application includes an update to an external library; likewise, the regression of an application to a previous version may include the regression of an external library to some previous version.

That makes it clear that the dependence of applications on shared libraries is also problematic.

In olden times, when dinosaurs competed to see who would eat us for lunch, we used an ancient technique called "static linking" to create an application that was self-contained aside from its inherent dependence on the operating system.

However, as the magic of the ancients became encapsulated within encapsulations and stored in loadable libraries to hold down an overall footprint swollen by bloat, the hardware industry was making storage cheaper and thus encouraging still more code bloat, and "lookit-me" lames were thrusting such vital functionality as talking paperclips into our faces in more and more ways, in newer and newer versions of applications that, at one time, were actually usable.

In this programmer's opinion, it's time to take the technology back, and hopefully reach a state in which applications can be developed without constant readjustment to the weekly fashion statements of those in control of too many libraries.

Fortunately it can be done, without throwing away the wheel we are reinventing.

November 22, 2015

Total Portability is not binary

Primitive caveman didn't have many different materials to work with: dirt, wood, rock, water, plus whatever he could scrounge from dinner's remains (bone, gut, skin, etc.).

Now we have steel, aluminum, various exotic alloys, plastics, supermagnets, semiconductors, blah de blah; we have a whole bunch more stuff to work with than the ancients could conceive of.

Likewise, primitive programmer had comparatively little to work with.  We worked in assembler, and we dealt with binary addresses every day, whether they were written in octal or hex, and simple arithmetic in a non-decimal base was our stock in trade as we worked as close to the bare metal as it is possible to get, without benefit of any operating system worthy of the name.

Nowadays we have more toys to play with. But even though we have operating systems to host our compilers and our version control systems and our interactive development environments, things are still being done the hard way, with binary addresses and different machine instruction sets and architectures.

It seems like the array of libraries and shared objects (at least on linux distros) continues to increase with no end in sight.  And each of those is just chock-full of binary addresses, generated from names in source code and linked together into a single matching lump, each of which seems to become superseded by the next update, while in many cases the previous version is still needed by applications that aren't affected by the update, or haven't been updated to use the latest libraries.

If you understand how compilers and linkers work, it is difficult not to be awed that it all fits together and works. Mostly. If you have the right set of matching libraries.

Each of those libraries evolved over time, from version and release, to the next version and release, in response to some application's need, even if that need was the result of a bug-fix, rather than new function.

If you want Totally Portable Software, it is necessary to move beyond different operating systems and machine architectures.

If you need a different binary for Intel or ARM processors, your software is not totally portable. If you need a different binary to run your app on a linux system than you need to run your app on a Windows system, or an Apple system, or an IBM mainframe, or your Android phone, or your iPhone or BlackBerry, your application is not totally portable.

Today we don't have any Totally Portable Software. We have lots of software that attempts to be portable, but it's hung up on irrelevancies like operating systems and hardware architecture and support libraries that contain inverted pyramids of workarounds which provide functionality the operating system fails to provide in the name of efficiency.

So basically, to achieve total portability, you have to start with a portable non-binary architecture, and build up from there. That makes it sound like reinventing the wheel, the internal combustion engine, and all the rest; fortunately it isn't quite that large an undertaking (though it is definitely significant).

It also sounds as though it would be very slow, but it seems that modern processors have become fast enough to do the job, if you do only what is actually necessary, without having to work through a mass of interface routines that attempt to perform functions ill-supported by the underlying host OS.

Traditional operating systems like linux tend to try to be everything to everyone, and are carrying forward decades of fixes and enhancements; backward compatibility is potentially a giant-killer, but the overwhelming number of existing applications is enough to necessitate it, and more and more libraries serve to keep the existing codebase moving forward.

On the other hand, a non-binary architecture has some significant advantages.

First, it avoids the need to convert names into binary, a conversion that makes all using applications dependent on specific versions of a specific shared library.

Second, a system-independent architecture need not be much more than a thin client interfacing with a single user. It can ride along on top of a linux host that acts as a local server. Applications can perform their functions and interface with the user in a consistent way, with any number of end-user-selectable UI paradigms.  They can also obtain existing functionality that is provided by a number of local or remote servers.
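
To suggest what riding on a local host server might look like, here is a Python sketch of a thin client; the port number, the request framing (one JSON object per line), and the service names in the comment are all assumptions of the sketch, not a defined protocol.  Everything on the wire is character data, keyed by name, so wordsize and byte order never enter into it.

    import json
    import socket

    HOST, PORT = "127.0.0.1", 7070      # hypothetical local host service

    def call(service, **args):
        """Send a name-keyed, text-only request to the local host and return its reply."""
        request = json.dumps({"service": service, "args": args}) + "\n"
        with socket.create_connection((HOST, PORT)) as s:
            s.sendall(request.encode("utf-8"))
            reply = s.makefile("r", encoding="utf-8").readline()
        return json.loads(reply)

    # e.g. call("clipboard.read") or call("file.save", name="notes.txt", text="...")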

A truly portable architecture, supporting a consistent, user-selectable, user-tailorable, operating interface, along with a suite of portable applications like text editors, mail programs, and others, can act as the end-user's sole interface with the hosting OS, and through that host, the wider networked world.

Totally Portable Software is practical today; it's just a matter of investing the time to develop a non-binary machine architecture that is sufficiently flexible.

November 06, 2015

user-interface idioms and gestures

The physics guys fiddle with things in their labs, and sometimes they discover a new trick they can play; sensing a finger's touch on a display screen, for example, or in older days, figuring out how to store data on a spinning, magnetizable disk. They come up with new knowledge, and oftimes, someone recognizes its usefulness. This is the phase that I call "discovery".

If somebody can think of a way to sell the new capability, the hardware engineers get involved and come up with an interface, something that programmers can use to write a driver for the new gadget. This is the phase I call "offering".

At that point application developers get involved and find ways to use the world's newest toy in their apps.   The new or updated apps make their way to end users, and what I call the "usage" phase begins.

That probably seems pretty obvious, but the point is that until the "usage" phase begins, the actual usability of the new device remains purely theoretical, aside from a small amount of use by the application developers themselves.

By the time end-users get their hands on the new device, a fairly large amount of support code has already been written. Additionally, numerous applications have been developed or modified to make use of the new capability, and they are affected by any changes to the base support facility.

Eventually a few individual developers may discover new gestures that are more intuitive or facile than the original ones.  They may integrate them into their applications to gain advantage over those they view as the competition.

Idioms (as the term is used here) are the bits of code that programmers use to perform a specific function, for example getting a yes/no answer from the operator. Gestures are actions the operator uses to respond to a programmed request for information, for example clicking on "yes" or "no".

A user-interface that is comprised of generalized idioms and their associated gestures can be implemented as a replaceable layer. One implementation may involve the use of icons and mouse clicks to represent idioms and gestures respectively; another implementation may use textual output and specific keystrokes, or voice output and input, to represent the same idiom/gesture mapping.
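
A minimal Python sketch of the idea (the class and function names are invented for illustration): the application speaks only in idioms, and the replaceable layer decides which gestures implement them.

    class TextUI:
        """One idiom/gesture mapping: prompts and keystrokes."""
        def ask_yes_no(self, question):
            return input(question + " [y/n] ").strip().lower().startswith("y")

    class AutoYesUI:
        """Another mapping, e.g. for unattended runs; a GUI or voice layer would slot in the same way."""
        def ask_yes_no(self, question):
            return True

    def maybe_delete(ui, filename):
        # The application knows the idiom; it never sees the gesture behind it.
        if ui.ask_yes_no("Delete %s?" % filename):
            print("deleting", filename)

    maybe_delete(AutoYesUI(), "draft.txt")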

Without a generalized idiom/gesture layer, each application is left to define its own user-interface methodology. These may be placed in a shared library so families of applications can share the same user-interface look and feel, but other applications or application families may use other methodologies. The operator may encounter a number of different user-interface conventions while using a single operating system.

Additionally, the application developer who chooses to change his app's user-interface must locate all the occurrences of UI transactions and change each to use the newly established conventions.

Beyond the need for new shared libraries when a new UI methodology is established, there is the issue of backward compatibility.  The operator is forced to learn the new UI paradigm when the application is updated, as opposed to when it may be convenient.

In order to achieve a user-interface that is consistent across all applications, system-wide, that can be seamlessly replaced with an alternative UI paradigm, at a time of the operator's choice, a user-interface layer defining the necessary idiom/gesture mappings is indispensable.

Within a proprietary/commercial environment, such a system-wide interface can be dictated by management; thankfully, within a free and open source environment, managerial fiat exists only when the project is backed by a commercial entity.  As a result, instead of attempting to herd cats, one offers toys to play with or not, based on the characteristics of the toy rather than a need to release something that will attract new sources of profit.

August 23, 2015

lame interfaces, inverted pyramids, and code bloat

Sometime in the early 1980s, I was finally able to buy a computer of my own. It was a Radio Shack something-or-other.  It ran CP/M.  I remember writing programs for that computer.  It was a Z80 and the code I wrote was written in Assembler.

I wrote a full text processor for that machine.  I forget whether it had 16K or 32K of memory.  It's been a while.

When I abandoned Windows Vista and migrated to linux, the Windows drive contained something like 30G or 40G.  The most I've ever needed for a linux install is around 5-6G, so I tell people that 10G should be plenty unless they need a huge /tmp or they insist on putting /home on the root partition.

Let's be generous here.  The Ubuntu 11.10 install that I've been using since that time has a root partition that's 5G in size with 3.3G of that in use; 1.6xG is plenty for any /tmp use that I have, and that install does most of the things I need to do.  It doesn't support bluetooth, but neither does this old netbook I'm writing on at the moment.  It doesn't support MTP (Media Transfer Protocol), so that's a valid reason to move to a later version: then I can transfer files to and from my phone while it's tethered over the USB cable for internet access and charging.  Debian "jessie" is the version where MTP became functional, and it's recently gone LTS, so that's my target.

I don't know for sure yet since I haven't cleaned up that install, but it will doubtless require more space than the old Ubuntu 11.10 install.  Currently that partition contains 4.95G of data compared to the old Ubuntu's 3.3G, but I'm not really done with it yet; for some reason the GNOME team seems to think that Mahjong and a bunch of other useless crap is downright essential, so I'll want as much of that gone as possible.  And there's a lot of other junk I'll never use and don't need taking up space.  It seems likely that I might need to write some code to find it all; we'll see whether that's worth the trouble or not.

Each version of "linux" gets bigger, but that's not linux, that's the stuff that runs on top of the linux kernel.  There is no linux operating system, there is just a linux kernel and some core commands.  Everything beyond that is the desktop environment people think of as "linux".  Ubuntu, Debian, they're all linux-based distros, they are not linux.

And while the userspace stuff that comprises the "Linux Operating System" grows like topsy, the actual linux kernel keeps shrinking.  It went from about 25M for version 3.0.0-29, to a little more than 18M for version 3.2.0-4, to around 17M for version 3.16.0-4.

Now, not knowing the intimate details, I assume that the decrease in size primarily represents the offloading of code that was previously considered essential, into loadable form.  From that standpoint maybe the kernel is actually growing, if you count the sizes of all the loadable drivers and so on.

But look at the sizes again.  The kernel is 17MEG and the total root partition size is 5GIG, so we are talking about an "operating system" that is roughly 300 times as large as the kernel that runs it.

Did I mention the idea of an inverted pyramid in my previous post?  You bet I did.

One of the things I remember about CP/M was that it showed a certain mastery of interface design.  When you called a routine to provide something, it returned the information in the form that was acceptable to the other interface routines.  You didn't have to waste a lot of time and effort converting what was convenient for one developer to generate, into something that was convenient for some other developer to accept as input.  You didn't have to build an inverted pyramid of functionality because it was already there.  That's the primary function of an operating system, not maintaining the hardware, but making what the hardware can do available to the programmer.

The idea of code reuse is to keep the pyramid right-side-up, so that any given application can be small because it's able to use system functions that actually do the job, rather than being handed pieces you might be able to use to do the job yourself, if you're willing to write the interface code.  So yes, poorly designed interfaces are a large part of what causes code bloat and inverts the pyramid.
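
A toy contrast in Python, with every name invented for the example: in the first style each routine accepts exactly what the previous one returns, while in the second the caller has to write conversion glue, and that glue is where the wrapper layers and the bloat come from.

    # Style A: composable -- each routine accepts what the previous one returns.
    def read_record(line):
        return {"name": line.strip()}

    def format_record(record):
        return "* " + record["name"]

    print(format_record(read_record("alpha\n")))

    # Style B: mismatched -- the caller must convert between every pair of routines.
    def read_record_b(line):
        return line.strip().split(",")          # hands back a bare list

    def format_record_b(record):
        return "* " + record["name"]            # expects a dict

    fields = read_record_b("alpha,beta\n")
    record = {"name": fields[0]}                # conversion glue the interfaces forced on us
    print(format_record_b(record))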

The pyramid doesn't have to be upside-down.  But if you only have a kernel, even a good kernel like linux, with no operating-system interfaces available across the board, you're going to end up with an inverted pyramid that represents the code bloat which turns 17MEG of kernel into 5GIG of distro; you end up with GTK+ and QT and TK and all the others, and all the libraries they require, and all the libraries those require and so on and so forth.  There are huge dependency trees for functionality that should be available in the base operating system, if linux actually had a base operating system, instead of just a kernel plus a lot of distros that want to be King.

Which of course is perfectly natural at this stage of the industry's development.

August 16, 2015

an historic perspective

I was absent from the planet when the first computer was constructed, not yet born in 1943 when the Colossus ran on tubes and had no stored program other than the switches and plugs that defined its operation.

It wasn't until 1969 that I was introduced to the idea that computers existed in a part of the world I had access to, and shortly thereafter I entered into what seems to be a lifetime addiction to the art of writing code.  At that point the microcoded IBM 360-40 had been around for about 5 years, so I'm a relative newcomer to the art.

There was no VLSI at that point, people had only learned how to put a relatively tiny number of circuits on a chip, production chip-runs included massive losses by today's standards, and computers were what we might (very generously) describe as "pitifully slow", even though to us, in those days, each new model seemed blindingly fast compared to the previous ones.

The consensus opinion at the time seemed to be that something like convergent evolution would drive computer instruction sets toward some common optimal form that could be implemented in hardware, and we'd all live happily ever after.

Little did we who shared that opinion realize how an increasing world population would drive the infant computer industry to the levels of competition for profit that we see today, or that instruction sets would probably never merge, at least not within our lifetimes.  Certainly they have not done so yet.

There were a number of instruction sets then, just as there are now.  But even working within a particular instruction set, there were technical issues to be solved as programs became larger and more complex.  Once the term "storage" came to mean magnetic media instead of punched cards or paper tape, a lot more could be done.

First came includes.  That, along with linkage editors, gave us the ability to reuse code on a significant basis.  The code still had to be compiled each time it changed, but it could, in aggregate, become a library of reusable subroutines.

That was a single step forward, but it established a basis from which a great deal of chaos has emerged, even though many only see its secondary or subsequent symptoms, if they notice anything at all.

At every step along the way, we were building reusable blocks of code, but they were seldom exactly as we needed them to be.  Nobody knew exactly how new functionality needed to work; new functionality gets figured out a little at a time, because, you know, it's *new*.

Each time we tried to use a routine that was not exactly what we needed, but was closer than starting from scratch, there was a decision to be made, with no perfect alternatives: rewrite, modify, or encapsulate.

Even in those less competitive days, nobody's management wanted to invest the time necessary to rewrite, because time spent rewriting translated into dollars of profit lost.  The choice was often, perhaps usually, to modify, or to encapsulate the functionality in a higher-level routine with a slightly different interface.  The first problem is that both of those alternatives are inherently troublesome.

Whenever you modify shared code, it affects every program that is already using that shared code.

If you choose instead to encapsulate the necessary functionality in a higher-level routine with a new interface, your code becomes the potential victim of whoever decides to modify the encapsulated code.  On top of that, you have taken the first step in the creation of an inverted pyramid where the original code is at the bottom and supports the entire structure of encapsulating higher-level routines on top of encapsulating higher-level routines, ad infinitum (aka "code bloat").
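
A tiny Python illustration of the "encapsulate" choice, with invented names: each wrapper presents the slightly different interface its caller wanted, and each one adds another layer that a change to the routine at the bottom can break.

    # The shared routine at the bottom of the pyramid.
    def format_amount(cents):
        return "%d.%02d" % (cents // 100, cents % 100)

    # First encapsulation: a caller wanted to pass dollars as a float instead.
    def format_dollars(dollars):
        return format_amount(round(dollars * 100))

    # Second encapsulation: another caller wanted a currency symbol as well.
    def format_price(dollars, symbol="$"):
        return symbol + format_dollars(dollars)

    # Three layers deep already; change format_amount's interface and both
    # wrappers, and all of their callers, are affected.
    print(format_price(19.99))      # $19.99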

If you are the author of that one routine down at the bottom of the inverted pyramid, and it becomes necessary to modify it significantly, you affect every routine that has encapsulated your code, all the way up the inverted tree, and nobody among your unintended victims is going to love you for it.

But wait, it gets even better!

In "olden times", before 1975 or so, there were very few programmers to fill the steadily increasing number of software development jobs.  That was good for us, because people who are in great demand get paid more (and need to put up with less) than those who are less wanted.  However, as universities around the world geared up and began cranking out new Computer Science graduates, things changed in two ways.

First, by the early 1980s there were enough programmers to fill the available jobs.  That really wasn't great for anybody except employers, who could hire a replaceable commodity called "a programmer" in a "free-market" that they controlled.  But second, and more importantly, most of those new college grads had managed to pass their classes without learning any more than necessary to get good grades, at least if one judges by the people in my own graduating class and the new college hires I have worked with over the years; they did not learn enough, whether because they were lazy, or because colleges were teaching jobs instead of understanding.

I find the educational system in the US to be lamentably enslaved by the fashions that pass through industry, because they really do not teach people to be computer scientists, instead they teach them how to fill some particular employment role... but the educational system isn't the point here, the point is that droves of these new college grads were dumped into the system without having the necessary skills, even though their transcripts testified to their excellence.

Most of those newbies had very few clues about how the code that relative oldsters had written, debugged, and put into a library for reuse actually worked; it was magic to them, magic being defined as something that works somehow but that you don't understand and don't dare modify because your performance appraisal is coming up.  And, oh my, look: the magician's wannabe apprentice has found a magic wand he can wave around and do things with, even though he doesn't know how it works.

Which is fine, if you don't mind the result that encapsulation after encapsulation grows the inverted pyramid faster and higher, and more people become dependent on the small number of subroutines at the base of the usage tree.

It's worse if you happen to maintain an operating system that provides base-level functionality: then everybody is dependent on any change you make to your code, and none of them wants to spend any time conforming to whatever changes you've made.  That's why operating systems have versions; they help to restrict the bloodletting to certain times in the release cycle.

The fellow sitting at the bottom of the pyramid, who either wrote the original bit of logic, or who has actually learned what it does as part of picking up support for it, has to tread very carefully in the fixes department.
 
Worse, those working at the top of the pyramid, who have written actual applications that do something useful, have to write more and more complex encapsulating routines, as the code beneath them changes, since they need to support all combinations of whatever mess of changes landed unavoidably on the guys at the bottom, before they could duck and run to a new job paying more money.

Versions are to contain the blood: convert everything to the new interface provided by the base system and hope they don't change it again before you have a chance to take two steps forward to compensate for the backward step that new release cost you.

How did I come to this viewpoint?  By spending the first few years of my life in the business building low-level components like device drivers and file systems, and the remainder building development tools that qualify as high-level applications.

Here are some conclusions you might wish to consider:

1. Maybe that massive inverted tree of fixes really isn't necessary.

2. Sharing code at other than the source level is a mistake that will cost you in the long run.

3. Instruction-sets are pretty much irrelevant to logic specification.

4. Machine code is a binary scripting language for the underlying microcode.