When I evaluate a software library, I typically look first at how recently it’s had a release. Is it being updated, or has it been abandoned? There’s an assumption here.
Mark Jason Dominus wrote about having ironed the bugs out of a small, feature-complete library:
But then I started to get disturbing emails. “Hi, I notice you have not updated Text::Template for nine months. Are you still maintaining it?” “Hi, I notice you seem to have stopped work on Text::Template. Have you decided to abandon this approach?” “Hi, I was thinking of using Text::Template, but I saw it wasn’t being maintained any more, so I decided to use Junk::CrappyTemplate, because I wanted to be sure of getting support for it.”
I started wondering if maybe the thing to do was to release a new version of Text::Template every month, with no changes, but with an incremented version number. Of course, that’s ridiculous. But it seems that people assume that if you don’t update the module every month, it must be dead. People seem to think that all software requires new features or frequent bug fixes. Apparently, the idea of software that doesn’t get updated because it’s finished is inconceivable.
Twelve Views of Mark Jason Dominus
The assumption is that all software is either buggy, incomplete, or both. It’s not an unreasonable assumption, though there are a few notable counter-examples:
Knuth has declared that he will do no further development of TeX; he will continue to fix any bugs that are reported to him (though bugs are rare). This decision was made soon after TeX version 3.0 was released; at each bug-fix release the version number acquires one more digit, so that it tends to the limit π (at the time of writing, Knuth’s latest release is version 3.1415926). Knuth wants TeX to be frozen at version π when he dies; thereafter, no further changes may be made to Knuth’s source. (A similar rule is applied to Metafont; its version number tends to the limit e, and currently stands at 2.718281.)

Knuth explains his decision, and exhorts us all to respect it, in a paper reprinted in the NTG journal MAPS.
What is the future of TeX?
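Knuth’s scheme is mechanical enough to sketch in a few lines. Here’s a toy Python version; the digit table is hard-coded, and `next_tex_version` is my own invention, not any real tool:

```python
# Sketch of Knuth's pi-versioning: each bug-fix release appends the next
# digit of pi to the version string. The digit table is hard-coded here;
# a real tool would need an arbitrary-precision source of pi.
PI_DIGITS = "31415926535897932"

def next_tex_version(current: str) -> str:
    """Given a version like '3.1415926', return the next bug-fix version."""
    digits = current.replace(".", "")
    if not PI_DIGITS.startswith(digits):
        raise ValueError(f"{current!r} is not a prefix of pi")
    nxt = PI_DIGITS[: len(digits) + 1]
    return nxt[0] + "." + nxt[1:]

print(next_tex_version("3.1415926"))  # 3.14159265
```

The nice property is that the “major version” can never drift: every release is, by construction, a closer approximation of the same fixed target.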
I’d like to see more of this in software. We can build for all-time rather than a weekly “iteration”. Other developers can build on our software more confidently, knowing that it’s less likely to change.
There are two big aids to reaching “done” that pull in opposite directions: small size and extensibility. Text::Template can be done because it has a fairly small number of well-defined features. TeX can be done because it’s almost infinitely user-extensible. Either way, the user gets a manageable problem solved well, or the tools to solve any related problem.
I don’t know that we need an organization to manage these “completable” libraries; maybe the answer is as simple as a versioning scheme.
SemVer is a nice standardization of a common practice, even if there’s a lot of devil in its details. Knuth had a nice idea with version numbers that approach a transcendental number: there are as many digits as the software needs to stabilize, without what looks like the “major version number” ever shifting.
Maybe something like this would work:
- Initially, software releases use a major/minor format, like v1.0 or v3.11. Major is incremented for new features or backwards-incompatible changes; minor is incremented for bugfixes. The goal for this stage of life is to build features, explore possibilities, get feedback from users, and (if it couldn’t be determined up front) decide what feature-complete means for this project.
- When the software is feature-complete, it is labeled “complete” with a patch number, like vCOMPLETE.18 or vCOMPLETE.5. Patch numbers count down from 23 to discourage the temptation to sneak in more features. Especially confident maintainers may start with a lower patch number.
- Finally, when the patch number hits 0 or five years have passed since vCOMPLETE.0, all remaining bugs are declared features and the software is vDONE.
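This scheme is only a proposal, but it’s concrete enough to sketch how tooling might order such versions. Everything below is hypothetical, including the `sort_key` name:

```python
# Hypothetical ordering for the proposed scheme. Stages: development
# (v1.0) sorts before feature-complete (vCOMPLETE.18), which sorts before
# vDONE. Within the COMPLETE stage the patch number counts DOWN, so a
# LOWER patch number is a LATER release.
def sort_key(version: str) -> tuple:
    v = version.lstrip("v")
    if v == "DONE":
        return (2,)                       # final, frozen stage
    if v.startswith("COMPLETE."):
        patch = int(v.split(".", 1)[1])
        return (1, -patch)                # negate: patch counts down
    major, minor = (int(part) for part in v.split("."))
    return (0, major, minor)              # ordinary development releases

releases = ["vDONE", "vCOMPLETE.5", "v1.0", "vCOMPLETE.18", "v3.11"]
print(sorted(releases, key=sort_key))
# ['v1.0', 'v3.11', 'vCOMPLETE.18', 'vCOMPLETE.5', 'vDONE']
```

One pleasant side effect of the countdown: a dependency resolver never has to guess whether a “newer” release might add features, because anything past vCOMPLETE can only shrink its bug count.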
I can imagine some practices worth testing: start with a clear TODO listing goals and a TODONT for things the library will not include, only depend on the standard library or other vDONE software, answer user misunderstandings with documentation, test comprehensively but judiciously, select a few core data structures, avoid stateful APIs. I’m sure many more will be discovered.
When evaluating a software library, we can then look for vDONE code to know we’re getting software that does what it claims in a reliable, predictable fashion. What we build on it should last longer than it would if we had to chase point releases and juggle the compatibility of shared libraries. Or at least it’ll help us cope with the reality of software:
Art is never finished, only abandoned.