Bulbs re-lit: productivity paradox

One of the curious things about “today”, i.e. the post-2010 tech landscape, seems to be the productivity paradox.

Software actually serves two very different roles:

  • software as central part of a product
  • software as a utility, a building block for other software engineers

I’m talking about the latter role, software as a utility.

Several decades back, software played a rather esoteric and almost completely invisible role, controlling devices: printers needed control software such as PostScript in order to draw the glyphs (characters) that a computer fed to them.

Mainframe computers needed specialized operating system software. So-called general-purpose computers (or, later, Personal Computers, PCs) were not yet established as everyday consumer tools.

Software development was not considered very valuable. Software was seen as a necessary thing, but not of business value per se.

Big money was in budding electronics companies, which later became giants such as Intel (for a historical glimpse, see CNBC’s video Inside Intel’s Bold $26 Billion U.S. Plan To Regain Chip Dominance).

The roots of open source and “public domain” licenses go back to the 1960s, I believe. The phenomenon probably originates from exactly this: there were not yet closely guarded proprietary projects competing with each other. Software was in scarce supply, and cooperation made sense, I would assume.

Today I had a live rant about some of the things mentioned above, talking with a colleague over Slack. We have slightly different points of view, perhaps. Both of us have been in the trade for a considerable time, i.e. we are experienced (20+ years in total).

I originally got a memorable glimpse of almost this same discussion in 2015, when my former team boss mentioned the paradox. He had overseen a total development force of over 1000 employees, doing a very large-scale embedded (hardware + software) project in the telecom industry for a bit less than a decade. His experience was one of slight frustration and a lack of understanding of how, back in traditional (and mobile) apps, modern software stacks did not give the productivity boost one would expect. So the situation was that a product included tens of thousands of lines of ready code libraries, yet there were still considerable obstacles to utilizing the force within.

I remember vividly his words…
“the last 30 years the productivity hasn’t actually changed a bit (in software industry)”

It really struck me then, but only later did I come to understand what it meant. On the surface it seems counter-intuitive: how come, if we can nowadays just “plug and play”, importing a lot of the software corpus that we’re going to use, it’s still relatively laborious to produce software products?

That particular year, 2015, was in the middle of an era in which front-end development, at least on the JavaScript front, was going through very rapid progress. JavaScript was being made into a full-fledged, respectable language with which ever more complex projects were implemented. Before this, one could say, JavaScript was but “pretty frosting”, a decorative finish on top of web-facing software.

I investigated a bit further, and remembered that the foundations of this go as far back as the 1970s. Back then it was, of course, Fred Brooks and his book “The Mythical Man-Month”. There Brooks mentions ‘tar pits’. The tar-pit phenomenon is about an originally fast-progressing project that starts to magically slow down. Things become more difficult to change. This phenomenon, in my opinion, hasn’t changed much over the years.

Life cycle of a software

Skipping a lot of theory (the SDLC), we can put things in a nutshell:

It’s really easy to prototype something between 100 and 1000 lines. These small sketches of an app serve the purpose of feasibility studies and spikes: does it feel like we’re generally capable of doing the actual, larger project? Do theories play out in reality, as we have come to expect? Etc.

When you go beyond a few thousand lines, you start to feel the initial resistance. Human memory plays a part in this; that’s inevitable. The namespace (the naming of things) becomes a bit harder, as you have already occupied some of the obvious names – you need to watch out not to step over parts that are useful.

Depending on technical choices in the software stack, your IDE (editor and tooling) may or may not be helpful in coping with a larger code base. The choice of programming language determines whether symbolic names can be resolved statically from the source code.

Static name resolution of symbols in source code gives the IDE intelligence: it can do two things better:

1. guide the developer, ensuring that the correct variable, function, or other symbolic name is being referred to
2. speed up completion of symbols (so-called tab-complete), letting the developer spare short-term memory and stamina for actually making code, not typing long and descriptive variable names
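As a minimal sketch of what static resolution buys, consider TypeScript (the names here are hypothetical, for illustration only): because the type of a value is known at compile time, the IDE can offer exact completions and reject a misspelled symbol before the code ever runs, whereas plain JavaScript would only fail at runtime.

```typescript
// A small interface the IDE can resolve statically.
interface Printer {
  open(): void;
  close(): void;
}

class LaserPrinter implements Printer {
  open(): void { console.log("opening"); }
  close(): void { console.log("closing"); }
}

const p: Printer = new LaserPrinter();
p.open();    // typing "p." here tab-completes to open/close only
p.close();
// p.opne(); // compile-time error: Property 'opne' does not exist on type 'Printer'
```

In plain JavaScript, `p.opne()` would be accepted by the editor and crash only when that line executes.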

When you push farther, beyond 10,000 lines, there are often more and more support processes within software engineering that “just need to be done”. The progress isn’t linear. It’s also quite seldom that one person alone manages a large program, so others need to be brought into the project.

Team play: the vision, communication, grooming

Looking after quality and communicating the purpose, schedule, and requirements takes its share. This is, however, quite inevitable and a healthy sign, too.

Software making is team play.

Much more worrisome than communication issues is a lack of communication. “No news” in a software project is often a sign of something bigger coming up later – developers’ minds have drifted apart, and the gap has grown. What easily follows this kind of situation is phrases like “Oh?! I thought… X” and “Hey, we’re no longer actually doing… Y”. So it’s a very good idea to synchronize minds early, and synchronize often.

However, what sparked me today wasn’t so much the technicalities of software or the productivity paradox. What surprises me much more, at times, is a sort of “rush”: the gap between the quality of good overview documentation for a piece of software and the speed at which the actual code is being written. There are many sides to this question.

I’m a staunch advocate of good quality docs.

The mind and programming

Flow is a very powerful and enjoyable feeling, and it happens often in programming. It’s the runner’s high equivalent, but in coding.

When in flow, we have all the wires and bits at our fingertips, and creating new stuff is easy. Leave this ecstatic state for just a couple of weeks, and you feel quite a dramatic effect: coming back to the project and looking at the source code – all the names of functions, variables, and so on – it somehow doesn’t feel “the same”. The reason is perhaps that the valence, the emotional strength and meaning of those words, now has a slightly different representation in our mind. Neurobiologically, it might be about the diminishing activity of certain neural patterns. In the brain, it’s said that neurons that fire together, wire together. In plain English, this means we tend to form ‘warm clusters’ of interconnected things. I believe this is also true in programming, and it might be a strong component of a programmer’s productivity. So, keep the soup warm!

First step into a subjectively novel project

Now take a complete outsider’s view: you enter an existing project with 100,000 lines of code. There are probably somewhere between 100 and 1000 different files, and hundreds of function names. When you zoom in closer, you will see the sub-surface structure of the code: variables, code statements, some tricks here and there. Most likely, some of the logic is invisible: things happen because the author of the code knows the actual possible states of the variables better than you do.

You versus the original author: differences

The author knows mental shortcuts through the code. The author also knows the history of the code, and the interaction relations between its different objects and parts.

I often like to think of the symmetry of services: do code functions have their logical counterparts? If I can “open”, can I close? If I can write, can I read? And so on. Sometimes there’s inherent beauty in the design of software – things feel very natural. The opposite of this is a kludgy implementation, where you simply have to know that certain things are exceptional; the symmetry is severely broken. Some of this, however, is also about the flavor and style of the whole underlying stack: many frameworks are opinionated, and as you spend time using a framework, you start picking up the culture and “the way things are done around here”.
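The symmetry idea can be sketched as a small API where every operation has its logical counterpart (this is a hypothetical store, invented purely for illustration):

```typescript
// A hypothetical key-value store with a deliberately symmetric API:
// open/close and write/read mirror each other.
class KeyValueStore {
  private data = new Map<string, string>();
  private opened = false;

  open(): void { this.opened = true; }    // counterpart of close()
  close(): void { this.opened = false; }  // counterpart of open()

  write(key: string, value: string): void {  // counterpart of read()
    if (!this.opened) throw new Error("store is not open");
    this.data.set(key, value);
  }

  read(key: string): string | undefined {    // counterpart of write()
    if (!this.opened) throw new Error("store is not open");
    return this.data.get(key);
  }
}

const store = new KeyValueStore();
store.open();
store.write("greeting", "hello");
console.log(store.read("greeting")); // prints "hello"
store.close();
```

A newcomer who has seen `open()` can guess that `close()` exists and behaves as expected; a kludgy design would break that guess with an exceptional case you simply have to know about.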

My advocacy of good-quality docs doesn’t mean that documentation should be superbly lengthy, but it should be to the point, understandable, and cover the immediate burning questions that a complete newbie would have when entering a new project.

I wrote, poignantly, in a chat: “If there’s 4 things that are mentioned in the docs of open source project, the order seems to be…” (part 2 coming up)

Errata: edit on 19.4.18 7th paragraph, word added
.."a product included tens of thousands of lines ready code libraries.."
