My Tips for a Professional Developer

Reading Time: 5 minutes

Purely coming from my experiences so far.

I’m a geek. I’m a programmer. I hack away; I’ve always written code. At some point, I also started writing code for money, for real industrial and business applications.

It was a fantastic moment to really grasp that what had been merely a pastime could become my trade. I’ve enjoyed software development ever since.

The elements of change can be categorized easily:

  • things that stay the same
  • new stuff
  • skimming, bubbling, trending and waning stuff

Things that stay the same

Computers are essentially still programmed by making them follow instructions that manipulate bit-based objects: things made from series of 0 and 1 bits.

Did you know that most of the surface area, cost and complexity of CPUs is due to massive parallelism? The number of gates, the total length of “vias” (conducting paths) and the related power consumption reflect the fact that it isn’t the basics of computation that have changed, but the sheer volume the processor’s pipeline crunches nowadays. A modern CPU essentially still does serial computation, but at massive scale – and with some snazzy tricks to avoid futile work. Caching, for example, fetches a result that is already known by the time the computing instruction hits the execution pipeline.
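
As a loose software-level analogy (memoization, not actual hardware cache circuitry), the idea of returning an already-known result instead of recomputing it looks like this:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def slow_square(n):
    """Stand-in for an expensive computation."""
    global calls
    calls += 1
    return n * n

slow_square(12)            # "cache miss": computed, calls becomes 1
result = slow_square(12)   # "cache hit": looked up, body never runs
print(result, calls)       # 144 1
```

The second call never reaches the function body; the known result is fetched from the cache, which is roughly what a CPU cache does for memory reads.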

What I’m trying to say is that even though you might think “OMG! I could never understand modern CPUs”, I’d say that’s not true. Zooming in, down to the elementary parts, you would still find very familiar concepts: registers, and simple microcode (the pieces of an actual machine-code instruction). The computer’s guts, so to speak, are of the type: IF something THEN do something, ELSE do something else. IF-THEN-ELSE. Add to that the capability to jump (branch execution to another place), to add two numbers together, and so on – that’s what makes your machine tick. The rest is performance glitter.
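
As a toy illustration (the opcodes and layout here are made up for this sketch, not any real instruction set), here is a loop built only from the primitives above – compare, conditional branch, add, and jump:

```python
# A toy machine: sum the numbers 0..4 using only compare,
# conditional branch, add, and jump.
# 'pc' is the program counter; each tuple is (opcode, operands...).
def run(program):
    regs, pc = {}, 0
    while True:
        ins = program[pc]
        op = ins[0]
        if op == "SET":                      # put a constant in a register
            regs[ins[1]] = ins[2]
        elif op == "ADD":                    # regs[a] += regs[b]
            regs[ins[1]] += regs[ins[2]]
        elif op == "INC":                    # regs[a] += 1
            regs[ins[1]] += 1
        elif op == "JGE":                    # IF regs[a] >= n THEN jump
            if regs[ins[1]] >= ins[2]:
                pc = ins[3]
                continue
        elif op == "JMP":                    # unconditional jump
            pc = ins[1]
            continue
        elif op == "HALT":
            return regs
        pc += 1

program = [
    ("SET", "i", 0),        # 0: i = 0
    ("SET", "sum", 0),      # 1: sum = 0
    ("JGE", "i", 5, 6),     # 2: IF i >= 5 THEN jump to instruction 6
    ("ADD", "sum", "i"),    # 3: sum = sum + i
    ("INC", "i"),           # 4: i = i + 1
    ("JMP", 2),             # 5: jump back to the comparison
    ("HALT",),              # 6: done
]

print(run(program)["sum"])  # 10  (0+1+2+3+4)
```

Everything a high-level `for` loop does is in there: one comparison, one conditional branch, and a jump back.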

What the software industry is aiming at, besides technology development per se, is a few things:

  • managing complexity of a software project, sometimes reaching 100,000s or even millions of lines
  • handling the blame game and risks properly
  • verifying and testing the quality of software
  • keeping software development costs at bay
  • if possible, keeping the margins (profit) by applying blue ocean strategies
  • avoiding the cardinal mistake of building an app or a piece of tech that no one will ever use (or pay for)

Meta: The process side of software engineering

With more experience under my belt, I have learned a lot from methodologies for coping with project complexity, working as a team, splitting development among many members, solving problems together, and continually improving the way the whole team works.

This is a recollection of things that I’ve encountered. Ideas and methodologies, good and bad ones. Things that worked for me, and some that didn’t.

I’m not particularly an advocate of any one specific toolbox, whether it is a project methodology or a piece of the tech stack.

Enjoy the silence

You might have periods of silence in a project, and especially during the daily schedule. Enjoy them. These are often the moments where intense development and raw writing work happen. Coding takes different turns depending on the microtask at hand; sometimes you are writing a quite simple snippet of code, where you don’t have to hold many constraints or guidelines in your head at the same time. At other times, you’re making a leap forward in some critical portion, where a lot of things hang in the air, and getting it all out as code is a lot more nerve-wracking. People often counter physical noise with means such as noise-cancelling headphones. And a reminder to all fellow coders: even though it might look like someone is merely listening to good music and having fun, remember that inside there might be intense concentration and a struggle to make a complex thing fit together.

Turning fear into curiosity

Frustration can be a powerful force. Sometimes upon initial impact with a project, a so-called handover, there can be friction; you might be looking at several years’ worth of work when you enter in the midst of a project. There’s also a lot of tacit knowledge present, and decisions that have already been made. A newly handed-over project is like a shapeshifting blob of stuff; at first you’re struggling to get an overview and to familiarize yourself with whatever new tech comes up.

Don’t be intimidated by the initial frustration. It usually turns out fine! Keep asking those dummy questions and keep an eye out for chances to start chipping away at the code. Usually, once you make changes and additions, things start to roll.

  • identify your emotions
  • identify the flow and connections between emotions; did you get frustrated at something, a process, someone in particular; at the way things are done, or at something that you think should definitely be done?

Ivory Tower – Old Systems Plague

Some large organizations struggle with the fallout of obsolete technology. Large HR / workflow systems implemented years ago seemed like a great idea; now they’re outdated, clumsy to use, and have little if any real positive effect on personnel. Yet no one seems to be in a position to nudge a change. This is a typical situation that you might encounter.

My tips:

  • keep your head cool
  • try to find other kinds of users besides yourself; what’s in it for them? Someone surely finds the system worthwhile, otherwise it would most certainly have been dumped
  • when there’s a chance, be part of the changing force that can make things better

Learn to surf on the cutting edge

In our profession, things rarely are “ready”. That means:

  • you will be using software to make software
  • software always contains some bugs
  • => this means that your tools are not perfect!
  • some of those bugs will be visible to you, too
  • you will get irritated by the bugs
  • you can also control how much you let the bugs irritate you, and
  • file a bug report. Or even better: make a patch, and submit that (if the tool is open source and hosted somewhere publicly)

This trend, IMHO, has stayed essentially the same. The microarchitecture of our tooling has become more fine-grained – so on average we’re building software composed of many more packages than we used to.

Software stacks – 50000 ft view

Some software stacks take considerable time to get ready for work – typically a day or two, with some trial and error in the setup process. The more cutting-edge the stuff you use, the more often you’ll find that there’s no single “dogma” for how to do even such a thing as a complete setup. It just takes a bit of getting to know the stuff. Explore the options, and try to read through the material first, without interruptions; however much you’d like to, it might be better to resist the temptation to go hands-on immediately. So: read. Then start doing, once you have a grasp of the big picture. Otherwise it’s really easy to get stuck midway, wondering how you can undo your half-baked installation.

This is something that is best tackled with patience.

I’ve gone through maybe 20–30 stacks at some depth. The biggest change I’ve perceived over the last 20 years is that the rise of open source made a refreshing change in the sense of involvement. Whereas stacks used to be “professional toolboxes” introduced by some big corporation, nowadays a stack is a living project where individual contributions are often appreciated, things move faster, and there’s a better chance of actually reaching a live person to ask for advice.

Programming in assembly

Reading Time: 4 minutes

Assembly is the programming language closest to a CPU. “Closest” here means that the keywords in the language translate 1:1 into instructions executed by the CPU. With other (higher-level) languages, there’s either an interpreter or a compiler which unpacks the language and usually generates a significantly higher number of instructions when the code is executed on a real CPU.

With assembly, you tell the processor exactly what you want done – at register level. Whereas higher-level languages achieve a lot of operations by abstracting the ingredients behind a function (subroutine) name, assembly is about the elementals of computation: manipulating bits and registers (composed of 8, 16, 32, or 64 bits at a time) – essentially the smallest storage units of digital computers.

The things we usually learn as programmers are a bit more “sophisticated”, and rightly so! It’s good to work at a level we’re comfortable with.

All languages, deep down “enough”, end up as machine instructions. For example, Java is compiled to Java bytecode, which is then run in a virtual machine, the JVM. The JVM, however, eventually has to execute plain machine code. The same goes for all other languages.

A (high level) programming language can be either:

  • interpreted, or
  • compiled

The question between those two choices mainly comes down to the point at which the conversion to machine language happens: “early on” (compiled languages) or during execution (interpreted languages). Python is interpreted, while C is a compiled language. C produces traditional executable files, whereas Python source code is run by passing the file to the Python interpreter.
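
In CPython you can actually peek at the intermediate step: the interpreter first compiles source into bytecode, and only then executes that bytecode instruction by instruction. The standard-library dis module shows this intermediate form (the exact opcode names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode that the CPython interpreter will execute.
dis.dis(add)

# The addition shows up as a BINARY_* opcode (BINARY_ADD on older
# Pythons, BINARY_OP on 3.11+).
ops = [ins.opname for ins in dis.Bytecode(add)]
print(any(op.startswith("BINARY") for op in ops))  # True
```

Java’s bytecode plays the same role for the JVM: a compiled intermediate form that still has to be turned into real machine instructions before the CPU runs it.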

Assembly is a good language for really understanding what the computing hardware actually does. All modern computers are described by the von Neumann architectural model:

A computer can simply load binary values into registers; do comparisons and the usual suspects like addition, subtraction, multiplication, and division; store values back to RAM (memory); and jump around to various parts of the code (the ‘IF… THEN’ equivalent).

At first, a basic level of proficiency in assembly is attained by learning the opcodes: what commands are available. In reality, even assembly commands are internally stored and executed as sequences of microcode within the processor.

Think of registers as 8-, 16-, 32- or 64-bit variables. They are built from real gates, physical constructs in the CPU. So they “always exist”, fixed in number. Only their value can be altered: you can load a number directly, or you can load a number (content) from a memory location. There are commands to

  • zero a register (make it 0)
  • add two registers (arithmetically sum)
  • subtract a register’s value from another register
  • multiply
  • divide a number in a register by another register
  • compare the values of registers (and take action: a jump = branch)
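
The commands in the list above can be sketched in Python, with registers modeled as fixed-width 16-bit cells (names like ax and bx mirror x86, but this is just an illustration, not an emulator):

```python
MASK = 0xFFFF                 # registers hold exactly 16 bits

regs = {"ax": 0, "bx": 0}

def zero(r):        regs[r] = 0                          # make it 0
def load(r, value): regs[r] = value & MASK               # load a number
def add(r1, r2):    regs[r1] = (regs[r1] + regs[r2]) & MASK
def sub(r1, r2):    regs[r1] = (regs[r1] - regs[r2]) & MASK

load("ax", 0xFFFF)
load("bx", 1)
add("ax", "bx")               # 0xFFFF + 1 overflows the 16 bits...
print(regs["ax"])             # 0  ...and wraps around to zero
```

The masking is the point: a real register can’t grow, so arithmetic silently wraps around, which is exactly what the hardware does on overflow.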

I did a lot of Intel x86 assembly programming as a teen.

Is assembly really that hard?

Why does assembly have a hard-to-grasp reputation? It’s probably because of the very terse and “weird” vocabulary. Also, compared to other languages, there’s so much seemingly “nonsensical” stuff in assembly: why on earth would you “move the value 64778 to register this-or-that”? It doesn’t seem to make any sense at all!

When you’ve learned to program in assembly, it all makes sense. But I have to admit that looking at some of the code now, in the year 2019 – that’s some 25 years later – I don’t recollect all the details anymore.

Let’s look at an image decompression program. It’s a complete program, showing a RIX image on-screen. RIX is a format which is now almost extinct. It used to be quite popular in the wild, despite being a very simple format. Because it is so simple, .RIX was also a perfect training target for writing a program that can interpret it.


;; Set the DOS Disk Transfer Area (int 21h, function 1Ah)
Set_DTA proc near
mov ah,1ah
lea dx,new_dta
int 21h
ret
Set_DTA endp

;; Allocate 64K of memory (int 21h, function 48h); bail out on failure
AllocMem proc near
mov ah,48h
mov bx,4096
int 21h
jnc yli1
disp_str nomem
end_prog 255
yli1:
mov alseg,ax
ret
AllocMem endp

;; Free the allocated segment (int 21h, function 49h)
DeAllocMem proc near
push es
mov ax,alseg
mov es,ax
mov ah,49h
int 21h
pop es
ret
DeAllocMem endp

;; Find first file, matching the search mask string defined
;; in memory area pointed to by "maski"
FindFirst proc near
mov ah,4eh
xor cx,cx
lea dx,maski
int 21h
ret
FindFirst endp

;; After we have called once the FindFirst proc,
;; continue giving next results using the same search mask string
FindNext proc near
mov ah,4fh
int 21h
ret
FindNext endp

;; Open the file found via FindFirst/FindNext and read
;; 64778 bytes of it into the allocated segment
LoadRIX proc near
lea dx,dta_name
call open_file
mov kahva,ax
mov bx,ax
call begin_file
mov bx,kahva
mov cx,64778
xor dx,dx
push ds
mov ax,alseg
mov ds,ax
call read_file
pop ds
mov bx,kahva
call close_file
ret
LoadRIX endp

;; Copy the loaded image data into VGA memory at segment 0A000h
;; (label positions restored at their apparent places)
SwitchPic proc near
push ds es
mov ax,0
mov w1,ax
mov w2,ax
mov ax,alseg
mov es,ax
spl:
mov cx,3
plp:
in al,60h
loop plp
mov si,w1
add si,w2
cmp si,030ah
jb eikay
cmp si,0fd09h
ja eikay
mov al,byte ptr [es:si]
push ds ax
mov ax,0a000h
mov ds,ax
pop ax
mov byte ptr [si-030ah],al
pop ds
eikay:
inc word ptr [w1]
cmp w1,0ffffh
jne yli3
pop es ds
ret
yli3:
mov ax,w1
add w2,ax
jmp spl
SwitchPic endp

;; Zero the whole allocated image buffer, one word at a time
ClearBuf proc near
push es
mov ax,alseg
mov es,ax
xor si,si
xor ax,ax
cbl1:
mov word ptr [es:si],ax
add si,2
cmp si,0fd10h
jb cbl1
pop es
ret
ClearBuf endp

;; Main program: find .RIX files, show each in VGA mode 13h,
;; advance on a keypress, exit on ESC or when files run out
;; (label positions restored at their apparent places)
prosed:
mov ax,tieto
mov ds,ax
call Set_DTA
call FindFirst
jnc yli2
disp_str norix
end_prog 255
yli2:
call AllocMem
mov ax,13h
int 10h
newpic:
call LoadRIX
push es
mov ax,alseg
mov es,ax
mov dx,000ah
xor bx,bx
mov cx,256
call SwitchPic
call FindNext
jc ulos
get_key 0
cmp al,27
je immed
jmp newpic
ulos:
get_key 0
immed:
call ClearBuf
call SwitchPic
mov ax,3
int 10h
call DeAllocMem
mov ax,0c06h
mov dl,0ffh
int 21h
end_prog 0

w1 dw 0
w2 dw 0
maski db '*.rix',0
nomem db 'Not enough free memory (64K) to run program!$'
norix db 'No .RIX files found in current directory!$'
alseg dw 0
kahva dw 0
new_dta db 30 dup(0)
dta_name db 13 dup(0)
TIETO ends

END prosed

Open source, closed doors? Peace of code.

Reading Time: 2 minutes

Throughout my involvement with code, I’ve been curious about both the volume and quality of code, as well as how it feels in particular moments to be programming.

Not so long ago, the suitability of open-plan offices for R&D (generally, “anything needing precise concentration for rather long periods of time”) was revisited and, according to many articles and people, programmers hate open-plan offices. This in turn translates to diminished productivity.

Part of the negative side of an open-plan office is due to interruptions of the flow, the optimal mental state of a developer. Good managers know how to shield developers from completely unnecessary and fruitless disruptions. Apart from one-man shops (where you, the CEO, are also the developer, the salesperson, the coffee grinder and the janitor), developers should rarely handle individual service cases directly (i.e. act as a helpdesk), nor should they have much direct daily output to sales activity. Developers often participate as technical aides in product design; they write both payload code and tests (the core of their trade), handle open bugs, and learn new things.

Developers should not (in my opinion) be much involved in the back-office activities of a project, such as maintaining the capacity and reliable availability of servers or configuring complex build systems; and they definitely should not be involved in the mindless ramifications of organizational architecture change, such as moving a lot of stuff from one folder to another, or fencing with office productivity / email / calendar suites. Well, the latter applies to every role in the company (not just devs), and I know it’s partly idealistic to state that change shouldn’t incur painful and numbing experiences.

My stance on the open-floor-plan issue is quite similar to the news. When I’m mostly in a developer role, I prefer somewhat closed rooms. That doesn’t mean that each developer should sit in their own closet, but rather that a team is shielded from extraneous noise and distractions. A very good idea is to have easily available, temporary quiet spaces for individual work. Booking them shouldn’t be necessary.

Joel Spolsky put it very well:

The more things you (ie. developer) can keep in your brain at once, the faster you can code, by orders of magnitude.

There might be a purely neurological reason behind this. Our sound perception works as a background thread, automatically. We kind of – computationally speaking – keep making sense of the word-stream that enters our ears. Thus the more sound signal there is in the enclosing space, the more we probably have to deviate from perfect concentration on the most important task.

The idea behind open-floor plans was probably to alleviate siloing (individual developers going solo, making things that become incomprehensible to others and pose a business risk). By putting people together, the architects perhaps hoped to level knowledge and make the team advance in more even and predictable steps. Reality may have got in the way.

Anatomy of an operating system

Reading Time: 2 minutes

There must have been a time when you would most likely have received some major pushback for talking about operating systems that could fit into a mobile phone. Radio phones used analog technology, and they were like tuned musical instruments, except that these devices operated on a specified band of the electromagnetic spectrum.

The early mobile phones were set up at the factory and then used unmodified for their lifetime. The only things you could change were the ringing tone and the contents of your address book.

The phones certainly had no “disk drives”, no code pointer, stack pointer, internal memory, or the like. At least this is how it looked on the surface – to the end user. The phones did in fact already have quite sophisticated operating systems, but they were closed-source and hidden from prying eyes, running stealthily within the phone. Just like nowadays you can’t modify the operating system of a washing machine… because there’s simply no compelling reason to.

The OS’s role was to handle things as complicated as multitasking; the early WAP browsers, which were supposed to bring mobile Internet to European countries, probably contained tens of thousands of lines of code.

It would be most interesting to know more about the early history of mobile operating systems. I had a British Psion Revo pocket computer, which ran the EPOC operating system. These little computers were called organizers, or PDAs (personal digital assistants). In 1986 Psion released the Organizer II, which marked a milestone in the march of PDAs.

Later on, Palm Computing and many others created a variety of these devices. They almost invariably lacked proper data connectivity, which perhaps made their era shorter.

Without slick network access, you had to use all kinds of docking solutions to hook the PDA up to a PC – and your data would not be synchronized in real time, nor was sending or receiving possible when the user was away from those “PC access points”. It wasn’t very beautiful at all. Even so, we were talking about sales figures on the order of 1 million units per year.

Later on, this EPOC would be the core on top of which Symbian was built. In 2001 Nokia produced its flagship communicator, the 9210, which ran Symbian OS (a rename of the earlier EPOC32). Symbian grew, measured by units shipped: by the end of March 2007, 126 million units had been sold. Nokia was also active in “non-smart” phones, i.e. ordinary mobile phones. It lost focus in the smartphone sector, partly because of the steep learning curve facing new developers trying to enter the Symbian world. In addition, Nokia kept the development environment semi-closed, requiring participants to register for a fee.

Now, years later, in 2011, as Microsoft and Nokia have decided to use Microsoft’s Windows Phone platform as Nokia’s future smartphone operating system, Symbian has been handed to a consulting company called Accenture, along with some 3,000 developers from Nokia. One good question which has been circulating in the media is: what happens to Symbian? And to the developers? Even though the sentiment is generally a bit gloomy, we might have surprises along the way. As Finns say: “Don’t throw the axe into the lake – do an ilmaveivi (a flashy lacrosse-style hockey move) instead.” (“Älä heitä kirvestä jorpakkoon, vaan tee ilmaveivi.”)