Smartphones inside out – part 2: mobile networks

Reading Time: 14 minutes

I wrote a post about mobile phones a “bit” earlier. This is the follow-up, about mobile networks. They are, inevitably, something the whole mobile culture relies on. Yet I think mobile networks have received rather little attention from the general public.

5G is a hot topic right now in 2018. Each generation of mobile networks represents an incremental continuation in improving (most often) the speed, coverage, and overall capabilities of the network. 5G has been a long time in the making. There are stepping stones to it from the current 4G, so the change is not a “flip of a switch”.

Let’s get back to basics.

I was fascinated with mobile masts early on. I’m not “into them” so much nowadays, but I remember scaling one mast and checking it out myself as a teen.

During my studies I was a member of a local amateur radio club (OH2TI) in Otaniemi for a couple of years, around 2005 and 2006. That was a much better way to get familiar with masts, radio technology, electronics, and people.

Mobile networks are the almost invisible part of our mobile culture. Without them there wouldn’t be “mobility” at all – we’d just be carrying phones that could not seamlessly connect to other phones, servers, and landline phones. Proliferation of the network’s components has led to better coverage and a faster network. The support structure in densely populated cities differs from that in rural areas. However, the ingenuity of a mobile network is that it unifies this layer so that users feel as if the network were magically omnipresent.

Looking back now in 2018 at the roots of telecommunication, it’s easy to almost forget what a long journey it took to reach the quality of mobile networks available today.

While the story of Nokia Mobile Phones has somewhat faded from the public limelight, Nokia the network company goes on strong. In fact, in autumn 2018 Risto Siilasmaa is publishing a book about the current Nokia!

Network speeds in the mobile world have almost gone through the ceiling. A 50 Mbit/s mobile download speed would have been pure fantasy just a decade ago.

Nowadays it’s quite evident that mobile networks serve two primary purposes: transmitting speech and transmitting data. Speech used to be the sole type of payload going between a mobile phone and the network. Today the roles have almost swapped: people, through their smartphone apps, may well use data more than they talk.

SMS, the short text message, was a curious “freak invention”. People could send a maximum of 160 characters from one phone to another. SMS did not technically “consume” bandwidth between the hops in the mobile network, since the message payload was transmitted in a control channel. SMS became a killer app; it is still used a lot, even though there are plenty of “competing” applications that use data and can transmit icons, animations, and photos along with the text.
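
A quick back-of-the-envelope on where the 160 comes from: a single SMS payload is 140 octets, and the GSM default alphabet packs each character into 7 bits. In a shell:

echo $(( 140 * 8 / 7 ))   # 1120 bits / 7 bits per character = 160 characters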

How does a mobile phone talk to the mobile network?

A mobile network faces the phones using radio. Radio itself is an older invention, and modern digital mobile networks nevertheless always have to work according to, and respect, radio principles. That’s part of the “why”: we need standards bodies to regulate radio traffic; otherwise the ensuing chaos would never have enabled such widespread and homogeneous adoption of mobility.

Session between phone and mast

So, each mobile phone has a “discussion” (an ongoing session) with a mast. For example, our speech becomes a stream of bits: first the mobile phone digitizes the voice of its user, then sends this on the wire. The “wire” happens to be radio, not a cable — but do remember, the link between masts often is an actual wire. 🙂

In the 1990s the antenna design race was a big one. There was also a settling period over whether the antenna should be visible (a whip) or embedded cleanly into the phone. The latter won.

Everything is layered, so that, for example, a software developer does not have to know all the gory details of radio networks. Even the mobile phone manufacturer doesn’t these days have to know all the details: manufacturers can subcontract the design and manufacture of the electronic chips used in the phones.

Mobile network speed

The speed comes in three different aspects: latency, bandwidth, and network coverage. A fourth aspect is jitter (the variability of quality over time).

In reality there is also a fifth aspect that is not actually about the mobile network, but rather about the load on an application server. Just as with desktop computers, some sites might have inadequate capacity for their number of users, and thus the user experience is not the best possible.

Latency is the initial setup time of a data connection, and also the minimum time it takes to get any amount of data from a transmitting phone to a receiving phone (or server). Latency affects the quality of real-time communications like speech, video, and gaming. Less is better. Nowadays latencies as low as 5-10 milliseconds are achievable in mobile networks. When data capabilities first appeared in mobile networks, the setup could take several thousand milliseconds.
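
If you want to get a feel for your own current latency, one quick way on a Linux (or macOS) machine is the plain ping tool – here example.com simply stands in for whatever server you care about:

ping -c 5 example.com
# the closing "rtt min/avg/max/mdev" line summarizes your round-trip latency in milliseconds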

Why is mobile network speed tricky?

In mobile networks you are always out there, in the wild: things are a little more unpredictable than in office LANs. The number of cell towers, the quality of their interconnecting network, temporary weather-related issues, natural and built obstructions between the phone and the mast, network outages, and many other things affect your perception of a mobile network’s quality.

When you buy a mobile subscription – that is, sign up with a mobile operator – you’re buying a snapshot of their offering: how things stand today. Quality and prices can change in the future. If the operator doesn’t expand capacity as fast as it gains new customers, average throughput will decline.

Sometimes capacity bottlenecks are not directly up to the operator – it may face legislative, technical, or even political obstacles to changing the network structure.

Evolution of mobile tech

15 years back, a phone was mostly about two things: built-in features and fixed software. The pace of development was fast, but in a different way: every release of a new phone model was anticipated with questions about what Truly New Features were packed into the make and model.

Phones were also exploring the form factor quite radically in the 1990s: whether the phone was a single piece or had a hinge (joint) and a separate screen; how many physical buttons the phone had; the design of the buttons (ordinary, 4-way, a “navi” roller, etc.).

At some point completely new things, like the digital camera, surprised the market. The camera was an interesting thing, especially judged now in hindsight. It became an essential ingredient of the smartphone.

World first in camera phones? Samsung with the model

Then, further down the line, innovations became more anticipated: you’d still get leaps of exciting new development, but it was incremental, not disruptive. iPhones have basically stopped evolving since the iPhone 4: even the staff at Apple Stores roll their eyes when asked “What’s the next iPhone going to bring?”. Answer: “A little bit of this and that – nothing magic.”

Apps, apps, more apps!

Nowadays there are millions of apps in the app stores. An average consumer keeps about 17 apps on their phone.

Thus the competition for “attention” on a user’s phone is fierce, even though the technical limit on how many apps a phone can hold is starting to vanish. People just don’t want too much clutter on that very personal real estate. Typically starting an app requires some swiping left or right on the phone. The more apps you have, the longer it takes to get to the right one. It’s a usability issue.

Today’s 1 Mbit/s, 5 Mbit/s, or even 40-50 Mbit/s burst download speeds are insanely fast compared to the almost magical 9600 bit/s that a phone was initially allocated by its “host”, the mobile tower. Today’s speeds are thus roughly three to four orders of magnitude higher.

Let’s put Moore’s law to the test. Moore’s law can predict the future of technological evolution. It’s an empirical formula, based on the original observations of how gates (components) in integrated chips kept getting smaller. This miniaturization is the key phenomenon that enables high-tech products.
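
As a rough sketch (my own phrasing of it, not Moore’s exact wording), the law says the component count $N$ on a chip doubles roughly every $T \approx 2$ years:

$$N(t) = N_0 \cdot 2^{\,t/T}$$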

Evolution of the Mobile Network

How slow or fast were things with an early “smartphone”?

When Nokia’s “Communicator”, a flagship product, hit the road in 1996 and soon became a legend, the mobile data network was quite different. A phone’s data connection practically rode on at most 2 combined GSM data channels. Each data channel carried 9.6 kbit/s, so the Communicator could run at a then-whopping 19200 bit/s.

Take as an example one digital image of around 128 kilobytes: it would take 54 seconds to download. Impractical. Nowadays, in 2018, the download time is in reality somewhere between 1 and 2 seconds. Not bad at all! Call it at least 25x faster now.
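
The 54 seconds is straight arithmetic, easy to sanity-check in a shell:

echo $(( 128 * 1024 * 8 / 19200 ))   # bits in a 128 kB image / 19200 bit/s ≈ 54 s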

By the way, staying with that image download example: with 5G coming in 2018-2020, these kinds of small UX improvements are expected to get “just right” – that’s my bet. There’s also something the tech itself can improve: the way those images are used affects the various parts of the timing, and thus our psychological feeling of how fluently a particular service works.

Let us hop back to 1996 and the Communicator:

In a practical sense, most emails, for example, were pure ASCII text – around 1-2 kilobytes per mail; thus 20 new mails could be downloaded in roughly 18 seconds, or a single email in about 3-5 seconds, including all the protocol setup that took place when the client-server connection was established. It wasn’t exactly as fluid as you’d want, but then again: it was quite a breakthrough, a paradigm shift. You did not have to be sitting at your desk. You were suddenly truly “mobile”, capable of doing most parts of your work anywhere. Well, anywhere within reach of the mobile networks.

The 19200 bit/s we mentioned is a tiny fraction of the speed of today’s 4G networks – well under a tenth of one percent. Imagine that!

Still, the 1990s Nokia Communicator was a phenomenal success story in its time. It was iconic, and it also brought significant power to its owner.

Handover and the magic of data

A mobile phone, in order to stay useful and true to its breed, keeps a connection to its “parent” – the mobile mast. The phone basically only peeks into the world through its data connection. This is the very core phenomenon behind the “world is at your fingertips” illusion: yes – and no. Only the next “hop” (a computer with a network connection) is at your phone’s fingertips. If that server machine – the “receiving end” of your mobile data connection – is not up, you’ll take a dive into the abyss: “no connectivity”. However, the technology has kept evolving, and almost 100% of the time you will have a data connection. Almost magic!

The scenario is not that much different from your laptop and a WiFi access point (“hotspot”). With mobile phones, the distances are greater, and usually there is also rather constant movement – thus a “handover” is expected. In a handover the mobile phone leaves one network and enters another. In many cases this is completely automatic, and the user doesn’t even know a handover took place.

(By the way: depending on the design, if in the “laptop scenario” we mentioned you leave the range of your hotspot, you’ll lose connectivity. Any attempt to access the Internet will show an error message. It is possible to design campus networks with WiFi repeaters placed densely enough within the area that you can carry your laptop around and keep working, with the laptop jumping from one hotspot to another as needed.)

Looking at the feature list of a mobile phone sometimes makes us dizzy. The lists, as fair and accurate as they might be, don’t tell the whole story. Usually just acronyms are used: WCDMA, 3G, GSM, WiFi, UMTS, EDGE, and so on. I think there’s a vast population of users to whom these words mean absolutely nothing – or, sometimes even worse, they vaguely bring to mind something that in practice leads to expensive or inefficient use of the phone.

On the manufacturer side, 3G and 4G networks have become the de facto standard. Availability and quality are still an ongoing quest worldwide, and there are huge differences in average density and speeds per country.

Without proper knowledge one can use the phone in a very suboptimal way, pay too much in fees, and get frustrated by network outages. Knowledge never hurts.

GSM – the foundations of mobility

The three letters “GSM” used to be synonymous with the whole handset industry. GSM originates from a standards group (originally Groupe Spécial Mobile), a very small one, which drafted the technology that united a back-then quite heterogeneous set of mobile network technologies. One crucial example: handover.

When you use a mobile phone in a car or while walking, you cross boundaries between different base stations. A base station can be “heard” for only a few kilometers – beyond that the signal fades out. A handover happens when two base stations agree that your mobile phone call will continue even after crossing the boundary.

GSM is the main standard that enables worldwide mobile communications. GSM was immensely important in its time, and it still guarantees interoperability of phones at the voice and text message level.

A couple of rules of thumb:

– know your phone’s settings: what your phone’s hardware (features) allows, what can be modified as settings, and where to find them in the menus

– different network technologies have varying maximum speeds, physical ranges, and security. Use the one that suits your purposes.

Analog and digital networks

I’ll largely leave the software side aside and concentrate on the (physical) network features. The original cell phones used a different kind of technology: analog networks.

These were very much the same as what radio amateurs had been using for decades: one’s voice is shifted to a different frequency, mixed with a carrier wave, and transmitted. On the other end, the reverse process happens and the listener receives the original voice.

While simple, analog networks had problems with security, conflicts (overhearing other people’s calls), and the inability to serve as a backbone for modern services. Thus the GSM standard was born. The word ‘GSM’ came to mean the cell phone itself, too. GSM is an interoperability standard that defines digital communications between phones, via mobile base stations (‘masts’).

The neat idea of digitization is that once you have that basic ‘pipe’, a connection between two phones, you can do a lot of things over it. Text messages, the Web, email, and a lot of other services became possible. A flagship product of this kind of “mobile office” was probably Nokia’s Communicator 9000.

Antennas

But as simple as we’d like to keep the digitization story, the truth is a bit more complex. The basic reason is that, no matter how digital, the communication happens in the electromagnetic spectrum – through the air. Electrons oscillate in the transmitting antenna, launching electromagnetic waves; these propagate through the air (they would travel even more cleanly in a vacuum, but we’d have trouble without air to breathe 😉) and induce a signal in another antenna – the receiver.

This movement of electrons (electricity) obeys the laws of physics, which don’t give a heck about what’s going on at the “upper layers”. All the communications we use in today’s world (2018) happen on some band (slice) of frequencies.

Where do WiFi, Bluetooth, 3G, 4G come in?

The first thing to note is that phone communications come in three distinct types:

  1. autonomously between two phones
  2. phone using a local wireless network (WiFi)
  3. phone using the mobile data network (3G, 4G)
The first type of communication includes Bluetooth and infrared (IR). It’s just about “swapping” data when the phones are relatively close to each other. The significance of phone-to-phone communications has declined, as it is simply easier to routinely use a “TCP/IP” communication method, i.e. the Internet. Another thing that diminishes the practicality of phone-to-phone is that large business platforms like Facebook, Google’s tools, and a host of others all benefit most when all data goes through a central point (their servers).

However, one place where autonomous communication will remain important is between the mobile phone and local information systems, such as ticket vending machines or a car’s entertainment system.

It doesn’t cost anything, because there is no operator in between. The phones usually negotiate a security code (PIN) by asking the users to agree on one. People use this kind of comms to swap addresses, copy files, and so on. Speeds vary, Bluetooth being faster than infrared. BT is also more secure and more robust against interference. NFC (near field communication) is also a type of autonomous communication, where the receiver is often a POS device at a shop.

Where does one need WiFi?

WiFi is actually a very neat feature that can save you money and make software installations, updates, and operating system updates easier. WiFi means that your phone essentially looks like a computer to a WiFi hotspot device: the device lets you become part of the network, essentially the Internet.

The most obvious places to have WiFi are your home and workplace. Public places like libraries, cafes, and metropolitan areas in general can also have network coverage.

WiFi itself cannot charge you (make you pay) directly, but in some places you may not receive access rights to the WiFi network before you pay. For example, some hotels, airports, and similar places require you to buy a coupon that gives you a username and password, or might otherwise authorize you to use the network.

The ‘native’ data communication happens over the 3G or 4G networks. These are the mechanisms by which your phone can exchange information with any other computer (server) reachable via the Internet. When you load a web page, read your email, or do anything that comes from the Internet, your phone will most likely use these networks (though the preference order of networks is configurable).

Note that 3G and 4G data has a real price. There are several billing plans (‘data plans’): by the amount of data transferred, a fixed data plan (monthly cost), a temporary data plan (usually bought for one day) – too many to list exhaustively. What matters is that you know your plan type.

The most dangerous situation is when you are not sure what kind of plan you have and still continue using the data features: this puts you in jeopardy of getting a very hefty bill the next time your operator sends one. So: be sure! Use the Internet (on a PC), or call your operator, to get up-to-date information on the plans.
What’s the future of mobile computing and phones?

– “no one knows”
– ...but still we might have good guesses

Understanding the market drivers:

– companies
– temporary financial drivers in component and manufacturing profitability
– large handset manufacturers are also more and more interested in network quality
– B2B sales drive security needs: corporations are obliged by law to meet security standards, and mobile security is part of this
Many smartphones become severely “impeded” without a good-quality mobile network. The phone/network symbiosis has become evident. A way to test this is to turn off mobile data on your own phone. The apps will then mostly function only when you’re within reach of a free WiFi. Some apps, however, are useful even in this offline mode.

One can still home in on WiFi networks here and there, but without mobile data coverage, the phone loses much of its everyday usefulness.

Theoretically, an ordinary mobile operator’s services could be hosted on a server, letting the phones roam in WiFi-like networks. This would open up the traditionally very “closed-domain” role of a mobile operator. But strong economic incentives direct the path of technological advances in things like mobile networks: one of a carrier’s most valuable and hard-to-imitate assets is exactly this “core mobile network”.

There have been tests of this kind, for example in the city of Oulu, Finland. Oulu is one of the birthplaces of mobile telecommunications and has a pioneering attitude toward developing even radically new mobile paradigms. The city of Oulu provided a “Finland first” in a communal, free wireless network.

Why haven’t the data speeds grown faster?

Reality vs. what we think:

– network coverage
– true average attainable speeds

What’s keeping us back from enjoying high-quality, uninterrupted TV on a mobile phone? The answers vary, and there’s a lot behind the development. Some of what we experience is due to heavy standardization processes. Mobile communications is a field that needs rules for the players, to make it possible for a large group of users to enjoy the experience. If there were no standardization of radio frequency use, we would essentially be jamming each other electronically, and no one would be able to get any data through.

Networking in general had problems in the 1970s and 1980s due to inconsistent technologies: even within a single building there might have been half a dozen different network technologies. (By the way, Cisco – the network company – was built on the vision that these networks should be interconnected and work together!)

The same kind of problem troubled WiFi, too – at first, incompatibilities seemed a major obstacle to success. When a technology is in its infancy, consumers are often very skeptical – and when the benefits start to show, new customers line up.

Final words

Mobile networks arose from a simple idea: carry voice and data through a backbone, and “surface” the coverage with a radio connection. Thus you create a network that allows you, the user, to roam around freely and still be connected. The first analog mobile networks were already being tried in the 1970s. The GSM standard was a major milestone in the unification and standardization of digital communication in mobile networks. Data usage overtook voice during the 2000s. The craving for ever greater speed, coverage, and flexibility drives the evolution of mobile networks, now entering the 5G era around 2018-2020 throughout the world.

DON’T CLIMB THE MASTS…

By the way! I really don’t recommend scaling mobile masts. Rather, pass this article on, and/or drop a question right here in the comments. That way we’ll all get much more information on this modern-day marvel, the mobile network.

Get a clutter-free Dia editor

Reading Time: < 1 minute

I love doing both formal and informal modeling in Dia. However, sometimes the grid and other visual clutter bother me. There are two in-page items (the Grid and the Page Breaks), plus the Rulers.

The Page Breaks were difficult to turn off. It took me a long time to find the right place for that setting. That’s why I want to share it.

We’re aiming for this:

(screenshot: a blank, clutter-free Dia canvas)

Turning off Grid, Rulers and Page Breaks

  1. Click View -> Show Grid (to turn off showing the grid)
  2. Click View -> Show Rulers (turn off)
  3. Click File -> Preferences -> [View defaults] -> “Page Breaks” -> Visible (turn off)

That third step was the one I had been looking for for a long time!

Stop, Intel – we’ll come back to this in the summer!

Reading Time: < 1 minute

One of the most unpleasant ways to start a day is to notice that your own machine doesn’t work.

With these instructions you put Intel’s unstable microcode releases on hold for now, on apt-based Linux distributions such as Ubuntu.

(For Microsoft devices, see the advisory on Microsoft’s site.)

If around Easter 2018 your Ubuntu computer suddenly seems to be “falling apart”, for example by showing a kernel panic, first check whether the culprit is just a bad software update. You may be spared buying an unnecessary new machine, and all the other hassle that often goes with it.

TL;DR: if you want to block the microcode update

  1. Open a shell
  2. sudo apt-mark hold intel-microcode
  3. Verify that the hold took effect.
  4. Follow the news from time to time over the next 3-6 months
  5. Once the intel-microcode package has stabilized and crash reports are no longer coming in, you can re-enable the updates:
    sudo apt-mark unhold intel-microcode

Verify the hold by running one more command

It’s easy to check whether the hold took effect:

$ apt-mark showhold
intel-microcode

Remember to check from time to time how the situation develops. For now, if you otherwise keep your machine secure – that is, there are no potentially unknown users inside your Linux – I wouldn’t consider postponing the microcode updates a bad idea for avoiding this problem.
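
If you like, you can also check which microcode revision your CPU is currently running; on Linux it shows up in /proc/cpuinfo:

grep -m1 microcode /proc/cpuinfo   # prints e.g. "microcode : 0xb4"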

A balancing act: a theoretical problem or a crashed machine?

As far as I understand, for home use the whole Meltdown problem is quite small. Meltdown allows data to be read through a side channel; in practice, on multi-user servers the kernel’s memory protection fails, because the caches and speculative execution (branch prediction) of modern processors effectively give a process the chance to spy on the results of concurrently running computations through the traces (“side effects”) they leave behind.

Lynx alias: painless usage

Reading Time: < 1 minute

Bug repellent

This article is a speed-up tip for launching the text browser ‘lynx’ on Linux. It might interest others too. But I doubt it. Happy Friday in any case!

Accepting cookies

NOTE!

If you follow the instructions below, i.e. run the lynx text browser with the -accept_all_cookies option, do go and read up on what it means. Cookies are a thing worth getting at least somewhat familiar with.

The Lynx text browser

I occasionally use a text browser instead of the usual Firefox / Chrome; typically when reading documentation, for example.

Lynx, originally developed at the University of Kansas, is perhaps the best-known and longest-supported web browser. It differs from mainstream browsers in that you can run lynx in a terminal, without so-called “graphics”.

One excruciating detail in using Lynx was that it wanted a confirmation for every single cookie request.

The problem is fixed by using the -accept_all_cookies parameter at startup. Bueno!

And this is a typical chore that can be automated. It’s tedious to always remember to add the long parameter; I of course just want to run ‘lynx’ and hit Enter.

A permanent, per-user ‘lynx’ alias

Let’s attach the accept_all_cookies flag to the ‘lynx’ command:

cd $HOME
vim .bash_aliases
alias lynx='lynx -accept_all_cookies'

(save the file and exit the editor)

source ~/.bash_aliases

Let’s still verify that the alias took effect. Ask the interpreter what is currently aliased, and pick out only the definitions containing the text ‘lynx’:

$ alias | grep 'lynx'

You should see the line:

alias lynx='lynx -accept_all_cookies'

And the “real” test itself. Run from the shell:

lynx

If Lynx now lets you browse around without nagging about cookies, the aliasing works!

Software vs. bacterial cultures

Reading Time: 2 minutes

An interesting thought came up this morning; an intuition about the speed of software development.

Software is code, that is, lines of text. Of course you have to know how to write the lines: they really describe actions (algorithms) and relationships between things (data structures). In practice, the choice of platform or technology (C, Java, JavaScript, Ruby, Python, …) does not change this simple fact.

When enough blocks have been stacked, queued, and laid on their sides, a piece of software is born.

Originally I thought of software development purely in terms of lift and drag; from that you quite easily arrive at the famous S-curve, which has two “levels”: a support level and a ceiling.

I was thinking of that kind of S-curve, the biological one. The growth of a biological population has four phases:

  1. adapting to the growth conditions
  2. the start of growth
  3. the period of rapid growth
  4. reaching carrying capacity; growth stops

The S-curve is a bit different from the hockey stick invoked in economic boom times (“good times”), which often depicts the exponential early growth of something. The S-curve doesn’t even pretend that growth would be eternal – assuming ceteris paribus.

Biologically, bacterial cultures, for example, often grow fully exponentially at first. Since the culture’s growth is based on cell division, and each cell divides on average after a certain time, you get the same phenomenon as in a nuclear explosion: a chain reaction. Then, after some time, the bacterial culture runs into a constraint: at its simplest, the constraint can be lack of space, or, for example, lack of nutrients. Or the spread of disease in the population. The constraints force an equilibrium in which births and deaths are equal. Thus the population cannot grow beyond a certain size.
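
A minimal mathematical sketch of this S-curve is the logistic growth model (my addition here; it is the standard textbook form):

$$\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right)$$

where $N$ is the population size, $r$ the intrinsic growth rate, and $K$ the carrying capacity: while $N$ is small, the growth is nearly exponential, and as $N$ approaches $K$ it grinds to a halt.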

(By the way, I’ll return later to a small nuance here: the concept of “size” in a bacterial culture vs. in software.)

Analogies?

In software, just as in a bacterial culture, getting off the ground sounds familiar:

1. The Lag phase
   During lag phase, bacteria adapt themselves to growth 
   conditions. It is the period where the individual bacteria
   are maturing and not yet able to divide. During the lag phase
   of the bacterial growth cycle, synthesis of RNA, enzymes and
   other molecules occurs. During the lag phase cells change
   very little because the cells do not immediately reproduce
   in a new medium.

Wait a moment – that felt familiar! Which phase of a software project corresponds to what happens above in a bacterial culture?

Let me line up a few questions for the next part.

  • if software development also starts to congeal, why are projects still often run for many years?
  • does software have the same kind of absolute limit as a bacterial culture?
  • how big are real software systems?

Ideas? Hit the comment button! Thanks – continued in part 2.

What are software updates, really?

Reading Time: 5 minutes

An operating system is only as secure as the programs that run on top of it. That’s roughly how I think about it, in practice.

Software updates really come for only two reasons: development or a fix.

  • the program has been developed: new features, or existing ones improved
  • a bug is fixed: it can be harmless, or security-related

Automatic software updates often contain both kinds. Bugs are by nature discovered over time, as users notice something abnormal while using the program. The “final number” of bugs is impossible to predict even in theory; hence a program can never be said to be finished.

Updating smartphone apps in particular has become quite pleasant from the user’s perspective. I mainly know how iOS (Apple’s phones) behaves, and with it a software update has never become any kind of sticking point.

Why should you know about software updates?

The most problematic “machines” are actually the ones whose user has no clue what software is, or what neglecting security can mean.

The situation where a computer is simply left as it is, and changing its settings is feared, is actually quite common. Even though I have worked and tinkered with modern (multitasking, Windows and Linux) operating systems for 20 years, at times I too catch myself leaning toward, or at least wishing for, passive bystanding. IT and software updates could be described like this: they are really nice as long as everything works without friction. When problems arise, untangling them can turn into a surprisingly long and complicated affair.

That’s why – so that those background matters become clearer and the fears around updates fade – I’ll also walk through some basics of security and software updates.

Security is a “sexy” topic, yet in a certain way terribly boring. It’s the kind of extra that one often can’t be bothered to take care of in practice.

I’m perhaps a bit of an exception in that I fell for the principles and theory of security through school. Still, I admit that a podcast on the topic that runs 2 hours at a time, a couple of times a week, is sometimes numbing to listen to. So I can’t unreservedly recommend Steve Gibson’s and Leo Laporte’s Security Now! podcast, even though it’s among the best in its field (if you want precise analysis of current security problems, by all means hit Subscribe on that podcast!). It just isn’t one that covers the basic needs; it goes really deep into the background of things.

Actually, now I’ll turn to you. What is the best, punchiest security news source that also gives practical instructions for handling things? Do you regularly listen to or watch a particular podcast, vlog, or the like? Put it in the comments!

As users we’re really only interested in getting the actual work done, i.e. that the computer accomplishes what we originally set out to do.

Security also suffers from usually being discussed in very technical terms. Clear and sufficiently concise information in Finnish is also quite hard to come by – especially if you want it quickly once security problems surface.

OS + software + me = the whole

The operating system (Windows, Linux, or Mac) is what it is, but if the programs on top of it leak and misbehave, the security “underneath” doesn’t matter. That’s why updates done by the user – and above all an understanding of security – play an important role.

A long time ago, meaning the 1980s, operating systems had essentially no security at all. They were programmed on the assumption that it would be very hard for outsiders to get onto the machine; the Internet was not yet in home use.

The software of that era also contained no deliberate weaknesses, and viruses and other malware were only on their way, arriving in the 1990s.

What is “software”, really?

From the computer’s point of view, software is a pile of bits stored in a file. Before the program runs, the operating system reads the file into memory and starts feeding its commands to the processor. A program is thus a special file that follows a particular ordering, produced earlier and copied to the user.

If you write a small piece of text in a notepad, it is not a computer program. If you tried to feed such a file to the processor, you would get an error: the processor does not recognize the commands as meaningful.

Way back, programs could be as small as 5-10 bytes (i.e. short, measured in the number of instructions). The shortest program is in principle 1 byte long, if the processor has an instruction whose encoding takes only 1 byte of memory. In practice, programs are hundreds of thousands or millions of bytes long. You can check the size like this:
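
For example, on a Linux machine (here /bin/ls is just an arbitrary example of a program file):

ls -l /bin/ls        # the size in bytes is the fifth column
stat -c %s /bin/ls   # or: print the size alone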

A program file is said to be “executable”: runnable. The counterpart to executable files are the various data files, which contain the information that programs use.

All programs consist of very similar basic building blocks: instructions for moving data around in main memory. Data can be compared, copied, and deleted, and all of this can be steered by “logical conditions”. Mathematically, the operations boil down to simple Boolean logic and elementary arithmetic.

Automatic updates

Quite often these days the operating system updates itself automatically – on a schedule and “completely” (with respect to all of its parts). This in itself has perhaps been one of the greatest advances in practical security. Good!

But here are 3 reasons why automatic updates are not the solution to every security problem:

  1. a race against time
  2. the automation is turned off
  3. sometimes automation doesn’t fit (i.e. it causes more problems than it solves)

A race against time

Technically, an individual security threat stems from a “hole”, i.e. a bug in a program. The bug exists whether or not it has been discovered. A bug is a combination in the program code’s logic that is harmful if exploited deliberately (or even unintentionally). As an example: a program is supposed to always check the administrator’s password before allowing certain actions. The program is later improved and more code is written, and at that point the coder forgets to add this check. The program now has a bug: in a certain special case the password is not checked, and a normal user can, “without permission”, do something that should be allowed only to an authenticated administrator.
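
A purely hypothetical sketch of such a bug, in shell-script form (every name below is made up):

do_admin_action() {
  check_admin_password || return 1   # the intended check
  rm -rf /var/lib/myapp/cache
}

# the later "improvement": a faster code path was added,
# and the author forgot to repeat the password check
do_admin_action_fast() {
  rm -rf /var/lib/myapp/cache        # BUG: runs without authentication
}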

But the probability that someone would actually exploit that problematic bug is quite small as long as the hole’s existence is not known. Reporting and publicizing a security bug is therefore a double-edged sword. The moment a bug becomes public, its exploitation for unauthorized purposes may also begin. But the report sets in motion a process in which someone usually writes a fix (“patch”) – this typically happens within 1-14 days – and we users can then accept a software update, after which the program is safe with respect to this bug.

When talking about security vulnerabilities, the term “responsible disclosure” means that the party who first discovers a hole tries to safely bring the information only to those (presumably “good”) parties who can fix the flaw before the general public learns of it. The head start can be significant and extremely important, and it reduces the damage the threat causes.

2. Automatic software updates deliberately turned off

So automation is not always the answer. In practice, the reasons users block automatic updates are either that updating disturbs their work, or that – for example as a software developer – they want tighter control over which program changes, and when.

Interdependence and “dependency hell”

Perhaps the most delicate aspect of updates is that they can also cause problems. This stems from the interdependence of software.

The dependency question is: if a computer has N different pieces of software installed, which version of each must be present so that they all work together? More precisely, it’s usually not about “whole applications” but about shared software libraries. Libraries are installed because they make software development easier. But this creates (or at least used to create) interdependence, and sometimes a vicious circle in which it was hard to get the right pieces in place for the whole to work.

In practice this becomes, mathematically, a “product of N factors”, i.e. a multiplication with many terms. With 3 programs and 5 versions of each, the test set would be 5*5*5 = 125 different combinations. In reality there are considerably more versions, and more applications. A PC might easily have 20 applications, each with at least those 5 versions available. 5^20 is quite an astronomical number.
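
The arithmetic is easy to verify in a shell:

echo $(( 5 ** 3 ))    # 3 programs x 5 versions each = 125 combinations
echo $(( 5 ** 20 ))   # 20 programs x 5 versions each = 95367431640625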

Truth be told, nowadays – in fact already a long time ago – this “dep hell” problem has been overcome with quite an ingenious mechanism: each program keeps its own virtual set of libraries. Each program can imagine living in its own little cubicle, where the operating system provides the suitable versions.

That said, dep hell keeps recurring in different forms. The wheel gets reinvented year after year, depending on which software ecosystem and operating system we’re talking about. The solution pattern exists, but building its practical application is still “work in progress”.

Even in a fully automatic system, testing all the combinations would not necessarily be very fast – let alone by hand. Another big problem in a real situation is that not all operations are easily “undoable”. In other words, if we test A1.0 and B1.0, find that the combination causes problems, and want to return to the original situation where neither program is on the machine, getting there may be hard.

In practice one often ends up simply trying whether the latest (freshest) versions of all the programs work together. Fortunately, the “independence” of application programs has increased; programs are often no longer so tightly interdependent.

Dependency is a complicated matter – often so complicated that in the end we just fall back to a very simple definition. Often the ones testing whether things work are us, the users. Ideally, software could be self-healing and adapt to different situations, but technically that is still a relatively distant dream.

In the next part: software updates on Linux (aptitude), and an introduction to semantic versioning. Maybe. At least the aptitude part!

Basis for the prophets of Remote

Reading Time: 2 minutes

The world seems to be talking about digitalization. It’s a rather elusive subject at times. I like to think of digitalization in a very simplistic way: how much effort does it save?

The core of digital work (in software) is keeping your eyeballs on the screen, your mind concentrated, and the environment in a shape that supports you physically. The rest is minor. There are thus quite few real prerequisites for successful work:

  • fast Internet access
  • a 220V outlet for the laptop charger (i.e. normal electricity)
  • a table for work
  • decent weather (preferably indoors)
  • calm atmosphere (not too much buzz and noise around)

There are of course aspects of teamwork involved, but to keep the story simple, we can treat software and consultancy work as place-independent.

Enter Sowell

In the spirit of the American economist Thomas Sowell, let’s ask what’s beyond the obvious: what implications does remote working actually have, beyond the explicit definition we just gave?

  • work quality
  • work amount
  • customer benefit
  • benefit for the corporation (employer)
  • environmental benefits
  • any other pros?
  • what about cons?

One of the biggest promises of digitalization is that it makes location (and time) irrelevant. That’s one really interesting feature I found while working at Mainio Tech, a Helsinki-based software consultancy. The infrastructure was natively designed to support remote work. The tools and setup were there; there was no need to “start thinking” about how to enable remote work. It made a big impression on me. It was also a strong “Eureka!” moment to observe what it takes for things to click properly.

Sometimes there are technical barriers to remote work. In reality they are often overcome with technology. The remaining question is the shared vision of what can be achieved with a distributed workforce.


Present History of Man (2018)

Reading Time: 2 minutes

In 2001 I wrote a list about certain things related to technology, culture, and inventions. Back then I looked at the world through the eyes and experiences of a quite fresh technical-university undergraduate.

In the same text I also predicted, rather vaguely, some aspects of the future and how it might turn out.

The list


  1. you had to actually carry things called “shades” to protect your retina from too much bright light.
  2. you had to swallow chemicals, called drugs or medicine, which affected your metabolism in a way that made you feel a bit relieved. You never really understood why you took it, but because of its relieving effect you actually sort of became addicted to the substance.
  3. you had to carry either metal pieces called coins or plastic cards called credit cards to indicate you were eligible to buy things. From time to time, some genius decided the unit should be changed from one well-established to a new one.
  4. you had to manually synchronize and move bits from one place to another for them to be accessible. Automatic secure pan-synchronization of data was not invented yet.
  5. the clerks at the shops made crude mistakes in counting out change.
  6. in computer operating systems, things sometimes worked and other times they did not. There was no logic to any of it. There were relatively big lag times in interactions with the system.
  7. pictures were imitated in computers with discrete elements called ‘pixels’ or picture elements. They had not yet discovered Unlimited Natural Graphics.
  8. disk drive space actually ran out. Worse: this was noticed at the very moment you ‘saved’ your documents. You often had to do that manually.
  9. Computer documents were static and stayed on the disk until somebody did something about it. Documents did not imitate natural memory and disappear when irrelevant. Relevance Links (RL) were not yet discovered.
  10. Computer interaction between human and the machine was done with crude devices like the
    mouse (pl. ‘mice’); it was a piece of plastic with a rolling ball inside it. The movement of
    the ball was registered by 2 rotating rollers. Both the X- and the Y- axis had their own rollers.
    “Explicit Intention”-, “thought pattern”-, “Task relevance”- and physical indication measurement methods were unknown in Human Computer Interaction of that age.
  11. the displays were a legacy from around the 1960s. The most central piece of electronics was the
    Cathode Ray Tube (CRT). LCD and plasma displays were claiming space around the turn of the millennium, but CRTs were still widely used.
  12. you had to remember tens or even hundreds of little codes during the day to manage your life in the so-called Information Age. You could not concentrate fully on your ambitions. Codes were required to operate your personal communication device, computer, car, doors, etc.

Found an obsolete item?

Tell me when you see that an item has disappeared or that the fact is now obsolete. Write a comment here at the Jukkasoft blog (or email me), and mention:

  • “Present history list 2018” as the subject (if by mail)
  • which point on the list
  • whether the issue is resolved completely or partially

N&Bx: dissecting (ba)sh

Reading Time: 2 minutes

Welcome to Nuts&Bolts with Linux, or ‘N&Bx’.

I’ll keep sharing some of the things I have recently found. It’s amazing how rich a GNU/Linux system actually is. I’ve jokingly said many times that you can either play games or just keep exploring more and more of Linux. Either way you won’t run out of things to do.

What is a shell?

The shell is really what the name says: a kind of cover, an enclosure, a “world of its own”. The shell’s purpose is to interpret commands and act as a middleman between the user and the core of the operating system. Without a shell we’d have to be really hardcore enthusiasts, like those of the 1950s who knew how to program a computer by flipping switches and levers on and off.

There are mainly 3 kinds of shell usage

The most usual kind we think of with a shell is interactive use: you type commands, and the shell is your tool for speaking with the underlying OS. A shell contains a lot of amenities and conveniences that we take for granted, and some that are probably still undiscovered by many users.

The second kind of usage is executing a single command with a shell. You’ve probably seen

sh -c "echo '1' >> file"

kinds of commands, which take advantage of the shell’s rich built-in commands and pipes, sometimes combining these with program execution.

The third use case is where a script uses the shell as an execution environment. This is the batch mode: for example in Linux, commands that run regularly via scheduling get an underlying shell, which sets up an execution environment, and then the payload command runs on top of it. The payload itself can be a shell script, an internal shell command, or an external binary (a program).
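
A cron job is a concrete case of this third kind: cron spawns a shell, which then runs the payload command. A sketch – the schedule and script path here are made up:

# in the crontab (edit with: crontab -e)
SHELL=/bin/bash
*/5 * * * * "$HOME"/bin/backup.sh >> /tmp/backup.log 2>&1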

Bash has been my default shell in Linux for years. I never really thought about the ‘why’ of that decision originally, but I guess bash seemed somewhat more sophisticated than the standard ‘sh’ shell.

Next in N&Bx we’ll check out how bash sets up its “environment” and its own options. Until then, adios!