const labelOrdinals = [
  { hasLabel: 'Kirjauspäivä', index: 0 },
  { hasLabel: 'Arvopäivä', index: 1 },
  { hasLabel: 'Maksupäivä', index: 2 },
  { hasLabel: 'Määrä', index: 3 },
  { hasLabel: 'Saaja/Maksaja', index: 4 },
  { hasLabel: 'Tilinumero', index: 5 },
  { hasLabel: 'BIC', index: 6 },
  { hasLabel: 'Tapahtuma', index: 7 },
  { hasLabel: 'Viite', index: 8 },
  { hasLabel: 'Maksajan viite', index: 9 },
  { hasLabel: 'Viesti', index: 10 },
  { hasLabel: 'Kortinnumero', index: 11 },
  { hasLabel: 'Kuitti', index: 12 }
]
// Which field position corresponds to the text label labelName?
// Returns undefined if the label is not found.
// (The ln parameter is unused by the lookup itself.)
function pickField (ln, labelName) {
  for (let sc = 0; sc < labelOrdinals.length; sc++) {
    if (labelName === labelOrdinals[sc].hasLabel) {
      return labelOrdinals[sc].index
    }
  }
}
The code above really got me thinking.
There is a lot of excess in it: it is both slow and it takes space. What is actually going on?
I am basically trying to get a number for a string key. That is a map data structure, and the matching function is string comparison.
I am matching an input string against the key hasLabel in each of the array’s items.
So what’s the deal here?
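That observation suggests a simpler shape for the lookup. Here is a sketch (my own, not the original code) that reshapes the same label data into a plain Map; note I drop the unused ln parameter here:

```javascript
// The same label-to-index data, reshaped as key -> value pairs in a Map.
const labelIndex = new Map([
  ['Kirjauspäivä', 0],
  ['Arvopäivä', 1],
  ['Maksupäivä', 2],
  ['Määrä', 3],
  ['Saaja/Maksaja', 4],
  ['Tilinumero', 5],
  ['BIC', 6],
  ['Tapahtuma', 7],
  ['Viite', 8],
  ['Maksajan viite', 9],
  ['Viesti', 10],
  ['Kortinnumero', 11],
  ['Kuitti', 12]
])

// Average-case O(1) lookup instead of an O(k) scan over k labels.
function pickField (labelName) {
  return labelIndex.get(labelName) // undefined if the label is unknown
}
```

With thirteen labels the difference is negligible, but the Map version states the intent (key to value) directly.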
Let’s look at the bigger picture. Programming is both an art and a science.
The art is about getting inspired. The science is the part where you combine patterns, math, and a bit of knowledge of runtimes, even hardware.
It’s easy to get stuck on a naive initial idea about code. My pattern for the lookup problem is the one stated above in code: write a for() loop and do a string match inside it.
My focus was on:
a) elegance of the code (is it ‘neat’?)
b) what happens when the array grows to substantial lengths, say thousands of items
Measure it!
The way to go is comparative performance measurement. It’s quite pointless to talk about performance issues without data. So let’s measure!
Measurement can be done in basically three ways:
- make analytical, algorithmic judgements (complexity analysis)
- measure milliseconds or even microseconds of CPU time
- “instrument” the code and count abstract operations or CPU cycles
We’ll increase the dimension (length) of the lookup array in increments of 5 and keep measuring how long the for loop plus string matching takes. Let’s assume the JavaScript engine internally compares strings in a sane, linear O(n) way (n being the string length).
Analysis done in a theoretical manner (method 1) will show the long-term trend, at least on paper. In the real world the computer’s architecture, CPU caches, and similar factors will twist the actual results a bit.
Stay tuned. More to come!