I was building a recruitment platform in Java, and at the time I was writing the spec and mechanism for a format definition file for IT skills.
Then I came upon our good old friend XML and its companion, the grammar definition language XSD (XML Schema Definition). The two have a relationship: since XML files must in general be validly constructed (not “broken”), validation happens by programmatically checking an XML file against the rules in an XSD file.
XSD is itself a language, and quite a flexible one: you can write your own custom grammars to validate XML against. So a question arises: how is XSD itself defined, then?
Just enough down the rabbit hole: XSD structure
I’d first written just an ad hoc text file about some of the learning I needed to do.
The file had tags, sure, but there was no grammar yet. To make the writing easier, it’s useful to have autocompletion in the text editor.
Needs so far:
the XML schema captured as an XSD
a validator tool: takes XSD + XML, shows the results
an editor autocomplete mechanism for writing compliant XML
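To make the XSD–XML relationship concrete, here’s a minimal sketch in Java (the platform’s language) using the standard javax.xml.validation API. The tiny skills grammar and the `isValid` helper are hypothetical illustrations, not the real IT-skills schema:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class XsdCheck {
    // Validates an XML document (as a string) against an XSD grammar (as a string).
    // Returns true when the XML conforms to the schema, false otherwise.
    public static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {  // SAXException on a violation, IOException on read error
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical mini-grammar: a <skills> root holding <skill> string entries.
        String xsd =
            "<?xml version=\"1.0\"?>" +
            "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">" +
            "  <xs:element name=\"skills\"><xs:complexType><xs:sequence>" +
            "    <xs:element name=\"skill\" type=\"xs:string\" maxOccurs=\"unbounded\"/>" +
            "  </xs:sequence></xs:complexType></xs:element>" +
            "</xs:schema>";
        String goodXml = "<skills><skill>Java</skill><skill>XSD</skill></skills>";
        String badXml  = "<skills><hobby>chess</hobby></skills>";
        System.out.println(isValid(xsd, goodXml));
        System.out.println(isValid(xsd, badXml));
    }
}
```

The same `Schema` object can be reused across many documents, which is also what an editor’s autocomplete machinery leans on: the grammar, parsed once, tells the tooling which elements are legal at the cursor position.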
The quantmod library is one of my first darlings from the exploratory experiments I did in quantitative finance. I’d previously read about algotrading and all, but what really sparked my interest further were two things:
The main workhorse you will be working with is ‘R.exe’.
Naturally, the R software installer also sets up .dll libraries and some registry keys to make sure the R environment is properly configured, but you don’t usually need to do anything manually with these.
shadow values (a copy of the positions taken before the calculation)
using shadow values, the execution order (which boid is picked from the mesh) does not matter
the shadow buffer makes the calculations behave virtually as if they were done the “natural” way: all at once
the opposite of using a shadow buffer would be calculating the boid updates serially; the fresh, mid-round position of a boid would then be used in all the other members’ calculations, so the ordering of the calculation would affect the resulting positions (an externally introduced flaw, a sort of order bias)
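The points above can be sketched in Java with a toy one-dimensional rule of my own invention (each boid moves 10% toward the mean position; not a full boids rule set). The shadow-buffer version reads only the frozen copy, while the serial version leaks fresh positions into later calculations:

```java
import java.util.Arrays;

public class ShadowBuffer {
    // One cohesion-style step using a shadow buffer: "read" is the frozen copy
    // taken before the round, "write" receives the fresh positions. Because each
    // boid reads only the shadow, iteration order cannot affect the result.
    public static double[] step(double[] positions) {
        double[] read = positions.clone();   // shadow values: copy before calculation
        double[] write = new double[read.length];
        double mean = Arrays.stream(read).average().orElse(0.0);
        for (int i = 0; i < read.length; i++) {
            write[i] = read[i] + 0.1 * (mean - read[i]);
        }
        return write;
    }

    // The flawed serial variant: updates in place, so boid i's fresh position
    // feeds into the later boids' calculations and the ordering biases the result.
    public static double[] serialStep(double[] positions) {
        double[] p = positions.clone();
        for (int i = 0; i < p.length; i++) {
            double mean = Arrays.stream(p).average().orElse(0.0);
            p[i] = p[i] + 0.1 * (mean - p[i]);
        }
        return p;
    }
}
```

For positions {0, 10} the shadow-buffer step gives {0.5, 9.5}, while the serial step nudges the second boid to about 9.525: the order bias made visible.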
So, after a semi-catastrophe on my laptop, caused by me jumping the gun on getting rid of ads from Acronis (backup software), I’m writing software to ease and automate getting back from a bare-metal state.
In plain English, that means reinstalling all the software you had on your laptop but never actually gave two cents of thought.
Simple idea: loop.
Think about what installation is all about. Sure, sounds easy:
download the installer file
hash-check the file to verify its integrity (safety)
On the surface it’s really easy. But if you stop to imagine a typical software installation, you’ll quickly remember that there are a few caveats:
accepting EULA or similar license dialog (keyboard/mouse)
choosing configuration options during installation wizard
confirming final options before installation starts
knowing when the installation wizard has finished, since Windows can’t run many installations in parallel; the previous one has to end before the next install can begin
You just need a file, a user with the privileges to install new software, and a way to authenticate that the file is legit and doesn’t contain malware.
So we could write software to programmatically drive the required tools and install all the needed software. Magic. Wish all programming was this easy. Getting back to this!
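As a sketch, the “drive the tools” loop could look like this in Java. ProcessBuilder plus waitFor() gives exactly the “previous install must end before the next begins” behaviour; the silent-install flags shown in the comment are installer-specific and only an assumption here:

```java
import java.util.List;

public class InstallRunner {
    // Runs each installer command to completion before starting the next one.
    // Windows generally won't let two installers run in parallel, so we block
    // on waitFor() between them. A command might look like
    // {"installer.exe", "/S"} for NSIS-style installers, or
    // {"msiexec", "/i", "app.msi", "/quiet"} for MSI (assumptions, not tested
    // against any particular product).
    public static boolean runSequentially(List<String[]> commands) throws Exception {
        for (String[] cmd : commands) {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            int exit = p.waitFor();   // block until this install finishes
            if (exit != 0) {
                return false;         // stop the chain on the first failure
            }
        }
        return true;
    }
}
```

This still leaves the EULA dialogs and wizard options from the caveat list; silent-install flags sidestep them when the installer supports them, otherwise UI automation would be needed.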
Some details of getting installs right on Windows 10
Windows software installers come in a few forms:
the newer MSIX format – one thing I still need to research a bit
good old .zip files
For our purposes, we are only interested in a few specific aspects:
is the installation file executable as such?
if not, what are the minimal prerequisites (dependencies) needed to run the file?
Do zip files cause difficulties?
Many installers come as a zip. Zip uses Lempel-Ziv compression: repeated byte sequences are replaced with back-references to earlier occurrences, plus a few extra tricks (such as Huffman coding in DEFLATE) that make the file even smaller. If you want more of the theory behind zip, see Wikipedia: Lempel-Ziv compression.
The problem we might encounter with zip and authentication has to do with whether a zip can ‘cloak’ content.
This turns out not to be a problem: a hash of a zip file is as good as a hash of any other content, internal structure or not. A good hash algorithm has properties (collision resistance, among others) that make it computationally infeasible to find two different ordered collections of bytes that give out the same hash value.
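To make the point concrete: hashing operates on raw bytes, so a zip gets no special treatment. A small sketch using Java’s standard MessageDigest API (SHA-256 chosen here as one reasonable option):

```java
import java.security.MessageDigest;

public class HashCheck {
    // SHA-256 over arbitrary bytes. A zip's internal structure is irrelevant:
    // the digest is computed over the raw byte stream, like for any other file,
    // so the same downloaded-file check works for .exe, .msi and .zip alike.
    public static String sha256Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

In the install loop you would compare `sha256Hex` of the downloaded bytes against a known-good value published by the software vendor; any single-byte difference produces a completely different digest.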
I’m putting my hands to the code, and will come up with results within a couple of weeks. Hopefully! 🙂
Crumbs of bread, a bit of flour. Cleaning the house is sometimes more than “just doing”. For me, at its best it’s also a great moment for thoughts on improving the interior design.
I’m a pretty visual person. Sometimes there’s immense beauty in settings like the one depicted here: the kitchen drawer, alongside a magazine I really enjoy setting my eyes on, flipping page after page, glancing at designs, immersing myself in a feast of ideas.
Now you should see a rather black and dull-looking shell window. Type ‘echo Hi!’ and press Enter. That is: the letters e, c, h, o, a space, then a capital H and i, followed by an exclamation mark (Shift+1).
The shell says Hi! back.
You have just used an internal shell command on Windows. See you soon!
Seth is 28 years old. He graduated in record time from Harvard, finishing his studies just as the great 2008 housing bubble burst. Luckily for him, his income was secure as a team lead at a financial portfolio and investment company called Inkin Finance.
He found the years following the recession somewhat depressing at first, but thought that this would be the perfect point to develop a more rigorous self-improvement programme, with the aid of software.
Inkin Finance was booming with mathematically apt people. Creative geniuses, sometimes lacking good “pet projects”. These guys were using their wits all the time, for something. They couldn’t switch their brains off. It was strange to first come across this fact. But then again, Seth understood he couldn’t either. He was always “on”.
With a subtle hint as a Team lead, Seth was quickly able to get a team of two for his project – Froggly.
Froggly is a “semi-sentient AI guiding us to more fun life”, as the snazzy slogan the team came up with in the very first meeting put it. It was 8:30 PM, and the team was having their first casual brainstorm in Inkin’s “fridging Kitchen” (TM), a team-spirit-building place for many: beanbag couches, perfect coffee, hipster sentiment. And sometimes even a bit of real sunlight. Now there were the fading rays of early evening.
Seth quickly develops an amicable relationship with Froggly.
This personal aide is ALWAYS with him. It grows with him. Technically, they built the prototype so that people’s smartphone browsers could be injected with live “WebView code”; in plain English, development can continue while people are using the software. There’s no disruption, no updates, nothing to prevent full steam ahead.
Froggly gets better, and so the time spent improving it becomes more valuable. Wherever Seth goes, he finds Froggly both useful and very entertaining. Better deals, wiser choices, better health, more time for the One.
One day Seth asks Froggly deeper questions. He likes having conversations with it. Froggly crashes. It spews a cryptic message into the terminal. Seth’s bubble of illusion takes a slight dent. Sure, it’s just software. He was kind of hoping for more.
Look at the small amount of tasks and committed RAM.
Fraone is my DigitalOcean Ubuntu server in Frankfurt.
The VPS has:
4 GB RAM
80 GB disk
I’d previously left it running for 121 days consecutively, and there were 100+ processes (‘tasks’) running. Now, with a clean boot and a new kernel, it’s very tidy: only 42 tasks and less than 256 MB of RAM in use. Neat!
The memory graph shows, from approximately 12/09 onwards, that RAM usage dropped significantly. That’s the effect of booting into a new kernel. I also shut down a service I had no use for right now (PostgreSQL).
There are probably a ton of ways to get going with developing React code. In any case, you always need:
Node and npm installed
on either Mac, Windows or Linux
One of the ways is to get a VPS (a light, pay-by-use server) and do development there. I was in a situation where I wanted to isolate my experiments to one specific server, but I didn’t want to spend too much money on the resource. A VPS was a natural choice, since I had one spare.
This post arose as I was writing code (my “React experiments”) on a DigitalOcean Linux machine. It’s a 4 GB RAM, 80 GB disk VPS running Ubuntu 18.04.2.
I use the server to develop on, and test run React code.
There’s a couple of things I need from the server:
the beauty of Linux tooling (gotta hone all those hours of knee-jerk muscle memory)
web server (to serve the React app over web)
DigitalOcean’s backend IP network is way faster than my residential ISP at home (so the multi-gigabyte installations that sometimes happen with ‘npm’ are faster)
The last point, about the backhaul capacity difference, is interesting. What it practically implies is that I’m balancing between the awkward latency between my laptop and the DigitalOcean VPS (when typing code) and the benefits that working remotely on the VPS brings (bandwidth for large installs).
Set up latest NodeJS package
Assuming: you’re logged on to a shell on your server
I had some leftovers from previous experiments. So the first things I did were: a) uninstall the old NodeJS, and b) install the latest NodeJS on the server.
Note: if you need to uninstall previous NodeJS, type in shell:
apt purge nodejs
Then verify that the removal was successful:
You should no longer have node installed, so expect the shell to throw an error message (for example “command not found” when you try to run node). If so, good – carry on:
Set up “nvm” for better control over Node versions
Assuming: you are still logged on in a terminal on your server