
Getting SEO wrong is probably quite easy.

That's not a bold statement. This one is - along with the rather fancy project name, Discoball:

I had blogged for 10 years before I got a glimpse of the reality: my content wasn't exactly "selling".

What follows is a raw diary of how I:

  • discovered the status quo of search engine optimization and content quality on the Jukkasoft blog
  • learned how Google search and ranking work
  • gauged my blog posts' ranking on Google
  • tallied what it all cost
  • measured the effect of the remedies applied to the blog

Google uses a variety of signals in what is called PageRank, an algorithm that determines the weights - and thus the order - in which search results are displayed. PageRank scores web pages, and the Google results page lists sites in descending order of that score. Rank high, and you will get hits. Rank below the first 10 results (the default size of a search results page), and you will get only a few percent of the hits that would have been available at higher ranks.
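
The core idea of PageRank - a page's score flows to the pages it links to - can be sketched as a short power-iteration loop. This is an illustrative toy, not Google's production ranking (which uses many more signals), and the link graph below is invented:

```python
# Toy PageRank via power iteration. Illustrative only; the graph is made up.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: "home" is linked to by everything,
# so it should end up with the highest score.
graph = {
    "home": ["post-a", "post-b"],
    "post-a": ["home"],
    "post-b": ["home", "post-a"],
}
ranks = pagerank(graph)
print(sorted(ranks, key=ranks.get, reverse=True))
```

The damping factor (0.85 in the original paper) models a surfer who occasionally jumps to a random page instead of following links.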

The original structure, algorithm and plan behind Google's spider software are openly documented. The Anatomy of a Large-Scale Hypertextual Web Search Engine is the foundational paper you can read to glimpse Google's inner workings, from a time when the Web consisted of about 20-30 million documents.

Over the years, and partly because Google has taken a fundamentally open approach to developing its methodology, many parties attempted to tweak their sites to rank higher in Google searches. Google responded with adjustments to the search algorithm. Such adjustments used to be rather infrequent, but have since become much more frequent.

I was baffled by the fact that sometimes, even searching with explicit keywords I knew appeared in some of Jukkasoft's (my blog's) posts, I couldn't get a hit on Google. This led me to believe that my site was not exactly built by the book in terms of search engine discoverability.

With a manual next to me, I set forth first to investigate how I was actually ranked on Google, then to make a plan for making Jukkasoft's articles fully discoverable by search engines.

My aim is not to become a master of the web as an advertising medium, but to better utilize the hours already spent writing the content.

As I started planning, it seemed obvious that I would be facing two kinds of work: technical and quantitative. The technical part has to do with questions such as:

  • how does the Google search spider work?
  • what does the spider expect from a well-behaving blog?
  • what are the obvious findings I need to fix in my blog's articles?
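
One concrete thing the spider expects from a well-behaving blog is a sensible robots.txt. A minimal sketch using Python's standard-library parser, assuming a hypothetical WordPress-style robots.txt (the domain, paths and rules are made up, not Jukkasoft's actual configuration):

```python
# Sketch of what a crawler checks before fetching pages.
# The robots.txt content and URLs are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /wp-admin/
Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved spider honours these rules before fetching a page:
print(parser.can_fetch("Googlebot", "https://example.com/2019/01/some-post/"))   # True
print(parser.can_fetch("Googlebot", "https://example.com/wp-admin/options.php"))  # False
```

The Sitemap line points crawlers at a machine-readable list of all articles, which is how a blog tells the spider about pages that internal links alone might not surface.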

The quantitative part, on the other hand, is raw data - numbers indicative of both the writing quality and how interesting an article's topic generally is to my audience:

  • what's the expected background traffic (hits per month) that I sustain regardless of efforts to improve content searchability?
  • is the blog being used as a kind of reference, popping into individual articles, or more like a book with interesting content devoured article after article?

The Plan

  • make an inventory of the status quo: number of articles on Jukkasoft, followers (readers), and viewer statistics (hits per week, month and year)
  • chart the trend in yearly view statistics
  • understand the basics of Google's PageRank (the search engine)
  • refresh and ingest the main points of the modern (2019) PageRank algorithm
  • find out which portion of the articles makes up the top 80% of traffic, quantified by page requests (hits)
  • determine which articles are discoverable and which are not
