Graphics Interchange Format. Most of us probably know GIF as the format behind internet memes. GIFs were the original breakthrough format for distributing images over a network, even before the Internet era. GIFs prevailed.

They have traveled with us in electronic media for a whopping 35 years. The meme explosion in Internet culture made GIF relevant again, since GIFs have a sort of native capability to support animation.
There are, naturally, competing video formats of better quality, but as with all things that benefit from wide support, GIF is one of those formats that is almost guaranteed to be supported by any Web browser.
The reason GIF originally (in the 1990s) gained so much popularity and spread like wildfire was that it was an effective way of compressing images that have at most 256 different colors: GIF compresses paletted images. Compression meant a more pleasant online experience, and not only that, but smaller phone bills. Internet connections were often billed by the time spent online, before the era of always-on broadband.
Faster image transfers
The size reduction gained by compression shortens the transmission time of an image, so you get things downloaded faster over a network. That is no small feat. The compression ratio tells how much an image shrinks: the bigger the ratio, the better.
There are, however, certain things to consider when using compression.
I’m going to dive into the world of GIF.
Let’s go!
GIF is an image format, originally created by Steve Wilhite at CompuServe.
Basics of images
Computer images are made up of pixels: tiny, single-colored squares on the monitor.

A raw image file would simply store each pixel's color value, plus two numbers that tell the display code the intended width and height of the image.
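As a rough sketch of what such a raw file could look like (the layout here is hypothetical, not any standard format), assume a header of two integers followed by one R,G,B triple per pixel:

    import struct

    def write_raw_image(path, width, height, pixels):
        # 'pixels' is a flat list of (r, g, b) tuples in row-major order.
        with open(path, "wb") as f:
            f.write(struct.pack("<II", width, height))  # the two guiding numbers
            for r, g, b in pixels:
                f.write(bytes((r, g, b)))               # 3 bytes per pixel

    def read_raw_image(path):
        with open(path, "rb") as f:
            width, height = struct.unpack("<II", f.read(8))
            data = f.read(3 * width * height)
        pixels = [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]
        return width, height, pixels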
A practical issue with images is storage size. Early computer images simply had to fit into a few kilobytes (kB) of memory; computers typically had only 64-512 kB of it. Displays were also low-resolution compared to today's. Whereas now, in 2022, we have displays capable of holding over 2000 pixels across, computers in the 1980s very seldom had more than 320 pixels across.
The Amiga and PCs were the generation of computers that started to enjoy the proliferation of images. One of the key enabling factors was the interconnection of computers via networks. The first networks were not the Internet we know today, but PCs and other kinds of computers run as servers by individuals. These machines were hooked up over a modem line (a dial-in). The machines also ran a type of program called a 'door'. The door waited for incoming phone calls from a user's modem and, once a call came in, established a connection between the two computers. The server administrators could set up rights for users and decide whether anyone (anonymous users) could use their services, or whether access was limited to a private set of users.
Modems could typically transfer between 2,400 and 56,000 bits per second. Today's slowest broadband connections typically transfer 1,500,000-2,000,000 bits per second, roughly 35 times as much as the fastest modems back in the day. In practice this meant that image transfers took a long time, and any method to cut that time was well appreciated. GIF was such an invention!
Paletted images
Images can be paletted (or unpaletted).
An image palette (or color lookup table, CLUT) stores a fixed number of unique color values that will be present in the image. GIF is a paletted file format, with a limit of 256 unique colors.
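A minimal sketch of how a palette lookup works, using a hypothetical 4-entry palette (GIF allows up to 256 entries):

    # A color lookup table (CLUT): palette index -> (R, G, B)
    palette = [
        (0, 0, 0),        # 0: black
        (255, 255, 255),  # 1: white
        (255, 0, 0),      # 2: red
        (0, 0, 255),      # 3: blue
    ]

    # The image itself stores only small indices...
    indices = [1, 1, 2, 0, 3, 1]

    # ...and the display code resolves them through the palette.
    rgb_pixels = [palette[i] for i in indices]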
Relation between raw image pixel storage size and # of colors in palette
Each bit can represent 2 states, and each additional bit doubles the number of representable values. Thus 1 bit can represent a black-or-white pixel (0 or 1), two bits one out of 4 colors, three bits one out of eight. The relation is 2^b, that is, two to the power of the number of bits.
A practical rule of thumb: 4 bits can represent a 16-color image, which was common at one time; 8 bits can represent a 256-color (typical paletted) image; and 24 bits per pixel can represent true color, a continuous, virtually unlimited range of colors. The human eye cannot actually distinguish more colors than 24 bits can represent, so that usually suffices; but certain scientific images need more bits to avoid degrading the result when many rounds of processing are applied to the image data.
The benefit of a palette is file size reduction: instead of encoding each pixel's color as R, G, B (3 components of 1 byte each), a 256-color image needs exactly 1 byte per pixel, plus the small palette itself.
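A back-of-the-envelope comparison in Python (plain arithmetic, no real file format involved):

    def raw_rgb_size(width, height):
        # 3 bytes (R, G, B) per pixel
        return width * height * 3

    def paletted_size(width, height, palette_colors=256):
        # 1 byte (palette index) per pixel, plus the palette itself,
        # which stores one 3-byte RGB triple per entry
        return width * height + palette_colors * 3

    w, h = 320, 200
    print(raw_rgb_size(w, h))   # 192000 bytes
    print(paletted_size(w, h))  # 64768 bytes (64000 indices + a 768-byte palette)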
The GIF specifications were a bit of a mystery cocktail until the GIF89a spec was released in 1989. The GIF format is a creation of CompuServe, an Ohio-based online service provider.
Image sizes have followed the improvements in display technology. Early computers could typically show a viewport of only 320×200 pixels. Nowadays typical screen resolutions are about 6 times that in both dimensions.
How much does file size matter in image transfer times?
The transmission time for an uncompressed image is determined by a simple equation:
T = N / B
The time T (in seconds) is the number of bits N (bytes × 8) divided by the bandwidth B, where B is in bits per second. A 40-kilobyte image on a 10-megabit line would take:
T = 40 * 1000 * 8 / 10000000
= 320000 / 10000000
= 0.032 seconds = 32 milliseconds
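The same calculation in a couple of lines of Python, with a second call showing how a 56k modem compares:

    def transfer_time_seconds(size_bytes, bandwidth_bits_per_second):
        # T = N / B, where N is the payload in bits and B the line speed in bits per second
        return size_bytes * 8 / bandwidth_bits_per_second

    print(transfer_time_seconds(40_000, 10_000_000))  # 0.032 s, i.e. 32 milliseconds
    print(transfer_time_seconds(40_000, 56_000))      # ~5.7 s on a 56k modem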
In reality, viewing a single web page means loading many images. A typical page contains about 2.2 megabytes of content, some of which is images. [ https://almanac.httparchive.org/en/2021/page-weight ]
For practical purposes, image files should be as small as possible. This desire to shave bytes off the transmission is addressed by image compression, and compression is a science in its own right.
There are a few main ideas in compressing images:
- replace runs of consecutive identical pixels with shorter repeat commands
- find generative functions that are easy to represent and parametrize, and that can reproduce the input image
- use the frequency of pixel values (the histogram) to store the image more efficiently than a plain “tabular” X,Y grid
- exploit human visual perception (the deficiencies in how we perceive incoming light) to omit some information from the image, if lossy compression is tolerated
RLE, or run-length encoding, means replacing runs of consecutive identical pixels. Instead of storing 1000 white pixels one by one, just store a few bytes of data that say: “now put 1000 pixels of white here”. So in RLE we only need to store two things:
1) pixel color
2) run length
For 256-color paletted images, each RLE chunk fits into about 3 bytes (very long runs are simply split into several chunks). There is one problem to tackle with RLE: since the repeat command needs a unique code of its own, there must be a way to tell the image decoder that what follows is RLE chunk data, not a raw pixel value. This is usually handled by letting the RLE marker take the place of one of the 256 palette colors; typically the least frequent color is the one sacrificed, as sketched below.
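To make this concrete, here is a minimal RLE sketch for paletted pixel data. The layout is my own choice for illustration, not anything from the GIF spec: palette index 255 is reserved as the chunk marker, and the run length fits in one byte, so very long runs are split into several chunks.

    MARKER = 255  # one palette index sacrificed to mark an RLE chunk

    def rle_encode(pixels):
        # 'pixels' is a sequence of palette indices in the range 0..254.
        # Each run becomes 3 bytes: MARKER, run length (1..255), color index.
        # Runs shorter than 4 pixels are cheaper to store as raw bytes.
        out = bytearray()
        i = 0
        while i < len(pixels):
            color = pixels[i]
            run = 1
            while i + run < len(pixels) and pixels[i + run] == color and run < 255:
                run += 1
            if run >= 4:
                out += bytes((MARKER, run, color))
            else:
                out += bytes([color] * run)
            i += run
        return bytes(out)

    def rle_decode(data):
        out = []
        i = 0
        while i < len(data):
            if data[i] == MARKER:
                out += [data[i + 2]] * data[i + 1]
                i += 3
            else:
                out.append(data[i])
                i += 1
        return out

    pixels = [7] * 1000 + [1, 2, 3]
    encoded = rle_encode(pixels)
    assert rle_decode(encoded) == pixels
    print(len(pixels), "pixels ->", len(encoded), "bytes")  # 1003 pixels -> 15 bytes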
Approaches and bounds to compressing images
Shannon devised information theory, which has a lot to say about how much image compression can achieve. Image compression falls under the “source coding” application. With source coding we have full knowledge of the input and can assume an ideal channel, in contrast to channel coding, Shannon's other application, which deals with transmitting a message in real-world situations where channel degradation has to be considered.
We could pose a few curious, and fundamental questions about image compression:
- What efficiency can we expect to gain from compression?
- Where is the break-even point between the time spent decompressing and the time saved by transmitting a smaller image?
Bounds
We can approach image compression by thinking about certain bounds. Bounds are statements about the known limits of some phenomenon. For example, bounds on the execution time of a particular algorithm are important to know, so that the algorithm can be judged acceptable for the expected use of the application that relies on it. Many computer applications cannot know beforehand all the possible inputs they will encounter. Bounds thus give a certain kind of insurance against worst cases.
Compression has this same issue. If the set of images to be compressed were limited to a certain, fixed collection, compression would be a breeze.
But real compression algorithms have to deal, generally speaking, with any conceivable image made up of an arbitrary combination of pixels.
The probability of a large (abrupt) color change between two consecutive pixels is, however, lower than the probability of an abrupt difference between any two random pixels in the image; this phenomenon is what makes certain compression tricks work.
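One concrete bound comes straight from Shannon: if we look only at the histogram of pixel values (ignoring, for simplicity, the neighbor correlations just described), the zero-order entropy is a lower bound on the average number of bits per pixel achievable by any code that treats pixels independently. A small, purely illustrative sketch:

    import math
    from collections import Counter

    def entropy_bits_per_pixel(pixels):
        # H = -sum(p * log2(p)) over the histogram of pixel values
        counts = Counter(pixels)
        total = len(pixels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # A 256-color image where every value is equally likely needs ~8 bits per pixel...
    uniform = list(range(256)) * 100
    print(entropy_bits_per_pixel(uniform))       # ~8.0

    # ...while a mostly-white image is far more compressible.
    mostly_white = [255] * 9900 + [0] * 100
    print(entropy_bits_per_pixel(mostly_white))  # ~0.08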
Signal paradigm
An image can be considered a signal: if we start from the top-left of the image and move towards its end, picking one pixel at a time, we get an array of pixels. Arrays are neat in that they give us a familiar computer-science way of thinking about the image: it is just one large array (sequence) of pixels, one after another. The decompressing algorithm has the duty of reconstructing the rectangular image from this sequence.
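A tiny sketch of this "image as a sequence" view, using a hypothetical 2×3 image:

    def flatten(rows):
        # Scan from top-left to bottom-right, one row after another (raster order)
        return [pixel for row in rows for pixel in row]

    def rebuild(sequence, width):
        # The decoder's duty: cut the 1D sequence back into rows of the given width
        return [sequence[i:i + width] for i in range(0, len(sequence), width)]

    image = [[1, 2, 3],
             [4, 5, 6]]
    seq = flatten(image)          # [1, 2, 3, 4, 5, 6]
    assert rebuild(seq, 3) == image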
But the question with data representation is what do we achieve with it?
Another fascinating way to think about image compression is this:
Given L bits of information, there can be at most 2^L different outcomes. Now let L be the size of the compressed image in bits. For a file size of 1 byte, L = 8 bits; every additional byte of file adds 8 bits to L, so 2 bytes would be 16 bits.
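As a quick sanity check of the numbers (plain arithmetic, nothing GIF-specific):

    L = 2 * 8       # a 2-byte file carries 16 bits
    print(2 ** L)   # 65536 distinct bit patterns, hence at most 65536 distinct images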
Now, let’s think about implications:
So if we claimed, for example, to have invented a compression algorithm that can compress every possible image found on Earth down to 2 bytes (16 bits), we would also be claiming that there are only 65,536 unique images found on Earth. Which sounds ridiculous, right?