
Video resolution and aspect ratio are things every consumer is aware of on a basic level. When shopping for a screen or a video player, shoppers are often wowed by resolution numbers that may not inherently mean anything to them, but the shape and breadth of the picture, as well as its quality, definitely stand out.
An image created this way, out of a series of rows of color values, is called a rasterized image, and while modern technologies don’t work quite the same way, pixel-based raster imagery is and will continue to be the way most if not all video is displayed. Modern video screens, despite being OLED or something similar, still utilize this rasterizing concept when drawing video on the screen.
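As a minimal sketch of that idea, a raster image can be modeled as nothing more than rows of color values. The dimensions and the flat gray fill below are arbitrary choices for illustration:

```python
# A raster image is just rows of color values. This sketch builds a
# tiny 4x3 image as a list of rows, each row a list of (R, G, B) tuples.
WIDTH, HEIGHT = 4, 3

# Fill every pixel with the same mid-gray color value.
image = [[(128, 128, 128) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# A display draws the image the same way: row by row, top to bottom.
for row in image:
    print(row)
```

Real video frames work the same way, just with far more rows and far more pixels per row.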
Video has a longer history than people tend to think. While aspect ratios were always a challenge in early film, modern ideas about resolution especially come from traditional ways of displaying moving video without a projector. Television technology, as pioneered by the brilliant Philo Farnsworth, drew images as a series of lines across a phosphor-coated screen using an electron gun. This is the CRT display we all grew familiar with from the 1940s on into the early computer age.
One thing we have to talk about before aspect ratio or resolution is the concept of interlacing. In the previous section, we talked about how video is displayed by drawing a series of lines of pixels, from top to bottom. Interlacing is a method where only half of the rows of pixels are drawn with each pass: one field carries the even-numbered rows, the next carries the odd-numbered rows. It originated in analog broadcast television as a way to halve the bandwidth a signal required, at the cost of video quality. While the alternating fields themselves aren’t directly visible, the cumulative result of interlacing is a strange sense of the video seeming “off,” with combing artifacts around motion, not dissimilar to the way VHS looks to eyes accustomed to high-resolution modern video. It should be avoided at all costs when producing modern video.
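The even/odd field split described above can be illustrated in a few lines. The eight-row "frame" below is a stand-in for real rows of pixels:

```python
# Interlacing splits each frame into two fields: one holding the
# even-numbered rows, the other the odd-numbered rows. Each field is
# drawn separately, halving the data needed per pass.
frame = [f"row {n}" for n in range(8)]  # stand-in for 8 rows of pixels

even_field = frame[0::2]  # rows 0, 2, 4, 6 — drawn on the first pass
odd_field = frame[1::2]   # rows 1, 3, 5, 7 — drawn on the second pass

print(even_field)
print(odd_field)
```

When the two fields capture different moments in time, stitching them back into one progressive frame is what produces the characteristic combing artifacts.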
So, what is aspect ratio? Aspect ratio is simply the ratio between the width and height of the display surface. A perfectly square display, which is almost never used, would be 1:1. The traditional aspect ratio of televisions was 4:3, while modern film, from the 1960s onward, commonly uses either 1.85:1 or 2.39:1. This is why, in the days when CRT televisions were still common, home video releases of theatrical movies would commonly say they had been formatted to fit a television screen.
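Reducing a pixel width and height to its simplest ratio is just a greatest-common-divisor problem. A quick sketch (the helper name here is my own):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a width x height pair to its simplest W:H ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(640, 480))    # classic television shape: 4:3
print(aspect_ratio(1920, 1080))  # modern widescreen: 16:9
```

Note that cinema ratios like 1.85:1 and 2.39:1 are conventionally written as decimals against a height of 1 rather than as reduced integer pairs.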
Modern widescreen displays, whether flat-screen televisions, computer monitors, tablets, or even many phones, use 16:9. When producing video not intended for standard theatrical release, 16:9 is the standard to aim for, as penetration of displays with this aspect ratio is at this point almost universal.
Resolution isn’t the same as aspect ratio. Any aspect ratio can be produced at many different resolutions if you choose to. Resolution is simply the number of pixels available on the display. Since each pixel carries a single color value, and these combine to create an image, a higher resolution, meaning more pixels, will always produce a sharper, higher-quality image.
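The "more pixels" point is easy to make concrete: total pixel count is just width times height, and stepping up a resolution tier multiplies the detail available. A small sketch:

```python
def pixel_count(width: int, height: int) -> int:
    """Total number of pixels in a frame of the given dimensions."""
    return width * height

# More pixels means more detail at the same physical screen size.
print(pixel_count(1280, 720))   # 921,600 pixels
print(pixel_count(1920, 1080))  # 2,073,600 pixels — 2.25x as many
```

This is also why resolution increases are so costly in bandwidth and storage: pixel counts grow with the product of both dimensions, not linearly.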
Common resolutions you hear about are things like 480p, 720p, 1080p and the vaunted 4K. The numbers with a letter attached count the rows of pixels from the top of the frame to the bottom: a 1080p frame is 1,080 rows tall. 4K is the exception, counting pixels across the frame instead: a consumer 4K (UHD) display is 3,840 pixels wide, close to the 4,096 used in digital cinema. The letter matters too. The P stands for progressive scan, meaning every row is drawn in each frame; interlaced versions of these formats also existed, marked with an I in their specs, as in 1080i.
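A small lookup table ties these names to full frame dimensions. The figures below assume 16:9 frames; the 854-pixel width for widescreen 480p is the conventional rounded value:

```python
# Common consumer resolutions and their 16:9 frame dimensions.
resolutions = {
    "480p": (854, 480),      # widescreen standard definition (rounded width)
    "720p": (1280, 720),     # "HD"
    "1080p": (1920, 1080),   # "Full HD"
    "4K UHD": (3840, 2160),  # consumer 4K; cinema DCI 4K is 4096 wide
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w} x {h} = {w * h:,} pixels")
```

Note that every step in this table keeps the same 16:9 shape; only the pixel density changes.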
Lower resolutions, even mixed in with otherwise modern video, can set a mood, softening the video, adding a dream-like quality to it, and many other things. When indicating footage taken long ago, you may need to change the aspect ratio to 4:3, drop the resolution to something archaic like 320×240, and even put it through filters that mimic the old NTSC signals of analog video from back in the day.
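Getting a 4:3 frame out of modern 16:9 footage usually means cropping the sides before downscaling. A sketch of the arithmetic (the helper name is my own; a real pipeline would do this with an editor or a tool like ffmpeg):

```python
def crop_to_4_3(width: int, height: int) -> tuple[int, int]:
    """Dimensions of a centered 4:3 crop that keeps the full height."""
    new_width = height * 4 // 3
    return new_width, height

# Cropping a Full HD frame to 4:3 before downscaling to a retro size.
print(crop_to_4_3(1920, 1080))  # (1440, 1080): a 4:3 slice of the frame
```

From a 1440×1080 crop, scaling down to 320×240 preserves the 4:3 shape exactly, since both are multiples of the same ratio.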