A deep dive into CRT modelines and EDID editing

In the process of setting up CRT TVs and monitors, I've often worked with modelines, the cryptic strings of numbers which define how a GPU should drive a display cable with rows of pixels. Once critical to '90s Linux users trying to set up XFree86 to display on their CRT monitors, modelines have found a new life among hobbyists who tinker with resolutions and framerates to bring out the full potential of their CRT and gaming LCD monitors. In this article I'll be exploring the origins and history of modelines, how they're used to hook computers up to CRT displays, and how I wrote a modeline generation tool and discovered multiple bugs in Xorg's cvt tool along the way.

The origins of modelines

CRT timings

Modelines were originally designed as a way to describe analog display signals between computer graphics cards and CRT monitors. While modern LCD monitors have a fixed array of electronic pixels which can change color, CRTs instead had a continuous screen of phosphors (or on color screens, a fine grid of red, green, and blue phosphors), and an electron beam (or one beam per color) tracing horizontal lines of light across the surface.

The electron beams originate from an electron gun at the back of the CRT, and are aimed using variable electromagnets (deflection coils) which deflect the electrons at an angle. On all consumer displays (not vector displays), the horizontal deflection coils would sweep the beam from left to right before snapping it back to the left, tens of thousands of times per second. The vertical deflection coils would slowly sweep the beam from top to bottom before snapping it back to the top, at a frequency matching the screen's refresh rate (60+ Hz on computer screens). This pattern is known as a raster scan.

The CRT is controlled by a set of signals entering the monitor or television. The video signals control the intensity of the red, green, and blue beams as they move across the screen, modulating the brightness of the colored spot as it scans the image. (Unlike with LCD panels driven by digital display protocols, the display does not actually know how many pixels wide the analog video signal is.) The hsync signal tells the horizontal deflection coils when to sweep the beam back to the left of the display, and the vsync signal tells the vertical deflection coils when to sweep the beam up to the top of the display (while hsync signal pulses continue being sent).

Interestingly, the display signal actually goes black (hblank period) for a specified amount of time before and after the hsync signal is activated. This is because after the hsync signal is activated, it takes time to reverse the direction that current flows through the horizontal deflection coils and snap the electron beam back to the left. As a result, display signals generally spend a brief period in blanking before the hsync signal (front porch duration), and a longer period in blanking during the hsync pulse (hsync duration) and after it (back porch duration).

Similarly, the display signal also goes black (vblank period) for multiple whole scanlines before and after the vsync signal is activated. It takes time for the vertical deflection coils to move the beam back to the top of the screen, so display signals spend the majority of the vblank period during or after the vsync pulse (vsync and back porch), rather than before it (front porch).

Diagram of video timings, with each line consisting of horizontal back porch, active pixels, horizontal front porch, then horizontal sync, and the equivalent vertically.
Diagram of video timings. Note that in X11 modelines, the back porch comes at the end of the scanline/frame, unlike this diagram placing it at the beginning. (Source)

Sidenote: Analog video connections

How are video and sync signals sent from the computer to a screen? On VGA monitors, there are separate red, green, and blue signal wires, as well as two wires to carry hsync and vsync signals. CRT TV sets used different sync standards; for example, SCART in Europe combined hsync and vsync into one wire (composite sync), which is pulled negative briefly for hsync, and kept negative for most of a line to signal vsync. Component video and older formats would combine sync signals with video (sync-on-luma and sync-on-green), requiring the receiver to interpret positive voltages as video and negative voltages as sync signals. (I've written about how color is encoded over various TV connections in a previous blog post.)

Workstation monitors would often use RGB signals over BNC connectors, with composite sync or sync-on-green rather than VGA's two separate sync lines1.

Modelines in X11

Each CRT display is made to support only a limited range of hsync frequencies (lines per second) and vsync frequencies (frames per second), limiting the resolutions and refresh rates it can display. Some displays would show a black screen or an error for out-of-range signals, while others would malfunction (there are stories of monitors physically going up in smoke or flames234)!

Up until the 1990s, displays did not tell computers what resolutions they supported. As a result, to set up a monitor on a Linux/Unix machine, you would have to create a modeline using a modeline prober (like X -probeonly) or calculator5, or look up your monitor in a modeline database6 (like xorg-conf.org?). Then, while XFree86 or Xorg (a Unix program which hosts GUI apps and displays them on-screen) was driving your display at a fallback resolution such as 640x480, you'd copy the modeline into your XF86Config or xorg.conf file, then reboot the system (or even apply changes live7) and hope the new resolution would show up correctly. Some users may even have used xvidtune to move the image around their monitor graphically, using an API that predated today's xrandr. (xvidtune still exists and is still optionally shipped with Xorg, but with my amdgpu driver it can only fetch the primary monitor's resolution, and throws an error when trying to set video modes.)

With the release of EDID in 1994, monitors became able to report their supported frequencies and resolutions to the computer. Reportedly, some people would use tools which requested EDID information from the monitor and generated modelines you had to enter into XF86Config, since XFree86 couldn't yet read EDID data itself. Later, XFree86 4.0 (released in 2000) added support for reading EDID information from the monitor and automatically generating valid video modes for it, and this functionality carried over to its successor project Xorg, making manual modeline entry largely obsolete.

Photo of cat sprawled out comfortably on a silver Trinitron CRT monitor, eyes closed
"Tabata taking a cat nap. She doesn't care much about computers except they tend to be nice and warm." (Source)

Right now I'm visiting my sister for the holidays and Tabata sends a warm purr right back at you.

She misses that monitor though: LCD's are not as comfy as CRT's ;)

Interpreting the numbers

In the past (and today), some Linux users would treat modelines like opaque strings to be used as-is, while others would use tools or manual calculations to understand the meaning of the parameters, then edit or write new modelines to better suit their displays and needs.

An example of a modeline is:

Modeline "640x480_59.94"  25.175  640 656 752 800  480 490 492 525  +HSync +VSync

The word "Modeline" is followed by a name in quotes, serving to identify this mode to the user. Next comes the pixel clock (megapixels per second).

Inspecting the horizontal timings, 640 656 752 800 means that each scanline begins displaying image pixels at time 0, and stops displaying video 640 pixels later (meaning the image is 640 pixels wide). At time 656 (pixels) the horizontal sync pulse begins, at time 752 (pixels) the horizontal sync pulse ends, and at time 800 the GPU begins outputting pixel 0 of the next scanline.

Inspecting the vertical timings, 480 490 492 525 means that each frame begins displaying image scanlines at line 0, and stops displaying video 480 lines later (meaning the image is 480 input pixels and output scanlines tall). At time 490 (lines) the vertical sync pulse begins, at time 492 (lines) the vertical sync pulse ends, and at time 525 (lines) the GPU begins outputting line 0 of the next frame.

+HSync and +VSync mean the horizontal and vertical sync lines are normally low (grounded), and get pulled to high voltage (3.3 to 5 volts) to signal a sync pulse910 (active-high). If the modeline contained -HSync or -VSync, the corresponding wire would be normally held at a high voltage and only pulled to ground during a sync pulse (active-low).
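
As a worked example (my own arithmetic, not output from any particular tool), the scan rates implied by the modeline above fall out of the pixel clock and the two "total" values:

    # Timings from the "640x480_59.94" modeline above.
    pixel_clock_hz = 25.175e6   # 25.175 MHz
    htotal = 800                # pixels per scanline, including blanking
    vtotal = 525                # scanlines per frame, including blanking

    hsync_freq_hz = pixel_clock_hz / htotal      # scanlines drawn per second
    refresh_rate_hz = hsync_freq_hz / vtotal     # frames drawn per second

    print(f"hsync:   {hsync_freq_hz / 1000:.3f} kHz")  # ~31.469 kHz
    print(f"refresh: {refresh_rate_hz:.3f} Hz")        # ~59.940 Hz, hence the mode's name

This is the same calculation you'd use to check whether a mode falls within a CRT's advertised hsync and vsync ranges.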

VSync pulse timing within a scanline

What's the exact time the vsync pulse begins and ends relative to scanlines' hsync pulses, and non-vblank scanlines' horizontal display/blanking periods? This turns out to be a surprisingly complex question.

Screenshot of Audacity, recording a VGA cable's green pin (many short pulses, one per scanline) relative to its vsync pin (a wide pulse many scanlines long, which starts and stops after the short pulses disappear).
Screenshot of Audacity, recording a VGA cable's green pin relative to its vsync pin. I varied the hsync pulse's starting time, and recorded vsync timing relative to scanlines.

In this screenshot, I constructed two custom modelines (60.72 1000 _ _ 4000 240 240 247 253 -hsync +vsync) on my 15R SE laptop, with hactive taking up 25% of each scanline, and hblank or sync taking up the remaining 75%. Then I recorded the VGA cable's green pin on the left channel (active/blanking intervals), and the vsync pin through an attenuator on the right channel (vsync pulses). In the top recording, I started the hsync pulse at the start of hblank, and in the bottom recording, I started the hsync pulse near the end of hblank. I found that delaying the hsync start time delays vsync start/stop as well.

(Sidenote: I've written about probing VGA signals in another article.)

Getting a VGA output

CRT monitors are analog displays, and generally came with a VGA input port rather than today's more common HDMI and DisplayPort. As a result, to connect a CRT to a modern PC and GPU, you will usually need a DAC adapter or older graphics card with native analog output.

900-series and older NVIDIA cards came with VGA or DVI-I ports with native analog RGBHV video output, as did AMD's 300-series and older cards.

If you want to render games on a newer GPU, you can install a modern NVIDIA GPU, render games there, and have it stream frames to an older AMD card with analog out (for example an HD 5450, which is otherwise worthless as anything beyond a low-spec display adapter).

Alternatively you can install a newer GPU, and hook up a HDMI or DVI output to a DAC. There are many different DACs to choose from, with different power supply stability, image bandwidth, and feature sets, falling into two main categories (HDMI vs. DP).

Getting correct colors over VGA

Digital-to-VGA DACs can suffer from various nonlinearities and inconsistencies in their color output:

The MiSTer FPGA community has compiled a spreadsheet where they tested VGA DACs for black crush/linearity, peak voltage, and color balance.


Warning for DP-to-VGA on AMD graphics: When plugging a Plugable RTD2166 DP-to-VGA DAC dongle into an AMD GPU (in my case an RX 570), the display may show wildly inaccurate colors. In my case, reds were noticeably amplified, giving the entire screen a reddish cast except for white colors (where the red channel was clipping at full scale). The colors were correct when plugging the same DAC dongle into an Intel Ivy Bridge desktop computer using integrated graphics (on a Z77X-UD3H motherboard) with a DisplayPort output.

To fix this problem, I found I had to open AMD Software and click on tabs Gaming → Display, then pick my CRT monitor (eg. Display 2), and disable Custom Color altogether.

Strangely enough, after I uninstalled AMD drivers with DDU and reinstalled the drivers, disabling Custom Color did not fix the red color cast! Instead I had to enable Custom Color and Color Temperature Control, then set Color Temperature to 6500K. (IIRC this temperature previously did not result in a correct color appearance?) This time around, I validated using an oscilloscope that all input RGB color values up to 255 produced distinct red voltages and were not clipped. I also verified that other color temperatures (like 6400K and 6600K) produced clipped reds.

Photo of AMD Software opened to Gaming → Display →  Custom Color
A diagram showing where to find the Custom Color control in AMD Software.

Finding a CRT modeline

If you find yourself wanting a modeline today, to make a display run at a resolution and refresh rate your OS normally doesn't expose, there are many ways you can get one.

If talking to a television, you can find SDTV modelines at https://geocities.ws/podernixie/htpc/modes-en.html, or more TV/monitor modelines at https://www.mythtv.org/wiki/Modeline_Database. There are also tools to generate modes or program your graphics card to a specific mode; AdvanceMAME claims to support creating video modes matching specific consoles, but I've never tried using it to play any specific consoles, and I'm not sure it has a generic tool.

If talking to a VGA computer monitor, there are better-developed tools to generate modelines. There are actually many different types of modelines:

If you want to generate a modeline, you have many tools to choose from:

To install custom modes, you will either generate a patched EDID binary (on most OSes), or install custom resolutions to the system in addition to the monitor's EDID (Linux X11 xrandr, some Wayland compositors).

Reading a monitor's native EDID and modes

Before adding custom resolutions, you may want to extract your display's original EDID to a file.

Note that some tools will dump 256 bytes of EDID data, even on monitors which only have a 128-byte ROM. You can open the file in a hex editor, check if the second half is all FF or identical to the first half, and delete it if so.
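
If you'd rather script that check than eyeball a hex editor, here's a minimal sketch (the filename edid.bin and the padding heuristic are my own, not taken from any particular dump tool):

    from pathlib import Path

    path = Path("edid.bin")
    data = path.read_bytes()

    if len(data) == 256:
        base, ext = data[:128], data[128:]
        # A real extension block starts with a tag byte (e.g. 0x02 for CEA-861);
        # a copy of the base block or all-0x00/0xFF padding is just junk to trim.
        if ext == base or all(b in (0x00, 0xFF) for b in ext):
            data = base
            path.write_bytes(data)

    # EDID blocks are self-checksummed: all 128 bytes must sum to 0 mod 256.
    if sum(data[:128]) % 256 != 0:
        print("Warning: base block checksum is invalid")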

Warning: DP-to-VGA adapters based on the RTD2166 can edit the EDID data being read from the monitor, even when you are not using custom resolutions! The dongle will alter or replace any resolutions with a pixel clock above the chip's limit of 180 MHz (from 2 lanes of DP HBR). On my Gateway VX720, it replaced the native resolution (the first detailed timing descriptor) of 1600x1200@75 with 1024x768@60 DMT, but with the wrong sync polarities (both active-high). This caused my monitor to not properly recognize the resolution as 1024x768, and to fail to save position information properly.

Workarounds for incorrect modes

If you are affected by an RTD2166 returning incorrect resolutions, you can bypass them by installing an EDID override to your OS. The override's default resolution can be 1024x768@60 with the correct sync polarities (both active-low), or another resolution altogether (in which case the computer will fall back to its own, correct timings for 1024x768@60).

If you are plugging a VGA EDID emulator dongle (with an I2C EEPROM) into an RTD2166, you can flash the EDID override to the EEPROM, but all detailed resolutions must be valid (with pixel clocks below 180 MHz). I found that both 1280x960@75 and 1024x768@60 were passed through unmodified.

Managing custom modes on Windows

Custom Resolution Utility (CRU) is compatible with all GPU vendors. To use it to add a custom resolution, first run CRU.exe, pick a monitor at the top, then under "Detailed resolutions", click "Add...". In the new dialog, you will have many ways to pick a resolution and enter/generate timings (CRU docs):

  • Automatic CRT - Uses standards compatible with CRT monitors. Uses VESA DMT for 4:3/5:4 resolutions, CVT otherwise.
  • CVT standard - Standard intended for CRT monitors.
  • GTF standard - Old standard commonly used with CRT monitors.
Screenshot of CRU's "Detailed resolutions" editor, set to "Automatic CRT" timings.
Screenshot of CRU's "Detailed resolutions" editor, set to "Automatic CRT" timings.

Once you're done, run restart64.exe and wait 20 or so seconds 🙁 for the GPU drivers to restart and Windows to see the newly added resolutions.

On AMD GPUs, "AMD Software" also allows editing custom resolutions. It has the advantage that you don't need to wait as long to apply resolutions. But I found the resolution calculator dialog to be janky, and others have recommended avoiding it altogether in favor of CRU.

Screenshot of AMD Software (23.3.1)'s Custom Resolutions editor.

I do not have an NVIDIA GPU to test its custom resolution editor, but have asked people online for advice on how to use NVIDIA Control Panel:

Anyway, NVCP is usually fine as long as the user is aware that "Automatic" timing presets is almost never what they want to use.

Automatic in NVCP basically tends to assume you want to scale everything to super standardized resolutions like 1080p, or 1600x1200 etc. so you'll just get a bunch of ugly GPU scaling for no reason and fixed refresh rates.

The best way around that is to simply pick GTF or vanilla CVT.

Screenshot of NVIDIA drivers' custom resolution editor.

On Intel integrated graphics, the driver differs by generation: 5th gen and older CPUs use Intel Graphics Control Panel, 6th gen and newer require the Microsoft Store and Intel Graphics Command Center (I cannot test this on Windows 10 Ameliorated), and Arc GPUs (and possibly 11th-13th gen CPU integrated graphics) require Arc Control Software.

Unfortunately, Ivy Bridge and older GPUs ignore all EDID overrides installed by CRU.

Output resolution not matching desktop resolution

One footgun is that if you pick a resolution and refresh rate that Windows doesn't know how to display (because it lacks a matching CRT mode), it may set the desktop resolution to your request, but drive the display at a different, larger signal resolution than you selected, and upscale the desktop to that resolution. This happens on both Windows 7 and 10, and results in a fuzzy image or missing scanlines on-screen. On Windows 10, you can check whether this is happening by opening "Advanced display settings" and seeing whether "Active signal resolution" matches the "Desktop resolution" that you have selected.

Screenshot of Windows 10 "Advanced display settings" page, showing "Active signal resolution" larger than "Desktop resolution".

To prevent this from happening, try changing the "Refresh Rate" on the same page until "Active signal resolution" changes to match. If that still doesn't work, try clicking "Display adapter properties for Display #", then in the dialog "List All Modes", pick a desired mode, then click OK twice. Hopefully that should fix the problem.

Unfortunately, on Ivy Bridge HD 4000 integrated graphics with Windows 7, even "List All Modes" does not natively output resolutions below the monitor's "default" resolution (the first detailed resolution in the EDID), but instead scales the desktop to the default resolution.

Managing custom modelines on Linux with auto-modeline

If you want to apply custom modelines on Linux X11, you first have to obtain the modeline (by looking up a modeline calculator or running gtf or cvt in a terminal), then run three separate xrandr invocations: create the custom mode (xrandr --newmode name ...), add it to an output (xrandr --addmode output name), and switch the output to the mode (xrandr --output output --mode name).
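
As a rough sketch of those three steps wrapped in a script (the output name VGA-0 is a placeholder; find yours with xrandr --query, and the mode is the example modeline from earlier in this article):

    import subprocess

    # The "640x480_59.94" example modeline from earlier in this article.
    name = "640x480_59.94"
    timings = "25.175 640 656 752 800 480 490 492 525 +HSync +VSync".split()
    output = "VGA-0"   # placeholder output name

    subprocess.run(["xrandr", "--newmode", name, *timings], check=True)         # create the mode
    subprocess.run(["xrandr", "--addmode", output, name], check=True)           # attach it to an output
    subprocess.run(["xrandr", "--output", output, "--mode", name], check=True)  # switch to it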

Then if you want to change the mode (for example switching between different blanking and sync sizes, or between DMT and CVT), you have to repeat all these steps with a new name (since you can't edit a mode in place). If you want to uninstall the old resolution, you'd then call xrandr --delmode output name followed by xrandr --rmmode name.

All these steps are necessary because xrandr and the Xorg server have many limitations on mode management:

To automate creating and switching modes, I've written a program auto-modeline which automatically generates modelines based on specifications (dmt/cvt, width, height, fps).

There are multiple ways to define a mode in modelines.ini or auto-modeline print/apply:

Custom resolutions on Wayland

If you're running Wayland instead of Xorg, your options for setting custom resolutions are more fragmented.

Beyond tinkering with the Linux kernel, the way you pick custom resolutions depends on the specific Wayland compositor you're using. Sway has its own method for setting custom resolutions (sway-output). You can add permanent resolutions by editing ~/.config/sway/config and adding lines containing output (name) modeline (values). You can set temporary resolutions by opening a terminal and running swaymsg output '(name) modeline (values)'. I have not tested these, since I do not use Sway (note12).

Hyprland (another compositor) also allows configuring monitors with custom modelines (not yet released) in ~/.config/hypr/hyprland.conf, or by running hyprctl keyword monitor ... (hyprctl docs). I have not tried setting custom modelines in Hyprland either.

One emerging standard (4 years old and still unstable) is to use a tool like wlr-randr to tell your compositor to enable a custom resolution. This only works on compositors which implement the non-finalized wlr_output_management_unstable_v1 protocol; currently this is supported by many wlroots-based compositors, but (to my knowledge) not GNOME or KDE.

On Wayland, GNOME has gnome-randr(-rust) and KDE has kscreen-doctor, but as far as I can tell, neither supports creating custom resolutions/modes not supported by the monitor (GNOME Super User, KDE Reddit, KDE bug). You can still use boot-time resolution overrides, or override the EDID binary to add new modes.

Managing custom modes on macOS

To override modelines in software on Apple Silicon, you'll have to pay $18 for BetterDisplay, since there are no free apps to do what Windows and Linux have long supported. The cracked version of BetterDisplay "works", but crashes on startup 2/3 of the time, and randomly when sleeping/waking.

If you want a hardware solution, you can order my PCB at https://codeberg.org/nyanpasu64/vga-edid-powered, but I can't exactly recommend it.

Photo of a Pi Pico (right) hot-glued to a VGA PCB, plugged into a VGA EDID emulator PCB (left)
A Pi Pico (right) with rp2040-i2c-interface installed, serving as a USB-to-I2C bridge. It is reflashing a VGA EDID emulator (left) powered over USB-C.

The advantage of a hardware EDID emulator is that it generally keeps identifying itself even when the monitor is turned off (unlike some VGA monitors, which power off their I2C EEPROMs when the power button is toggled off). It also shares the same EDID resolutions across all your OSes and computers (rather than having to reapply them on every OS when you make a change), though reprogramming the dongle is more work than installing a modeline using xrandr or CRU.
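
For what it's worth, reprogramming an emulator like this over a generic USB-to-I2C bridge boils down to writing the 128-byte EDID into an EEPROM at the DDC address 0x50. Here's a rough sketch using the smbus2 Python library; the bus number, the 24C02-style EEPROM accepting single-byte writes, and an already-disabled write-protect pin are all assumptions, not specifics of my PCB:

    import time
    from pathlib import Path
    from smbus2 import SMBus

    EEPROM_ADDR = 0x50                      # standard DDC/EDID address
    edid = Path("edid.bin").read_bytes()
    assert len(edid) == 128 and sum(edid) % 256 == 0, "not a valid EDID base block"

    with SMBus(1) as bus:                   # /dev/i2c-1; use whichever bus your bridge exposes
        for offset, value in enumerate(edid):
            bus.write_byte_data(EEPROM_ADDR, offset, value)
            time.sleep(0.01)                # wait out the EEPROM's internal write cycle

        readback = bytes(bus.read_byte_data(EEPROM_ADDR, i) for i in range(128))
        print("verified OK" if readback == edid else "readback mismatch")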

NTI sells a pre-assembled VGA EDID Emulator for $22 plus expensive shipping. This device has functioning screws (though not thumbscrews), and can be reprogrammed, but only with an expensive device which can only clone monitors, and I don't know how to reprogram it from a PC's I2C interface (as I haven't bought the dongle or programmer). I have not tested the $20 Amazon VGA plugs/dongles either.

Interlacing computer monitors

One interesting aspect of CRTs is that you don't have to draw scanlines at the same locations every frame. Instead, broadcast television was built around drawing alternating frames of video in the gaps between the previous frame's illuminated lines, in a process known as interlacing. In the television industry, these "frames" are known as fields, and a "frame" refers to two adjacent fields (so SDTV can be called 30 frames per second, even though the screen is illuminated 60 times per second).

Interlacing (eg. 480i) can produce a more detailed image than half-height progressive scan (eg. 240p), by enabling smoother vertical color transitions, and allowing nonmoving objects to be drawn at twice the vertical detail (at the cost of flicker at half the field rate in high-frequency areas). Compared to repainting the entire screen at the half-field rate, interlacing reduces flicker by doubling how often the image flashes in normal circumstances. Compared to doubling the horizontal scan rate, interlacing requires a lower horizontal sync rate, and half the video bandwidth for equivalent horizontal sharpness.
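
To put rough numbers on that tradeoff (my own back-of-the-envelope arithmetic using NTSC-style line counts, not figures from any source cited here):

    fields_per_second = 59.94      # NTSC field rate
    lines_per_field = 262.5        # 525 total lines split across two fields

    # 480i (and 240p) get by with roughly this horizontal scan rate...
    hsync_480i_khz = fields_per_second * lines_per_field / 1000    # ~15.73 kHz

    # ...while 480p at the same refresh rate must draw all 525 lines every pass.
    hsync_480p_khz = fields_per_second * 525 / 1000                # ~31.47 kHz

    print(f"480i line rate: {hsync_480i_khz:.2f} kHz")
    print(f"480p line rate: {hsync_480p_khz:.2f} kHz")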

The downside to interlacing is that if objects scroll vertically at certain speeds, they lose the extra vertical detail, since they move by half a scanline at the same time as the field moves by half a scanline. Similarly, if your eyes are tracking vertical motion, the entire image will appear half-res, with visible scanlines and gaps. And moving or flashing objects will appear combed on every field if your eyes are not moving in sync with them.

Interestingly, interlacing was not only used in televisions, but also early computer monitors to get high-resolution images from video cards and displays with limited horizontal frequency and signal bandwidth. For example, the IBM 8514 graphics card and monitor (and its successor XGA) could output a 1024x768 signal at 87 Hz field rate, with interlacing (so it only drew a full image at 43.5 Hz)13. I found a description of an 87 Hz mode at http://tinyvga.com/vga-timing/1024x768@43Hz, which matches up with the 44.90 MHz modelines at https://www.mythtv.org/wiki/Modeline_Database, and may match the 8514's timings as well.

Another description of monitor interlacing can be found at https://www.linuxdoc.org/HOWTO/XFree86-Video-Timings-HOWTO/inter.html.

If alternating lines are bright and dark, interlace will jump at you.
...use at least 100dpi fonts, or other fonts where horizontal beams are at least two lines thick (for high resolutions, nothing else will make sense anyhow).

Apparently text with thin horizontal lines will flicker when interlaced at a low frame rate. This could prove annoying. So far I've only tried interlacing at a field rate of 120 Hz, and didn't notice 60 Hz flicker of fine text.

CRT interlacing on modern GPUs

More recently, CRT enthusiasts have experimented with interlaced timings to extract better resolution and refresh rates from PC monitors. Unfortunately for them, many modern graphics drivers (or worse yet, hardware) are dropping support for interlacing. As a result, people are using old graphics cards and drivers as display outputs, sometimes paired with a modern GPU used to render the actual visuals. One popular pairing is a modern NVIDIA GPU with modern drivers, coupled with an old AMD HD 5450 or similar with a VGA output. Another option is a 900-series NVIDIA GPU with native analog output through DVI-I and a passive VGA adapter, or you could play games on older AMD GPUs instead. Interestingly, on AMD, CRT EmuDriver is not recommended for interlacing (only for low resolutions), as interlacing is supposed to work fine with regular drivers on older cards with analog outputs.

If you want interlacing on a newer GPU, you can feed a digital output through an HDMI-to-VGA dongle. DP-to-VGA dongles are another option, though on Windows only Intel iGPUs can output interlaced resolutions over DP, not AMD or NVIDIA. I've even heard suggestions of rendering games on an NVIDIA card but outputting frames through an Intel iGPU's ports, because they support interlacing over DP and NVIDIA doesn't14, though some people say it adds 1 frame of latency.

I've tested interlacing on a Plugable DP-to-VGA adapter (Realtek RTD2166 chip) with a Gateway VX720 CRT (like a Diamond Pro 710). I noticed that the fields weren't exactly spaced half a scanline apart; every second gap between scanlines was wider and more visible. This is slightly annoying, but less noticeable than combing during motion.

The XFree86 Video Timings HOWTO says:

You might want to play with sync pulse widths and positions to get the most stable line positions.

I have not tested this. It would also require creating custom modelines, rather than using unmodified GTF or CVT interlaced ones.

Another barrier to interlacing is that many modeline calculators fail to produce proper interlaced modes:

I think that Windows programs tend to do better; for example, people have had success using NVIDIA's custom resolution editor (though I don't have an NVIDIA GPU and can't verify), as well as CRU's resolution editor.

[NVIDIA] reports interlacing in a brain-hurtful manner lol

Basically NVCP will do something like, for example: 1080i = 66 kHz, and I think can sometimes do something unintuitive with the vertical refresh being halved or doubled what you actually perceive.

But your CRT monitor will staunchly report 1080i as 33 kHz, of course.

CRU takes an interesting approach to building interlaced resolutions.

Screenshot of CRU's "Detailed resolutions" editor, set to "Automatic CRT" timings, with the Interlaced check box checked, showing a user-editable 1024x384 and fixed = 768
CRU's "Detailed resolutions" editor allows creating an interlaced resolution if you check the Interlaced box at the bottom. Note the "1024 x (384=768)" text.

In CRU, you enter vertical parameters in terms of per-field (half-resolution) line counts, and the program automatically displays the per-frame timings to the right. When set to Manual timings mode, checking the Interlaced box automatically divides the per-field Active textbox by 2 (to keep the per-frame vactive identical and convert it to a per-field vactive), but leaves the per-field porch and sync sizes unchanged (reinterpreting the per-frame timings as per-field ones), then recomputes the per-field Total textbox based on the new active line count.

Building an interlaced modeline

Working with interlaced X11 modelines is tricky, because the numbers actually represent a "virtual" progressive frame produced by weaving two fields together. In interlaced modes, vdisplay, vsync_start, vsync_end, and vtotal are computed by adding the time spent in active, front porch, sync, and back porch across the two fields of a single frame. (Note that vsync_start and vsync_end are off by half a scanline per frame, i.e. per two fields; see below.)

Earlier, I was researching the CVT specification and the Linux cvt program to find out how to generate a CVT interlaced mode(line). I found bugs in how the Linux library/program generated interlaced modes, and reported a bug in their tracker with my multi-day investigations. Here I'll summarize my final conclusions after piecing the puzzle together.

According to the CVT specification's spreadsheet (on tab CVTv1.2a), the front porch and sync pulse duration are added to each field; this also matches Windows CRU's CVT interlaced mode generator. You're meant to calculate a half-height progressive mode at the same field rate, then generate an interlaced two-field modeline by doubling all vertical timings and adding 1 to vtotal. The modeline's vsync start needs to be 2 front porches* past the vblank start, and the vsync end needs to be 2 vsync durations past the vsync start, so that each field has its own full vertical front porch, sync pulse, and back porch.

(*) Unfortunately interlacing throws a wrench in the calculations. The vtotal value is 1 line more than twice a progressive vtotal, and when an interlaced signal is being transmitted, the total front and back porches are each 0.5 lines longer than twice a progressive porch, leaving the vsync pulse halfway between scanlines. How do we represent this as a modeline?

To look for answers, we'll model the output signal, and see how we could rearrange it to a virtual half-rate progressive modeline.

Assume an interlaced mode with an odd number of lines (=N) per frame (2 fields), and an even number of active lines (you reaaaally shouldn't be using odd resolutions, let alone interlaced).

To model interlaced modes, I drew diagrams for two test "mini modelines". Both had odd vtotal (scanline count) per frame (2 fields), and 1 active line per field; the interlaced vblank was placed after either the early field (with longer trailing porches) or late field. (It doesn't matter, both cases have the same issue.)

How do we convert between an interlaced signal and a sequential modeline (representing the timings of two sequential fields)? Return to the diagram of the video signal.

Diagram of interlaced and rearranged progressive vsync timings, relative to scanlines and hsync

These mid-line "interlaced vsync" pulses can be encoded as integers in two ways. You can either round them down to the previous scanline (which in a CVT modeline with a 2n+0.5 line long front porch, produces an even front porch length), or round them up to the next scanline (which in a CVT modeline, produces an odd front porch length).

I've confirmed that Linux drivers add half a line to vsync timings, by rebooting with amdgpu.dc=0 and probing my DP-to-VGA adapter with my sound card.

Sidenote: Interlacing in the HD era

In the world of television and video, interlacing has found its way into the modern age.

In the early days of HDTV, before LCD killed off plasma and CRT, some manufacturers released native 1080i HD CRTs. These generally had a fixed horizontal scan rate of 33.75 kHz (1125 lines every 1/30 of a second, split across two fields), displaying fast-moving objects at 540 lines per field. Some HD CRTs supported native 480p scanning at 31.5 kHz as well, for displaying 480i/p content without scaling. (I hear that Panasonic Taus with 31 kHz support were lag-free in video games, while some other TVs would introduce a frame or two of latency.) If an HD CRT supported 720p content, it would be scaled to 540 lines tall per field before being displayed.

Interestingly, HDMI comes with CEA interlaced modes, a single one of which has the forbidden even vtotal (VIC 39 = 1920x1080i with 1250 total lines). Also, https://www.mythtv.org/wiki/Modeline_Database includes multiple interlaced 1080i modes with a vtotal of 1124 rather than 1125. I'm not sure what actual HD CRTs used.


Surprisingly, interlacing has survived the transition from analog SDTV to digital HDTV. ATSC broadcast TV is built around either 720p or 1080i video transmitted using the MPEG-2 codec. 1080i offers more total detail for near-static images, but requires deinterlacing algorithms if shown on LCD TVs, and offers inferior vertical resolution for moving objects, so sports broadcasts generally run on 720p instead. ATSC is also a victim of the corruption pervasive among commercial standards using patented technology (to quote Wikipedia):

With MUSICAM originally faltering during GA testing, the GA issued a statement finding the MPEG-2 audio system to be "essentially equivalent" to Dolby, but only after the Dolby selection had been made.[1] Later, a story emerged that MIT had entered into an agreement with Dolby whereupon the university would be awarded a large sum if the MUSICAM system was rejected.[2] Following a five-year lawsuit for breach of contract, MIT and its GA representative received a total of $30 million from Dolby, after the litigants reached a last-minute out-of-court settlement.[2] Dolby also offered an incentive for Zenith to switch their vote (which they did), however it is unknown whether they accepted the offer.[2]

Outside of North America, most television is based on DVB-T (also MPEG-2 at 720p or 1080i) or DVB-T2 (H.264 up to 1080p50). I hear many countries still primarily broadcast on DVB-T, while Thailand uses DVB-T2 yet still broadcasts in 1080i50, presumably to save bandwidth compared to 1080p50.


I've also run into interlacing when I pulled the SD card out of my Sony Alpha a6000 camera and found a large video file with an odd .MTS extension. Opening it in mpv, I was surprised to see combing in moving objects, because mpv had used comb reconstruction rather than a proper deinterlacing algorithm. After pressing the d key, the video became more watchable, though I still noticed artifacts in fine detail (whether from deinterlacing or video compression).

Newer compression algorithms like H.265 and VP8/9 do not support interlacing at the compression level. I think interlacing creates more problems than it solves for digital video transmitted to LCD displays (ATSC and DVB-T may have been reasonable given limited bandwidth and the codecs of the time, but interlaced H.264 is questionable), and I hope that it remains a thing of the past.

Footnotes

We plugged in one such, and when the switch came to 15khz? It exploded. Like, the magic grey smoke escaped, along with flames and melting plastic.

I did let the smoke out of one monitor with a bad X modeline.

I remember creating a new modeline and adding it to the list. Then using ctrl+alt and +/- to cycle through the modes. I would get to the new mode and the monitor would start buzzing and clicking and the image would flicker. I would quickly toggle to the next mode that was "safe" then go back and edit the modeline and try again.

10

http://martin.hinner.info/vga/640x480_60.html (I'm not sure why it says 2 lines vertical sync, but the oscilloscope photo shows 3 lines of vsync pulse)

12

If you are trying to drive a display past its self-reported frequency/pixel clock limits, some users report that you must first override the EDID through the Linux command line. Others report you can pick custom resolutions on a CRT without installing a spoofed EDID binary.