One of the abilities of every LCD display is that it can change the value of all its pixels at once, yet this capability goes largely unused when a GPU transmits a frame to the display, even though exploiting it would significantly reduce the latency of each frame.
The first PC monitors were little more than televisions with the radio-frequency tuner removed. Those televisions were designed to receive broadcast signals over the air, and that signal delivered the image line by line, scanning at very specific times.
The video inputs on those TVs were rudimentary and shared circuitry with the broadcast receiver, so they had to operate with the same timings to display the image properly; if those timings were not respected, the image appeared distorted or simply did not appear at all.
Over time, new PC standards brought much higher resolutions: EGA, VGA, and so on. Today CRT screens have been replaced by LCD screens, yet the way data is sent to the screen has not changed in all this time; it is still transmitted exactly as before.
GPU timing with the monitor
For an image to be generated correctly on a screen at a given resolution, the GPU must follow an exact series of timings and therefore transmit data at a specified frequency. The four essential timing periods are:
- VBlank: The time, measured in scan lines, during which the monitor shows no part of the image; like its horizontal counterpart, it can be divided into a front porch and a back porch.
- VSync: The period of scan lines during which the monitor displays the full image.
- HBlank: The part of each scan line that is not displayed, divided into two periods: one before the visible portion called the front porch and one after it called the back porch.
- HSync: The period during which the visible part of each scan line is displayed.
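These four periods add up to a fixed total per line and per frame, which is what ties the refresh rate to the pixel clock. A minimal sketch of that arithmetic, using the well-known VESA timings for 640x480 at 60 Hz (the variable names are illustrative, not from any specific API):

```python
# VESA timings for 640x480@60Hz, driven by a 25.175 MHz pixel clock.
PIXEL_CLOCK_HZ = 25_175_000

# Horizontal timings, in pixels
H_ACTIVE, H_FRONT_PORCH, H_SYNC, H_BACK_PORCH = 640, 16, 96, 48
# Vertical timings, in scan lines
V_ACTIVE, V_FRONT_PORCH, V_SYNC, V_BACK_PORCH = 480, 10, 2, 33

# Visible plus blanking periods give the real size of each line and frame.
h_total = H_ACTIVE + H_FRONT_PORCH + H_SYNC + H_BACK_PORCH   # 800 pixels per line
v_total = V_ACTIVE + V_FRONT_PORCH + V_SYNC + V_BACK_PORCH   # 525 lines per frame

refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)
print(f"{h_total} x {v_total} total, refresh = {refresh_hz:.2f} Hz")
```

Note that a quarter of the pixel clock is spent on blanking periods that display nothing: a legacy of the time a CRT's electron beam needed to fly back to the start of the next line or frame.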
This terminology comes from CRT screens, where the pixel does not exist at the physical level. LCD screens work differently, but when the first PC LCD monitors appeared they had to reproduce content the same way CRTs did, so they adopted the same mode of operation; in other words, LCD displays largely emulate the behavior of a CRT display.
VSync is what really sets the timing
The display controller is a component found in every GPU; it is responsible for reading the frame buffer and transmitting its contents over the video interface.
Think of it as someone who reads a text and copies it line by line onto a new page. Because VSync defines the rate at which each row is copied, its value governs how quickly the image is drawn on the screen, keeping the GPU's display controller and the screen itself in step.
The display controller follows the timings set by the standard for each resolution.
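The copy-line-by-line analogy above can be sketched as a tiny loop. This is only an illustration of the idea, not real driver code; all names here are made up:

```python
# Minimal sketch of what a display controller does: walk the frame
# buffer row by row and push each row out at the pace the timing
# standard dictates.

def scan_out(frame_buffer, transmit_row):
    """Read the frame buffer line by line, like copying a text
    line by line onto a new page."""
    for row in frame_buffer:      # one scan line at a time
        transmit_row(row)         # send it over the video interface
        # (real hardware then waits out HBlank before the next line,
        #  and VBlank after the last one)

# Toy usage: a three-line "frame" sent to a list standing in for the cable.
wire = []
scan_out([[1, 2], [3, 4], [5, 6]], wire.append)
```

The point is that the controller never hands over the whole frame at once; the pacing between rows is exactly what the VSync and HBlank timings dictate.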
Streaming a full frame costs more
When each image travels from the GPU to the monitor, it is not sent whole; the information is sent as bursts of data. This avoids saturating the data bus that links the graphics card to the screen, regardless of the type of video output interface used.
Although in theory we could send the data for a complete image to the screen in one go, the monitor would then need internal memory in which to store that frame buffer, which would add cost to the monitor and raise its price.
This is why the final frame buffer, stored in VRAM, is not sent to the video output all at once, despite the fact that contemporary panels can change all their pixels at the same time instead of doing it line by line.
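Some back-of-the-envelope numbers make the cost argument concrete. Assuming 24 bits per pixel (a common but not universal figure), the memory a monitor would need to buffer one full frame grows quickly with resolution:

```python
# Rough arithmetic behind the argument: how much memory a monitor
# would need to store one complete frame. 24 bits per pixel assumed.

def frame_bytes(width, height, bits_per_pixel=24):
    return width * height * bits_per_pixel // 8

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    mb = frame_bytes(w, h) / 1e6
    print(f"{name}: {mb:.1f} MB per buffered frame")
```

Roughly 6 MB at 1080p and 25 MB at 4K of dedicated memory inside the monitor, plus the logic to fill and flip it, which is exactly the extra cost the article refers to.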
The case of virtual reality
In VR, reducing motion-to-photon time is essential to achieve telepresence, so the time it takes the GPU to deliver a frame to the VR headset's display is extremely important.
This is why HMD units, virtual reality headsets with an integrated SoC, have become so popular: the SoC allows the frame buffer to be placed directly on the display, so all the pixels of an image can change at the same time, which is crucial to keep the entire process under 20 ms.
But this is impossible with a tethered VR headset, because the cable cannot transmit a complete image at once. The reason? To output a full image from the GPU to the display over a video interface in the style of HDMI or DisplayPort, we would not only need memory to store the image; the interface would also have to be enormously wide in bits, and therefore the cable would need a huge number of pins.
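The width argument is easy to quantify. A purely illustrative comparison, assuming 24 bits per pixel and the four main-link lanes a DisplayPort cable actually carries:

```python
# Moving a whole frame in a single parallel transfer would need one
# wire per bit of the frame, versus the handful of serialized lanes
# a real interface uses. Illustrative arithmetic only.

def parallel_wires(width, height, bits_per_pixel=24):
    # one dedicated line per bit of the frame
    return width * height * bits_per_pixel

full_frame_wires = parallel_wires(1920, 1080)   # tens of millions of lines
dp_main_link_lanes = 4                          # DisplayPort main-link lanes
print(f"{full_frame_wires:,} wires vs {dp_main_link_lanes} serial lanes")
```

Nearly 50 million signal lines just for 1080p, which is why every practical video interface serializes the frame over a few lanes instead.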
An SoC between the GPU and the monitor
Many high-end TVs include an SoC that runs algorithms based on artificial intelligence to rescale the image or even interpolate frames. However, this is not feasible on a PC: those TVs decode the incoming image very quickly, which leaves them plenty of time to generate a higher-resolution version or create intermediate frames.
On a PC, by contrast, rendering itself consumes most of the frame time, so a game needs a great deal of processing power to make the most of the little time that remains. This is why NVIDIA's DLSS-capable GPUs require extremely high speed: they must do their work in under 2.5 ms.
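The frame-time arithmetic behind that 2.5 ms figure can be sketched as follows. The 90 Hz refresh rate is an assumption (a common VR panel rate); the 2.5 ms upscaling budget is the figure quoted above:

```python
# Hedged sketch of the time-budget argument: at a given refresh rate,
# each frame has a fixed duration, and whatever the upscaler consumes
# is time the renderer no longer has.

refresh_hz = 90                              # assumed VR-style refresh rate
frame_budget_ms = 1000 / refresh_hz          # ~11.11 ms per frame
upscale_ms = 2.5                             # DLSS-style upscaling budget
render_budget_ms = frame_budget_ms - upscale_ms

print(f"frame: {frame_budget_ms:.2f} ms, "
      f"left for rendering: {render_budget_ms:.2f} ms")
```

At higher refresh rates the frame budget shrinks further while the upscaling cost stays roughly fixed, which is why the upscaling hardware must be so fast.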