Even with the same bitrate, you’ll need 4K video to get quality comparable to uncompressed 1080p.
For example, the codecs typically used in mp4 files (like H.264 in its common 4:2:0 mode) store the brightness channel Y at full resolution, and the color channels Cb and Cr at half resolution in each dimension. So in a 4K mp4 video, even at a lossless bitrate, the actual colors are only stored at 1080p. This technique is called chroma subsampling.
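Here's a minimal sketch of what 4:2:0 subsampling does to a chroma plane. The block-averaging downsample and nearest-neighbor upsample are illustrative assumptions (real encoders can use different filters), but they show the key point: fine color detail that lands inside a 2x2 block is gone after the round trip.

```python
# Toy 4:2:0 chroma subsampling round trip (illustrative, not a
# real codec's pipeline: actual encoders may use other filters).

def subsample_420(plane):
    """Average each 2x2 block -> half resolution in both dimensions."""
    h, w = len(plane), len(plane[0])
    return [
        [(plane[2 * r][2 * c] + plane[2 * r][2 * c + 1] +
          plane[2 * r + 1][2 * c] + plane[2 * r + 1][2 * c + 1]) / 4
         for c in range(w // 2)]
        for r in range(h // 2)
    ]

def upsample_nearest(plane):
    """Nearest-neighbor upscale back to full resolution."""
    return [
        [plane[r // 2][c // 2] for c in range(2 * len(plane[0]))]
        for r in range(2 * len(plane))
    ]

# A 4x4 Cb plane: flat color on top, fine alternating detail below.
cb = [
    [100, 100, 200, 200],
    [100, 100, 200, 200],
    [ 50, 150,  50, 150],
    [ 50, 150,  50, 150],
]
small = subsample_420(cb)            # 2x2 plane after subsampling
restored = upsample_nearest(small)   # back to 4x4
# Top half survives; the alternating 50/150 pattern in the bottom
# half collapses to a flat 100 — that detail cannot be recovered.
```

The top rows round-trip perfectly because they were already constant per 2x2 block; the bottom rows lose their pattern entirely, which is exactly the "colors only stored at 1080p" effect.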
Audio codecs do something very similar, which causes the same kind of issues.
That’s again the same issue. Nice theory, and in theory it’s true, but all real algorithms also subsample high frequencies and scale them back up with nearest-neighbor interpolation.
The same issue happens with audio. Nice theory, completely broken real-world implementations.
It's true that nearest neighbor is bad, but the important part is the YUV->RGB conversion: since each RGB channel is a mix of Y and the chroma channels, the high frequencies missing from the subsampled chroma get filled in from the full-resolution Y channel across all three RGB channels.
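This effect can be seen directly in the conversion math. The sketch below uses BT.601 full-range constants as an assumption (real video is often BT.709 and limited range): two neighboring pixels share one blurred chroma value, but their differing Y values make the edge appear in R, G, and B alike.

```python
def ycbcr_to_rgb(y, cb, cr):
    # BT.601 full-range conversion — an assumed choice for
    # illustration; real videos may use BT.709 / limited range.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return (r, g, b)

# Two neighboring pixels: subsampling blurred their chroma to the
# same (Cb, Cr) value, but Y keeps the full-resolution edge.
left = ycbcr_to_rgb(60, 110, 150)
right = ycbcr_to_rgb(200, 110, 150)

# The Y step shows up with equal magnitude in all three RGB
# channels, so the edge survives in the final image.
deltas = [abs(a - b) for a, b in zip(left, right)]
```

Because Y enters each of the three output equations with coefficient 1, a sharp luminance edge stays sharp in every RGB channel even when the chroma planes are blurry, which is why 4:2:0 looks much better than "quarter-resolution color" sounds.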