It makes little sense to buffer a full frame before upscaling. Why would you do that? It’s a total waste of DRAM bandwidth too.
The latency incurred by upscaling depends on the number of vertical filter taps and the horizontal scan time. We’re talking on the order of 10 microseconds.
The only exception is if you’re using a super fancy machine learning whole-frame upscaling algorithm, but that’s not something you’ll find in an old CRT.
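To put a rough number on that claim: a line-based scaler only needs to buffer as many input lines as its vertical filter has taps, so the latency is roughly taps × line time. A minimal back-of-the-envelope sketch, assuming illustrative numbers (a 1080p60 input with standard 1125 total lines and a hypothetical 4-tap vertical filter):

```python
# Rough latency estimate for a line-based upscaler.
# All parameter values here are illustrative assumptions,
# not taken from any particular scaler.

lines_per_frame = 1125   # total lines incl. blanking for 1080p
refresh_hz = 60
vertical_taps = 4        # hypothetical vertical filter length

# One horizontal scan time, in microseconds.
line_time_us = 1e6 / (refresh_hz * lines_per_frame)

# The scaler must buffer ~one line per filter tap before it
# can emit output, so latency is a handful of line times.
latency_us = vertical_taps * line_time_us

print(f"line time ≈ {line_time_us:.1f} us, "
      f"scaler latency ≈ {latency_us:.0f} us")
```

With these assumptions the line time comes out around 15 µs and the total latency a few line times, i.e. tens of microseconds at most, versus ~16.7 ms for a full buffered frame at 60 Hz.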