Not sure which codec should be the baseline, or even whether having a reference codec makes sense: some codecs are simply less efficient, so we would have to sacrifice speed or quality to get closer to the baseline.
But clearly they are too far apart at the moment: as per ticket:2029#comment:18, vp8 uses 10x more bandwidth than x264 with the same settings.
We also need to define some typical content to use for calibration, and create compression test scripts covering different speed and quality values (speed x quality means quite a few pictures of varying quality will be produced). Then, ideally, an impartial observer can rank the output without any prior knowledge of the settings used for each.
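A minimal sketch of what such a test-matrix driver could look like, using zlib levels as a stand-in for a real encoder's speed setting (the function and field names here are hypothetical, not existing xpra code):

```python
import time
import zlib

def run_matrix(data, levels=(1, 5, 9)):
    """Compress the same input at each 'speed' setting and record
    encoding latency and compression ratio, so the outputs can later
    be ranked blind by an impartial observer."""
    results = []
    for level in levels:
        start = time.monotonic()
        compressed = zlib.compress(data, level)
        elapsed = time.monotonic() - start
        results.append({
            "level": level,
            "latency": elapsed,
            "ratio": len(compressed) / len(data),
        })
    return results

# stand-in for "typical content":
sample = b"some typical content " * 1000
for row in run_matrix(sample):
    print(row["level"], round(row["ratio"], 3))
```

A real version would drive the actual encoders (x264, vp8, webp, jpeg) over sample screen captures and save the decoded pictures for blind ranking.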
Bearing in mind that different codecs are used for different content (ie: video when we detect it), at different sizes (ie: no webp at low resolutions), at different quality settings (ie: more jpeg at low quality settings), and that those heuristics may still change (#2044), etc.
Maybe line them up first using encoding latency, so that compressing an image at the same speed setting always takes the same amount of time? Or match the compression ratio instead... or quality...
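For the latency-based lining up, a sketch of the calibration step: given per-codec measurements of (speed setting, encoding latency), pick the setting whose latency is closest to a common target. The function and the sample numbers below are hypothetical, just to illustrate the idea:

```python
def match_setting(measurements, target_latency):
    """Given (speed_setting, measured_latency) pairs for one codec,
    pick the setting whose latency is closest to the target, so that
    all codecs take roughly the same time at a given speed setting.
    Real calibration would interpolate and re-measure."""
    return min(measurements, key=lambda m: abs(m[1] - target_latency))[0]

# hypothetical measurements: (speed setting, milliseconds per frame)
vp8_timings = [(0, 40.0), (50, 22.0), (100, 9.0)]
print(match_setting(vp8_timings, 20.0))  # → 50
```

The same structure would work for matching on compression ratio or quality instead, just by swapping what gets measured.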
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/2046