If the server timestamped sound-data and draw packets with not just when the data comes out of the encoder, but also when the raw data goes into the encoder, the client could use this information to better track how far out of sync audio is compared to video, by comparing when the data went into the respective encoders.
The "timestamps" we currently get from the gstreamer pipeline already come from the input element AFAICT (TBC); adding "do-timestamp=True" to the "pulsesrc" doesn't change that. The problem is that they start from 0, with no way for us to convert them to absolute time.
Looks like we'll need to:
sysclock = gst.SystemClock.obtain()
self.pipeline.use_clock(sysclock)
But that didn't make any difference.
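For reference, GStreamer's clock model defines the absolute clock time of a buffer as the pipeline's base-time plus the buffer's running-time (its PTS, which starts at 0). The arithmetic we would need, sketched with plain numbers rather than real Gst calls (the values are made up):

```python
def to_absolute_ns(base_time_ns, buffer_pts_ns):
    # GStreamer model: clock_time = base_time + running_time,
    # where the buffer PTS is the running-time starting from 0.
    return base_time_ns + buffer_pts_ns

# Hypothetical values: base-time captured when the pipeline went to PLAYING,
# and a buffer arriving 250ms into the stream:
base = 1_000_000_000
pts = 250_000_000
absolute = to_absolute_ns(base, pts)
assert absolute == 1_250_000_000
```

The catch is that this only gives a useful absolute time if both pipelines (audio and video) use the same clock, which is what the use_clock() attempt above was trying to achieve.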
I think I'll need to ask the gstreamer devs. In any case, this is 2.0 material, let's remove gstreamer 0.10 first.
I've updated the doc links, which had bit-rotted, and asked how to get the absolute buffer timestamp from the src element.
Judging by the lack of response, there is no easy way to do this. We'll have to write a custom gstreamer audio filter element and expose the timestamp delta as a property we can query from our capture pipeline.
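Until that element exists, the idea can be sketched in plain Python: remember when a raw buffer is seen entering the pipeline, then report the elapsed delta once the compressed buffer comes out the other end. This `TimestampDelta` class is a hypothetical illustration of what the C element's queryable property would measure, not the actual plugin code:

```python
import time

class TimestampDelta:
    """Python analogue of the proposed gstreamer filter element:
    records when a raw buffer enters the encoder path, and exposes
    the elapsed time ("delta") when queried after the compressed
    buffer is received. Illustrative only."""

    def __init__(self):
        self.entry_time_ns = None

    def on_input_buffer(self):
        # called when the raw audio buffer passes through the filter
        self.entry_time_ns = time.monotonic_ns()

    def delta_ms(self):
        # called when the compressed buffer reaches the capture sink
        assert self.entry_time_ns is not None, "no input buffer seen yet"
        return (time.monotonic_ns() - self.entry_time_ns) // 1_000_000
```

A real element would do this per-buffer inside the pipeline; the point is only that the delta is measured against a monotonic clock shared by audio and video.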
header file for timestamp plugin
use the timestamp plugin to get a real monotonic absolute timestamp we can also use for video frames
To use the new plugin attached:
gcc -I. -I.. \
    -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include \
    -pthread -Wall -g -O2 -Wall -c gsttimestamp.c -fPIC -DPIC -o gsttimestamp.o
gcc -shared -fPIC -DPIC gsttimestamp.o \
    -lgstbase-1.0 -lgstcontroller-1.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 \
    -pthread -g -O2 -pthread -Wl,-soname -Wl,libgsttimestamp.so -o libgsttimestamp.so
sudo cp libgsttimestamp.so /usr/lib64/gstreamer-1.0/
Then with the xpra patch above, we expose extra attributes in the audio packet metadata:
The timestamp value is sent as part of the audio and video metadata under the key "ts". (The existing "timestamp" is still present for audio packets, but it is an absolute value which cannot be reconciled with the video packet metadata.)
$ xpra info | grep speaker.latency
client.sound.speaker.latency=76
Note: this is how long it takes from the moment the audio buffer is captured from the source element (typically "pulsesrc") until we receive it in compressed form in our buffer capture element ("appsink"). It does not include the time it takes to capture from the sound card (usually very low anyway), or the time it takes to forward this buffer through our subprocess wrapper layer back to the server process (also quite low).
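In other words, the reported latency is just the difference between those two points in the pipeline. A trivial sketch with a made-up helper name, reproducing the 76ms figure above from hypothetical timestamps:

```python
def speaker_latency_ms(source_ts_ms, appsink_ts_ms):
    # source_ts_ms:  when the raw buffer was captured at the source
    #                element ("pulsesrc")
    # appsink_ts_ms: when the compressed buffer was received in "appsink"
    # Sound-card capture time and subprocess forwarding time are
    # deliberately outside this interval.
    return appsink_ts_ms - source_ts_ms

assert speaker_latency_ms(1000, 1076) == 76
```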
Beta builds with those changes are available here: http://xpra.org/beta
this is an important feature - please test
Not heard back, closing.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1370