xpra v2.3-r18646
When viewing a video in the HTML5 client with sound forwarding, there are sometimes delays of a few seconds in the sound transfer. I've tested with Firefox and Chromium and several audio codecs, though not all combinations.
After the delay, the sound resumes where it stopped. This leads to a mismatch between video and sound, which is especially annoying when film characters' dialogue is out of sync. Side effect: after terminating the video player, the sound continues until the buffer is empty.
I assume the sound is buffered somewhere and not synchronized with the xpra display transfer. Maybe an option to enforce video/sound synchronization, dropping parts of the sound if needed, would make sense.
For me this is not that important as I don't need it; I just wanted to report the observation.
Thanks for reporting this.
sometimes there are delays of a few seconds in the sound transfer.
That should only happen if your connection has limited bandwidth or jitter, or if the browser is running slowly.
This code was added in ticket:1341#comment:6 (16 months ago), see also #845
From the HTML5 spec on media.buffered: The buffered attribute must return a new static normalized TimeRanges object that represents the ranges of the media resource, if any, that the user agent has buffered, at the time the attribute is evaluated. User agents must accurately determine the ranges available, even for media streams where this can only be determined by tedious inspection.
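To illustrate what the spec quoted above gives us, here is a minimal sketch of a helper that turns a TimeRanges object (e.g. `audio_source_buffer.buffered`) into a single "seconds buffered ahead of playback" number. The helper name is illustrative, not xpra's actual code:

```javascript
// Sketch: compute how many seconds of audio the browser reports as buffered
// ahead of the current playback position.
// `buffered` is a TimeRanges-like object (length, start(i), end(i)).
function buffered_ahead(buffered, currentTime) {
	var ahead = 0;
	for (var i = 0; i < buffered.length; i++) {
		if (buffered.end(i) > currentTime) {
			// only count the part of the range that is ahead of playback
			ahead += buffered.end(i) - Math.max(buffered.start(i), currentTime);
		}
	}
	return ahead;
}
```

Note that, per the spec, this only tells us what the user agent *chooses* to report as buffered, which is part of the problem discussed below.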
Oh, what a mess. I'm pretty sure things have changed since the HTML5 mediasource code was merged.
My browsers seem to be playing the stream too fast and changing the sample rate at the gstreamer source pipeline doesn't make any difference. Neither does changing the playback rate in javascript.
PS: Chrome 62+ now supports flac-in-mp4; we could enable it with a version check.
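A version check along those lines could be as simple as parsing the user-agent string; a minimal sketch (function name is hypothetical, and feature detection via `MediaSource.isTypeSupported('audio/mp4; codecs="flac"')` would be more robust than UA sniffing):

```javascript
// Sketch: gate flac-in-mp4 on Chrome >= 62 by parsing the user-agent string.
function chrome_supports_flac_mp4(ua) {
	var m = /Chrome\/(\d+)/.exec(ua);
	return m !== null && parseInt(m[1], 10) >= 62;
}
```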
Here's a patch which dumps information to the javascript console:
--- html5/js/Client.js	(revision 18706)
+++ html5/js/Client.js	(working copy)
@@ -2308,6 +2308,29 @@
 XpraClient.prototype.push_audio_buffer = function(buf) {
 	if (this.audio_framework=="mediasource") {
 		this.audio_source_buffer.appendBuffer(buf);
+		var b = this.audio_source_buffer.buffered;
+		if (b && b.length==1) {
+			/*console.log("buffered=", b.length, "timestampOffset=", this.audio_source_buffer.timestampOffset);
+			for (var i=0; i<b.length;i++) {
+				console.log("buffered[", i, "]=", b.start(i), b.end(i));
+			}
+			var p = this.audio.played;
+			console.log("played=", p.length);
+			for (var i=0; i<p.length;i++) {
+				console.log("played[", i, "]=", p.start(i), p.end(i));
+			}*/
+			var e = b.end(0);
+			var buf_size = e - this.audio.currentTime;
+			console.log("buffer size=", Math.round(1000*buf_size), "ms, currentTime=", this.audio.currentTime);
+			/*
+			if (this.audio.readyState>=3 && buf_size<0.2) {
+				this.audio.playbackRate = Math.max(0.9, Math.min(this.audio.playbackRate, 1)-0.01);
+			}
+			if (this.audio.readyState>=3 && buf_size>0.8) {
+				this.audio.playbackRate = buf_size/0.8;
+			}*/
+			console.log("state=", this.audio.readyState, "network state=", this.audio.networkState, "playbackRate=", this.audio.playbackRate);
+		}
 	} else {
 		this.audio_aurora_ctx.asset.source._on_data(buf);
The solution may be found here: Decode it like it's 1999: WebAudio for Live Streaming: ... With this, you can string very short PCM buffers together without any artefacts. You only have to calculate the start time for the next buffer by continuously adding the duration of all previous ones.
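The approach from that article can be sketched as follows: schedule each short PCM chunk on a WebAudio context back-to-back, computing the start time of the next chunk by accumulating the durations of all previous ones. This is an illustrative sketch (the `PcmScheduler` name and resync policy are assumptions, not xpra's actual code); `ctx` would be a real browser `AudioContext` and each chunk a decoded `AudioBuffer`:

```javascript
// Sketch: gapless playback of short PCM chunks by scheduling each one to
// start exactly where the previous one ends.
function PcmScheduler(ctx) {
	this.ctx = ctx;
	this.next_start = 0;	// absolute time (in ctx.currentTime units) for the next chunk
}
PcmScheduler.prototype.push = function(audio_buffer) {
	// never schedule in the past: resync to "now" if we fell behind
	if (this.next_start < this.ctx.currentTime) {
		this.next_start = this.ctx.currentTime;
	}
	var src = this.ctx.createBufferSource();
	src.buffer = audio_buffer;
	src.connect(this.ctx.destination);
	src.start(this.next_start);
	this.next_start += audio_buffer.duration;
	return this.next_start;
};
```

The key point is that the accumulated `next_start` avoids the gaps and clicks you get when starting each chunk "immediately".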
Switching codecs makes no difference. To make matters worse, it seems that you can try the exact same settings and get completely different results. I give up, for now, browser audio is too much of a mess at the moment.
See also #2165
So, it seems that we have no way of knowing how much is buffered internally by the browser with MSE? And changing the playback rate does not allow us to get back to a small buffer size. And we also have the problem of #2322.
So Media Source Extensions aren't very helpful at all. I thought I would try going back to aurora decoding to see if we can get better buffering behaviour. It turns out that only WAV playback works (which is why the other codecs were disabled in r15732), that we also can't always start an AudioContext when loading the page (no autostart), and that when it fails, it fails with only a warning in the javascript console. We would need to get to the AudioContext inside aurora to see if its state is paused.
On the plus side, the AudioContextOptions.latencyHint default value is "interactive", meaning the browser should try to use the lowest possible reliable latency it can.
See also AudioContext.getOutputTimestamp
WebAudio: Add AudioContext.getOutputTimestamp() method: The AudioContext.getOutputTimestamp() method helps to synchronize DOM time and AudioContext time values. It is used to estimate the DOMHighResTimeStamp value of the audio output stream position for a given AudioContext.currentTime value, or to do the opposite: estimate the AudioContext.currentTime value of the audio output stream position for a given DOMHighResTimeStamp value.
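As a sketch of how getOutputTimestamp could be used here (browser-only; the context would be created with `new AudioContext({latencyHint: "interactive"})`, and the helper name and the interpretation of the difference are assumptions on my part, not tested xpra code):

```javascript
// Sketch: approximate how far the actual audio output lags behind the
// context's scheduling clock. getOutputTimestamp().contextTime is the
// context time of the sample currently reaching the output device, so the
// difference to currentTime approximates the pending output latency.
function estimated_output_latency(ctx) {
	var ts = ctx.getOutputTimestamp();
	return ctx.currentTime - ts.contextTime;
}
```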
outputLatency
HTML5 audio streaming: precisely measure latency?
AudioWorkletNode
PCM to WAV in Javascript
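Since WAV is the only codec that reliably works, wrapping raw PCM in a WAV header client-side (as the link above describes) may be the fallback path. A minimal sketch, assuming 16-bit little-endian PCM (the function name is illustrative):

```javascript
// Sketch: wrap raw 16-bit little-endian PCM bytes in a minimal 44-byte
// WAV (RIFF) header, so a browser that can only play "wav" can consume it.
function pcm_to_wav(pcm_bytes, sample_rate, channels) {
	var header = new ArrayBuffer(44);
	var v = new DataView(header);
	function write_str(offset, s) {
		for (var i = 0; i < s.length; i++) {
			v.setUint8(offset + i, s.charCodeAt(i));
		}
	}
	var byte_rate = sample_rate * channels * 2;	// 2 bytes per 16-bit sample
	write_str(0, "RIFF");
	v.setUint32(4, 36 + pcm_bytes.length, true);	// RIFF chunk size
	write_str(8, "WAVE");
	write_str(12, "fmt ");
	v.setUint32(16, 16, true);			// fmt chunk size
	v.setUint16(20, 1, true);			// audio format: PCM
	v.setUint16(22, channels, true);
	v.setUint32(24, sample_rate, true);
	v.setUint32(28, byte_rate, true);
	v.setUint16(32, channels * 2, true);		// block align
	v.setUint16(34, 16, true);			// bits per sample
	write_str(36, "data");
	v.setUint32(40, pcm_bytes.length, true);
	var wav = new Uint8Array(44 + pcm_bytes.length);
	wav.set(new Uint8Array(header), 0);
	wav.set(pcm_bytes, 44);
	return wav;
}
```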
Also, using XPRA_SAVE_TO_FILE=basename to save the mp3 / flac data to file, it looks like the mp3js code (tested using http://audiocogs.org/codecs/mp3/) needs the new id3v2mux muxer element added in r23796.
The only time mp3 decoding does work is when we buffer enough chunks before submitting them to the decoder, and even then it still errors out after consuming them..
And lately, even wav seems to be broken!?
This could well be aurora bug(s):
FWIW: increasing MIN_START_BUFFERS to 100 allows the "legacy: mp3" codec to start playing, but it stops shortly after. None of the fixes linked from comment:8 make any difference.
Beats me.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1775