Some important caveats:
simulate_console_user.py is unpatched and as such failed its test.
I will also post the config and modified test_measure_perf.py files I'm running with, so that others can test this as well.
After posting this, I will copy all the config files to our "dedicated" test box in-office and re-run the test suite with more repetitions.
My config file
modified test_measure_perf
Xorg data
Xorg log file
Xvfb test data
Xvfb log file
Can you generate some graphs so we can visually inspect? Was there anything suspicious?
I'll work on generating some graphs today.
In the meantime, here are the longer tests I ran on our test box.
Sorry about the typo
Xorg test data
Xorg logs
Xvfb logs
I've now posted the html files that test_measure_perf_charts.py outputs. I didn't get any errors, so it looks to be okay, but I'm still working on getting the JS added so I can actually view them.
Please note this is the first time I've run this utility. While running it today, I noticed that my automated tests were outputting files with the wrong naming convention.
Of note:
The naming convention should be prefix_id_rep, but test_measure_perf.py takes the args as rep id...they're reversed.
I also learned that feeding test_measure_perf.py a different integer for rep doesn't actually run the tests that many times. So, I've updated my daily test script to reflect this, so it'll actually run the tests twice now.
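A minimal sketch of what the updated daily script could look like, assuming the reversed "rep id" argument order described above; the "daily" run id and the script path are placeholders, not values from this ticket:

```python
# Hypothetical sketch: build one command line per repetition, since
# passing a larger integer for rep does not repeat the tests by itself.
# The "daily" run id and the script path are placeholders.
import subprocess

def build_commands(repetitions=2, run_id="daily"):
    """Return the command lines the daily script would invoke.
    Note the reversed argument order: rep comes before id."""
    return [["python", "test_measure_perf.py", str(rep), run_id]
            for rep in range(1, repetitions + 1)]

for cmd in build_commands():
    print(" ".join(cmd))
    # subprocess.check_call(cmd)  # uncomment to actually run the tests
```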
Of course, immediately after posting this, I realized the JS files are located in the same folder where test_measure_perf_charts.py originally was. I copied that folder to the folder where I put the .html files, reloaded the file, and immediately got an error:
Error: Invalid dimensions for plot, width = 1220, height = 0
Well...back to editing the file some more.
Okay, I fixed the charts and they seem to work now (removed rgb24 from the list of encodings).
*Should* work now
Fixed it so the headers actually say xvfb in this file
Sorry for the email spam.
Anyways, I've uploaded the now-working chart files. You'll need the "js" folder from src/tests/xpra/. Oddly enough, my machine will only display the charts if I open the files in the folder where I downloaded them. If I copy the .html files and the js folder somewhere else, the charts don't show up and I get the error mentioned in comment:4.
No idea why, web development is not my area of expertise.
I'm not sure those ~1MB logs do much good in this ticket; if not, please delete them.
The argument to test_measure_perf.py is just a reference which can be used to differentiate test runs. It is especially useful if you're changing environment variables, or anything else that is not recorded in the test result data.
Please try to combine xvfb and xdummy in the graphs so we can see them side by side.
Updated charts that have the two data points next to each other
I reworked the charts and now the Xdummy and Xvfb data points are plotted next to each other. It looks like Xvfb is consistently within 5-10% of Xdummy.
I'm going to set up another test run on our test box with two runs of each test to get some more averaged data.
I accidentally left the tests running for a couple more days, so we have about 5 runs worth of data.
I forgot to turn off the cron job that ran these tests for a couple of days, so we now have approximately 5 runs' worth of data that can be averaged over. I re-ran the charting file and got a new set of charts - this time, with 5 data points to average over, the charts give us much better information about the performance.
I'll pass this to you to look over the latest charts file. I guess close it when you're done. If there's any more work done on the Xvfb server, I have everything in place to run these tests again.
Having 5 runs' worth of data is actually very useful. Can you try to see what the variation is for each test? Then we can use this as a reference in the future, ideally using some form of box plot.
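Something like the following could summarize that variation - a pure-Python sketch of the five-number summary a box plot draws, per test, across the 5 runs. The test names and sample figures below are invented for illustration, not taken from the actual result data:

```python
# Sketch: per-test variation across repeated runs, reported as the
# five-number summary a box plot displays. Sample figures are made up.
import statistics

runs = {  # hypothetical metric values, one entry per run
    "xterm":       [510, 495, 523, 488, 502],
    "memscroller": [310, 470, 455, 462, 449],  # one outlier run
}

def five_number_summary(values):
    """Return (min, Q1, median, Q3, max) - what a box plot draws."""
    q1, med, q3 = statistics.quantiles(values, n=4)
    return (min(values), q1, med, q3, max(values))

for test, values in sorted(runs.items()):
    lo, q1, med, q3, hi = five_number_summary(values)
    spread = (hi - lo) / med * 100  # rough variation as % of median
    print(f"{test}: median={med:.0f} IQR=[{q1:.0f}, {q3:.0f}] spread={spread:.0f}%")
```

With real data, the same summaries could be fed straight into a plotting library's box-plot routine.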
I talked to Nick and he asked me to re-write this into a Feature Ticket and to give that to him. So, what would you like to do with this ticket?
I'll take it and try to publish the charts somewhere.
I finally moved the charts in place: http://xpra.org/stats/charts/Xdummy_vs_Xvfb.html.
There are non-negligible differences between the two options. In particular, the "pixels/s sent" is an important metric and is markedly lower with Xvfb. The "min batch delay" is also consistently lower, though the values are already very high in both cases to begin with.
There are definitely some things to investigate; we can't use any of the test data if it doesn't make sense overall:
Both of these issues could manifest themselves as a slow or unresponsive server. Are they artifacts of the test setup or real issues?
The packet accounting data is also missing. It would be better to have it, or else remove it from the charts.
New set of data comparing xorg vs xvfb
Trying to get more data on this one. Here is some test data and the charts to go with it.
There is a good set of tests there with rgb and auto encoding with the python2 client.
The gtkperf data seems to be missing for Xorg - did it crash? There are some outliers in there, e.g. memscroller, which could be explained by test variance; more test data might smooth them out. The only surprise is how Xvfb does better at regions-per-second and encoding-pixels-per-second.
Yup, there are problems with gtkperf; I will try to figure out what is going on with it.
Do we need much longer test runs or more iterations to try to smooth out the data?
Longer test runs are not usually helpful. The problem with more iterations is that when one of the iterations fails and you don't notice, the average is skewed. Maybe we should exclude the 2 problematic tests and deal with those separately?
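One way to keep an unnoticed failed iteration from skewing the average is to drop runs far from the median before averaging. A hedged sketch - the 50% threshold and the sample numbers are arbitrary illustrations, not from the actual test data:

```python
# Sketch: guard the average against a failed iteration by discarding
# any run more than a threshold away from the median before averaging.
# Threshold and sample values are illustrative only.
import statistics

def robust_mean(values, tolerance=0.5):
    """Average only runs within tolerance (fraction of the median),
    so one failed/garbage iteration cannot silently skew the result."""
    med = statistics.median(values)
    kept = [v for v in values if abs(v - med) <= tolerance * med]
    return statistics.mean(kept)

samples = [500, 510, 495, 12, 505]   # 12 looks like a failed run
print(statistics.mean(samples))      # plain mean is skewed: 404.4
print(robust_mean(samples))          # outlier excluded: 502.5
```

This is coarser than properly detecting a failed run, but it makes charts averaged over many iterations far less sensitive to a single bad one.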
Not heard back.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1655