#1655 closed task (needinfo)
Initial Xorg vs Xvfb testing data
| Reported by: | J. Max Mena | Owned by: | Smo |
|---|---|---|---|
| Priority: | critical | Milestone: | 2.2 |
| Component: | tests | Version: | trunk |
| Keywords: | | Cc: | |
Description
Some important caveats:
- Only 2 runs through each test suite
- Machine in question is very low end - quad core Atom + some Nvidia Ion "GPU"
- VirtualGL disabled - said GPU doesn't play nicely with VirtualGL (performance is abysmal for OpenGL apps)
`simulate_console_user.py` is unpatched and as such failed its test.
I will also post the config and modified `test_measure_perf.py` files I'm running with, so that others can test this as well.
After posting this, I will copy all the config files to our "dedicated" test box in-office and re-run the test suite with a larger repetition.
Attachments (15)
Change History (37)
comment:1 Changed 4 years ago by
| Description: | modified (diff) |
|---|---|
Changed 4 years ago by
| Attachment: | config_xvfb.py added |
|---|---|
comment:2 Changed 4 years ago by
| Cc: | smo@… sbennett@… afarr@… removed |
|---|---|
| Owner: | changed from Antoine Martin to J. Max Mena |
Can you generate some graphs so we can visually inspect? Was there anything suspicious?
comment:3 Changed 4 years ago by
I'll work on generating some graphs today.
In the meantime, here are the longer tests I ran on our test box.
comment:4 Changed 4 years ago by
I've now posted the HTML files that `test_measure_perf_charts.py` outputs. I didn't get any errors, so it looks to be okay, but I'm still working out how to get the JS added so I can actually view them.
Please note this is the first time that I've run this utility. In the process of running it today, I noticed that the automated tests were outputting files with the wrong naming convention.
Of note: the naming convention should be `prefix_id_rep`, but `test_measure_perf.py` takes the args as `rep id`; they're reversed.
I also learned that feeding `test_measure_perf.py` a different integer for `rep` doesn't actually run the tests that many times. So I've updated my daily test script to reflect this, and it'll actually run the tests twice now.
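Since the `rep` argument doesn't repeat anything by itself, the repetitions have to be driven externally. A minimal sketch of such a wrapper, assuming the script is invoked from its own directory with the args in the order described above (`rep`, then id) - the file layout and the id string are assumptions:

```python
import subprocess

# Drive the repetitions externally: test_measure_perf.py's "rep" argument
# appears to only tag the output files, it does not loop on its own.
REPS = 2
for rep in range(1, REPS + 1):
    subprocess.run(
        ["python", "test_measure_perf.py", str(rep), "xvfb"],
        check=True,  # abort if a run fails, so a bad run can't slip into the averages
    )
```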
Of course, immediately after posting this, I realized the JS files are located in the same folder where `test_measure_perf_charts.py` originally was. I copied the js folder into the folder where I put the .html files, reloaded the file, and immediately got an error:
`Error: Invalid dimensions for plot, width = 1220, height = 0`
Well...back to editing the file some more.
comment:5 Changed 4 years ago by
Okay, I fixed the charts and they seem to work now (I removed rgb24 from the list of encodings).
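For reference, a minimal sketch of that kind of fix, assuming the charting script keeps the encodings to plot in a plain Python list (the variable name and its contents here are hypothetical):

```python
# Hypothetical list of encodings the charting script iterates over.
# rgb24 produced the zero-height plot, so drop it before charting.
ENCODINGS = ["png", "jpeg", "rgb24", "vp8", "x264"]
ENCODINGS = [enc for enc in ENCODINGS if enc != "rgb24"]
```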
Changed 4 years ago by
| Attachment: | charts_xvfb.html added |
|---|---|
Fixed it so the headers actually say xvfb in this file
comment:6 Changed 4 years ago by
Sorry for the email spam.
Anyway, I've uploaded the now-working chart files. You'll need the "js" folder from src/tests/xpra/. Oddly enough, my machine will only display the charts if I open the files from the folder where I downloaded them; if I copy the .html files and the js folder somewhere else, the charts don't show up and I get the error mentioned in comment:4.
No idea why; web development is not my area of expertise.
comment:7 Changed 4 years ago by
I'm not sure those ~1MB logs do much good in this ticket; if not, please delete them.
The argument to `test_measure_perf.py` is just a reference which can be used to differentiate test runs. It is especially useful if you're changing environment variables, or anything else that is not recorded in the test result data.
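For example, a hedged sketch of tagging a run with an id that records what was varied (the environment variable here is a deliberate placeholder, not a real xpra setting):

```python
import os
import subprocess

# Record the otherwise-invisible condition in the run id, so the result
# files can be told apart later.
env = dict(os.environ, SOME_TUNABLE="1")  # placeholder variable (assumption)
subprocess.run(
    ["python", "test_measure_perf.py", "1", "tunable_on"],
    check=True,
    env=env,
)
```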
Please try to combine xvfb and xdummy in the graphs so we can see them side by side.
Changed 4 years ago by
| Attachment: | charts.html added |
|---|---|
Updated charts that have the two data points next to each other
comment:8 Changed 4 years ago by
I reworked the charts so the Xdummy and Xvfb data points now sit side by side. It looks like Xvfb is consistently within 5-10% of Xdummy.
I'm going to set up another test run on our test box with two runs of each test to get some more averaged data.
Changed 3 years ago by
| Attachment: | charts_5.html added |
|---|---|
I accidentally left the tests running for a couple more days, so we have about 5 runs worth of data.
comment:9 Changed 3 years ago by
| Owner: | changed from J. Max Mena to Antoine Martin |
|---|---|
I forgot to turn off the cron job that ran these tests, so after a couple of days we now have approximately 5 runs' worth of data to average over. I re-ran the charting file and got a new set of charts - this time averaging over 5 data points per test, which gives us much better information about the performance.
I'll pass this to you to give the latest charts file a look over; I guess close it when you're done. If there's any more work done on the Xvfb server, I have everything in place to run these tests again.
comment:10 Changed 3 years ago by
| Owner: | changed from Antoine Martin to J. Max Mena |
|---|---|
Having 5 runs' worth of data is actually very useful: can you try to see what the variation is for each test? Then we can use this as a reference in the future.
Ideally using some form of box plot.
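A minimal sketch of such a box plot with matplotlib, assuming the per-run measurements have already been collected into a dict mapping each test name to one value per run (the loading step is omitted and the numbers are placeholders):

```python
import matplotlib.pyplot as plt

# Placeholder data: one measurement per run, 5 runs per test.
results = {
    "x11perf": [10.2, 11.1, 9.8, 10.5, 10.9],
    "memscroller": [33.0, 35.2, 31.8, 60.1, 34.4],
    "gtkperf": [5.1, 5.0, 5.3, 4.9, 5.2],
}

fig, ax = plt.subplots()
# one box per test shows the spread across runs
# (the "labels" kwarg was renamed "tick_labels" in matplotlib 3.9)
ax.boxplot(list(results.values()), labels=list(results.keys()))
ax.set_ylabel("metric value")
ax.set_title("Per-test variation across 5 runs")
fig.savefig("variation_boxplot.png")
```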
comment:11 Changed 3 years ago by
| Owner: | changed from J. Max Mena to Antoine Martin |
|---|---|
I talked to Nick and he asked me to re-write this into a Feature Ticket and to give that to him. So, what would you like to do with this ticket?
comment:12 Changed 3 years ago by
| Status: | new → assigned |
|---|---|
> So, what would you like to do with this ticket?
I'll take it and try to publish the charts somewhere.
comment:13 Changed 3 years ago by
| Owner: | changed from Antoine Martin to J. Max Mena |
|---|---|
| Status: | assigned → new |
I finally moved the charts into place: http://xpra.org/stats/charts/Xdummy_vs_Xvfb.html
There are non-negligible differences between the two options; in particular, "pixels/s sent" is an important metric and it is markedly lower with Xvfb. The "min batch delay" is also consistently lower, though the values are already very high in both cases to begin with.
There are definitely some things to investigate; we can't use any of the test data if it doesn't make sense overall:
- max damage latency is through the roof for x11perf (over 30s)
- batch delay is off the scale for many of the test apps
Both of these issues could manifest as a slow or unresponsive server. Are they an artifact of the test setup or a real issue?
The packet accounting data is also missing; it would be better to have it, or else remove it from the charts.
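Until the cause is understood, a simple sanity filter could at least keep the implausible samples out of the averages and flag them for investigation. A minimal sketch, assuming each sample is a dict of metric name to value - the metric key, threshold, and values are assumptions, not output from the test harness:

```python
# Flag physically implausible samples instead of silently averaging them in.
MAX_DAMAGE_LATENCY = 5.0  # seconds; anything above this is suspect (assumption)

def sane(sample: dict) -> bool:
    latency = sample.get("max_damage_latency")
    return latency is not None and latency <= MAX_DAMAGE_LATENCY

samples = [
    {"test": "x11perf", "max_damage_latency": 31.4},  # placeholder values
    {"test": "xterm", "max_damage_latency": 0.12},
]
usable = [s for s in samples if sane(s)]
suspect = [s for s in samples if not sane(s)]
print(f"kept {len(usable)} samples, flagged {len(suspect)} for investigation")
```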
comment:14 Changed 3 years ago by
| Priority: | major → critical |
|---|---|
comment:15 Changed 2 years ago by
| Owner: | changed from J. Max Mena to Jonathan Anthony |
|---|---|
comment:16 Changed 21 months ago by
| Owner: | changed from Jonathan Anthony to Smo |
|---|---|
comment:17 Changed 20 months ago by
| Owner: | changed from Smo to Antoine Martin |
|---|---|
Trying to get more data on this one. Here is some test data and the charts to go with it.
There is a good set of tests in there, with rgb and auto encoding, using the Python 2 client.
comment:18 Changed 20 months ago by
| Owner: | changed from Antoine Martin to Smo |
|---|---|
The gtkperf data seems to be missing for Xorg - did it crash?
There are some outliers in there, e.g. memscroller, which could be explained by test variance; more test data might smooth them out.
The only surprise is how Xvfb does better at regions-per-second and encoding-pixels-per-second.
comment:19 Changed 20 months ago by
| Owner: | changed from Smo to Antoine Martin |
|---|---|
Yup, there are problems with gtkperf; I will try to figure out what is going on with it.
Do we need much longer test runs or more iterations to try to smooth out the data?
comment:20 Changed 20 months ago by
| Owner: | changed from Antoine Martin to Smo |
|---|---|
> Do we need much longer test runs or more iterations to try to smooth out the data?
Longer test runs are not usually helpful.
The problem with more iterations is that if one iteration fails and you don't notice, the average is skewed.
Maybe we should exclude the 2 problematic tests and deal with those separately?
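A minimal sketch of both ideas - excluding the problematic tests and aggregating with the median, so one silently failed iteration can't skew the result (the test names and data layout are assumptions):

```python
from statistics import median

EXCLUDE = {"gtkperf", "memscroller"}  # the 2 problematic tests (assumption)

# Placeholder data: test name -> one measurement per iteration.
runs = {
    "x11perf": [10.2, 11.1, 0.0, 10.5, 10.9],  # one iteration silently failed
    "xterm": [0.12, 0.11, 0.13, 0.12, 0.12],
}

aggregated = {
    name: median(values)  # the median resists a single bad iteration
    for name, values in runs.items()
    if name not in EXCLUDE
}
print(aggregated)
```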
comment:22 Changed 3 months ago by
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1655
My config file