Changes between Version 18 and Version 19 of Testing
Timestamp: 04/23/15 18:36:33
Testing
{{{#!div class="box"
== Automated performance and regression testing ==
The [http://xpra.org/trac/browser/xpra/trunk/src/tests/xpra/test_measure_perf.py xpra.test_measure_perf] script can be used to run a variety of client applications within an xpra session (or optionally vnc) and get statistics on how well the encoding performed. The data is printed out at the end in CSV format, which you can then import into any tool to compare results - you can find some examples generated using [http://www.sofastatistics.com/home.php sofastats] [http://xpra.org/stats/ here] and [/ticket/147#comment:11 here]. There is a facility for generating charts directly from the CSV data, using a script that we now provide, described below.

It can also be useful to redirect the test's output to a log file to verify that none of the tests failed with any exceptions or errors (by looking for exception messages in the log afterwards).
[[BR]]
At the moment the script does not have a command line interface, and all the options have to be edited directly. However, we have improved that process by splitting out the configuration data into a separate file: [http://xpra.org/trac/browser/xpra/trunk/src/tests/xpra/perf_config_default.py xpra.perf_config_default.py].

Note: to take advantage of iptables packet accounting (mostly for comparing with VNC, which does not provide this metric), follow the error message and set up iptables rules to match the port being used in the tests, i.e. by default:
{{{
…
}}}
…
 * disable cron jobs or any other scheduled work (systemd makes this a little harder)
 * etc..

To create multiple output files which can be used to generate charts with [http://xpra.org/trac/browser/xpra/trunk/src/tests/xpra/test_measure_perf_charts.py xpra.test_measure_perf_charts]:
 * Build a config class by taking a copy of [http://xpra.org/trac/browser/xpra/trunk/src/tests/xpra/perf_config_default.py xpra.perf_config_default.py], then making changes as necessary (see the sketch below).
 * Determine the values of the following variables: prefix (a string identifying the data set), id (a string identifying the variable that the data set is testing, for example '14' because we are testing xpra v14 in this data set) and repetitions (the number of times you want to run the tests).
 * The data file names you produce will then be in the format: prefix_id_rep.csv.
 * With this information in hand, you can now create a script that will run the tests.
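A minimal sketch of the first step (assuming the commands are run from the directory containing the test scripts): the config class used in the example that follows is simply a renamed copy of the default configuration file, edited as needed.

{{{
# copy the default configuration and edit the options in the copy as required;
# "all_tests_40" is the config name used by the example invocations below
cp perf_config_default.py all_tests_40.py
}}}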
For example:

{{{
./test_measure_perf.py all_tests_40 ./data/all_tests_40_14_1.csv 1 14 > ./data/all_tests_40_14_1.log
./test_measure_perf.py all_tests_40 ./data/all_tests_40_14_2.csv 2 14 > ./data/all_tests_40_14_2.log
}}}

In the above example, test_measure_perf is run twice, using a config class named "all_tests_40.py" and writing the results to data files with the prefix "all_tests_40", for version 14.

The additional arguments "1 14" and "2 14" are custom parameters which will be written to the "Custom Params" column in the corresponding data files.

The "1" and "2" in the file names and in the parameters refer to the corresponding repetition of the tests.

Please also see:
}}}
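The repeated invocations in the example above follow a fixed pattern, so they can be generated with a small wrapper loop. This is only a sketch that reuses the exact arguments and naming convention shown in the example (prefix "all_tests_40", id "14", repetitions 1 and 2); it does not add any options of its own:

{{{
# run each repetition, using the prefix_id_rep.csv naming convention described above;
# the argument order (config, csv output file, repetition, version) is copied from the example
prefix="all_tests_40"
id="14"
for rep in 1 2; do
    ./test_measure_perf.py ${prefix} ./data/${prefix}_${id}_${rep}.csv ${rep} ${id} > ./data/${prefix}_${id}_${rep}.log
done
}}}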