From {{{GlobalPerformanceStatistics}}}:
 * {{{client-latency}}}: the client latency measured during the processing of pixel data (when we get the echo back). We record the lowest latency observed and try to keep the current latency close to that value (see the sketch after this list).
 * {{{client-ping-latency}}}: as above, but measured from the client's ping packets.
 * {{{server-ping-latency}}}: the latency measured when the client pings the server; the client sends this value back as part of its ping echo response packets.
 * {{{damage-data-queue}}}: the number of damage frames waiting to be compressed; we want to keep this low, especially since the data is still uncompressed at this point and can therefore be quite large.
 * {{{damage-packet-queue-size}}}: the number of packets waiting to be sent by the network layer; we want to keep this low, but many small packets can sometimes make it look worse than it really is.
 * {{{damage-packet-queue-pixels}}}: the number of pixels in the packet queue; the target value depends on the current size of the window.
 * {{{mmap-area}}}: how full the shared memory area is (only relevant when mmap is enabled).
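
To make the role of the latency counters more concrete, here is a minimal sketch of how a {{{client-latency}}} style statistic could be tracked. This is illustrative only, not Xpra's actual code: the class and method names are assumptions. Recent samples are kept in a bounded deque, the lowest value ever observed is remembered, and the gap between the recent average and that minimum tells the tuning code how far it is from its target.

{{{
#!python
import time
from collections import deque

class LatencyTracker:
    """Illustrative sketch: track recent latency samples and the
    lowest latency ever observed, which serves as the target value."""

    def __init__(self, max_samples=100):
        self.samples = deque(maxlen=max_samples)    #(timestamp, latency) pairs
        self.min_latency = None                     #lowest latency seen so far

    def record(self, latency):
        self.samples.append((time.monotonic(), latency))
        if self.min_latency is None or latency < self.min_latency:
            self.min_latency = latency

    def average(self):
        if not self.samples:
            return None
        return sum(l for _, l in self.samples) / len(self.samples)

    def overhead(self):
        """How far the recent average is above the best latency seen:
        a large value suggests the batch delay should be increased."""
        avg = self.average()
        if avg is None or self.min_latency is None:
            return 0.0
        return max(0.0, avg - self.min_latency)

#usage: feed in the measured echo round-trips, then inspect the overhead
tracker = LatencyTracker()
for measured in (0.050, 0.048, 0.120, 0.095):
    tracker.record(measured)
print(tracker.min_latency, tracker.overhead())
}}}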
From {{{WindowPerformanceStatistics}}}:
 * {{{damage-processing-latency}}}: how long frames take from the moment we receive the damage event until the packet is queued for sending. This value increases with the number of pixels being encoded. We want to keep this low (see the sketch after this list).
 * {{{damage-processing-ratios}}}: as above, but based on the trend: this measure goes up when we queue more damage requests than we can encode.
 * {{{damage-out-latency}}}: how long frames take from the moment we receive the damage event until the packet containing it has made it out of the network layer (it may still be sitting in the operating system's buffers though). Again, we want to keep this low. This value increases when the network becomes the bottleneck.
 * {{{damage-network-delay}}}: the difference between {{{damage-out-latency}}} and {{{damage-processing-latency}}}; this should be roughly equal to the network latency, but because the values are running averages it is not very reliable.
 * {{{network-send-speed}}}: we keep track of the socket's performance (in bytes per second). The values are generally very high because the OS buffers the data for us, but when things start to back up the send speed drops rapidly.
 * {{{client-decode-speed}}}: how quickly the client is decoding frames.
 * {{{damage-rate}}}: when no damage events are received for a while, the window is not busy and we ought to be able to lower the delay.
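
These per-window statistics are not used in isolation: they are combined into factors that nudge the per-window batch delay up or down. The sketch below is a hypothetical illustration of that idea, not Xpra's actual heuristics; every parameter name and threshold is assumed. Each statistic is turned into a scaling factor relative to a "comfortable" value, the factors are multiplied together, and the resulting delay is clamped to a sane range.

{{{
#!python
def update_batch_delay(current_delay,
                       processing_latency, target_processing_latency,
                       packet_queue_pixels, window_pixels,
                       idle_time, idle_threshold=1.0,
                       min_delay=5, max_delay=1000):
    """Hypothetical sketch of combining per-window statistics into a
    new batch delay (in milliseconds) - not Xpra's real algorithm."""
    factors = []
    #encoding is taking longer than we would like: raise the delay
    factors.append(processing_latency / max(target_processing_latency, 1e-6))
    #pixels still queued for this window: raise the delay proportionally
    factors.append(1.0 + packet_queue_pixels / max(window_pixels, 1))
    #no damage events for a while: the window is idle, lower the delay
    if idle_time > idle_threshold:
        factors.append(0.5)
    new_delay = current_delay
    for factor in factors:
        new_delay *= factor
    return max(min_delay, min(max_delay, new_delay))

#example: encoding takes twice as long as the target, the queue is
#empty and the window has been idle for a couple of seconds
print(update_batch_delay(20,
                         processing_latency=0.04,
                         target_processing_latency=0.02,
                         packet_queue_pixels=0,
                         window_pixels=1920 * 1080,
                         idle_time=2.5))
}}}

A multiplicative combination like this keeps the factors independent: a statistic sitting at twice its comfortable value doubles the delay regardless of what the other measurements are doing.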