efficient network receive buffer management when receiving large chunks
At the moment, we have a read_buffer which is a string, and we append to it each time we get more data. We read from the network 8KB at a time, so for an 8MB picture (uncompressed
RGBA at 1080p) we end up appending to that string buffer about 1000 times, and each append copies everything accumulated so far. Quick maths tell me we generate
(1000*1001)/2 * 8KB = ~4GB of memory copies for an 8MB picture!
Now, with just lz4 compression, the average frame drops to a few percent of the original size. At 5%, a 400KB frame arrives as 50 packets, which means:
50*51/2 * 8KB = ~10MB of copies (still 25 times more than the 400KB we actually need to move!)
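A quick sanity check of the arithmetic above (illustrative Python, not the actual code): appending chunk i copies the i chunks accumulated so far, so the total bytes moved is 8KB * n*(n+1)/2 for n reads.

```python
CHUNK = 8 * 1024  # we read 8KB at a time from the socket

def total_copied(payload_size):
    # number of 8KB reads needed for this payload
    n = payload_size // CHUNK
    # each append re-copies the whole buffer so far:
    # CHUNK * (1 + 2 + ... + n) = CHUNK * n*(n+1)/2
    return CHUNK * n * (n + 1) // 2

# 8MB uncompressed picture: ~4.3GB of memory copies
print(total_copied(8 * 1024 * 1024) / 1e9)
# 400KB lz4-compressed frame: ~10.4MB of memory copies
print(total_copied(400 * 1024) / 1e6)
```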
With h264, the compression is much more efficient, so the average frame drops to around 200KB, but that is still large enough that the memory copies are probably costing us.
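A minimal sketch of one way to avoid the quadratic behaviour: accumulate incoming chunks in a list and join once at the end, so each byte is copied only once rather than on every append. All names here (`recv_chunks`, the assemble functions) are hypothetical, not the actual xpra code:

```python
CHUNK = 8 * 1024  # 8KB network reads, as in the ticket

def recv_chunks(total_size):
    """Simulate the network delivering `total_size` bytes, 8KB at a time."""
    sent = 0
    while sent < total_size:
        n = min(CHUNK, total_size - sent)
        yield b"\0" * n
        sent += n

def assemble_quadratic(total_size):
    # current pattern: each += may re-copy the whole buffer so far,
    # giving O(n^2) bytes moved in the worst case
    read_buffer = b""
    for chunk in recv_chunks(total_size):
        read_buffer += chunk
    return read_buffer

def assemble_linear(total_size):
    # alternative: collect chunks, join once; each byte copied once
    chunks = []
    for chunk in recv_chunks(total_size):
        chunks.append(chunk)
    return b"".join(chunks)

# both produce the same buffer, only the copy volume differs
assert assemble_quadratic(400 * 1024) == assemble_linear(400 * 1024)
```

A preallocated `bytearray` filled via `socket.recv_into` and a `memoryview` would go further still, avoiding even the final join when the payload size is known up front.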