Talk type: Talk
How to improve compression by 20% using machine learning, without wasting 300+ machine years of computation
More than 10 years ago, a well-known lecture company asked the speaker's team to help reduce the size of its lecture recordings by tuning the video codec's parameters. Exhaustively trying every combination of options of even good old x264 on a 20-second fragment would take about 2·10^15 machine-years, roughly 500,000 times the age of the Earth, so non-trivial optimization algorithms were called for.
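The scale of that combinatorial explosion is easy to check with a back-of-the-envelope estimate. The numbers below (parameter count, values per parameter, encode time) are illustrative assumptions, not the talk's exact figures:

```python
# Why exhaustive codec tuning is hopeless: a rough estimate.
# All numbers are illustrative assumptions, not the talk's exact figures.

NUM_PARAMS = 50          # assumed number of tunable codec parameters
VALUES_PER_PARAM = 4     # assumed distinct values tried per parameter
ENCODE_SECONDS = 2.0     # assumed time to encode one 20-second fragment

combinations = VALUES_PER_PARAM ** NUM_PARAMS
machine_years = combinations * ENCODE_SECONDS / (3600 * 24 * 365)
earth_ages = machine_years / 4.5e9   # the Earth is ~4.5 billion years old

print(f"combinations:  {combinations:.2e}")
print(f"machine-years: {machine_years:.2e}")
print(f"earth ages:    {earth_ages:.2e}")
```

Even with these modest assumptions the grid has ~10^30 points, which is why the talk turns to smarter optimization instead of brute force.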
Back then, the solution reduced bitrate by 49% at the same video quality compared to the preset the lecture company had been using. Since then, after several years of research and more than 300 machine-years of computation, the team has used machine learning and optimization methods to build models of different codecs across a wide range of video types. Thanks to this, you can save up to 20% of video bitrate while maintaining quality by changing just one line of codec launch parameters.
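To make "one line of launch parameters" concrete, here is a hypothetical ffmpeg/x264 invocation. The flags are real x264 options, but the values are arbitrary placeholders for this sketch, not the optimized per-content preset the talk describes:

```shell
# Hypothetical example: real x264 options, arbitrary values --
# not the speaker's optimized preset.
ffmpeg -i lecture.mp4 -c:v libx264 -preset slow -crf 23 \
       -x264-params "ref=5:bframes=8:me=umh:subme=9:aq-mode=3:rc-lookahead=60" \
       -c:a copy lecture_tuned.mp4
```

The point of the talk's approach is that a model picks such a string per content type, so deployment amounts to swapping this one line.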
The speaker will talk about how not to get lost in thousand-dimensional parameter spaces, how much optimal parameterization can improve a codec's performance, and why companies that try to implement this solution on their own come back to him.