@cbonte (Contributor) opened this Pull Request on June 7th 2014

This patch provides an optimization that caches parsed DeviceDetector results.

Using the DeviceDetector2 branch, before the patch, on a test case with 100,000 lines extracted from a real website, the results were:

Total time: 308 seconds
Requests imported per second: 323.87 requests per second

After the patch:

Total time: 144 seconds
Requests imported per second: 690.84 requests per second

The optimization really depends on the number of distinct user agents and the bulk size, but in the worst case we can expect nearly the same performance as without the patch.
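The idea behind the patch can be sketched as a memoization cache keyed on the raw User-Agent string, so repeated log lines from the same browser skip the expensive parse. This is an illustrative Python sketch, not the actual PHP patch; `parse_fn`, `DeviceDetectorCache`, and the hit/miss counters are hypothetical stand-ins:

```python
class DeviceDetectorCache:
    """Memoizes detection results per User-Agent string.

    parse_fn is a hypothetical stand-in for the expensive
    DeviceDetector parsing step.
    """

    def __init__(self, parse_fn):
        self._parse_fn = parse_fn
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def detect(self, user_agent):
        # Only parse a given User-Agent once; later lookups are dict hits.
        if user_agent in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[user_agent] = self._parse_fn(user_agent)
        return self._cache[user_agent]


# Usage with a dummy parser standing in for the real detection logic.
cache = DeviceDetectorCache(lambda ua: {"ua": ua, "parsed": True})
for ua in ["Mozilla/5.0 (X11)", "Mozilla/5.0 (X11)", "curl/7.35"]:
    cache.detect(ua)
print(cache.hits, cache.misses)  # → 1 2
```

This also makes the reported worst-case behaviour plausible: when every line carries a distinct User-Agent, every lookup is a miss and the cost is one dict probe on top of the parse that would have happened anyway.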

Note: initially I expected to provide the patch for the master branch, but I preferred to provide one for the DeviceDetector2 branch, using CacheStatic instead of a static array.

@cbonte (Contributor) commented on June 7th 2014

You're right, I converted the code too quickly to the DeviceDetector2 branch.
Reading the CacheStatic code, I wonder if it's really a good idea to use it in this case: it could lead to a security issue, as someone could forge the User-Agent to reuse a key that doesn't hold a DeviceDetector.
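The collision concern can be illustrated in Python: if raw User-Agent strings are used directly as keys in a cache shared with other components, a forged User-Agent can equal a key written by someone else and return a value of the wrong type. A common mitigation (not what the PR does) is to namespace and hash the key; `device_cache_key` is a hypothetical helper:

```python
import hashlib

def device_cache_key(user_agent):
    # Namespace prefix plus a digest of the UA: a forged User-Agent can no
    # longer be made equal to an arbitrary key in the shared cache.
    digest = hashlib.sha256(user_agent.encode("utf-8")).hexdigest()
    return "DeviceDetector-" + digest

# A shared cache already holding an unrelated entry.
shared_cache = {"some-other-key": "not a DeviceDetector"}

# Even a User-Agent forged to match that entry maps to a different key.
forged_ua = "some-other-key"
print(device_cache_key(forged_ua) in shared_cache)  # → False
```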

I'm going to push a commit that reverts the code to the one I used on master.

This Pull Request was closed on June 8th 2014