Segmented Visitor Log high CPU load #13329
Comments
This is definitely supposed to be cached. Are you using any non-core plugins?
Yes:
I tried to disable all of them, but the same behavior remained.
Hi! This is becoming a blocker for us, because we can't use Matomo live to present data, e.g. in meetings... Is there any other information I can provide that might help shed light on this issue?
For reference, I took via strace a snapshot of the process activity. Note I am using PHP 7.2. The rows are sorted by number of occurrences of the syscall (first column). The second column is the PID of the corresponding php-fpm process.
See how PID 22846 opened the same file 19887 times. The other processes were busy performing bulk tracking (see how the same .php files are opened several times).
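A per-PID syscall tally like the one described can be reproduced from a raw `strace -f` log with a short script. A minimal sketch in Python — the log format (PID as the first field, then `syscall(args...) = retval`) is an assumption about how the trace was captured:

```python
from collections import Counter

def count_syscalls(lines):
    """Tally syscall names from `strace -f` output lines.

    Assumes the common format: "PID  syscall(args...) = retval".
    Returns a Counter keyed by (pid, syscall_name).
    """
    counts = Counter()
    for line in lines:
        parts = line.strip().split(None, 1)
        if len(parts) != 2 or not parts[0].isdigit():
            continue  # skip signal lines, "resumed" markers, etc.
        pid, rest = parts
        name = rest.split("(", 1)[0]
        if name.isidentifier():  # drop "<... open resumed>" and friends
            counts[(pid, name)] += 1
    return counts

# Tiny fabricated sample, just to show the shape of the output:
log = [
    '22846 open("/var/www/matomo/plugins/Live/Live.php", O_RDONLY) = 5',
    '22846 open("/var/www/matomo/plugins/Live/Live.php", O_RDONLY) = 5',
    '22847 stat("/var/www/matomo/config/config.ini.php", ...) = 0',
]
print(count_syscalls(log).most_common(1))
```

(`strace -c -p PID` produces a similar summary directly, if re-attaching to the process is an option.)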
Could it be there is some flag I am missing?
BTW I am using:
So Redis should actually contain the cached assets?
Seems like https://github.com/matomo-org/matomo/blob/3.x-dev/core/AssetManager/UIAssetMerger.php#L148 is where this is triggered.
Ha! If by hand I set `private $enableCacheBuster = false;`, it works like a charm!
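The effect of that flag can be pictured as follows: with the cache buster on, the merged asset's name depends on a hash, so any mismatch forces a re-merge on every request; with it off, one fixed file name is reused. A minimal illustrative sketch in Python — the function name, file names, and hash inputs are all hypothetical, not Matomo's actual code:

```python
import hashlib

def merged_asset_name(version, plugin_names, enable_cache_buster=True):
    """Illustrative only: build the on-disk name of a merged asset bundle.

    With the cache buster enabled, the name changes whenever the version
    or plugin set changes, which forces a re-merge; with it disabled, the
    same file name is always reused and the cached merge is served as-is.
    """
    if not enable_cache_buster:
        return "asset_manager_global.js"
    buster = hashlib.md5(
        (version + ",".join(sorted(plugin_names))).encode()
    ).hexdigest()[:8]
    return f"asset_manager_global.{buster}.js"
```

So if the buster value is recomputed differently on each request (or the busted file can never be written), the merge step runs every single time — which would match the observed CPU load.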
And the next thing that happens is that the corresponding strace fills up with thousands of the same calls. This is definitely something that could be cached :-) (Maybe to be addressed in a separate issue.)
Just bumping this issue because it's a real bottleneck for high-traffic websites.
We're not experiencing any such issues here AFAIK. Also, is this used in a load-balanced environment? Looking at the code, I wouldn't be surprised if Redis causes this issue in a load-balanced environment, as some paths that differ per server may be stored in the Redis cache... cc @mattab FYI... we likely need to deprecate the Redis cache adapter unless someone uses a different Redis config per server...
If I had to guess, in case you are on a load-balanced environment, the cached paths could differ per server.
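The hazard being hypothesized here can be shown in a few lines: if every server behind the balancer writes its own absolute path under the same key in a shared cache, each server may read another server's path back. An illustrative sketch with a plain dict standing in for Redis — the key, paths, and function are invented for the example:

```python
shared_cache = {}  # stands in for a Redis instance shared by all servers

def cached_asset_path(server_root):
    """Return the merged-asset path, cached under a key that does NOT
    include the server identity -- the suspected bug pattern."""
    key = "asset_path"  # same key for every server behind the balancer
    if key not in shared_cache:
        shared_cache[key] = server_root + "/tmp/assets/merged.js"
    return shared_cache[key]

# Server A populates the cache; server B then reads A's path back.
path_a = cached_asset_path("/srv/web-a/matomo")
path_b = cached_asset_path("/srv/web-b/matomo")
print(path_b)  # -> "/srv/web-a/matomo/tmp/assets/merged.js" (wrong for B)
```

If server B can never find (or write) the file at that path, it would regenerate the merged assets on every request, which fits the symptoms.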
Could you please disable the Redis cache and try again, @kaplun?
I can try disabling Redis, though I am not on a load-balanced environment.
I confirm the same behavior. A storm of re-reading the same file occurs.
Can you maybe add a debug statement there? It might be printing something a couple of times... I'm kind of thinking maybe it goes through the rendering of the view too many times for some reason... like e.g. when the visitor log is being viewed it does it too often etc.
Also, seeing how often this code https://github.com/matomo-org/matomo/blob/3.6.1-b2/core/View.php#L329-L338 might be executed during the visitor log etc., it might actually be good to cache the result.
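Since the cache-buster value only depends on the Matomo version and the active plugins, caching it within a request is cheap. A minimal memoization sketch in Python — the function name and hash inputs are hypothetical, not Matomo's actual API:

```python
import hashlib
from functools import lru_cache

@lru_cache(maxsize=None)
def version_based_cache_buster(version, plugins):
    """Compute the cache-buster hash once per (version, plugins) pair.

    `plugins` must be hashable (a tuple, not a list) for lru_cache
    to memoize the call.
    """
    return hashlib.md5((version + ",".join(plugins)).encode()).hexdigest()

# Repeated calls during one pageload hit the memo, not md5:
first = version_based_cache_buster("3.6.1", ("Live", "CoreHome"))
again = version_based_cache_buster("3.6.1", ("Live", "CoreHome"))
```

Rendering many views per pageload (as the segmented visitor log does, one per visit row) then pays the hashing cost once instead of once per view.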
refs #13329. Might not fix #13329, but should improve performance when rendering multiple views per pageload; there can be many, e.g. when viewing the visitor log. As the issue refers to the "Segmented Visitor Log", it could actually fix that issue. Maybe for some reason the file cannot be written and it always re-generates it. `piwikVersionBasedCacheBuster` is already cached in the method itself.
Created a PR: https://github.com/matomo-org/matomo/pull/13536/files — @kaplun, can you test this?
Yes, sorry, I haven't had time to try it yet.
Yay. It seems to fix the problem!
I am trying to generate "Segmented Visitor Log"s for mildly accessed URLs, say 300 visits today. The server fails to reply in time. I checked and, while the database seems barely used for the response, the corresponding PHP(-FPM) process tasked with the reply is taking up 100% CPU and is not able to reply in time (I guess with respect to the Nginx timeout I have set up).
I checked with strace to see what the process is busy doing and, to my surprise, I saw that it is opening the same files thousands of times:
This is repeated over and over again. I think there is some room for caching, or for moving this operation out of a loop.
I have also tried enabling the Redis cache, but the behavior is the same.