Show Warning if Deleting Old Data Does Not Work #18839
This issue has been mentioned on the Matomo forums. There might be relevant details there: https://forum.matomo.org/t/high-traffic-sites-and-custom-period-date-ranges-selection/44717/2

@mritzmann Thanks for creating the issue.
Hello @sgiehl, thank you very much for your reply.
The PHP memory limit was set to 4G:

```shell
$ cat php.ini | grep memory_limit
memory_limit = 4G
$ php -r 'echo ini_get("memory_limit");'
4G
```
Ok, that should indeed be enough to clean up the data. I guess we need to investigate how our code to remove old visits currently works. Maybe it queries all data before removing it, or has another memory leak we should close.
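For context, a low-memory cleanup typically deletes rows in small fixed-size batches instead of loading everything first. A minimal sketch of that pattern (using SQLite and a hypothetical `log_visit` table for illustration only, not Matomo's actual code; the batch size here is arbitrary):

```python
import sqlite3

BATCH = 500  # hypothetical batch size for the sketch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_visit (idvisit INTEGER PRIMARY KEY, visit_ts INTEGER)")
conn.executemany("INSERT INTO log_visit VALUES (?, ?)",
                 [(i, 0) for i in range(1, 5001)])

deleted = 0
while True:
    # Fetch only the ids of one batch; never hold the whole table in memory.
    ids = [row[0] for row in conn.execute(
        "SELECT idvisit FROM log_visit WHERE visit_ts < ? LIMIT ?", (100, BATCH))]
    if not ids:
        break
    placeholders = ",".join("?" * len(ids))
    conn.execute("DELETE FROM log_visit WHERE idvisit IN (%s)" % placeholders, ids)
    conn.commit()
    deleted += len(ids)

print(deleted)  # prints 5000
```

Memory usage stays proportional to the batch size rather than the table size, which is why a task built this way should not hit a 4G limit.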
I had a quick look at the code. The delete-log-data task should need very little memory: we delete at most 2000 visits at once and shouldn't hold much in memory. On our own instance we have never seen any memory issues there. It may also trigger the log_actions cleanup, though. @mritzmann, could you maybe add the entry below to your `[Deletelogs]` config section:

```ini
delete_logs_unused_actions_schedule_lowest_interval = 9000000
```
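For anyone following along: Matomo's instance configuration lives in `config/config.ini.php`, so (assuming no `[Deletelogs]` section exists there yet) the suggested entry would be added like this:

```ini
; config/config.ini.php
[Deletelogs]
delete_logs_unused_actions_schedule_lowest_interval = 9000000
```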
I am sorry, but I can no longer reproduce the problem, as I have since deleted several GB of raw data (to solve the problem described above). The cron job is currently running successfully for me, so it probably doesn't make sense for me to test settings like the one suggested above. But I still think that some kind of monitoring check (for example, a notification in the web UI) would be useful. That would be less a bug fix and more a feature request.
I have a Matomo installation which became slower and slower over time. The cause turned out to be that the table `matomo_log_visit` contained several years of raw data and had grown to several GB in size, which noticeably slowed down reads of this table. In the settings, "Regularly delete old raw data" is set to 30 days, and a cron job is set up. It turns out the cron job ran out of memory every time during `Tasks.deleteLogData`.

Matomo displays a notification in the backend when archiving via cron job fails. However, when a scheduled task fails, this is not displayed anywhere. My wish would be to proactively display failed tasks and notify the user that the tasks have not been fully completed. The `ScheduledReports` plugin does not show the error either.

Summary

Matomo should show a warning if scheduled tasks such as `Tasks.deleteLogData` have not completed successfully for a long time.

Your Environment