When aggregating data tables for a week, month, year or range period, don't store all archive content in memory #18295
Comments
I checked and in the case where we have a memory issue,
I was running into memory issues in my Matomo installation, which I also discussed in this forum post. It sounds like solving this issue would also resolve my problems with the archiver. Do you also think this is the case, or do you think there is a separate issue going on there?
It might @gijshendriksen, it's hard to say without knowing all the details. Generally, what might also help is to lower the number of actions in the report (see https://matomo.org/faq/how-to/faq_54/), e.g. by setting something like:

```ini
[General]
datatable_archiving_maximum_rows_actions = 500
datatable_archiving_maximum_rows_subtable_actions = 100
```

Sometimes this alone does not fix it, and if it happens e.g. for a yearly period, the archives for each month in that year would also need to be invalidated, see https://matomo.org/faq/how-to/faq_155/. Have you maybe already lowered the number of rows in the reports?
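For the invalidation step, the Matomo console command looks roughly like this (a hedged example: the dates, site ID, and period below are placeholders, and the available options may differ between Matomo versions):

```sh
./console core:invalidate-report-data --dates=2021-01-01,2021-12-31 --sites=1 --periods=month
```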
@tsteur thanks for your help! I wasn't aware these configuration options would help with the memory usage, but after applying your suggested changes the archiver now seems to be working again. Thanks!
We're seeing this issue quite a few times daily, with various customers.
We again have a customer where archiving stopped working because of this issue.
Seeing this issue happening again for a URL like this:
Not fully sure it is this specific problem, but I assume so, and this logic should reduce memory usage quite a bit.
Issues again today.
This happened again a few times.
We're having issues pretty much daily with this one.
This is a follow-up to #17817, where we only expand one data table at a time.
We've noticed, though, that we're still sometimes seeing memory issues, e.g. for a yearly archive where memory usage exceeds 8GB. This archive has subtables nested 9 levels deep and a lot of rows overall. One thing that could help with memory would be to not load the data of all data tables into memory at once:
Each data table is then later expanded one at a time.
The only problem with loading the needed data tables on demand would be performance, as we'd need to run many more SELECT queries to fetch each "subchildren" data table. Maybe that means we might not want to do this, or maybe we can find a compromise somehow.
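To make the trade-off concrete, here is a minimal Python sketch of the pattern (illustrative only, not Matomo's actual PHP code: `fetch_blob`, the row layout, and `subtable_id` are assumptions for the example):

```python
from typing import Callable, Dict, List


class DataTable:
    """Toy stand-in for an archived report table (not Matomo's real class)."""

    def __init__(self, rows: List[dict]):
        self.rows = rows  # each row may carry a 'subtable_id'


def expand_eagerly(blobs: Dict[int, List[dict]], table_id: int) -> DataTable:
    """Current-style approach: the whole archive content (`blobs`) is already
    in memory, and every subtable is materialized up front. With subtables
    nested 9 levels deep, this is what pushes the archiver past 8GB."""
    table = DataTable(blobs[table_id])
    for row in table.rows:
        sub_id = row.get('subtable_id')
        if sub_id is not None:
            row['subtable'] = expand_eagerly(blobs, sub_id)
    return table


def expand_lazily(fetch_blob: Callable[[int], List[dict]], table_id: int,
                  visit: Callable[[DataTable], None]) -> None:
    """Proposed approach: fetch one data table at a time, process it, and let
    it go out of scope before descending into the next subtable. `fetch_blob`
    is a hypothetical accessor issuing one SELECT per table, which is exactly
    the performance cost discussed above."""
    table = DataTable(fetch_blob(table_id))
    visit(table)
    for row in table.rows:
        sub_id = row.get('subtable_id')
        if sub_id is not None:
            # Recurse: only the tables on the current path are held in memory.
            expand_lazily(fetch_blob, sub_id, visit)
```

The compromise mentioned above could live in `fetch_blob`, e.g. batching the SELECTs for all subtables at one depth level, or keeping a small bounded cache of recently fetched blobs.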
refs L3-126