@tsteur opened this Issue on October 1st 2021 Member

We noticed a while ago that multiple requests may launch archiving for the same period.

To reproduce this:

  • Disable browser archiving & disable browser archiving for segments
    • eg set [General] enable_browser_archiving_triggering = 0, enable_general_settings_admin = 0, browser_archiving_disabled_enforce = 1 (see the config sketch right after this list)
  • Have a site with, say, 30 active goals
  • Have data for at least a few days
  • Trigger archiving using ./console core:archive
  • Then go to the Goals Overview page and request a date range, eg 2021-09-02,2021-09-29
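
For reference, a sketch of how those settings would look in config/config.ini.php (assuming they all live in the [General] section, as the first setting above suggests):

[General]
enable_browser_archiving_triggering = 0
enable_general_settings_admin = 0
browser_archiving_disabled_enforce = 1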

You might then notice that all those goals trigger the archiving. Eg you could execute queries like

select count(*) from archive_numeric_2021_09 where ts_archived > DATE_SUB(NOW(), INTERVAL 5 MINUTE) and ts_archived < NOW() and name like 'done%'; -- number of unique archives added in the last few minutes, which will be pretty much roughly the number of goals
select count(*) from archive_numeric_2021_09 where ts_archived > DATE_SUB(NOW(), INTERVAL 5 MINUTE) and ts_archived < NOW(); -- how many archive entries were added overall

-- same for blob
select count(*) from archive_blob_2021_09 where ts_archived > DATE_SUB(NOW(), INTERVAL 5 MINUTE) and ts_archived < NOW() ;

What I mean is that each of these requests, like the ones in the screenshot, seems to launch the archiving (eg index.php?forceView=1&viewDataTable=sparklines&module=Goals&action=get&idGoal=7&allow_multiple=0&only_summary=1&idSite=12&period=week&date=today&segment=&showtitle=1&random=1482):

[screenshot: the individual sparkline requests]

As I'm seeing this more often recently, I think this was a regression somewhere between 4.3 and 4.4. I quickly looked through the archiving-related changes in https://github.com/matomo-org/matomo/compare/4.3.0...4.4.1 and couldn't find anything so far that would explain this. This one piqued my interest but I'm pretty sure it cannot be what caused it: https://github.com/matomo-org/matomo/pull/17657/files

Possibly the regression was introduced in 4.2.0 by https://github.com/matomo-org/matomo/pull/17379/files

I wonder if that maybe previously prevented each request from launching the archiving at the same time? Once we can reproduce the issue, we could check whether it works with that PR and no longer launches multiple archives.

Let me know if there are any problems reproducing this.

refs https://github.com/matomo-org/matomo/issues/17774

The goal is that the archiving is triggered only once for all these requests, meaning the query below should return 1:

select count(*) from archive_numeric_2021_09 where ts_archived > DATE_SUB(NOW(), INTERVAL 5 MINUTE) and ts_archived < NOW() and name like 'done%'; -- number of unique archives added in the last few minutes; with the fix this should be 1 instead of roughly the number of goals
@mattab commented on October 1st 2021 Member

fyi this might be related/similar to the old report at https://github.com/matomo-org/matomo/issues/8444

@peterhashair commented on October 19th 2021 Contributor

@tsteur Just trying to understand this. So every single sparkline is loading from archive_numeric_20xx_xx as if it were a cache table, and when enable_browser_archiving_triggering is disabled, the browser request won't insert into the archive_numeric_20xx_xx table; only ./console core:archive does the inserts?

@tsteur commented on October 20th 2021 Member

enable_browser_archiving_triggering is disabled, but for range dates we still initiate browser archiving by default and aggregate all the different periods into the range. Meaning it will insert data into the archive tables when the report is viewed in the browser. Does this help?
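
One way to see this happening (just a sketch; it assumes the archive tables' period column uses 5 for range periods) is to check for freshly written range archives right after loading the page:

select name, date1, date2, ts_archived from archive_numeric_2021_09 where period = 5 and ts_archived > DATE_SUB(NOW(), INTERVAL 5 MINUTE) order by ts_archived desc; -- range archives written in the last few minutes, ie by the browser request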

@peterhashair commented on October 20th 2021 Contributor

@tsteur Kind of. So the action=get requests for the same period should only launch the archiving once, instead of once per request.

@tsteur commented on October 20th 2021 Member

yes 👍

@tsteur commented on November 18th 2021 Member

This is actually not fixed yet. I added 2 new comments in DEV-2306 with details. It either doesn't help, or it helped but there are still issues.

We can try another fix in Matomo 4.7 @peterhashair

Maybe one potential problem with this is that all requests eventually try to get that lock. Instead, if the lock is held by someone else, they should maybe just wait until the lock is released and then not try to get the lock again. I don't think this works using the get_lock method though, and we would need to use our other lock. This might not be the problem though. It's just a random thought.
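
To illustrate the get_lock concern (a rough sketch with a made-up lock name, not the exact key Matomo uses): with MySQL's GET_LOCK, every waiting connection is handed the lock in turn once the holder releases it, so each request would still go on to archive.

-- connection A (first sparkline request): acquires the lock and starts archiving
select GET_LOCK('archive_range_2021-09-02_2021-09-29', 60); -- returns 1 immediately

-- connections B, C, ... (the other sparkline requests) block on the same call
select GET_LOCK('archive_range_2021-09-02_2021-09-29', 60);
-- once A calls RELEASE_LOCK, one of the waiters returns 1 and also archives, then the next one,
-- and so on. The lock is handed from request to request instead of everyone simply continuing
-- once the first archive is done.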

@peterhashair commented on November 18th 2021 Contributor

@tsteur Ah, on my local setup it doesn't create duplicates. To reproduce this, could we maybe dump the database from the testing server?

@tsteur commented on November 18th 2021 Member

@peterhashair there are pretty much no duplicates here either except for the goals flag. I'm not sure that's the problem though. I was going to export the data but it has been deleted again already by some task. I can export it again if that's helpful but not sure it would help too much. Because we have disabled range again for this Matomo I can't easily regenerate it right now but could do later if useful.

@peterhashair commented on November 19th 2021 Contributor

@tsteur Maybe it's not related to the lock or multiple requests. We could cache the select result in the lazy cache instead of doing a MySQL select, though I'm not sure that helps. But I can see the high CPU on the cluster is caused by the writer; does that mean the inserts cause the problem, not the selects?

@peterhashair commented on November 23rd 2021 Contributor
@tsteur commented on November 23rd 2021 Member

Good find, I can see how that one might be a bit slow. It takes 2s here for me when requesting a range of 9 months. It generally does only one simple DB query though, so it wouldn't cause the big DB load. We could create another issue to improve the performance of this script eventually.

@peterhashair commented on November 23rd 2021 Contributor

@tsteur On my local setup it is weird: if you run it just by itself, it takes 1-2s, but if you run it on the Goals Overview page, it takes 42s on my machine.

[screenshot showing the 42s request timing]

@tsteur commented on November 24th 2021 Member

We might want to try our other lock see https://github.com/matomo-org/matomo/compare/lockalt?expand=1

The benefit here is that only one process will try to acquire the lock, not every request. And as soon as that process has released the lock, the other processes won't fight to get the lock but will just know the archive is finished and continue.
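
As a sketch of that idea (the done-flag value and period id below are assumptions for illustration, not the exact code): a process that waited for the lock could first check whether a usable archive was written in the meantime, and only start archiving itself if nothing is found:

select idarchive from archive_numeric_2021_09 where name like 'done%' and value = 1 and period = 5 and idsite = 12 and date1 = '2021-09-02' and date2 = '2021-09-29'; -- assuming value = 1 means the archive completed ok; if a row exists, reuse it and skip archiving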

@peterhashair let's also try to move all these goal-specific visualisations into one request:
[screenshot: the goal-specific visualisations on the Goals Overview page]

Happy to show how this could be done. Generally the lock we have is already helping, but we want to avoid extreme concurrency when people have heaps of goals. We can try to put this into one view and then see if it loads faster etc.

@peterhashair commented on November 25th 2021 Contributor

@tsteur Our lock looks great, but with a larger period, eg a range of more than 6 months, it throws a 504 timeout, see the screenshot. CPU usage on the MySQL server is about the same at 400%+. I have a feeling the select queries cause the high CPU usage, but I'm not too sure; I'm trying to merge some of the select queries to see if that helps.
[screenshot: 504 gateway timeout]

@peterhashair commented on December 9th 2021 Contributor

@tsteur Merged the change combining the sparklines into 1 request; let me know if it's still causing CPU load problems.

@tsteur commented on December 10th 2021 Member

great, thanks 👍

@sgiehl commented on December 10th 2021 Member

Reopening, as the PR actually broke the Goals Overview and was therefore reverted.

This Issue was closed on December 19th 2021