@RoyBellingan opened this Issue on August 1st 2019

Tonight one of our servers started to lag because Matomo ran an
OPTIMIZE TABLE on the archive numeric and archive blob tables
for all of the tables in a single query, which took almost 90 minutes on the main DB (and would have taken a similar amount of time on the slave).

A quick solution is to iterate over the tables and perform one query per table. It will still hang on the bigger ones, but the delay is much more reasonable.
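A minimal sketch of the per-table approach, in Python for illustration. The function name, the `execute` callback, and the table names are hypothetical placeholders, not Matomo's actual API; the point is simply that each OPTIMIZE TABLE becomes its own short statement (and thus its own short replication event):

```python
def optimize_tables_individually(tables, execute):
    """Run OPTIMIZE TABLE once per table, instead of a single statement
    covering every table, so each query (and the replication event it
    produces) stays as short as possible."""
    for table in tables:
        execute("OPTIMIZE TABLE `%s`" % table)
```

For example, calling it with a list's `append` method as the executor collects the individual statements that would be sent to the server.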

Another solution is to also set a different replication domain, e.g.
SET SESSION gtid_domain_id=1
so those maintenance jobs run in a separate replication stream and real-time ingestion keeps running.

Code-wise it is probably just a few lines.

This should be an opt-in option, enabled only if needed.
On a big installation (the kind that suffers from this problem) the DBA/admin will probably know what to do and what we are talking about (we just have to leave a few links to the documentation).

If you are interested in this second proposal I can try to write some example code.
But again, code-wise it is just a matter of:

SET SESSION gtid_domain_id=XYZ;
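A hedged sketch of what that could look like, again in Python with a placeholder `execute` callback. This assumes a MariaDB setup where the slave applies different GTID domains in parallel (i.e. slave_parallel_threads > 0); the domain id value of 1 is an assumption, as the default domain is 0:

```python
# Assumed domain id for maintenance work; normal traffic stays on domain 0.
MAINTENANCE_DOMAIN_ID = 1

def run_maintenance(statements, execute):
    """Run maintenance statements under a separate MariaDB GTID replication
    domain, so the slave can apply them alongside real-time traffic instead
    of stalling behind a long-running OPTIMIZE TABLE."""
    execute("SET SESSION gtid_domain_id=%d" % MAINTENANCE_DOMAIN_ID)
    try:
        for sql in statements:
            execute(sql)
    finally:
        # Restore the default domain for any later queries on this connection.
        execute("SET SESSION gtid_domain_id=0")
```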

@tsteur commented on August 1st 2019 Member

@RoyBellingan just btw, you could also disable these queries by setting [General] enable_sql_optimize_queries = 0 in config/config.ini.php. Did you verify it improves things when only executing one table at a time?
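For reference, the setting mentioned above goes in the [General] section of config/config.ini.php:

```ini
[General]
; Disable the OPTIMIZE TABLE queries Matomo runs during maintenance
enable_sql_optimize_queries = 0
```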

@RoyBellingan commented on August 1st 2019

Thanks @tsteur, I will have a look at this option.

Verified, no, but as a general rule replication hangs until the query currently being replicated has finished.
So yes, splitting into smaller queries will always still lead to some delay, but a smaller one.
What I can try is whether optimistic parallel replication helps in such a case without making any change to the code...

@tsteur commented on August 1st 2019 Member

@RoyBellingan I think a fix would be quite easy: simply iterate over an array here https://github.com/matomo-org/matomo/blob/3.12.0-b1/core/Db.php#L459 and execute each OPTIMIZE TABLE individually. This should be all that is needed here (if there's more to do we might change prioritisation).

@RoyBellingan commented on September 21st 2021

Thank you (sorry, I totally forgot about this).

This Issue was closed on September 21st 2021