
Tracker: Serialization failure: 1213 Deadlock found when trying to get lock #6398

Closed
integrationtester opened this issue Oct 7, 2014 · 61 comments
Labels
Bug For errors / faults / flaws / inconsistencies etc. worksforme The issue cannot be reproduced and things work as intended.

Comments

@integrationtester

So I use the tracking API.

My code separates actions into groups of 500-800 (depending on session length).

Each group is put into a queue, and threads send these groups to Piwik to be tracked.

Currently 15 threads send these groups simultaneously.

Why I do this:
In my particular setup, I did some testing and around 500 was the sweet spot for Piwik's ability to handle bulk tracking (fewer didn't decrease the time much, and more started increasing it significantly).

I also noticed that the time it takes to track these hits doesn't seem to increase much if multiple bulk requests arrive at the same time.
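
For illustration, a minimal sketch of this pattern, not my actual importer: the endpoint URL and names are placeholders, and it assumes Piwik's documented bulk-tracking JSON envelope ({"requests": [...]}) posted to piwik.php.

import json
import queue
import threading
import urllib.request

PIWIK_URL = "https://piwik.example.org/piwik.php"  # placeholder endpoint

groups = queue.Queue()  # each item: a list of ~500-800 query strings like "?idsite=1&rec=1&url=..."

def recorder():
    while True:
        group = groups.get()
        if group is None:  # sentinel: no more groups to send
            break
        # One bulk request per group, using the documented JSON envelope.
        body = json.dumps({"requests": group}).encode("utf-8")
        req = urllib.request.Request(
            PIWIK_URL, data=body, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req).read()

threads = [threading.Thread(target=recorder) for _ in range(15)]
for t in threads:
    t.start()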

The problem:
I recently upgraded from 2.2.2 to 2.7.0, and now when I run the same importer I start getting HTTP 500 errors from the tracker requests.

Looking at the Apache error logs, there seem to be deadlock issues:

[Tue Oct 07 13:00:01 2014] [error] [client 130.14.24.60] Error in Piwik (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction \t\t\t\t\t\t\t\tIn query: UPDATE piwikdev_log_visit SET idvisitor = ?, visit_total_time = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?, visit_total_actions = visit_total_actions + 1 , custom_var_k1 = ?, custom_var_v1 = ? WHERE idsite = ? AND idvisit = ? \t\t\t\t\t\t\t\tParameters: array
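
Since the error text itself says "try restarting transaction", the generic client-side mitigation I could apply is to retry a failed group a few times with a short backoff. A sketch (send_group is a hypothetical callable that raises on an HTTP 500 response):

import time

def send_with_retry(send_group, group, attempts=3):
    for attempt in range(attempts):
        try:
            return send_group(group)  # raises on HTTP 500 / deadlock
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(0.5 * (attempt + 1))  # brief backoff before retrying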

@quba
Contributor

quba commented Oct 7, 2014

I was able to reproduce this on my local machine while trying to import a big log file (I guess it was Piwik 2.5).

@integrationtester
Author

Thanks for reproducing.

@mattab mattab added Bug For errors / faults / flaws / inconsistencies etc. Major Indicates the severity or impact or benefit of an issue is much higher than normal but not critical. labels Oct 12, 2014
@mattab mattab added this to the Short term milestone Oct 12, 2014
@mattab
Member

mattab commented Oct 12, 2014

Thanks for the report, we should investigate this for sure.

@mattab mattab changed the title deadlock issues in newer version(s)? of piwik Tracker: Serialization failure: 1213 Deadlock found when trying to get lock Oct 20, 2014
@imoullet

imoullet commented Nov 7, 2014

Hello,
Same problem for me.

Thanks for the future fix.

@imoullet

Any news on this? Every night my import_logs script fails because of this error.
Please let me know urgently if there is a workaround. I have been losing all my statistics for more than a week now!

Thanks

@mattab
Member

mattab commented Nov 13, 2014

Do you get any other errors in your web server log or MySQL error log? I don't know what could cause this issue so far.

@imoullet

Nothing else than what has been mentioned.

In the Piwik output I have:

Fatal error: Internal Server Error
You can restart the import of "/var/tmp/stats/app/access_full" from the point it failed by specifying --skip=21056 on the command line.

And in the Apache log:
[Thu Nov 13 04:08:18 2014] [error] [client 130.223.27.18] Error in Piwik (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction In query: UPDATE piwik_log_visit SET idvisitor = ?, visit_total_time = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?, visit_total_actions = visit_total_actions + 1 WHERE idsite = ? AND idvisit = ? Parameters: array ( 0 => 'o\xf9\xfe\x18\xcc\x1e\x91\xc1', 1 => 187, 2 => '2014-11-12 15:57:00', 3 => 331274, 4 => 512, 5 => 18746028, )

Hope that helps.

@zero-universe

I can also reproduce the error in 2.9.1.

I tried to import a log file of about 60 MB.

This is the output of mysql.err (MariaDB 5.5.34):

141120 15:22:20
*** (1) TRANSACTION:
TRANSACTION 97983A, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 6 lock struct(s), heap size 1248, 4 row lock(s), undo log entries 26
MySQL thread id 197643, OS thread handle 0x7f958fb85700, query id 37275520 piwik-db-server.tld piwik-server.tld piwik_basic Updating
UPDATE piwik_log_visit SET idvisitor = '^SscD', visit_total_time = '1265', visit_last_action_time = '2014-10-08 17:57:21', visit_exit_idaction_url = '1607782', visit_total_actions = visit_total_actions + 1 WHERE idsite = '7' AND idvisit = '1300111'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 1172 page no 2854 n bits 136 index PRIMARY of table piwik_basic.piwik_log_visit trx id 97983A lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 979835, ACTIVE 1 sec starting index read, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
11 lock struct(s), heap size 1248, 10 row lock(s), undo log entries 175
MySQL thread id 197639, OS thread handle 0x7fabd81b4700, query id 37276630 piwik-db-server.tld piwik-server.tld piwik_basic Updating
UPDATE piwik_log_visit SET idvisitor = '^So', visit_total_time = '1321', visit_last_action_time = '2014-10-08 18:02:39', visit_exit_idaction_url = '1607836', visit_total_actions = visit_total_actions + 1 WHERE idsite = '7' AND idvisit = '1300276'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 1172 page no 2854 n bits 136 index PRIMARY of table piwik_basic.piwik_log_visit trx id 979835 lock_mode X locks rec but not gap
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 1172 page no 2855 n bits 144 index PRIMARY of table piwik_basic.piwik_log_visit trx id 979835 lock_mode X locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)
InnoDB: transactions deadlock detected, dumping detailed information.

@pietsch

pietsch commented Nov 25, 2014

I get the same error messages with Piwik 2.9.1 and Oracle's mysql-community-server-5.5.33 on openSUSE 12.2 (x86_64), and I seem to observe gaps in the imported data.

@pietsch

pietsch commented Nov 25, 2014

The error messages no longer appear when I change the (non-default) importer option --recorders=4 to --recorders=1 (or omit it). Of course this workaround slows down the import on a multi-core machine.
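
For reference, a typical invocation with this workaround looks like the following (the install path, Piwik URL, and log path are placeholders; misc/log-analytics/ is where the importer ships in a default install):

python /path/to/piwik/misc/log-analytics/import_logs.py \
    --url=https://piwik.example.org \
    --recorders=1 \
    /var/log/apache2/access.log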

@zero-universe

My solution was to "downgrade" (reinstall) to Piwik 2.6.1. There, --recorders=24 is not a problem,
and the deadlocks are gone!

@bratster

I have used a number of versions of Piwik previously and am seeing this problem with 2.9.0.

Previously I have collected about 12 months' worth of data and am running a Bash script to process through my archives, consolidate data into reasonable "chunks", and then upload with the Python script...

Once I see the error on my main database, it then seems to repeat a lot. It is as if I have blocked the pipe and everything backs up. I have tried a number of things, just to experiment, to see if I can find a root cause:

  • I can take the same set of data that fails to upload with this error and upload it to a fresh, clean, empty database with no problem whatsoever
  • I can retry loading to the main database, and over time it will upload, slowly and eventually
  • I can take other data to another machine on the network and upload from another instance of Piwik pointing to the same database, at high speed

I conjecture that (for whatever reason) this is the database in a bad state, and all the MySQL information about these kinds of things suggests that it's all about how the data is being pushed in, causing MySQL to "protect" its data (my words).

Furthermore, pietsch's "fix" suggests that the database end of the connection has reached a state where it can accept data but (humanised word) "slows down" the rate of input...

So a question for Piwik: does the database just accept data when uploading with Python, or do the PHP code and the database together manipulate it in the upload session?

My current approach is to slow my rebuild right down and also to stop the continuing data collection from my live source... in other words, I have one stream of data going into the database. My worry is that once a break occurs, that will be that, and I will be back to uploading 20 records every 10 minutes or so.

zero-universe's suggestion is the most wise at the moment....

What sort of information should I collect to benefit this forum, assuming I see the error once again?

Kev

@zero-universe

Any updates on this topic?

@mattab
Member

mattab commented Dec 3, 2014

@zero-universe nope

@rufinus

rufinus commented Dec 18, 2014

Same here (v2.9.1). Reducing recorders from 12 to 2 helped, but it's only a temporary fix. I import hourly, and with only 2 threads it takes too much time (115 req/sec instead of 280-350 req/sec).

@rojaro

rojaro commented Dec 29, 2014

Had to reduce recorders to 1 ... 👎

@scukonick

The same affects me (2.9.1)

@mpiscaer

The same occurs on version 2.10.0.

@drhirn
Copy link

drhirn commented Jan 16, 2015

Same here.
Piwik 2.10.0
IIS8.5
PHP 5.4.24
MySQL 5.6.22

Reducing recorders to 1 helps

Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
In query: UPDATE piwik_log_visit SET idvisitor = ?, visit_total_actions = visit_total_actions + 1 , visit_exit_idaction_url = ?, visit_total_time = ?, visit_last_action_time = ? WHERE idsite = ? AND idvisit = ?

@mddvul22

mddvul22 commented Feb 5, 2015

We are also having this problem in this environment:

CentOS 7
Piwik 2.10.0
Apache 2.4.6
PHP 5.4.16
MariaDB 5.5.40

And like others here, reducing the number of recorders to 1 appears to have eliminated the problem.

@Drachenkaetzchen

+1 here.

Also had to reduce recorders to 1.

Debian Wheezy
Piwik 2.10.0
Apache 2.2.22-13+deb7u4
PHP 5.4.36-0+deb7u3
MySQL 5.5.41-0+wheezy1

@medic123de
Contributor

Hi, I wrote the transaction patch in 2.4.0. I will try to find a solution; I already have an idea that different data handling within Piwik might be the problem.

For as long as the analysis takes, please try to set "bulk_requests_use_transaction = 0" in config/global.ini.
That should improve the situation at the cost of some speed, but not as much as "recorders = 1".
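
For anyone applying this: the setting lives in the [Tracker] section, and the usual practice (an assumption about your setup; adjust to your install) is to put the override in config/config.ini.php rather than editing the global file directly:

[Tracker]
bulk_requests_use_transaction = 0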

@OuestNeige

You are a genius!
I could import without generating the error.
Thank you.

@esapozhnikov-wish

Any update on this issue?

I have also noticed that if you use MySQLi instead of PDO, the deadlock errors go away but the number of records/sec drops significantly.
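
If you want to test that observation, the adapter is selected in the [database] section of config/config.ini.php. A sketch switching from the default PDO\MYSQL to MYSQLI (this assumes the mysqli PHP extension is installed):

[database]
adapter = MYSQLI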

@mattab
Member

mattab commented Feb 18, 2015

I'm moving this to 2.12.0 and hopefully we can fix it there. Maybe @medic123de will have a solution for it :-)

@mattab mattab added this to the Piwik 2.12.0 milestone Feb 18, 2015
mattab pushed a commit that referenced this issue Apr 23, 2018
…nd multiple recorders. (#12733)

* visitorGeolocator: output actual changes in debug mode (#12478)

closes #12477

* Revert "visitorGeolocator: output actual changes in debug mode (#12478)" (#12480)

This reverts commit 19a7654.

* Fix SystemIntegration test (#12726)

Found `piwik`

* This addresses the various dead lock issues when using transactions and bulk load.  Issue #6398

* Fix archive test to work with multi recorder fix.

* Minor changes and renames
@mattab mattab modified the milestones: Backlog (Help wanted), 3.5.0 Apr 24, 2018
@mattab
Member

mattab commented Apr 24, 2018

Hi everyone

We have merged a Pull request which is supposed to address this issue.
---> Please upgrade to 3.5.0-b2 which includes the fix from #12733

It may not be completely fixed, but we'll need your feedback to confirm this. In the meantime, we'll mark it as closed. To use the beta release, please see: http://piwik.org/faq/how-to-update/faq_159/

@mattab mattab closed this as completed Apr 24, 2018
@kaplun

kaplun commented May 11, 2018

Hi FYI, I am just seeing this with Matomo 3.5.0 and using import_logs.py with --recorders=2:

2018/05/11 08:38:21 [error] 1287#0: *10688 FastCGI sent in stderr: "PHP message: Error in Matomo (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction In query: UPDATE matomo_log_visit SET idvisitor = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?,  visit_total_actions = visit_total_actions + 1 ,  visit_total_interactions = visit_total_interactions + 1 , visit_total_time = ?, custom_var_k1 = ?, custom_var_v1 = ? WHERE idsite = ? AND idvisit = ? Parameters: array (   0 => '<95>{<A7><B2>^P<D3><FF><B3>',   1 => '2018-05-11 08:38:18',   2 => 2184827,   3 => 248,   4 => 'App version',   5 => 'Web_3.13.3',   6 => 1,   7 => '1588134', )" while reading response header from upstream, client: 185.70.40.94, server: matomo.example.org, request: "POST //piwik.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "matomo.example.org"

@mattab mattab reopened this May 11, 2018
@kachenjr
Contributor

kachenjr commented May 11, 2018

Maybe I missed a bit.
While you're running your import, and before you run archive, can you run this query and send the result?

select *
from _yourdbnamehere_.piwik_option
where option_name like 'report_to_invalidate_%';

@kaplun

kaplun commented May 12, 2018

Hi! If you were asking me:

MariaDB [(none)]> select * from matomo.matomo_option where option_name like 'report_to_invalidate_%';
+-----------------------------------------+--------------+----------+
| option_name                             | option_value | autoload |
+-----------------------------------------+--------------+----------+
| report_to_invalidate_1_2018-05-11_20584 | 1            |        0 |
| report_to_invalidate_1_2018-05-11_22000 | 1            |        0 |
| report_to_invalidate_1_2018-05-13_20591 | 1            |        0 |
+-----------------------------------------+--------------+----------+
3 rows in set (0.00 sec)

@kachenjr
Contributor

Yes. I have to admit I'm not using the Python script for this, so maybe our custom loader has helped with this issue as well. But I had to make those changes to get it to work even with our loader. I'll look again soon and try to reproduce with import_logs.py; I don't have the free time to look at this right now, though. I think maybe the report invalidation transaction should be moved out of this path. Maybe make another API call out of it that just sets the value and commits, and get it out of the mix altogether.

kaplun added a commit to kaplun/matomo that referenced this issue May 24, 2018
kaplun added a commit to kaplun/matomo that referenced this issue May 24, 2018
kaplun added a commit to kaplun/matomo that referenced this issue May 24, 2018
kaplun added a commit to kaplun/matomo that referenced this issue May 24, 2018
kaplun added a commit to kaplun/matomo that referenced this issue May 24, 2018
InfinityVoid pushed a commit to InfinityVoid/matomo that referenced this issue Oct 11, 2018
…nd multiple recorders. (matomo-org#12733)

* visitorGeolocator: output actual changes in debug mode (matomo-org#12478)

closes matomo-org#12477

* Revert "visitorGeolocator: output actual changes in debug mode (matomo-org#12478)" (matomo-org#12480)

This reverts commit 19a7654.

* Fix SystemIntegration test (matomo-org#12726)

Found `piwik`

* This addresses the various dead lock issues when using transactions and bulk load.  Issue matomo-org#6398

* Fix archive test to work with multi recorder fix.

* Minor changes and renames
@kkempster94

Does anyone have any suggestions to fix this issue in the meantime? I see that there's an open pull request, but it looks like it may not fix the issue. "bulk_requests_use_transaction=0" seems to do very little, if anything.

If it helps, I'm encountering the exact same deadlock issue using bulk tracking rather than the importer, so it doesn't seem to be specific to the log importer.

@mattab
Member

mattab commented Jul 3, 2019

Just created an issue, see #14619: Tracker mode: configure MySQL transaction isolation level to READ UNCOMMITTED to avoid gap locks.
We'll work on that issue soon. (Not sure if it's related, but it could help?)
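
For context, that issue is about having the tracker run the standard MySQL statement below before its transactions; this is a sketch of the statement itself, not of how it would be wired into Matomo:

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;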

@mattab mattab removed the Major Indicates the severity or impact or benefit of an issue is much higher than normal but not critical. label Oct 21, 2019
@tsteur
Member

tsteur commented Jan 13, 2022

Hey everyone, if you still experience this issue please comment.

@SlavikCA

SlavikCA commented Feb 3, 2022

Reproduced in 4.7.0 running in a Docker container:

Error in Matomo (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction In query: UPDATE matomo_option SET option_value = ?, autoload = ? WHERE option_name = ? Parameters: array ( 0 => '.......', 1 => 0, 2 => 'SocialDefinitions', )

The error happened while I was running import_logs.py.

@mattab
Member

mattab commented Dec 10, 2023

Thanks for contributing to this issue. As it has been a few months since the last activity and we believe this is likely not an issue anymore, we will now close this. If that's not the case, please do feel free to either reopen this issue or open a new one. We will gladly take a look again!

@mattab mattab closed this as completed Dec 10, 2023
@mattab mattab added the worksforme The issue cannot be reproduced and things work as intended. label Dec 10, 2023
@gg-kialo

I see a variant of the original error frequently (from the tracker, not importing).

Error in Matomo (tracker): Error query: Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction In query: UPDATE matomo_log_visit SET profilable = ?, visitor_seconds_since_order = ?, visit_total_actions = visit_total_actions + 1 ,  visit_total_events = visit_total_events + 1 , custom_dimension_1 = ?, custom_dimension_2 = ?, custom_dimension_3 = ?, last_idlink_va = ? WHERE idsite = ? AND idvisit = ? Parameters: ...

I'm running the Docker image (currently docker.io/matomo:4.16.0-fpm-alpine, but I've seen these errors since I started using Matomo in 4.14).
Unlike the original report, failed requests are being responded to with 400 errors, but that's likely just a difference in my nginx config.

Some more information:

  • we run 3 matomo servers in a multi-server environment
  • we are using a MariaDB Galera cluster v10.11.6.
  • these failures also occur when the archiving cron job is not running
  • over 30 days we see a 0.4% error rate on the tracker API; from what I've seen, all these failures are caused by this deadlock
  • adding bulk_requests_use_transaction=0 has no noticeable effect

@mattab
Member

mattab commented Jan 29, 2024

@gg-kialo Thanks for the report. Would you be able to create a new bug report for your issue? It might be more of an edge case than the original (closed) issue here, e.g. related to the MariaDB Galera cluster, or to something else like the request payload (do you know if the issue is reproducible?). We'd still like to fix it eventually, but we aren't experiencing it internally at this time, so a new issue would be helpful 👍
