Performance regressions & Automated speed testing for Piwik #7889
Labels: c: Performance, c: Tests & QA, Task
The goal of this issue is to build a tool that makes it easy for Piwik developers to detect when the Piwik master branch becomes slower, ensuring that Piwik stays fast and usable even under extreme conditions.
Why is this important?
We're doing a good job of not regressing core APIs and key features often, and we aim to write tests when fixing regressions. But for Piwik to shine, it must not only be stable and reliable, but also damn fast, even when pushed to its limits. Speed is a key feature of any great software product.
We are proud to say that there is no data limit. To make this a reality, we want to create an environment where it's easy for our team members (and the community) to be confident that a given Piwik build stays fast even when its "limits" are pushed.
Performance test cases
We could measure the performance of all main Piwik modules (UIs, Archiving, API, Tracker, ...) under several performance and load test cases (bombing the Tracker with requests, creating 50k websites and requesting the dashboard, archiving 1k segments, etc.).
Our first automated performance use case could be: the All Websites dashboard should load fast when 20k+ websites have been added to Piwik. This has regressed a few times in the past (eg. #7877), and @tsteur points out that it is very easy to introduce slowness, in particular in the MultiSites.getAll API: the triggered codepath is very wide, and this source code is modified & refactored often. We know it's only a matter of time before it becomes slower again. Note that in the past the Tracker API has also regressed several times under high load (eg. 100 requests per second), in #2944 and recently in #7718.
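As a rough illustration (not Piwik's actual test harness), a speed assertion for a case like this could look as follows. Everything here is hypothetical: the helper names, the placeholder workload standing in for a dashboard request, and the time budget are all assumptions for the sketch.

```python
# Hypothetical sketch: time an expensive call (e.g. the MultiSites.getAll
# API behind thousands of sites) and fail when it exceeds a time budget.
import time
import statistics

def median_runtime(fn, runs=5):
    """Run fn several times and return the median wall-clock seconds;
    the median is less noisy than a single measurement."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def assert_fast(fn, budget_seconds, runs=5):
    """Raise AssertionError when the median runtime exceeds the budget."""
    elapsed = median_runtime(fn, runs)
    assert elapsed <= budget_seconds, (
        f"too slow: {elapsed:.3f}s > budget {budget_seconds}s")
    return elapsed

# Stand-in for "request the All Websites dashboard" (placeholder work).
def fake_dashboard_request():
    sum(range(10_000))

assert_fast(fake_dashboard_request, budget_seconds=1.0)
```

A real version would replace the placeholder with an HTTP request against a pre-populated test instance.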
How to build an automated performance regression detector
(Some time ago @diosmosis created a related benchmarking system in #3177. This topic was also discussed 4+ years ago in #2000, but it's better to start fresh.)
We could define performance requirements and create acceptance speed tests, eg. "the All Websites dashboard should load in less than 5 seconds when 30k websites are requested and pre-archived". Maybe we add a new CI job that runs those performance tests on every commit, and the CI job is Green when the speed requirements are met (we would need to deal with the unreliable speed of CI servers). Maybe we need to run such a process on a dedicated server that we control or rent in the cloud. Let's discuss.
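One way to cope with unreliable CI server speed is to compare each timing against a recorded baseline with a generous tolerance, rather than an absolute wall-clock limit. A minimal sketch of that idea follows; the baseline file name and the 1.5x tolerance are made-up conventions, not anything Piwik has agreed on:

```python
# Hypothetical sketch: flag a regression only when the current timing
# exceeds a stored baseline by more than a tolerance factor, so that
# an absolute limit does not break on a slower CI machine.
import json
import os

BASELINE_FILE = "speed_baseline.json"  # hypothetical location

def check_regression(test_name, current_seconds, tolerance=1.5):
    """Return True when the timing is within tolerance of the baseline.
    When no baseline exists yet, record one and pass."""
    baselines = {}
    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            baselines = json.load(f)
    baseline = baselines.get(test_name)
    if baseline is None:
        baselines[test_name] = current_seconds
        with open(BASELINE_FILE, "w") as f:
            json.dump(baselines, f)
        return True
    return current_seconds <= baseline * tolerance
```

The baseline would be committed or stored per-CI-machine, and refreshed deliberately when a slowdown is accepted as the new normal.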
Your thoughts are most welcome!