
Performance regressions & Automated speed testing for Piwik #7889

Closed
mattab opened this issue May 12, 2015 · 2 comments
Labels

- c: Performance: For when we could improve the performance / speed of Matomo.
- c: Tests & QA: For issues related to automated tests or making it easier to QA & test issues.
- Task: Indicates an issue is neither a feature nor a bug and it's purely a "technical" change.

Comments

@mattab (Member) commented May 12, 2015

The goal of this issue is to build a tool that makes it easy for Piwik developers to detect when the Piwik master branch becomes slower, to ensure that Piwik stays fast and usable even under extreme conditions.

Why is this important?

We're doing a good job of rarely regressing core APIs and key features, and we aim to write tests when fixing regressions. But for Piwik to shine, it must not only be stable and reliable, but also damn fast even when pushed to its limits. Speed is a key feature of any great software product.

We are proud to say that there is no data limit. To make this a reality, we want to create an environment where it's easy for our team members (and the community) to be confident that a given Piwik build is fast even when its limits are pushed.

Performance test cases

We could measure the performance of all main Piwik modules (UIs, Archiving, API, Tracker, ...) under several performance and load test cases (bombarding the Tracker with requests, creating 50k websites and requesting the dashboard, archiving 1k segments, etc.).

Our first automated performance use case could be: the All Websites dashboard should load fast when there are 20k+ websites added in Piwik. This has regressed a few times in the past (e.g. #7877), and @tsteur points out that it's very easy to regress and introduce slowness, in particular in the MultiSites.getAll API: the triggered code path is very wide, and this source code is modified and refactored often. We know it's only a matter of time before it becomes slower again.

Note that in the past the Tracker API has also regressed several times under high load (e.g. 100 requests per second), in #2944 and more recently in #7718.
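
For illustration, a minimal sketch of the "Tracker requests bombing" case in Python (the install URL, site id, and request counts below are placeholder assumptions, not values from this issue):

```python
# Fire many concurrent tracking requests and report latency percentiles.
# PIWIK_TRACKER is a hypothetical local install; adjust to your setup.
import concurrent.futures
import time
import urllib.parse
import urllib.request

PIWIK_TRACKER = "http://localhost/piwik/piwik.php"  # placeholder URL
REQUESTS = 1000        # total tracking requests to send
CONCURRENCY = 100      # roughly the "100 requests per second" scenario

def track(i):
    """Send one tracking request and return its latency in seconds."""
    params = urllib.parse.urlencode({
        "idsite": 1,
        "rec": 1,
        "url": f"http://example.com/page/{i}",
        "action_name": f"Load test page {i}",
    })
    start = time.perf_counter()
    with urllib.request.urlopen(f"{PIWIK_TRACKER}?{params}") as resp:
        resp.read()
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(track, range(REQUESTS)))

median = latencies[len(latencies) // 2]
p95 = latencies[int(len(latencies) * 0.95)]
print(f"median={median:.3f}s p95={p95:.3f}s")
```

Tracking the median and p95 latencies per build would make a Tracker slowdown like #7718 visible as soon as it lands.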

How to build an automated performance regression detector

(Some time ago @diosmosis created a benchmarking system in #3177, which is related; this topic was also discussed 4+ years ago in #2000, but it's better to start fresh.)

We could define performance requirements and create acceptance speed tests, e.g. "the All Websites dashboard should load in less than 5 seconds when 30k websites are requested and pre-archived". Maybe we add a new CI job that runs those performance tests on every commit, and the CI job is green when the speed requirements are met (we would have to deal with the unreliable speed of CI servers). Maybe we need to run such a process on a dedicated server we control or rent in the cloud. Let's discuss.
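
A sketch of what one such acceptance speed test could look like, taking the median of several runs to damp CI speed variance (the instance URL, token, and API parameters are placeholder assumptions; the 5-second budget is the example requirement above):

```python
# Time the MultiSites.getAll API call on a pre-archived test instance
# and fail (non-zero exit via AssertionError) if the median exceeds
# the budget. BASE_URL and TOKEN_AUTH are placeholders.
import statistics
import time
import urllib.parse
import urllib.request

BASE_URL = "http://localhost/piwik/index.php"  # hypothetical instance
TOKEN_AUTH = "anonymous"                       # replace with a real token
BUDGET_SECONDS = 5.0                           # the stated requirement
RUNS = 5                                       # median over runs damps CI noise

def time_multisites_getall():
    """Return the wall-clock time of one MultiSites.getAll request."""
    params = urllib.parse.urlencode({
        "module": "API",
        "method": "MultiSites.getAll",
        "idSite": "all",
        "period": "day",
        "date": "today",
        "format": "JSON",
        "token_auth": TOKEN_AUTH,
    })
    start = time.perf_counter()
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
        resp.read()
    return time.perf_counter() - start

median = statistics.median(time_multisites_getall() for _ in range(RUNS))
print(f"MultiSites.getAll median over {RUNS} runs: {median:.2f}s")
assert median < BUDGET_SECONDS, f"speed regression: {median:.2f}s >= {BUDGET_SECONDS}s"
```

A CI job could run this against a fixture database with 30k pre-archived websites and report green/red from the assertion.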

Your thoughts are most welcome!

@mattab added the Task and c: Tests & QA labels May 12, 2015
@tsteur (Member) commented May 12, 2015

I'm sure there are existing solutions for this already (frameworks, libs, etc.). In the end we only need to call URLs (that works even for archiving tests) and measure the time taken. If there's a good, affordable paid service, that could be interesting as well.
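
A bare-bones version of "call URLs and measure the time" could be as simple as this (the URLs are placeholders; the same loop works for Tracker, API, or archiving endpoints alike):

```python
# Time a list of HTTP endpoints; placeholder URLs for a local install.
import time
import urllib.request

URLS = [
    "http://localhost/piwik/piwik.php?idsite=1&rec=1&url=http%3A%2F%2Fexample.com",
    "http://localhost/piwik/index.php?module=API&method=MultiSites.getAll"
    "&period=day&date=today&format=JSON&token_auth=anonymous",
]

for url in URLS:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    print(f"{time.perf_counter() - start:6.3f}s  {url[:60]}...")
```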

@mnapoli (Contributor) commented May 12, 2015

Not a direct solution to the problem, but it could help: have a platform where the cloud traffic is reproduced, and monitor it with New Relic. New Relic supports, for example, annotating deployments, so you can compare performance before/after deploying a new beta. Of course this is not a catch-all, but it's still better than nothing and probably easier to set up at first?
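
A sketch of recording such a deployment annotation from a deploy script, assuming New Relic's documented v2 deployments REST endpoint (the application id and API key below are placeholders):

```python
# POST a deployment marker to New Relic so before/after performance
# can be compared in the APM charts. APP_ID and API_KEY are placeholders.
import json
import urllib.request

APP_ID = "1234567"             # hypothetical New Relic application id
API_KEY = "NEW_RELIC_API_KEY"  # hypothetical API key

payload = json.dumps({
    "deployment": {
        "revision": "3.0.0-b1",
        "description": "Deploying new Piwik beta to the traffic-replay platform",
    }
}).encode()

req = urllib.request.Request(
    f"https://api.newrelic.com/v2/applications/{APP_ID}/deployments.json",
    data=payload,
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```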

@mattab added the Major label Jul 15, 2015
@mattab added this to the Mid term milestone Jul 15, 2015
@mattab modified the milestones: Short term, Mid term Sep 23, 2015
@mattab added the c: Performance label and removed the Major label Oct 21, 2019
@mattab closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 12, 2023