Replace some usages of Common::printDebug w/ use of logger #13968
Conversation
There are still a lot of usages of `Common::printDebug`; I guess we need to replace them step by step.
Changes look good and seem to work.
I tried replacing all of them, but it was taking a long time...
CC @tsteur just to make sure there are no potential performance regressions in the tracker.
The code under review:

```php
$logger = StaticContainer::get('Psr\Log\LoggerInterface');
if (is_array($info) || is_object($info)) {
```
Is the second `is_object` still needed?
Huh, looks like it was never needed...
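For illustration, the check could be reduced to something like the sketch below. `formatDebugInfo` is a hypothetical helper, not a Matomo function; the point is that `var_export()` handles arrays and objects alike, so one combined non-scalar check is enough and a second `is_object` branch would be redundant:

```php
<?php
// Sketch only: normalize a debug payload to a string before logging.
// var_export() formats both arrays and objects, so the single
// is_array()/is_object() check is sufficient on its own.
function formatDebugInfo($info): string
{
    if (is_array($info) || is_object($info)) {
        return var_export($info, true);
    }
    return (string) $info;
}
```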
I would say it should be fine, but it does add some unneeded behaviour. Is it maybe possible to still check first for
@tsteur in
@diosmosis yep, meant in
@tsteur I mean
Not sure. I suppose on production, if we wanted these things to be logged, we would also set the flag to true. So far I don't think there's a plan to log these, but we could. I'm just not sure in general how much code is executed there in all the log handlers and the logger itself, and then the var_export etc., so I have no idea re performance.
Note: in this case it would print debug logs for every request, not just when errors are encountered.
Makes sense, especially in the tracker, which would experience the most scale. I can add the check again.
Yeah, I'm just thinking that if it's executed, like, 100 million times, it adds up. Looking at the
I see, we probably wouldn't want that either. Not really sure here. I suppose we can just leave the code without the flag check, execute all the logging, and change it if/when needed.
Actually, I'll add a new flag to short-circuit
👍
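A flag-based short-circuit of the kind discussed above might look roughly like this (a sketch with hypothetical names, not the actual PR code): when the flag is off, the method returns before `var_export()` and any log handlers run, so the tracker hot path pays almost nothing.

```php
<?php
// Sketch: gate all debug-log formatting and handler work behind a flag.
class DebugLog
{
    /** @var bool Hypothetical stand-in for a "tracker debug" flag. */
    public static $enabled = false;

    /** @var string[] Collected messages, standing in for real log handlers. */
    public static $messages = [];

    public static function printDebug($info): void
    {
        if (!self::$enabled) {
            return; // short-circuit: skip formatting and handlers entirely
        }
        self::$messages[] = is_array($info) || is_object($info)
            ? var_export($info, true)
            : (string) $info;
    }
}

DebugLog::printDebug('ignored');        // flag off: nothing recorded
DebugLog::$enabled = true;
DebugLog::printDebug(['request' => 1]); // flag on: formatted and recorded
```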
Otherwise looks good if tests pass. I was a bit worried that the log-to-file-when-an-error-occurs part could quickly result in a disk ending up full, but I suppose if someone enables it... @mattab any thoughts? Maybe we still need to check for tracker debug being enabled.
Sounds like something worth protecting against... maybe by default we don't log to file for the tracker, and users have to enable it specifically. If logs are saved to the cloud, then space won't really be an issue.
It could be a cost problem though if sending, e.g., to CloudWatch. And usually the logs still sit on a local machine for some time. It might actually need a separate setting for it or so; not sure how to best handle it.
Maybe it makes sense to log thousands of lines in the log file, if the error occurs thousands of times... Question: when there is an error and it's logged to a file, does it write one new line to the file (with the error message), or possibly dozens or more lines per error request?
Dozens. You would still be able to cause a disk to fill; e.g., on AWS you get only about 8GB of storage by default.
Would it help, in terms of space being written, to only print debug logs if the error is legitimate and not, e.g., a bad request? I.e., if someone puts in an invalid idsite, we don't consider it an error in Matomo and don't print the debug logs, but if something unexpected happens in the tracker, we do. One other idea: would it be a good idea to put a cap on the size of matomo.log? I.e., if the file is bigger than 1GB, don't log (and maybe show a system check warning or something).
That sounds good, I reckon.
We could have this as a configuration option in config.ini.php maybe. By default it would maybe not limit, or be set to something like 10GB. Although 10GB can be created very quickly (we've seen log files in the past with > 100GB). I would maybe not really limit it, but I'm not sure; it could be useful. @mattab any thoughts?
I think a limit would be ok, since in most cases someone will set up log rotation, so the file shouldn't end up being enormous. And anyone who doesn't set up log rotation but enables file logging is just asking for trouble in the long run, since it will eventually get huge...
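The cap idea above could be sketched as a simple pre-write check (the helper name is hypothetical, and the 1GB default comes from the discussion, not from Matomo's actual config):

```php
<?php
// Sketch: refuse to append to the log file once it exceeds a configured cap,
// so a hot error loop cannot fill the disk.
function shouldWriteToLog(string $logFile, int $maxBytes = 1073741824): bool
{
    clearstatcache(true, $logFile); // filesize() results are cached per-request
    return !file_exists($logFile) || filesize($logFile) < $maxBytes;
}
```

A system check could then warn when the cap is hit instead of silently dropping messages.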
@diosmosis there are various tests failing, e.g. see https://travis-ci.org/matomo-org/matomo/jobs/530151402#L815 where it says a debug method isn't there, but when I debug this manually in PHPStorm it all works. It might just be a matter of restarting the tests. It says:
There are also some tests failing here: https://travis-ci.org/matomo-org/matomo/jobs/530151401#L812 Otherwise it works nicely for me.
@diosmosis I think there are still some failing tests: https://travis-ci.org/matomo-org/matomo/jobs/586324829#L818
Instead of `Common::printDebug`, which always prints with DEBUG log level and only when tracker debug is on, use the logger directly, so error logs will show up in the file logs at least. I changed some of the `printDebug` calls for errors into error logs, so the full stack trace will appear in the file logs and so they can potentially be combined with the fingers crossed handler.
Test scenarios:
- No error: no logs
- w/ error: logs in file, not in response
- No error: debug logs to file, not in response
- w/ error: debug/error logs to file, not in response
- No error: prints debug logs to response + file
- w/ error: prints debug logs to response + file
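The change described above can be sketched as follows. A minimal in-memory logger stands in for `Psr\Log\LoggerInterface` so the example is self-contained; Matomo's real call sites and handler configuration differ:

```php
<?php
// Sketch: route tracker errors through a PSR-3 style logger instead of
// Common::printDebug, so they reach the file logs regardless of the
// tracker debug flag and carry the full exception.
class InMemoryLogger
{
    /** @var array<int, array{level: string, message: string, context: array}> */
    public $records = [];

    public function error(string $message, array $context = []): void
    {
        // A real handler would write this (including the stack trace from
        // the exception in $context) to matomo.log.
        $this->records[] = ['level' => 'error', 'message' => $message, 'context' => $context];
    }
}

$logger = new InMemoryLogger();
try {
    throw new RuntimeException('tracker failure');
} catch (RuntimeException $e) {
    // Before: Common::printDebug($e->getMessage()) -- only visible when
    // tracker debug was enabled, and always at DEBUG level.
    $logger->error('Unexpected tracker error', ['exception' => $e]);
}
```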