Ingest S3 directly #17622
Comments
Hi @vsoch. Thanks for creating the issue. I assume you are talking about our log importer?
Thank you for the speedy response! So what would be the best practice for consistently importing new logs from S3: running a server, something like a Lambda function alongside a Kubernetes deployment to run the log importer, or something else?
Hi @vsoch, we don't have an established best practice for this specific use case. A Lambda probably wouldn't work, since there is a hard 15-minute runtime limit (if I recall correctly). Could you launch a Kubernetes Job for this? E.g., when a log file is uploaded to S3 (if that is how you are using S3), launch a Kubernetes Job to download and import it. There are many ways to accomplish this; it really depends on what you want and how your architecture is set up.
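One possible shape for such a job, sketched as a Kubernetes Job manifest. This is only an illustration under stated assumptions: the bucket name, Matomo URL, site ID, container image, and script path are all placeholders, and it assumes Matomo's `import_logs.py` can read a log from stdin when given `-` as the input file (and that the image provides both the AWS CLI and the importer).

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: matomo-log-import
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          # Placeholder image: use one that bundles the AWS CLI,
          # Python, and Matomo's log-analytics importer.
          image: your-registry/matomo-log-importer:latest
          command: ["/bin/sh", "-c"]
          # "aws s3 cp ... -" streams the object to stdout, and the
          # pipe feeds it to import_logs.py, which reads from stdin
          # when the input file is "-". All values below are examples.
          args:
            - >
              aws s3 cp s3://YOUR_BUCKET/access.log - |
              python /var/www/html/misc/log-analytics/import_logs.py
              --url=https://matomo.example.com --idsite=1 -
```

A controller (or an S3 event notification routed through something like SQS and a small watcher) would create one such Job per uploaded log file, so each import runs to completion without a Lambda-style time limit.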
Ah, gotcha! Thank you for this discussion. We also had a Kubernetes Job in mind and wanted to check whether there was a suggested best practice first. I can come back and comment here once we get it working, but it's safe to close the issue. Thanks again for your help!
Hi Matomo! I am wondering whether there is a best practice for ingesting S3 logs directly, ideally reading from S3 rather than first syncing the logs to the Matomo server and then running the import script. Thank you!