@mattab opened this Issue on March 24th 2015 Member

Presto database is a very interesting big data technology that may be a good candidate if/when we need to handle Very Big Data with Piwik.

https://prestodb.io/

Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.

Presto was designed and written from the ground up for interactive analytics and approaches the speed of commercial data warehouses while scaling to the size of organizations like Facebook.

Facebook uses Presto for interactive queries against several internal data stores, including their 300PB data warehouse. Over 1,000 Facebook employees use Presto daily to run more than 30,000 queries that in total scan over a petabyte each per day.

Related to #2592, #4902, #1999

@diosmosis commented on March 25th 2015 Member

Maybe I'm reading the docs wrong, but Presto looks like a way to connect different data sources rather than a data source itself. I.e., it says it connects to MySQL, Hadoop, Cassandra, etc.

@mattab commented on March 25th 2015 Member

Yes, for Very Big Data the data could be stored in HDFS (which scales) and Presto would read from HDFS (for example).

@diosmosis commented on March 25th 2015 Member

I see, this looks like a potentially easy way to support different backends, then.

@tsteur commented on March 25th 2015 Member

I think it's less about supporting different backends, and more about:

New big data tools like Hadoop let companies store and analyze huge amounts of data relatively cheaply and efficiently. But they initially required serious programming chops to use. Presto, in short, lets data analysts use the SQL skills they already have to query data stores in new age systems, such as Hadoop and Cassandra. Plus, it's much faster than the standard tools for querying Hadoop. http://www.wired.com/2015/03/open-source-works-just-ask-facebook

and

It’s similar in many ways to other open source tools, such as Cloudera’s Impala and MapR’s Drill, which also seek to speed up and simplify Hadoop queries. But one big difference between Facebook and a company like Cloudera and MapR is that Facebook makes tools for its own use, not tools it thinks other companies will want to use. And that means the software Facebook develops has already been battle tested at one of the largest websites in the world before it’s ever even offered up to the rest of the world. http://www.wired.com/2015/03/open-source-works-just-ask-facebook

So you just need SQL skills to use it; it is fast, actually scalable, and battle tested.

@ebuildy commented on November 9th 2015

Hello, has it been tested already?

I am using Presto; it is a little hard to set up, but once everything is set up, it is as easy to use as a traditional SQL database.

The use case I see for Piwik is:

1- Use HTTP server access logs instead of Piwik JavaScript tracking
2- Add the logs to HDFS
3- Create a Hive table (that parses the raw logs)
4- Use Presto to query the Hive table
5- Use Piwik as the web UI
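Step 3 is where raw lines become structured columns. In practice a Hive SerDe would do this server-side, but a minimal Python sketch of the parsing (the field names here are illustrative, not Piwik's actual schema) could look like:

```python
import re

# Regex for the Apache/Nginx "combined" log format; the named groups
# would map to the columns of the Hive table created in step 3.
COMBINED_LOG = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_line(line):
    """Turn one raw access-log line into a dict of columns, or None."""
    m = COMBINED_LOG.match(line)
    if m is None:
        return None
    row = m.groupdict()
    row["status"] = int(row["status"])
    row["size"] = 0 if row["size"] == "-" else int(row["size"])
    return row

sample = ('203.0.113.7 - - [09/Nov/2015:20:15:00 +0000] '
          '"GET /index.php HTTP/1.1" 200 1043 '
          '"https://example.org/" "Mozilla/5.0"')
print(parse_line(sample))
```

Lines that don't match (truncated writes, malformed requests) come back as `None` and can be counted or routed to a reject file rather than silently dropped.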

The main issue is that Presto only has a JDBC driver built-in ;-(

@tsteur commented on November 9th 2015 Member

At Piwik we haven't tried it yet but it would be awesome if someone tried to get it working and shared the gained knowledge. It would be interesting to see if it's possible to use with Piwik, and if this is the case we could do some performance tests.

@ebuildy commented on November 9th 2015

I will try! With Apache Impala also.

Keep you posted.

@tsteur commented on November 9th 2015 Member

Awesome, looking forward to your results if it works :)

@mcolak commented on November 29th 2016

Is there any progress on this issue?

@gebi commented on May 4th 2017

Another option would be to support clickhouse.
https://clickhouse.yandex/

It also has SQL support and is used by Yandex as the main storage backend for their web analytics, so it already implements most of the features needed for this task (and it has far fewer moving parts than a Hadoop stack with Hive!).

@ngonghi commented on August 29th 2018

Another option is Amazon Athena.

https://aws.amazon.com/athena/

Export data from the database to CSV and put it in S3, then use Athena to query it (it also supports SQL).
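The export step above can be sketched in a few lines of Python. This uses an in-memory SQLite table as a stand-in for the Piwik MySQL database (the `log_visit` columns shown are illustrative, not the full schema), and leaves the S3 upload (e.g. via boto3) out of scope:

```python
import csv
import io
import sqlite3

# Stand-in for the Piwik database; in practice this would be the MySQL
# log_visit / archive tables. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE log_visit (idvisit INTEGER, idsite INTEGER, visit_total_time INTEGER)"
)
conn.executemany("INSERT INTO log_visit VALUES (?, ?, ?)",
                 [(1, 1, 120), (2, 1, 45), (3, 2, 300)])

def export_csv(connection, table):
    """Dump a table to CSV text, ready to upload to S3 for Athena."""
    cur = connection.execute(f"SELECT * FROM {table}")
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur)
    return buf.getvalue()

print(export_csv(conn, "log_visit"))
```

Athena then needs a `CREATE EXTERNAL TABLE` pointing at the S3 prefix; for large volumes a columnar format like Parquet would scan far less data per query than CSV.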

@tsteur commented on February 22nd 2020 Member

Here are more suggestions: https://github.com/matomo-org/matomo/issues/2592

And when researching, it would be good to look into the details, e.g.:

While an engine may have SQL support, you need to look into some details. E.g. does it support DELETE/UPDATE, and if supported, is it fast? Does it support, say, 10 to 20 joins? Does it support all kinds of ALTER TABLE statements? Etc.
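A checklist like this can be automated against any candidate engine that ships a DB-API-compatible Python driver. A rough sketch, using SQLite purely as a stand-in connection (the probe statements are examples of the capabilities mentioned above, not an exhaustive compatibility suite):

```python
import sqlite3

# Statements exercising the capabilities mentioned above: UPDATE/DELETE,
# multi-way joins, and ALTER TABLE. Run against any DB-API connection;
# sqlite3 here is just a stand-in for a Presto/ClickHouse/Athena driver.
PROBES = {
    "update": "UPDATE t SET v = v + 1",
    "delete": "DELETE FROM t WHERE v < 0",
    "join_3way": "SELECT * FROM t a JOIN t b ON a.id = b.id "
                 "JOIN t c ON b.id = c.id",
    "alter_add_column": "ALTER TABLE t ADD COLUMN extra INTEGER",
}

def probe_support(connection):
    """Return {probe_name: True/False} by attempting each statement."""
    results = {}
    for name, sql in PROBES.items():
        try:
            connection.execute(sql)
            results[name] = True
        except Exception:
            results[name] = False
    return results

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, v INTEGER)")
print(probe_support(conn))
```

Syntax acceptance is only half the story, of course; the "is it fast" question still needs timed runs against realistically sized data.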

If anyone has tried MyRocks, I'd be interested to learn more.

Powered by GitHub Issue Mirror