@robocoder opened this Issue on April 6th 2011 Contributor

The current method is:

    function tableInsertBatch($tableName, $fields, $values);
  • Piwik.php: to allow arbitrary file loading, we should refactor the reading and writing into separate methods.
  • ArchiveProcessing.php: bulk insertion currently stores all the rows in memory before writing them to file; we could test the LOAD DATA capability beforehand and, if it is known to be available, write rows to the infile as they are produced (see the sketch after this list)
  • there are MySQL tuning parameters that users may not be able to change but might want to be aware of (via an FAQ?), e.g. bulk_insert_buffer_size and key_buffer_size
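
For illustration, here is a minimal sketch of the streaming idea from the second bullet, assuming a plain PDO connection; the function name tableInsertBatchStreaming and the tab-separated escaping are hypothetical placeholders, not the actual Piwik implementation:

    function tableInsertBatchStreaming(PDO $db, $tableName, array $fields, Traversable $rows)
    {
        // Write each row to the infile as it is produced instead of
        // accumulating all rows in memory first.
        $path = tempnam(sys_get_temp_dir(), 'piwik_load_data');
        $fp = fopen($path, 'w');
        foreach ($rows as $row) {
            // Simplified escaping; real code must also escape \t, \n and \
            // to match the FIELDS/LINES clauses of the LOAD DATA statement.
            fwrite($fp, implode("\t", $row) . "\n");
        }
        fclose($fp);

        $sql = 'LOAD DATA INFILE ' . $db->quote($path)
             . ' REPLACE INTO TABLE ' . $tableName
             . ' (' . implode(', ', $fields) . ')';
        $db->exec($sql);
        unlink($path);
    }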

Question(s):

  • how well does the new code perform for users who have changed the storage engine to InnoDB?
  • there are multiple indices on the archive tables; for a non-empty table, should we be using:
    ALTER TABLE $tableName DISABLE KEYS;
    LOAD DATA INFILE $path REPLACE INTO TABLE $tableName ...etc...;
    ALTER TABLE $tableName ENABLE KEYS;
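
As for testing the LOAD DATA capability beforehand (see the earlier list), one possible probe is to attempt a zero-row load once and fall back to multi-row INSERTs if it fails; a sketch, not Piwik's actual code (the name canUseLoadDataInfile is hypothetical, and note that a non-LOCAL LOAD DATA INFILE reads files on the database server's host):

    function canUseLoadDataInfile(PDO $db, $tableName)
    {
        // Write an empty probe file; loading it inserts nothing, and it
        // succeeds only if the MySQL account may use LOAD DATA INFILE.
        $path = tempnam(sys_get_temp_dir(), 'piwik_probe');
        file_put_contents($path, '');
        try {
            $ok = $db->exec('LOAD DATA INFILE ' . $db->quote($path)
                          . ' REPLACE INTO TABLE ' . $tableName) !== false;
        } catch (PDOException $e) {
            $ok = false;
        }
        unlink($path);
        return $ok;
    }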
@mattab commented on April 6th 2011 Member

    ALTER TABLE $tableName DISABLE KEYS;

Not necessary: MySQL automatically disables the keys before the load and rebuilds the indexes once all rows have been inserted.

I also think the code could be improved; however, to really do it properly we need to understand where the memory grows and how to minimize it. I propose we postpone this work until later, when a script will help us assess the memory limitations of archiving (along the lines of #766).
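
A trivial starting point for such a measurement is PHP's built-in memory counters; a sketch, where runArchivingStep() is just a placeholder for whatever archiving entry point is under test:

    // Hypothetical harness to see how much memory an archiving run consumes.
    $before = memory_get_usage(true);
    runArchivingStep(); // placeholder for the code under test
    printf("delta: %.1f MB, peak: %.1f MB\n",
        (memory_get_usage(true) - $before) / 1048576,
        memory_get_peak_usage(true) / 1048576);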

@mattab commented on September 23rd 2013 Member
This Issue was closed on January 13th 2014