@robocoder opened this Pull Request on February 12th 2018 Contributor

https://www.percona.com/blog/2010/07/14/why-insert-on-duplicate-key-update-may-be-slow-by-incurring-disk-seeks/ explains:

Instead, what MySQL does is the following:

  • call handler::write_row to attempt an insertion, if it succeeds, we are done
  • if handler::write_row returns an error indicating a duplicate key, outside of the handler, apply the necessary update to the row
  • call handler::update_row to apply the update
    ...
    So, the moral of the story is this. In MySQL, “insert … on duplicate key update” is slower than “replace into”.
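
For concreteness, the upsert statement the quoted post analyzes looks like this in MySQL (the table and column names below are purely illustrative, not the actual Option schema):

-- MySQL-specific upsert: a single statement, but internally it is
-- handler::write_row followed by handler::update_row on a duplicate key
INSERT INTO `option` (option_name, option_value)
VALUES ('some_setting', 'some value')
ON DUPLICATE KEY UPDATE option_value = VALUES(option_value);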

REPLACE INTO might be faster, but its semantics are effectively:

INSERT;
if duplicate key error then
    DELETE;
    INSERT;
fi
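
In MySQL syntax, that pseudocode corresponds to something like (same illustrative names as above):

-- MySQL-specific: on a duplicate key, the conflicting row is deleted
-- and a fresh row is inserted
REPLACE INTO `option` (option_name, option_value)
VALUES ('some_setting', 'some value');

Because the old row is deleted and replaced, any columns not listed in the statement revert to their defaults.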

Since Option::set() is mostly called to update the value of an existing setting, we should try:

UPDATE;
if affected_rows === 0 then
    INSERT;
fi

It's not an atomic operation, but overall this approach strikes a good balance between performance and ACID guarantees.
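
As a minimal sketch, the proposed pattern maps to two statements, with the affected-rows check done in application code (again, illustrative names only):

-- Step 1: try to update the existing row
UPDATE `option`
SET option_value = 'some value'
WHERE option_name = 'some_setting';

-- Step 2: run only when the application sees 0 affected rows
INSERT INTO `option` (option_name, option_value)
VALUES ('some_setting', 'some value');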

@robocoder commented on February 15th 2018 Contributor

A side-effect of this PR is that the SQL is database-agnostic, whereas REPLACE INTO and INSERT ... ON DUPLICATE KEY UPDATE are MySQL-specific.
