
Automate creation of OmniFixture for UI tests #6582

Closed
tsteur opened this issue Nov 3, 2014 · 3 comments

tsteur commented Nov 3, 2014

Usually one should always recreate the OmniFixture used by the UI tests when changing anything in a fixture, the tracker or the archiver. As the past weeks/months have shown, this does not really work very well; basically nobody does it. Therefore we should somehow automate the generation. Not sure how. Maybe on Travis with an auto-commit (see the sketch below)? The generation can probably take a while...

Otherwise we always have the problem that some or many screenshots fail whenever we change the OmniFixture, and we have no clue whether the expected data is correct. I have had this case a few times already. What happens is that we simply assume the displayed data is correct, but as there were quite a few changes between generations, it is likely that we are actually expecting wrong data, meaning the tests are useless or even counter-productive.
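To make the Travis idea concrete, such a build step could look roughly like the sketch below. The console command name, dump path and token variable are assumptions for illustration, not an agreed setup.

```bash
#!/usr/bin/env bash
# Hypothetical Travis step: regenerate the OmniFixture SQL dump and
# auto-commit it when it changed. Command name, dump path and GH_TOKEN
# are illustrative assumptions, not the project's actual configuration.
set -e

./console tests:setup-fixture OmniFixture \
  --sqldump-path=tests/resources/OmniFixture-dump.sql

# Only commit and push when the regenerated dump actually differs.
if ! git diff --quiet -- tests/resources/OmniFixture-dump.sql; then
  git config user.name  "travis-ci"
  git config user.email "travis@example.org"
  git commit -m "Regenerate OmniFixture [skip ci]" \
    -- tests/resources/OmniFixture-dump.sql
  git push "https://${GH_TOKEN}@github.com/piwik/piwik.git" HEAD:master
fi
```

Since regeneration can take a while, this would probably fit better in a scheduled (cron-style) build than in every push build.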

@tsteur tsteur added the c: Tests & QA For issues related to automated tests or making it easier to QA & test issues. label Nov 3, 2014
diosmosis commented

I don't know if this changed in the past month or so, but it used to be that you wouldn't really care whether the data for the UI tests changed (other than whether the data you've added is displayed correctly). See https://github.com/piwik/piwik/blob/master/tests/README.screenshots.md:

> Note: When determining whether a screenshot is correct, the data displayed is not important. Report data correctness is verified through System and other PHP tests. The UI tests should only test UI behavior.


tsteur commented Nov 3, 2014

True, although I'm not really convinced the System tests actually verify data correctness that much ;)

In my case I made a change in Archiving and had to update the OmniFixture, otherwise every screenshot would show an error message. After updating the OmniFixture some screenshots still fail, and now I don't know whether that is due to my change or to a previous change. What I'd have to do now is basically go to master, cherry-pick the OmniFixture and check whether the same screens fail, which is quite a bit of effort.
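Spelled out, that check would look roughly like the following sketch; the branch name, dump path and UI test runner invocation are assumptions for illustration.

```bash
# On master, take only the branch's updated OmniFixture dump and re-run
# the UI tests. If the same screenshots fail there too, the failures
# predate my change. Branch name, path and command are illustrative.
git checkout master
git checkout my-feature-branch -- tests/resources/OmniFixture-dump.sql
./console tests:run-ui
```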

I will simply assume the displayed data is still correct and update the screens. Cheers

@tsteur tsteur closed this as completed Nov 3, 2014
@tsteur tsteur added the answered For when a question was asked and we referred to forum or answered it. label Nov 3, 2014
@tsteur tsteur self-assigned this Nov 3, 2014
@tsteur tsteur added this to the Piwik 2.9.0 milestone Nov 3, 2014
diosmosis commented

> I'm not really convinced the System tests actually verify data correctness that much ;)

Maybe not, but I don't think I've ever encountered a data-related issue in a UI test that you couldn't replicate in a phpunit test (actually, I can't remember a situation where a UI test data difference turned out to be incorrect). I think the time saved is worth more than catching strange bugs that may never occur in production.
