Screenster - visual UI test automation

Screenster provides visual UI test automation for web applications.
It is the only tool that validates the screens that users actually see.
10x productivity without a single line of code.


Problem

  • Existing tools are unproductive and come with a steep learning curve
  • Testers need to have web development skills
  • Automated tests duplicate application logic
  • The user interface is never truly verified
  • Alternatives like Selenium have low ROI

Our Solution

  • 10x productivity gain compared to traditional testing
  • Screen capture and visual comparison instead of scripting
  • Automation of the way you already test, rather than learning a new tool
  • Empower non-technical people to build and maintain tests
  • Full access to the Selenium API when needed (see the sketch after this list)
  • Web-based shared workspace instead of heavy local install
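
When a step does need custom logic, plain Selenium scripting remains available. As a rough sketch of the kind of step this enables, assuming Selenium WebDriver in Python (the URL and element ids are hypothetical, and the exact way Screenster attaches such a step is not shown here):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical login step; URL and element ids are for illustration only.
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    driver.quit()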

Why: Screenster vs Alternatives

Feature                                            Screenster   QTP      Rational   Test Complete   Selenium
Cross-browser testing
Record and play back
No need to read manuals
No need to write code
Visual diff with the baseline highlighting changes
Guaranteed correctness of layouts and rendered UI
Fully web based
Hours to automate testing of 5 screens (approx)    2            24       24         24              80
Cost for 1 user (approx)                           $100/month   $3,000   $3,000     $2,500          $0

How it works

Record visual baseline

Like traditional tools, Screenster works by recording user interactions with the page. Unlike those tools, it automatically captures the entire rendered page as an image and stores it as the baseline. As soon as you finish recording, you are practically done automating: no coding, no element ids, no checks or assertions.
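
As a rough sketch of what recording a baseline amounts to, assuming plain Selenium in Python (Screenster does this automatically while you record; the URL and file name here are hypothetical):

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://example.com")  # hypothetical application under test

    # Resize the window to the full rendered page so the capture covers it all.
    width = driver.execute_script("return document.body.scrollWidth")
    height = driver.execute_script("return document.body.scrollHeight")
    driver.set_window_size(width, height)

    driver.save_screenshot("baseline_home.png")  # stored as the baseline
    driver.quit()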

Run again to detect changes

After the baseline is created, the development team can change the UI and backend logic without fear. Regression testing is done by running the recorded test case again. The only requirement is that the application always starts in the same state, which can typically be done by restoring the database or running the same setup scripts.
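
The check itself reduces to comparing the new screenshot against the stored baseline. A minimal sketch of that comparison, assuming Pillow rather than Screenster's built-in engine (file names are hypothetical):

    from PIL import Image, ImageChops

    baseline = Image.open("baseline_home.png").convert("RGB")
    current = Image.open("run_home.png").convert("RGB")

    # difference() is zero wherever the images match;
    # getbbox() returns None when there is no difference at all.
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is None:
        print("PASS: screen matches the baseline")
    else:
        print("FAIL: differences found in region", diff.getbbox())

A real comparison engine would typically also allow a small per-pixel tolerance so that anti-aliasing noise does not register as a failure.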

Review visual differences

If differences are detected between the baseline and the regression-run screenshots, they are visually highlighted on screen. The tester can approve a difference as an expected change, exclude it from future comparisons for dynamic parts of the UI such as a clock, or leave it as a failed test for developers to fix.
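
Excluding a dynamic region amounts to masking it out of the comparison. A minimal sketch, again assuming Pillow, with hypothetical coordinates for the clock region:

    from PIL import Image, ImageChops, ImageDraw

    # Hypothetical coordinates of a dynamic region, e.g. a clock widget.
    IGNORED = (10, 10, 210, 40)  # left, top, right, bottom

    def masked(path):
        # Black out the ignored region so it can never produce a difference.
        image = Image.open(path).convert("RGB")
        ImageDraw.Draw(image).rectangle(IGNORED, fill="black")
        return image

    diff = ImageChops.difference(masked("baseline_home.png"),
                                 masked("run_home.png"))
    print("unexpected change:", diff.getbbox() is not None)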