
Archive for the ‘Selenium’ Category

Functional Continuous Integration

Driving agile practices over the last 4 years in 3 SaaS companies, it has become quite apparent to me that continuous integration (CI) requires both unit test build failure verification and regular deployed functional regression test verification to be truly agile. Yes, you need all the SDL (software development lifecycle) practices to manage building the right product and completing work items, but quality of work cannot be compromised in the name of speed. Agile is all about completing small amounts of working (deliverable) software and iterating on continuous feedback. It also includes confidence that you are delivering tested software without regression defects (not breaking what already worked) and confidence that future work will not break what was just delivered. Any tests not completed within the scope of a work item are technical debt, and that postponed work results in missed defects.

Quality confidence is achieved by routinely running automated tests at both the code and system levels. Regardless of the agile practices used, design, development and test are interwoven and require collaboration of development and test resources within the delivery team. I believe this is the secret sauce that differentiates a waterfall-ish team from an agile-ish team.

  • The waterfall-ish team has the mind-set of developing application code first and automation test code later, frequently not including testing in the work item scope.
  • The agile-ish team has the mind-set of developing both unit test and functional test code along with application code, either prior to application code (Test Driven Development) or just after application code, but within the scope of the work item. This includes meeting work item (e.g. user story) acceptance criteria.

Functional Continuous Integration (FCI) is continuously creating and updating automated regression tests. It must be the expectation for POs when planning work commitments, for Executives when assessing progress reports, for Developers (collaborating with QE) when estimating, and for Quality Engineers when planning and completing test work. Infrastructure for FCI needs to integrate automated tests with a test management and reporting database, and needs to be capable of running unattended. I’ve used Selenium RC with both CruiseControl (driving Rails test scripts) and Hudson (driving Java test scripts) to run build-time deploys and unattended test runs. These CI applications can run with multiple client machines as slaves, which allows a CI job to run each test suite on a different client machine simultaneously and shorten the overall test duration.
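
As a rough illustration of the unattended piece (not the exact configuration we used; the task names and test directories below are hypothetical), the CI job can simply invoke a Rake task after deploy, so the same entry point works from CruiseControl, Hudson, or a developer’s sandbox:

  # Rakefile (sketch) -- hypothetical task names and test directories
  require 'rake/testtask'

  namespace :ci do
    desc "Fast Test::Unit suites for every commit build"
    Rake::TestTask.new(:commit) do |t|
      t.libs << "test"
      t.pattern = "test/{unit,functional,integration}/**/*_test.rb"
      t.verbose = true
    end

    desc "Selenium regression suite against a deployed environment"
    Rake::TestTask.new(:selenium) do |t|
      t.libs << "test"
      t.pattern = "test/selenium/**/*_test.rb"  # assumes Selenium tests live here
      t.verbose = true
    end
  end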

The result is that failures caused by changes or new application code that break existing functionality are caught very early and corrected. Further, if these tests are run in the developer’s sandbox and failures are corrected prior to check-in, no defect is ever created, which significantly reduces defect counts for the agile team.


Continuous Integration: Selenium RC vs. XUnit tests

December 26, 2008

Since April 2008 I’ve been a Consultant/QE Architect at Sermo in Cambridge, MA, USA, working on Ruby on Rails agile teams. The first team started as an experiment to prove that rapid development of Rails applications, composited with the JBoss-based Java core community, could work seamlessly. We have continued to successfully add several more Rails applications with this approach. We are now undergoing a major rewrite of the core community and its applications entirely in Ruby on Rails. The new design includes formal SOA interfaces. We are continuously refining our scrum lifecycle as well as our test automation approaches.

We have been focusing on continuous integration with CruiseControl and comprehensive regression testing, including Test::Unit and Selenium tests, an automated test plan generator, and ci_reporter test reports, all run with every SVN commit. I also added nightly and weekly batch runs for runtime and longer-running tests. The big issue we have been wrestling with is… at what level {unit, functional, integration, runtime, browser DOM, load} should acceptance/regression tests be created? This question led to some interesting and healthy debate between development and test staff.
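
For reference, here is a minimal sketch of how ci_reporter is typically wired into a Rakefile so CruiseControl can pick up XML test reports; this is a generic example, not our exact build file:

  # Rakefile (sketch) -- let ci_reporter instrument the Test::Unit run
  require 'rubygems'
  require 'ci/reporter/rake/test_unit'  # adds the ci:setup:testunit task

  # The CI build invokes: rake ci:setup:testunit test
  # XML reports are written under test/reports/ (configurable via CI_REPORTS)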

To summarize the testing levels (a short test sketch follows the list)…

  • Rails Test::Unit test levels:
    • unit test coverage is important to verify the methods and their paths
    • functional tests validate that controllers operate as intended, including environment and database configuration
    • integration tests validate systemic operations that cross controllers and render pages properly, irrespective of the browser, including AJAX responses for page load
  • Runtime tests are run on a deployed fleet, either headless or in a simulated browser DOM
  • Selenium tests exercise interactive AJAX and JavaScript in the page, requiring a separate client machine running selenium-server.jar (Selenium RC for Rails) or the webrat gem (which bundles selenium-server.jar and the Webrat DSL)
  • JMeter for load tests (including performance counters)
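
To make the trade-off concrete, here is a sketch of a Rails functional test (PostsController and its actions are hypothetical names); it covers controller behavior, including an AJAX response, without ever starting a browser or selenium-server.jar:

  # test/functional/posts_controller_test.rb (sketch)
  require 'test_helper'

  class PostsControllerTest < ActionController::TestCase
    def test_index_renders_the_list
      get :index
      assert_response :success
      assert_template 'index'
    end

    def test_create_via_xhr_succeeds
      xhr :post, :create, :post => { :title => "hello" }
      assert_response :success
    end
  end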

We found that the most important part of a story is the list of acceptance tests. This list shapes and clearly defines the expectations of the story and what makes it complete. Many times we stub out tests for the sprint’s stories in test suites at the beginning of the sprint; the stubs can then be implemented by either a test or development engineer. The biggest problem we found was that adding too many Selenium tests made the automated build validation time increase dramatically (4 to 10 times that of integration tests), made it difficult for developers to run regression tests prior to source code commit, and made the suite more fragile as the GUI implementation changed. We had to re-factor many tests from Selenium to Test::Unit functional or integration tests to improve test performance and reliability.

The key to successful continuous integration is to test continuously, either with TDD practices or TIA (test immediately afterward), as part of accepting stories. This includes all code implemented for the story and any additional tests needed to cover the acceptance criteria. Testing at the lowest level feasible for code coverage is important for test efficiency. This may require the creation of test fixtures, mocked response expectations, and data factories.
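
A minimal sketch of those pieces together, using the mocha gem for the mocked expectation; Invoice, PaymentGateway, and the fixture file are hypothetical names:

  # test/unit/invoice_test.rb (sketch)
  require 'test_helper'
  require 'mocha'

  class InvoiceTest < ActiveSupport::TestCase
    fixtures :invoices                      # loads test/fixtures/invoices.yml

    # tiny hand-rolled data factory
    def build_invoice(attrs = {})
      Invoice.new({ :amount => 10_00, :currency => "USD" }.merge(attrs))
    end

    def test_charge_delegates_to_the_gateway
      PaymentGateway.expects(:charge).with(10_00).returns(true)  # mocked expectation
      assert build_invoice.charge!
    end
  end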

Testing should include both happy path and negative tests (exception handling). Development Engineers need to have a sense of ownership for regression tests. Quality Engineers need to have a sense of test coverage completeness. Together the scrum team needs to hold themselves and each other accountable for not leaving test-coverage technical debt beyond story acceptance. Plan this test engineering time into the story. It may mean that your velocity is a little lower than it would be otherwise, but the overall sustainable pace is greater and you really do catch problems prior to (or at the time of) committing changes.
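
For example (Account and InsufficientFunds are hypothetical names), a happy-path test and its negative counterpart can sit side by side in the same suite:

  # test/unit/account_test.rb (sketch)
  require 'test_helper'

  class AccountTest < ActiveSupport::TestCase
    def test_withdraw_happy_path
      account = Account.new(:balance => 100)
      account.withdraw(40)
      assert_equal 60, account.balance
    end

    def test_withdraw_beyond_balance_raises
      account = Account.new(:balance => 100)
      assert_raise(InsufficientFunds) { account.withdraw(500) }
    end
  end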

There is a place for automating at the GUI with Selenium or a comparable HTML element or application control automation tool. Limit the use of these tools to testing AJAX or behavior that requires interactive JavaScript to render the page, workflows between systems, data-driven use cases, and cross-browser testing.
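
Where a browser really is needed, here is a sketch using the Selenium RC Ruby client (selenium-client gem) to drive an AJAX search box; the URL and locators are hypothetical, and selenium-server.jar must already be running on the client machine:

  # test/selenium/search_box_test.rb (sketch)
  require 'rubygems'
  require 'selenium/client'
  require 'test/unit'

  class SearchBoxTest < Test::Unit::TestCase
    def setup
      @browser = Selenium::Client::Driver.new(
        :host => "localhost", :port => 4444,
        :browser => "*firefox", :url => "http://staging.example.com",
        :timeout_in_second => 60)
      @browser.start_new_browser_session
    end

    def teardown
      @browser.close_current_browser_session
    end

    # poll until the block is true, since AJAX updates arrive asynchronously
    def wait_until(seconds = 10)
      seconds.times { return if yield; sleep 1 }
      flunk "condition not met within #{seconds}s"
    end

    def test_ajax_suggestions_appear
      @browser.open "/search"
      @browser.type "css=input#q", "sel"
      wait_until { @browser.is_element_present("css=ul.suggestions li") }
      assert @browser.is_text_present("selenium")
    end
  end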

What we found is that ideally…

1. Test Engineers embedded in a development scrum team should have the ability to:

  • read and exercise application code
  • author unit test cases
  • create and work with test fixtures, test mocks and test data factories
  • assess adequate test coverage for development stories

2. Test Engineers chartered with testing external to the development teams should be able to:

  • deploy to fleets (fully automated is preferred)
  • read and understand mocked interfaces (to exercise actual interfaces)
  • author and exercise run-time tests (cover GUI and API workflows across the system)
  • author and exercise performance/load tests

Continuous Integration assures a solid application code base with full test coverage. It engages all engineers in responsibility for application testing. It allows dedicated Test Engineers to focus on system-level functionality, deployment, load, and user experience.
