
Archive for the ‘Continuous Integration’ Category

Trust Your Instruments

February 4, 2017

Lesson Learned: Calibrate, monitor, and trust your instruments to keep on track.

I love to hear and tell stories to illustrate lessons learned. They convey the emotional impact of the lesson at the time it was learned. This is a tale of two opposing true stories about small-craft navigation that actually happened to me. They take place a few years apart but share a number of similarities that make the point of the lesson learned.

Foggy Fail

In the summer of 1996 I was invited on a night-time striper fishing trip inside the mouth of the Merrimack River. The skipper was the brother of a colleague, and he had a few friends and relatives on the trip. This was a beautiful 36′ twin-screw motor yacht that was only a year old. It was a very calm, serene evening trip destined to anchor in the area inside the Ben Butler’s Toothpick rocks opposite Joppa Flats in Newburyport, MA. We dropped anchor in a two-point moor to keep our position fixed. I don’t recall what we were using as bait that night, but I know it was not artificial lures. We fished well into the night with little success, and no keepers, but it was a fun night. Shortly after midnight the fog rolled in quickly from the sea, engulfing the boat. It was a weeknight, getting late, and a bit cooler. So we pulled up anchor and attempted to head back upriver to the slip.

This boat was equipped with radar, and we could see the outline of the mainland and other larger objects, but we couldn’t see the sandbars or markers on the radar, and we couldn’t see the shore. The tide was low and going lower, but we could see and sense the flow of the river. The skipper pulled up anchor and started heading upriver. We could now see the first marker and it looked like we were heading in the right direction. A little further on we saw some other boats anchored that we had to move around. And then the boat touched a sandbar. The skipper motored over that one and back into the channel. We thought things were going well, but the appearance of the channel didn’t match the radar. The skipper started to correct to get back on track with the radar, but had to motor over another sandbar. This didn’t make sense. We should have been on track and in the main channel, but as we moved forward we kept getting off course. The skipper debated backtracking downriver to get the channel and radar back in sync before going further upriver, but decided his instinct was better than the instruments. Big mistake. As it turns out, we were getting further up a tidal channel in Joppa Flats with the tide going out, and we ran hard aground on the sand flats. We hit hard, and you just knew there was damage to the underside. The skipper radioed for the tow boat to come help. It was an off-hours call, so it would take time for them to get there. Before we knew it the boat listed to the side and we were high and dry on the sand. The tow boat could not help except to taxi the guests back to the dock. The skipper had to wait until dawn for the fog to lift and the tide to come back in and float him off. It cost the skipper several thousand dollars that night in repairs and the tow back to the dock.

Foggy Success

In the summer of 2002 I took my Dad out for a ride in my new 21′ cuddy cabin I/O boat to the Isles of Shoals off Portsmouth, NH. It was a calm Saturday morning early in the season, when the water was still a bit cold. I set the Isles of Shoals waypoint in my dash-mounted GPS and we headed down river. It was nice and sunny, and we were up on a nice plane at 22 knots. We left the mouth of the Merrimack and veered off towards the Isles of Shoals. About 10 minutes into the ride we noticed what looked like fog on the horizon. It was moving towards us faster than I expected, and I could feel a westerly breeze picking up. As the fog approached I noticed my Dad looking around my dashboard curiously. My Dad is an old-school electrical engineer, a master of the slide rule, map calipers and parallels, and calibrating an oil-filled compass. So I had to ask him what he was looking for. He asked where my compass was. I pointed down to my foot locker and said it was in there. He looked up at me, shocked, as the fog began to engulf us and asked how the hell we were going to navigate now. I dropped off the plane to about 8 knots and continued on track.

I pointed to the GPS on the dashboard and said I was navigating by GPS. It tells me direction, constantly correcting for the westerly breeze nudging us to port. I told him I knew how fast we were going, where the nun buoy we were targeting at the entrance to the harbor was, and how long it would take us at this rate. He said, “Oh, and what if that fails?” I pulled the backup waterproof handheld GPS with fresh batteries out of my pocket and said, “Then we use this.” He looked at me with guarded approval and said, “OK, if you say so.” Clearly my Dad had not used GPS for navigation before. At this point we had about 50′ of visibility. As we approached the targeted buoy, I slowed right down to an idle and told him to look off the bow; we should see the buoy in a few seconds. The red nun then popped out of the fog. And my Dad said, “Son of a gun, it worked!” My Dad and I used very different tools to navigate nautical waters, but we each trusted our instruments and were able to keep on track with where we were going and when we would get there, even when we couldn’t visually see the target along the way.

Wisdom

Both these stories are memorable. They both share the lesson that you need good instruments that give you the visibility you need to keep on track, and that you must trust those instruments to guide you when you can’t see the target.

The same can be applied to the SaaS industry. We have many ways to measure and calibrate our instruments for applications, their containers and infrastructure, and their tests. We must choose our measurement tools wisely, and calibrate, monitor, and trust our instruments to keep on track.



Agile Defect Prevention | Part 2

June 24, 2012

Continuation of https://davidjellison.wordpress.com/2011/09/23/agile-defect-prevention/

So you have your escaping defects under control and your team is looking to optimize further to become an elite agile team…what more can you do towards defect prevention? Just as WIP Defects (‘Work In Process’ Defects) are an antidote to Defects, TDD (Test Driven Development) is an antidote to WIP Defects. WIP Defects, although a great way to contain faults before they reach customers’ hands, are still an anti-pattern to elite agile teams. These teams are test-infected, write more tests than application code, and catch most problems while authoring the application code.

Fine Craftsmanship

In an elite agile team, everyone writes and executes tests. Product Owners carefully craft use cases with well-thought-through acceptance criteria, and regularly validate both that the working application behaves as intended and that it is a delightful experience. Software Engineers write unit tests to assure solid code craftsmanship, write performance tests in the sandbox to assure efficient basic application behavior, and manually inspect their user experience like a fine cabinetmaker inspecting the glide action of the drawers in the cabinet he is building. Quality Engineers work closely with the Product Owner to understand the intent of the features, and with the Software Engineer to understand the design and share test approaches, authoring and running tests along the way. Quality Engineers peer-inspect tests with Software Engineers so that both are intimately familiar with the tests. Both Software and Quality Engineers regularly run unit and regression tests, and validate both performance and user experience in each context the application runs in.

Consider writing failing test cases instead of WIP Defects as a start down this path. I have found that Quality Engineers who are not used to developing tests early and conducting peer inspections of tests with Software Engineers are initially uncomfortable with the idea of not writing defects. You do need to document the defect somehow, but the best documentation is the regression test, not a defect report. This fosters correcting the problem promptly as a failing test instead of a WIP Defect hand-off or scheduling an escaping Defect fix. Get into the habit of running regression tests for the feature being worked on often throughout the day in the sandbox environment, with visibility by the whole team, as part of the CI (Continuous Integration) cycle.
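As a minimal sketch of this practice in a Rails Test::Unit suite (the test class, the Order model, and the discount behavior are hypothetical illustrations, not from a real project), the defect is documented as a failing regression test rather than a defect report:

    require 'test_helper'

    # Hypothetical example: the defect "loyalty discount is not applied to renewal
    # orders" is documented as a failing regression test instead of a WIP Defect.
    # It fails in the sandbox CI cycle all day, in plain view of the team, and
    # passes once the fix lands -- then stays in the suite as regression coverage.
    class RenewalDiscountTest < ActiveSupport::TestCase
      def test_renewal_orders_receive_the_loyalty_discount
        order = Order.new(:customer_type => :renewal, :subtotal => 100.00)
        assert_in_delta 90.00, order.total, 0.01,
          "loyalty discount was not applied to renewal order"
      end
    end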

Measure cycle time and efficiency in defect prevention over defect counts. You may still need to track defect counts, fix/find rates, etc. (especially in a larger organization); however, in an elite agile team these defect counts are so small that everyone in the team is aware of them. Defect prevention and correction are part of the cycle time to deliver the changes. Take the time to be clear on acceptance criteria, compatibility with standards and architecture, and stability of the code changes. Collaborate within the team such that progress on code design and development, and the tests to validate them, is well known. Elite agile teams write and execute, early and often, the tests that assure working software remains working.
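For illustration only (the defect records and timestamps below are made up, and no particular tracker’s field names are implied), both of these metrics reduce to simple arithmetic over the team’s defect data:

    # Hypothetical defect records: :found and :fixed are timestamps,
    # :fixed is nil if the defect is still open.
    defects = [
      { :found => Time.utc(2012, 6, 1), :fixed => Time.utc(2012, 6, 2) },
      { :found => Time.utc(2012, 6, 5), :fixed => Time.utc(2012, 6, 5) },
      { :found => Time.utc(2012, 6, 8), :fixed => nil }
    ]

    fixed = defects.select { |d| d[:fixed] }

    # Fix/find rate: fraction of the defects found in the period that are already fixed.
    fix_find_rate = fixed.size.to_f / defects.size

    # Defect cycle time: average days from discovery to fix, over the fixed defects.
    avg_cycle_days = fixed.map { |d| (d[:fixed] - d[:found]) / 86_400 }.inject(:+) / fixed.size

    puts "fix/find rate: %.2f, average cycle time: %.1f days" % [fix_find_rate, avg_cycle_days]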

Even in elite agile teams there are Defects that are found late in the iteration, or that are a larger problem than can be fixed at the time of discovery, prior to declaring work done. This warrants creating an escaping Defect. The elite agile team carefully scrutinizes each escaping Defect for the defect tolerance of the business and the impact to the customer before opting to let the escaping defect into the field. The intent of the agile life cycle is to add business value often, and in small enough iterations to foster continuous feedback, so it might be more important to deliver the change with the defect than to delay delivery. An escaping Defect may survive past an iteration and still be held back from release into the field until it is fixed in a later iteration.

Elite agile teams continually test with constant feedback of pass/fail results. No failure is left unexamined; each problem is understood as it is introduced. Problems are corrected promptly and not released into the field where customers would be exposed to them. The whole team is aware of test status through continuous integration practices. There are very few, if any, known defects in the field.

Agile Defect Prevention

September 23, 2011

I recall a day in the late ’90s, assessing readiness for deployment of an application at Kodak after a several-month-long release cycle, with 3,000 deferred defects. WOW! I can’t believe that was acceptable, but in long waterfall release cycles it was the norm at the time. How can you manage defects like this? Today, this is unacceptable. The idea of “deferred defects” has always bothered me in software development. So, what can we do about this?

Along come agile software development cycles, where a defect backlog is an anti-pattern (No Bugs). The idea is that through continuous integration, unit tests, early inspection, and regression tests, your team finds problems as they are introduced. This is great in theory, but how do we manage the inevitable defects that we can’t get to and that are an acceptable risk to meeting business requirements, and those defects that will come in from the field as support requests to fix? My approach to managing this is to focus on “defect prevention” as opposed to “defect tracking.”

Defect prevention, really? …is that possible? Imagine counting defects on your two hands. How can we accomplish this? Efficient agile organizations focus on defect prevention rather than downstream defect discovery. A culture of defect prevention includes separating “work in process” defects (WIP Defects) from “escaping” defects (Defects) to minimize the management of defects that escape beyond the sprint the features are developed in. This results in a much smaller defect backlog to manage and dramatically increased customer satisfaction. Agile is not just about releasing more often, but also about releasing complete and tested features. So, we need to treat defects found in development as actionable sub-tasks of the feature work item. If we treat these WIP Defects as sub-tasks and acceptance criteria of completing the development tasks, then we are not introducing them to the field and not adding to the project team backlog as technical debt.

Escaping defects should then be treated as ranked backlog work items, along with other project work items. They should be prioritized high enough to resolve them within the next sprint or two and not accumulate a growing backlog. Watch the defect backlog as part of the project metrics. A growing defect backlog is a key indicator that the team is taking on more new work than it can handle. It may also be a key indicator that the team is operating as a “mini-waterfall” project rather than an agile project, requiring more collaboration between Development and Quality Engineers and earlier testing. Drop the number of new items the team works on until the escaping defects are well managed or eliminated.

When a WIP Defect must exist past the completion of the parent development task, promote it to a Defect and place it in the backlog in rank order with other work. However, whenever possible the team should heavily scrutinize this practice and opt to hold delivery of the feature until the WIP Defects are resolved. Also, a Defect in the backlog can be demoted and attached to an active development task to include it in the acceptance criteria for that task.

At Constant Contact, we now have our defects in the same tool (Jira/GreenHopper) in which we manage new feature work, so that defects are in the same project and iteration backlogs. This provides greater visibility both to the product owner ranking the work and to the team implementing the work.

https://davidjellison.wordpress.com/2012/06/24/agile-defect-prevention-part-2/

Continuous Integration on our Highways

November 14, 2010

What if we could apply the zero-defects vision of highly efficient Continuous Integration to our highways? We could then travel our highways at the full speed limit, at a sustained pace, during rush hour. We would not have to expand the capacity of our highways nor extend our travel time with travel debt that eats into our private lives. Well, it appears Google is taking a crack at it (http://bit.ly/90RF3Q).

Functional Continuous Integration

Driving agile practices over the last 4 years in 3 SaaS companies, it is quite apparent to me that continuous integration (CI) requires both unit test build failure verification and regular deployed functional regression test verification to be really agile. Yes, you need all the SDL (software development lifecycle) practices to manage building the right product and completing work items, but quality of work cannot be compromised in the name of speed. Agile is all about completing small amounts of working (deliverable) software and iterating on continuous feedback. It also includes confidence that you are delivering tested software without regression defects (not breaking what already worked) and confidence that future work will not break what was just delivered. Any remaining tests not completed in the scope of work items are considered technical debt. This technical debt is postponed work that results in missed defects.

Quality confidence is achieved by routinely running automated tests at both the code and system levels. Regardless of the agile practices used, design, development, and test are interwoven and require collaboration of development and test resources in the delivery team. I believe this is the secret sauce that differentiates a waterfall-ish team from an agile-ish team.

  • The waterfall-ish team has the mind-set of develop application code first and develop automation test code later, frequently not including testing in the work item scope.
  • The agile-ish team has the mind-set of developing both unit test and functional test code along with application code, either prior to application code (Test Driven Development) or just after application code, but within the scope of the work item. This includes meeting work item (e.g. user story) acceptance criteria.

Functional Continuous Integration (FCI) is continuously creating and updating automated regression tests. It must be the expectation for POs when planning work commitments, for Executives when assessing progress reporting, for Developers including collaboration with QE in their estimating, and for Quality Engineers in planning and completing test work. Infrastructure for FCI needs to include integration of automated tests with a test management and reporting database, and needs to be capable of running unattended. I’ve used Selenium RC with both CruiseControl (with Rails test scripts) and Hudson (with Java test scripts) to run build-time deploys and unattended test runs. These CI applications can run with multiple client machines as slaves. This allows CI jobs to run each test suite on a different client machine simultaneously to speed up the test duration.
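As a rough sketch of that fan-out (the host names and suite paths are hypothetical, and it assumes each client machine is already running selenium-server.jar), a small Ruby driver can kick off every suite against a different Selenium RC client at the same time:

    #!/usr/bin/env ruby
    # Sketch: run each regression suite against a different Selenium RC client
    # machine simultaneously. Host names and suite paths are hypothetical; each
    # client is assumed to be running selenium-server.jar on its default port.
    SUITES = {
      "rc-client-01.example.com" => "test/selenium/login_suite.rb",
      "rc-client-02.example.com" => "test/selenium/billing_suite.rb",
      "rc-client-03.example.com" => "test/selenium/reporting_suite.rb"
    }

    threads = SUITES.map do |host, suite|
      Thread.new do
        # Each suite reads SELENIUM_RC_HOST to decide which RC server to drive.
        passed = system({ "SELENIUM_RC_HOST" => host }, "ruby", "-Itest", suite)
        puts "#{suite} on #{host}: #{passed ? 'PASS' : 'FAIL'}"
      end
    end
    threads.each(&:join)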

The result is that test failures due to problems that break existing code, introduced with changes or new application code, are caught very early and corrected. Further, if these tests are run in the Developer’s sandbox and corrected prior to check-in, there is no defect created, which results in significantly reducing defect counts for the agile team.

Continuous Integration: Selenium RC vs. xUnit tests

December 26, 2008

Since April 2008 I’ve been a Consultant/QE Architect at Sermo in Cambridge, MA, USA, on Ruby on Rails agile teams. The first team started as an experiment to prove that rapid development of Rails applications, composited with the JBoss-based Java core community, could work seamlessly. We have continued to successfully add several more Rails applications with this approach. We are now undergoing a major rewrite of the core community and its applications entirely in Ruby on Rails. This new design includes formal SOA interfaces. We are continuously refining our scrum lifecycle as well as our test automation approaches.

We have been focusing on continuous integration with CruiseControl and comprehensive regression testing, including Test::Unit and Selenium tests, an automated test plan generator, and ci_reporter test reports, that run with every SVN commit. I also added nightly and weekly batch runs for runtime and longer-running tests. The big issue we have been wrestling with is…at what level {unit, functional, integration, runtime, browser DOM, load} should acceptance/regression tests be created? This question led to some interesting and healthy debate between development and test staff.
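For reference, the ci_reporter wiring is only a couple of lines in the Rakefile (the comment shows the rake invocation the CI server uses; the rest of the project layout is assumed):

    # Rakefile (excerpt)
    require 'rubygems'
    require 'ci/reporter/rake/test_unit'  # adds the ci:setup:testunit task

    # CruiseControl (or a nightly/weekly batch job) invokes:
    #   rake ci:setup:testunit test
    # ci_reporter then writes JUnit-style XML results under test/reports,
    # which the CI server publishes in its build report.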

To summarize the testing levels…

  • Rails Test::Unit test levels:
    • unit test coverage is important to verify the methods and their paths
    • functional tests validate that controllers operate as intended, including environment and database configuration (see the sketch after this list)
    • integration tests validate systemic operations that cross controllers and render pages properly, irrespective of the browser, including AJAX responses at page load
  • Runtime tests are run on a deployed fleet, either headless or in a simulated browser DOM
  • Selenium tests exercise interactive AJAX and JavaScript in the page, requiring a separate client machine running the selenium-server.jar (Selenium RC for Rails) or the webrat gem (which includes selenium-server.jar and the webrat DSL)
  • JMeter for load tests (including performance counters)
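For example, a functional (controller-level) test in the style described above might look like the following; the PostsController and its index action are hypothetical stand-ins, not code from Sermo:

    require 'test_helper'

    # Hypothetical functional test: it drives the controller directly and checks
    # the rendered response without a browser, so it runs in seconds as part of
    # the automated build validation.
    class PostsControllerTest < ActionController::TestCase
      def test_index_renders_the_list_of_posts
        get :index
        assert_response :success
        assert_template 'index'
        assert_not_nil assigns(:posts)
      end
    end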

We found that the most important part of a story is the list of acceptance tests. This list shapes and clearly defines the expectations of the story and what makes it complete. Many times we stub out tests for the sprint’s stories in test suites at the beginning of the sprint, and they can be implemented by either a test or development engineer. The biggest problem we found was that adding too many Selenium tests made the automated build validation time increase dramatically (4 to 10 times that of integration tests), made it difficult for developers to run regression tests prior to source code commit, and made the suite more fragile as the GUI implementation changed. We had to re-factor many tests from Selenium to Test::Unit functional or integration tests to improve test performance and reliability.

The key to successful continuous integration is to test continuously, either with TDD practices or TIA (test immediately afterward), as part of accepting stories. This includes all code implemented for the story and any additional tests to cover the acceptance criteria. Testing at the lowest level feasible for code coverage is important for test efficiency. This may require the creation of test fixtures, mocking response expectations, and data factories.
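As a small sketch of that last point (it assumes the mocha gem for mocking; the RatingService and QuoteCalculator names are hypothetical), a unit test can stay at the lowest level by mocking the response of a collaborating service instead of calling it:

    require 'test_helper'
    require 'mocha'  # assumes the mocha gem; flexmock or RSpec mocks work similarly

    # Hypothetical unit test kept at the lowest feasible level: the remote rating
    # service is mocked, so no network call or fixture data is needed.
    class QuoteCalculatorTest < ActiveSupport::TestCase
      def test_quote_uses_the_rate_returned_by_the_rating_service
        RatingService.expects(:current_rate).with(:standard).returns(0.05)
        quote = QuoteCalculator.new(1_000.00).quote(:standard)
        assert_in_delta 50.00, quote, 0.001
      end
    end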

Testing should include both happy path and negative tests (exception handling). Development Engineers need to have a sense of ownership for regression tests. Quality Engineers need to have a sense of test coverage completeness. Together the scrum team needs to hold themselves and each other accountable for not leaving test coverage technical debt beyond story acceptance. Plan this test engineering time into the story. It may mean that your velocity is a little lower than it would be otherwise, but the overall sustainable stride is greater and you really do catch problems prior to (or at the time of) committing changes.
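A matching negative test for the same hypothetical calculator keeps the exception path covered alongside the happy path:

    require 'test_helper'

    # Negative (exception-handling) test for the hypothetical QuoteCalculator above:
    # invalid input must raise rather than silently produce a bad quote.
    class QuoteCalculatorNegativeTest < ActiveSupport::TestCase
      def test_quote_rejects_a_negative_principal
        assert_raise(ArgumentError) { QuoteCalculator.new(-1.00).quote(:standard) }
      end
    end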

There is a place for automating at the GUI with Selenium or a comparable HTML element or application control automation tool. Limit the use of these tools to testing AJAX or behavior that requires interactive JavaScript to render the page, workflows between systems, data-driven use cases, and cross-browser testing.
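A sketch of a test that genuinely belongs at the Selenium level follows; it uses the selenium-client gem against a local Rails server, and the page URL, locators, and jQuery assumption are all hypothetical:

    require 'rubygems'
    require 'test/unit'
    require 'selenium/client'  # the selenium-client gem (Selenium RC Ruby driver)

    # Hypothetical GUI-level test kept at the Selenium layer only because the
    # behavior needs interactive JavaScript: a live search that updates results
    # over AJAX. Assumes selenium-server.jar on localhost:4444 and a Rails app
    # on localhost:3000 that uses jQuery.
    class LiveSearchSeleniumTest < Test::Unit::TestCase
      def setup
        @browser = Selenium::Client::Driver.new(
          :host => "localhost", :port => 4444,
          :browser => "*firefox", :url => "http://localhost:3000",
          :timeout_in_seconds => 60)
        @browser.start_new_browser_session
      end

      def teardown
        @browser.close_current_browser_session
      end

      def test_live_search_updates_results_via_ajax
        @browser.open "/search"
        @browser.type "id=query", "selenium"
        # Wait (up to 10 seconds, in milliseconds) for the AJAX round trip to finish.
        @browser.wait_for_condition(
          "selenium.browserbot.getCurrentWindow().jQuery.active == 0", "10000")
        assert @browser.is_text_present("Results for 'selenium'")
      end
    end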

What we found is that ideally…

1. Test Engineers embedded in a development scrum team should have the ability to:

  • read and exercise application code
  • author unit test cases
  • create and work with test fixtures, test mocks and test data factories
  • assess adequate test coverage for development stories

2. Test Engineers chartered with testing external to the development teams should be able to:

  • deploy to fleets (fully automated is preferred)
  • read and understand mocked interfaces (to exercise actual interfaces)
  • author and exercise run-time tests (cover GUI and API workflows across the system)
  • author and exercise performance/load tests

Continuous Integration assures a solid application code base with full test coverage. It engages all engineers in responsibility for application testing. It allows dedicated Test Engineers to focus on system-level functionality, deployment, load, and user experience.
