Well, it’s 2013 and once again a new tax season starts in the US. Everyone is pulling together their financial documents and summarizing their income, expenses, and tax write-offs. And many will be using tax software to assist with their tax reporting for 2012.
When it comes to financial software, Intuit is one of the leaders in the space. From TurboTax to QuickBooks, its products have been a part of our lives for many years. Intuit offers financial products that assist consumers and companies with taxes, finances, bookkeeping, accounting, payroll, and many other tasks.
I introduce to you John Ruberto, who is a Test Manager for Intuit. John has been developing software in a variety of roles for over 25 years. Currently, he is the Quality Leader for QuickBooks Online, a web application that helps small business owners save time managing their finances. Before joining Intuit, John was the test manager for element and network management systems at Alcatel-Lucent. He has also held development manager roles for Phoenix Technologies, and was a software engineer for Boeing.
For the past 13 years, John has held test & quality leadership roles. He is known for developing the careers of quality professionals, shaping quality strategy, and bringing innovation to testing organizations.
John will now take your questions:
Question: What measures have you taken to improve things (quality, effectiveness of QA)? Did you find any process-related things that needed to be changed? If yes, how did you manage that change? And how do you tackle differences between the business and the development/QA team (on occasions where QA has its own opinion/stance/standards/policies)? Lalitkumar Bhamare
John: Each year, we identify one or two big improvements that we would like to make. For example, one year it was to make a large improvement in test automation. The next year, the big improvement was to accelerate our delivery pace. During annual planning, we work directly with the business stakeholders to include our goals right alongside the product goals. Having our testing goals on the same page as the product goals really helps us allocate people and time to achieve them. The business folks understand the value of quality & testing, especially when they see high quality treated as just as important as the features on the roadmap.
One thing that I’ve learned: don’t use standards and policies as the justification for innovating in quality & testing. Those arguments do not resonate with our business folks. What does work, in our context, is showing the business impact of poor quality and the business benefit of faster release cycles. Stating our goals in the language of the business stakeholders helps secure their support.
Question: For your website and online offerings, can you share what quality assurance and validation practices take place after deployment to production? Seth Eliot, Senior Knowledge Engineer in Test, Microsoft
John: During the deployment and immediately afterwards, we participate in deployment validation tests. For the few days after deployment, we monitor the customer feedback channels to make sure we got the release right. These channels are primarily: log files, in-product customer ratings and suggestions, and speaking with our customer care agents.
For a new feature, we don’t consider the testing & development complete until our customers give the feature a four-star rating. The quality & development engineers read the feedback, talk with customers, and analyze their feedback to look for ways to improve.
Question: How many teams are involved in performance testing a product like QuickBooks Online, and how are teams’ responsibilities segregated? What are the challenges you face in managing and coordinating the efforts of those teams? Sam Benihya, Director of IT, NRG Global, Inc., Los Angeles, CA
John: QuickBooks Online is an application that is built using several central services for functions like identity management and banking data aggregation. Each of these teams has its own performance test team. My team tests the functionality and customer scenarios that are unique to QuickBooks Online. Obviously, the various performance test teams need to collaborate, whether to coordinate testing activities on the shared servers or to diagnose and isolate issues.
We have a performance test community, where all of the performance engineers share best practices, along with a mailing list and other mechanisms to coordinate work.
Question: How do you formulate and update the testing criteria (requirements, objectives, or “desirements”) for performance? Please describe what a performance requirement looks like at Intuit, the cadence/frequency for updating and staying current with performance objectives, and any challenges you’ve overcome. Mark Tomlinson, Independent Performance Consultant
John: We definitely consider performance to be a “non-negotiable” requirement when it comes to planning & release decisions. Some of the other non-negotiable requirements are in security, privacy, availability, and disaster recovery.
We execute a full performance test suite with every major release. We also monitor the ongoing performance of production systems using synthetic transactions, both to make sure that our service is working well and to confirm that our test results match the reality that our customers face.
One of the challenges in performance testing is replicating the conditions that our customers face so that our results are more relevant. One example is network latency between the load generators and the test servers. To address this, we use load generators in the public cloud and test services that allow executing scripts from various geographic regions.
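To make the idea concrete, here is a minimal sketch of a synthetic transaction, written in modern Java (java.net.http, Java 11+). The endpoint and the reporting are illustrative assumptions, not Intuit’s actual monitoring setup; the same script, run from load generators in different geographic regions, would show how latency varies with the customer’s location.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SyntheticTransaction {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; in practice this would be a key customer workflow.
        URI target = URI.create("https://example.com/health");

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();

        // Time the full round trip, as a customer in this region would experience it.
        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // A real monitor would report this to an alerting system; we just print it.
        System.out.printf("status=%d latencyMs=%d%n",
                response.statusCode(), elapsedMs);
    }
}
```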
Question: When you are hiring testers, what specific traits do you look for in interviews? What skills do you look for on a person’s resume, and how do you evaluate those skills during interviews? Rex Black, President of RBCS, Inc.
John: First and foremost, we search for people who have the technical and domain knowledge required for the particular role. For example, we might be looking for a test automation engineer with strong Java skills. We will ask the person to describe past projects in detail, including why he or she took a particular approach, the alternatives considered, and the lessons learned after the fact. Candidates will likely be given some code and asked to design a test for that code.
Team fit is just as important as technical skills, so candidates should expect several behavioral interview questions designed around our operating values. These values include: teamwork, customer focus, personal growth and development, innovation, drive for results, and learning from successes and failures.
For example, when I interview a candidate for a testing job, one question that I ask is “tell me about a time when one of your bugs was rejected by the development team.” During that story, I’m looking for signs of teamwork and persistence to deliver great results. I’m looking for a story where the “rejection” starts a conversation to understand the different points of view.
Question: Do you have any mobile testing practices? Do you have a separate mobile testing team or do you combine desktop/web testing practices with your mobile testing? Do you combine your mobile testing projects with your desktop and web application projects? Do you have a software test team with specific mobile testing skill sets? Jean Ann Harrison – Independent Consultant
John: Our products have mobile companion apps, and the development & quality teams are organized by operating system (iOS and Android). These teams are grouped by technology to allow for specialization. We’ve also developed a couple of test automation frameworks for mobile and released them as open source (IMAT: Intuit Mobile Automation Toolkit, and MOET: Mobile Exploratory Testing).
The mobile apps tend to be companion apps that work with our web and desktop products. We have frequent “test jams” to concentrate on interoperability between the mobile and desktop/web products. These test jams are fun and a great way to get the test teams to interact.
Question: You have held a variety of leadership roles across different industries. How would you say your leadership style has evolved over the years? Have you found you needed to shift your style for different roles or industries? What have you found to be different about leading a group of testers versus a group of programmers (or any other group of people)? Do you have any tips for up-and-coming test leaders? Selena Delesie – Consulting Software Tester, Agile Coach
John: You are right, each company has its own culture, including expectations of its leaders. Elements of leadership style that work in one place may not be successful in another. It’s important to learn the styles that are successful, and either adapt or find a place to work that is more in line with your style. For a new test leader, either new to a company or new to the role, I would give the following advice:
- Take the leadership development classes that your company offers. These classes offer three benefits: learning leadership skills, learning the leadership style expected in your company, and helping you build a network of other leaders in the company (your classmates).
- Find several respected leaders and buy them lunch. Ask them for their advice and to be a mentor to you. Even though they may be busy, most leaders that I know are happy to help those who ask.
- Likewise, learn what is important to the business stakeholders in your company. Try to get beyond “bug free,” and learn what they see as the risks & opportunities of software quality in your domain. As a test leader, you should be a valued consultant for managing business risk, in addition to executing the test plan and reporting bugs.
Question: I read your blog post and PNSQC 2010 presentation proceedings on Customer-Driven Quality. Very impressive! How long has this framework been in place? Can you share any “aha moments” that resulted from implementing this program? Anna Royzman, QA Manager, Liquidnet Holdings, Inc., New York
John: Thank you for the kind words. Many of the practices described in that paper were in place from the founding of Intuit. In 1993, there was a full company off-site where everyone listed out our core operating values. One of the original 11 values was “Customers Define Quality.” Our best practice then and now is to get close to our customers and really understand what they do.
One “aha” moment for me came in the early days of QuickBooks Online. We started to hear from customers that performance was pretty slow. I called a number of these customers to understand what they meant by “slow,” and to get a feel for where they drew the lines between “slow,” “good enough,” and “fast.” I had many good conversations, but noticed one thing: the customers were all on the east coast of the U.S. Our data center, and our test infrastructure, were on the west coast. Aha! We needed to start testing from the east coast as well. (We now test from all over the world.)
If you’d like to hear more about this framework, I’ll be presenting it at the STPCon in October 2013 in Phoenix, AZ.
Question: In terms of bringing innovation to testing organizations, can you describe any of the innovations you’ve managed to introduce at Intuit? I’m interested in how open such a financial software business is to more modern testing approaches (e.g. exploratory testing). Lee Hawkins, Quality Architect, Quest Software – (now part of Dell)
John: Test and quality is an area with tons of opportunities for innovation. In the past couple of years, we’ve tried a variety of different practices and technologies to help improve quality or velocity. Some have worked, and really helped, and others didn’t pan out. It’s important to keep trying. Here are a few of these experiments:
- New test automation frameworks
- Record/playback technology for AppServers
- Log file analysis tools & data aggregation
- Mapping of code changes to test cases
- Static analysis
- Model-based testing
- Text analytics for customer feedback
- Exploratory testing
Obviously, we all like the successes, but the failures aren’t too bad either. I like to think that a 50% success rate is fine; a higher rate means that we aren’t trying hard enough. If something doesn’t work out, it’s no big deal. We stop the experiment, take the learning, and move on to the next innovation.
Question: There are many different views on test automation, its effectiveness as well as ROI. In your practical experience, what’s your take on test automation? At Intuit what measures do you follow to ensure desired results are achieved from test automation? Sudhir Patil, Director, Qualitia Software, India
John: In my view, automation and manual testing are both vital. Automation for speed, efficiency, and repeatability. Manual testing to ensure correctness and usability.
I look at automation in three categories: unit tests, functional tests, and system tests. The unit tests verify that the code implemented meets (and continues to meet) the developer’s intention. Code coverage (statement, branch, basis path, etc.) is the key measure. (Note: I’m fully cognizant of the limitations of coverage as a metric, but in reality coverage is the metric we tend to use.) For unit tests, higher coverage tends to be better.
Functional tests tend to be API-driven, and they test the sub-systems involved in implementing those functions. We use a combination of functional/API coverage along with code coverage for these. Testing of a function isn’t considered complete until its API has full coverage.
Lastly, system-level tests, which tend to be UI-driven automated tests, exercise the full system, but they tend to be fragile and expensive to triage when they fail. So we try to select the most valuable tests, using risk-based methods (weighing the probability of failure against the severity if a failure is present). Customer usage analytics is a key data source for helping us decide which UI tests to automate. We have a suite of the top 50 workflows, which represent the most-used features.
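To illustrate the risk-based selection John describes, here is a minimal sketch in Java that scores each candidate workflow as probability of failure times severity, then ranks the candidates for automation. The workflow names and numbers are invented for the example; in practice the probabilities would come from defect history and the weights from customer analytics.

```java
import java.util.Comparator;
import java.util.List;

public class RiskBasedSelection {
    // Each candidate workflow carries an estimated probability of failure
    // and a severity (impact if it fails); risk = probability x severity.
    record Workflow(String name, double failureProbability, double severity) {
        double riskScore() { return failureProbability * severity; }
    }

    public static void main(String[] args) {
        // Illustrative data only.
        List<Workflow> candidates = List.of(
                new Workflow("create-invoice", 0.30, 9.0),
                new Workflow("reconcile-bank-account", 0.20, 8.0),
                new Workflow("export-report-pdf", 0.10, 4.0));

        // Automate the highest-risk workflows first.
        candidates.stream()
                .sorted(Comparator.comparingDouble(Workflow::riskScore).reversed())
                .forEach(w -> System.out.printf("%-25s risk=%.1f%n",
                        w.name(), w.riskScore()));
    }
}
```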
Question: How do you automate testing? What tools do you use and what are your challenges? Elfriede Dustin
John: One key aspect of our context, as it relates to test automation, is that our quality team is staffed mostly by software engineers with titles like “Software Engineer in Quality.” The team is skilled in programming, and they have developed knowledge in our domain (small business financial management). Having software engineers who know the business domain strongly influenced our choice of tools and strategy.
The key principles we used to choose the tools were: keep the tests as close to the underlying code as possible; choose the best tools to maximize long-term productivity; and build flexibility into the tool set. The tools that we are currently using include JUnit for unit tests and API tests, TestNG, and WebDriver for UI tests. We also developed two test frameworks for mobile tests (MOET & IMAT, mentioned earlier).
We did experiment with a couple of declarative (keyword-driven) frameworks, which held the promise of increasing productivity. Test case creation was faster, but we found that the engineers were able to create more robust tests by working in the underlying language. For a Java engineer, the keyword framework just got in the way. Using the underlying language also helped with collaboration with the developers. Developers can (and do) write tests, and they prefer seeing the Java code when a test reports a failure.
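For a flavor of what a test in the underlying language looks like, here is a minimal JUnit 4 + WebDriver sketch. The URL and locator are hypothetical; the point is that a Java engineer can read, extend, and debug this directly, with no keyword layer in between.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertTrue;

public class HomePageTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        // Assumes a local chromedriver is available on the PATH.
        driver = new ChromeDriver();
    }

    @Test
    public void signInLinkIsVisibleOnHomePage() {
        driver.get("https://example.com");  // hypothetical application URL
        assertTrue(driver.findElement(By.linkText("Sign In")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```

When a test like this fails, the stack trace points straight into plain Java, which is what the developers prefer to see.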
You will notice that all of the tools we chose are open source. We were not constrained by budget, but we were impressed by the quality of the open source tools, the depth of support from their communities, and the speed at which the open source tools support advancements in technology (especially browser support).
Other teams at Intuit do use commercial tools. They have a different set of constraints (people & technology) than our team.
Question: How do you measure the effectiveness and efficiency of your testing processes, and how do you institute improvements? How do you measure stakeholder satisfaction with the work your group does? Rex Black, President of RBCS, Inc.
John: The primary measure of effectiveness of the quality team is our customer feedback. We call bugs that are found by our customers “escapes,” and we use each one as a learning opportunity for improving our testing practices. We also use a metric called Net Promoter to gauge our customers’ satisfaction with our software.
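For readers unfamiliar with the metric, Net Promoter is computed from a 0-10 “would you recommend us?” survey: 9-10 responses are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors (ranging from -100 to +100). A minimal sketch in Java, with made-up sample responses:

```java
public class NetPromoterScore {
    // Standard NPS: %promoters (ratings 9-10) minus %detractors (ratings 0-6).
    static double nps(int[] ratings) {
        int promoters = 0, detractors = 0;
        for (int r : ratings) {
            if (r >= 9) promoters++;
            else if (r <= 6) detractors++;
        }
        return 100.0 * (promoters - detractors) / ratings.length;
    }

    public static void main(String[] args) {
        int[] sample = {10, 9, 9, 8, 7, 6, 3, 10, 9, 5}; // illustrative survey data
        System.out.printf("NPS = %.0f%n", nps(sample));   // prints NPS = 20
    }
}
```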
We perform root cause analysis (RCA) on bugs that escaped, to find opportunities to improve. The RCA examines how a bug was introduced in addition to how it escaped our quality processes.
Question: As the QA profession has evolved over the past 20+ years, have QA managers seen benefits and ROI for corporations in outsourcing QA jobs? Valerie, CEO, RSQE, Inc., Raleigh, NC
John: My first exposure to outsourced testing was 15 years ago, at another company. For that company, the primary purpose was cost reduction – the ability to hire a lot of people to execute tests at a lower cost than badged employees. Now at Intuit, we tend to look to outsource providers for the expertise they can deliver, and as a strategic component of our test strategy.
Question: What lifecycle do you use? Are you transitioning to agile/have you transitioned? Can you tell us the story of that and what happened to your test team? (That might be more than one column’s worth of answers right there!) Johanna Rothman
John: We use Scrum in my team. Each product team at Intuit has the flexibility to use the life-cycle model that works best for its unique needs and customer base. The company provides a lot of support for agile practices: central licenses for tools, an agile training/advocacy team, and a network of agile coaches.
For our team, agile adoption has increased the collaboration between developers and quality engineers. We use a common testing framework, where the developers and testers both contribute tests. The developer implements features and brings the automated tests to a certain level of coverage; then the quality engineer builds upon those tests to increase coverage and flesh out other types of tests (for example, boundary conditions and fault injection).
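To make that division of labor concrete, here is a hedged JUnit 4 sketch: a happy-path test a developer might contribute with the feature, followed by the boundary cases a quality engineer might add to the same suite. The InvoiceCalculator under test is hypothetical and is inlined so the example runs on its own.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceCalculatorTest {

    // Hypothetical code under test, inlined for a self-contained example.
    static class InvoiceCalculator {
        static double applyDiscount(double total, int percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent out of range");
            }
            return total * (100 - percent) / 100.0;
        }
    }

    // Happy-path test the developer brings with the feature.
    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, InvoiceCalculator.applyDiscount(100.0, 10), 0.001);
    }

    // Boundary cases the quality engineer adds on top.
    @Test
    public void zeroPercentDiscountLeavesTotalUnchanged() {
        assertEquals(100.0, InvoiceCalculator.applyDiscount(100.0, 0), 0.001);
    }

    @Test
    public void fullDiscountYieldsZero() {
        assertEquals(0.0, InvoiceCalculator.applyDiscount(100.0, 100), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeDiscountIsRejected() {
        InvoiceCalculator.applyDiscount(100.0, -5);
    }
}
```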
Question: What is in your title’s name? Do your responsibilities differ from those of a QA Manager/Test Manager? If yes, how? Anna Royzman, QA Manager, Liquidnet Holdings, Inc., New York
John: My title is Quality Leader for QuickBooks Online. You see the words quality, leader, and the product name in that title. This pattern was chosen to emphasize three points:
- Product: We are aligned with the product, and with the customers served by that product. We are expected to be customer advocates, and to have strong input into product requirements.
- Quality: We are responsible for quality of the product that we deliver. This role includes prevention, process development, and assessments & product learning in addition to the traditional test manager role.
- Leader: Lastly, we are expected to lead in quality practices across other functions and teams. Influencing and leading without direct authority are key attributes for the quality team.
Question: I see you moved from testing network management embedded devices to testing web applications. What was most different between these contexts? What helped in the switch, and what made the transition difficult? Shmuel Gershon
John: Two of the largest differences are the customers and the practices used. In my experience in telecom, we had just two customers, albeit very large customers. In that context, we knew all of the stakeholders by name, and we could actually talk with every customer to understand what was important to them. In my current role, we have millions of customers, and we have to understand them through analytics and sampling. I always have doubts about whether we are interacting with a representative sample.
In telecom, the practices used were usually prescribed by standards-setting bodies, while with web apps we have more flexibility to decide which quality practices to employ, and how to employ them.
Question: Do you use any cloud providers’ premises for testing your application? If yes, what type of application? Was the process smooth? How do they charge you? If not, are you considering taking some of your testing tasks to the cloud? Kabir
John: Yes, we use both public and private clouds to help with our testing. As mentioned earlier, the public cloud allows us to test things like performance from our customers’ perspective (for example, geographic location). The public cloud allows us to run test scripts from anywhere in the world. Our team only runs test scripts; we don’t store or access customer data in the public cloud.
We also have an extensive private cloud, internally, that we use to deploy test environments. For example, any of our engineers can deploy an environment to run a short-term test, or to schedule periodic automated tests.
You can go to http://blog.ruberto.com to read John’s blog, or follow him on Twitter, where he is @johnruberto.
Next issue we’ll interview Jari Laakso, a software tester from Finland living and working in Romania, who has a hardware background (testing electric wheelchairs, robotics) and a passion for solving and creating puzzles. You can read more about Jari at his blog at http://jarilaakso.blogspot.com or follow him on Twitter @jarilaakso.
Please email your questions, name, and location with ‘Ask The Tester’ as the subject line to mikewlyles@gmail.com.
About the Author
John Ruberto: I’ve been in software development for 25 years (really?). I’m currently the Quality Leader for QuickBooks Online, a SaaS offering from Intuit.
Before Intuit, I managed a test team at Alcatel-Lucent. Before that, I was a development manager at Phoenix Technologies (yes, the BIOS people). And before that, I was a project engineer for some cool airplanes at McDonnell Douglas (F/A-18).