Thanks to all of the followers of “Ask The Tester”. In this article, we interview Melissa Tondi, Vice President of the Mobile practice at ProtoTest. You may be asking why we would interview a VP if the article is called “Ask the Tester”. When talking to Melissa, we found she is very active in the testing community and has a lot of interesting activities running concurrently. We hope you agree once you read through the interview.
So, let me tell you a little about Melissa…
- Melissa has been in the Software Test/QA/Quality Engineering field for more than 15 years.
- She has focused on organizing Software Testing teams around three major tenets: efficiency, innovation, and culture.
- She is the founder of the Denver Mobile and Automation Quality Engineering (DMAQ) community.
- Her previous roles include:
  - Director of Software Quality Engineering for a 150+ person organization at the world’s largest education company
  - QA Consultant in the healthcare, finance, and software-as-a-service (SaaS) industries
  - President of Colorado’s Software Quality Association of Denver (SQuAD)
In her role with ProtoTest, she is building a Mobile Testing practice that concentrates on Functional, Performance, Security, and User Experience testing, along with the new testing techniques emerging in the Mobile arena.
Melissa will now take your questions!
QUESTION 1:
Your bio says you organize Testing teams around the three tenets of “efficiency, innovation, and culture” – can you expand on this? Do you measure the testers on these? If so, how – and if not, how do you measure your testers?
Surveys circulate with statistics such as “85% of apps are never tested” – what have your experiences been? Given that a lot of apps are free or 99c, how do you convince these people to spend money on testing?
Melissa:
I believe these tenets are the foundation of a successful Quality organization (or any organization, for that matter). In my experience, good teams have one or more of these characteristics at the forefront of their overall approach to problem solving and solution implementation. I was fortunate enough to be able to emphasize all three. A year ago, when I made my last career move, I was given the opportunity to define how I would build my next team, and these themes carried over as the most important. I can measure our current team on two of the three – innovation and efficiency. Culture is one I need to cultivate on a recurring basis and is therefore much more subjective. For the innovation and efficiency tenets, we build in a certain percentage of Research and Development (R&D) time for each team member, and we encourage peer reviews of the technical solutions and strategies we build for each client engagement. We grade ourselves at the end of each engagement on these two (as well as other important factors) and come up with areas that we can improve on, either collectively as a team or individually. Sometimes metrics can be dangerous. They can give useful information, but “grading” people based on metrics will change their behavior – and not always for the better.
What a great question – again, because I was essentially employee number one on our team, our President and I had a great opportunity to define our guiding principles around what type of work would be the most interesting and creative to take into our lab. We made it very clear that we didn’t want to take the “doom and gloom” approach when winning business and partnering with clients. We aren’t in the business of convincing companies that testing is valuable. They come to us once they’ve realized that, either the easy way or the hard way. Sometimes we work with companies that learn it the hard way (less than 2-star ratings, customer or revenue loss, etc.) on their own, and sometimes they have an advocate within the organization who convinces them that testing needs to happen as part of the SDLC. It’s not our job to tell others how to deliver products to users. If they choose to take the risk and deliver an App (or other solution) without testing, we hope they’ve calculated the risks well enough to develop the user relationship. Otherwise, the users will let them know pretty quickly. We want to work with companies who value testing – no matter how they got there.
QUESTION 2:
I got interested in ProtoTest and the way you perform testing, especially because you mention context-driven testing on your pages about Staffing Services. Could you explain how you do context-driven testing and what it means to you?
Melissa:
Software doesn’t exist in a vacuum. Users have minimum expectations, customs, and traditions that factor into how they’ll use any application. By being mindful of how a person would perceive the application, we gain greater context for issues or discrepancies that might arise. We also incorporate risk-based testing, which consists of assessing the App Under Test (AUT) and determining which systems are utilized and impacted by a change. Our goal is to understand the system well enough to focus testing where the risk is highest.
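The risk-based prioritization described above can be sketched in a few lines. The Python below is an illustrative sketch only, not ProtoTest’s actual process; the feature names and the 1–5 impact/likelihood scores are invented for the example.

```python
# Hypothetical sketch: prioritize areas of an App Under Test (AUT) by risk.
# Feature names and scores are illustrative, not from any real engagement.

def risk_score(impact, likelihood):
    """Common risk-based-testing heuristic: risk = impact x likelihood."""
    return impact * likelihood

features = {
    # name: (impact of failure 1-5, likelihood of change/regression 1-5)
    "checkout": (5, 4),
    "login": (5, 2),
    "push_notifications": (3, 3),
    "settings": (2, 1),
}

# Test the riskiest areas first and most deeply.
ordered = sorted(features, key=lambda f: risk_score(*features[f]), reverse=True)
print(ordered)  # checkout first, settings last
```

A real assessment would of course derive impact and likelihood from the change set and the systems it touches, rather than from a hard-coded table.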
QUESTION 3:
You have been a QA consultant in the SaaS industry. In my understanding, the TaaS (Testing-as-a-Service) model has its own challenges in effectively estimating a specific testing service and delivering the outcome as predicted. It therefore needs to be monitored and controlled through an outcome-based test metrics model. Moreover, its implementation requires a very high level of test process, tool, and people maturity. Can you please share your perspective on this? How have you effectively achieved it in your organization?
Melissa:
Our lab model allows for the ability to “swarm” a project if it gets behind. The built-in R&D time allows enough free time that employees are able to mature professionally, and it can also be used to cover under-estimated projects. Attitude is key to this issue. An organization comprised of thoughtful, considerate engineers will naturally foster an atmosphere of flexibility, will focus on the quality of service to the client, and will take pride in everyone’s collective success. We are not simply people who work in the same office; we are a community that supports each other.
QUESTION 4:
I’m curious about the strategy ProtoTest uses for its mobile testing – how do you manage the combinatorial explosion of devices that can be supported? Do you crowdsource, or use tools (if so, which ones)? Do you have some preferred strategies?
Melissa:
One of our first R&D projects was to try to solve this problem. Regardless of the technology or solution (web or mobile), compatibility testing and all its permutations have been a challenge that few test teams feel they’ve solved well. One of our Architects, Brian Kitchener, focused his research on two things: how we have traditionally tried to solve this challenge, and how we could make that better. The result was the Device Matrix technique. We were forced to solve the problem for Mobile because we were in the throes of building our own device lab. In the traditional web world, most companies support some combination of the latest 2-5 versions of browsers, hardware, or operating systems, because it’s the easy way to ensure a certain percentage of their users will be covered without increasing the testing time associated with validation. Mobile was initially approached that way as well. As we dug deeper into actual usage and the differences in the physical devices we needed to test, we came up with a list of categories that can be used to quickly and easily select and maintain a set of devices. We also use a 3-tier approach: deep functional testing on a single device, exploratory testing on a small set of devices, and smoke testing on a large number of devices.
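A minimal sketch of the 3-tier idea described above might look like the following. The device names and usage shares are made up for illustration, and the tier thresholds are assumptions, not the actual Device Matrix categories.

```python
# Hypothetical sketch of a 3-tier device matrix. Devices and usage shares
# are invented; the idea is: deep functional testing on the most-used
# devices, exploratory passes on a small middle set, smoke tests elsewhere.

devices = {
    # name: approximate share of your user base (illustrative numbers)
    "Pixel 8": 0.30,
    "iPhone 15": 0.28,
    "Galaxy S23": 0.18,
    "iPhone SE": 0.10,
    "Moto G": 0.08,
    "Older tablet": 0.06,
}

def tier(share):
    """Assign a testing tier based on how many users a device represents."""
    if share >= 0.25:
        return "tier1-functional"   # full functional suite
    if share >= 0.10:
        return "tier2-exploratory"  # exploratory testing passes
    return "tier3-smoke"            # quick smoke tests only

matrix = {name: tier(share) for name, share in devices.items()}
```

In practice the selection categories would include more than raw usage share (OS version, screen size, hardware quirks), but the tiering mechanism stays the same.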
Presently, we do not crowdsource using a third party – not because we don’t see value in it – simply because it’s not a need for us at the moment. However, through our partnership with uTest, we’ll focus on functional in-the-wild testing on real devices in locations around the world.
QUESTION 5:
What differentiates a strong test lead from a QA Manager? What are the key characteristics of a good QA Manager?
Melissa:
A QA Manager does not need to be technical; a team lead does. A good QA Manager will provide the support system for QA. His/her job should be to maximize the amount of time the team spends doing actual work. S/he should provide the support that allows the team to say “no”. The Manager is the person who supports the Test team that reports to them – HR responsibilities, career enhancement, issue resolution, etc. The QA Manager should be an advocate for the practice of Testing and should be a strong enough people person to recognize those who are excellent at what they do and to make the hard decisions to manage out those who need to move on in their career choice. I believe we have done a disservice to the term “Manager” by assuming the title implies a good leader. A good Manager will be a good leader, and a good Manager will be able to see leadership qualities in their team and recognize them in an appropriate manner. Understanding someone’s resume skill sets (or “hard” skills) as well as their “soft” skills, making sure they are able to function as close to 100% as possible, and removing blockers when they can’t, is a critical function.
QUESTION 6:
While a tester’s critical thinking skill is important to designing effective tests, it is not clear to me how to improve that skill. What resources or advice do you have to help improve this skill in a test team?
Melissa:
Brown Bag or Lunch-and-Learn sessions have been successful in increasing the technical prowess of a department. In addition, hiring technical leaders who can help guide and support the entire department helps tremendously. Allowing team members to freely communicate ideas will greatly aid critical thinking skills. If team members do individually assigned work with no context for the greater purpose and role of a project, their concern becomes to simply complete their work. But studies have shown that people who feel like their choices matter perform better than those who don’t, regardless of IQ or higher education.
QUESTION 7:
Reading your bio, you have been organizing software testing teams around three tenets: efficiency, innovation, and culture. Teamwork is common in organizations, and agile teams are self-organized teams that aim to achieve continuous improvement in the workplace beyond daily activities. How do you see the role of a person taking on leadership in an agile team in facilitating a climate for change? Is there a relation between leadership style, the leader as a team member, and the ability to change, such that a climate for innovation will emerge? What are the prerequisites for a climate of innovation in a self-organized agile team?
Melissa:
A great quality in a true Leader is understanding that s/he does not have sole responsibility in making decisions. Creating an environment where representatives within departments feel free to make decisions within their role is imperative. Once the burden of being the sole decision maker is removed, the teams automatically feel more responsible for their actions. Leadership needs to make failure okay. Without failure, we never learn. Without risk, we will never make progress. A culture of fear and paranoia will never produce technical innovators. It is the responsibility of the Leadership to supply the vision for a project, product or department, and it is the team’s responsibility to implement that vision effectively. The innovation comes from Leaders that are willing to listen to the feedback of their team, and from team members that trust in the overall strategy the leadership envisions. People are most motivated when they have creative input and when they know their choices matter.
QUESTION 8:
What differences and similarities do you see between your current and past roles? For example, what similar and different challenges do you face?
Melissa:
With my current role, I had the opportunity to build a team from the ground up. I take that role very seriously and count myself fortunate to be able to take the lessons I’ve learned along the way and build a team, culture, and environment – mulling over the good and bad parts – into a world-class team. Open communication, honesty, and respect go a long way in any business relationship. Our differences are mainly in how to execute a given strategy, based on differing opinions of priority. By remembering that we are all human beings worthy of honor and respect, our differences become a compromise of perspectives, rather than obstacles to be overcome. Getting to work with multiple companies, and seeing how they work, really provides for great learning opportunities.
QUESTION 9:
In your testing team, what is the ratio of Business testers vs. non-business testers? When there is a new member on your team with no prior business knowledge, what type of testing would you let them do to achieve your testing objectives?
Melissa:
The nature of our work is consultative, so prior Business knowledge is not a prerequisite. All of our team members have both a deep technical knowledge of software and Testing practices and the ability to communicate effectively with the various roles in our clients’ companies. All of our engineers have a desire to grow intellectually and challenge their limits. When given a project, our engineers ask for the context of the client’s business, forming a knowledge base that helps guide their decisions. This thirst for learning applies to their place within our own business as well.
QUESTION 10:
What kind of rules of thumb do you have for testability in mobile applications? What are the most common things you have seen affect mobile testability?
Melissa:
Some of the trends I’ve seen, researched, and spoken on this past year were:
- Helping QA/Test teams understand their company’s mobile strategy and the solutions and technology introduced to support it
- Transitioning traditional QA/Test teams to support mobile
- Staying efficient while supporting more testing permutations
It’s important to note that most apps are “testable”, but not easily automatable. The biggest hurdle to overcome is getting project-team buy-in on the automation effort and adding unique properties to each element to make them easily locatable. One “rule of thumb” for testability in mobile applications is to allow time for Exploratory and Usability testing. Acceptance criteria on paper will only get you so far. Testers must have the system itself, or at least be present during its initial construction. Without the application itself, testing merely becomes theory in, theory out.
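The point about unique element properties can be illustrated with a small sketch. The "screen" below is a stand-in for a real UI tree (tools such as Appium or Espresso expose something similar); the element ids and structure are invented for the example.

```python
# Hypothetical sketch of why unique element properties matter for automation.
# A made-up UI tree: in a real app this would come from the automation driver.

screen = [
    {"type": "Button", "id": "btn_submit_order", "text": "Submit"},
    {"type": "Button", "id": "btn_cancel_order", "text": "Cancel"},
    {"type": "Button", "id": None, "text": "Submit"},  # no id: hard to automate
]

def find_by_id(elements, element_id):
    """Stable locator: a unique id survives text changes, localization,
    and layout rework."""
    matches = [e for e in elements if e["id"] == element_id]
    assert len(matches) == 1, "ids must be unique for reliable automation"
    return matches[0]

def find_by_text(elements, text):
    """Brittle locator: visible text may not be unique, and changes often."""
    return [e for e in elements if e["text"] == text]

btn = find_by_id(screen, "btn_submit_order")     # exactly one match
ambiguous = find_by_text(screen, "Submit")       # two matches: flaky locator
```

This is the buy-in being asked for above: developers spend a little time assigning unique, stable ids so that automation doesn’t have to fall back on ambiguous text or position-based lookups.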
QUESTION 11:
Testing brings up both serious and insignificant issues. How do you deal with filtering those for the best efficiency: at test plan, cultural (tester’s self-censorship), SQA management or Development/Project management level?
Melissa:
We use a risk-based approach. We will always prefer to work in an environment that values efficiency. The more efficient we become (by implementing automation, exploratory testing, etc.) and the earlier we are involved on a project, the more value we can add. Reporting on an issue (whether it is serious or insignificant) early in the SDLC will allow more time to fix it. We have Architects and Senior Engineers who can help determine the potential impact of a defect. We like our team to be as autonomous as possible. Even insignificant issues are usually well received by our clients.
QUESTION 12:
How, if at all, do you approach security and user experience differently between mobile device manufacturers?
Melissa:
I don’t think we do. We built our company’s service offerings focusing on Functional, Performance, Security, and Usability as equally important, and we approach security and usability the same for all device manufacturers. The differences in approach come at a lower level. For example, we would plan security tests differently for iOS versus Android, because they have different security flaws and tech stacks.
Different manufacturers will still create devices meant for mass consumption by a general audience. This means that there are certain commonalities and basic expectations that form a baseline of testing requirements. The next step is adapting to the context of the device itself.
QUESTION 13:
Should the QA department assist with Component Integration Testing (CIT)? Should the QA department force the developers to document their test cases and log their defects in Quality Center?
Melissa:
“Force” is a very strong word. The theme we try to propagate here is that as long as 65% of the team believes it’s valuable, then we do something. If less than 65% do, then we discuss alternatives. If there is value in having a developer (or any other role for that matter) log defects in a Test Management tool and that value is understood by the project team, then a reasonable conversation should transpire. We believe in autonomy and creative problem solving whenever possible. That’s how you get the least amount of overhead, and the most working hours per day. The moment you discourage someone from freely expressing disagreements professionally, problems arise. Yes, I think QA can assist with component testing, but I also believe that Unit testing expectations should be set early within the project team.
QUESTION 14:
How do your testers affect the way the products are programmed? Do they, for example, advocate for responsive design when they see it as useful? Which project meetings do they participate in? Are they active contributors in all the project meetings they attend?
Melissa:
Ideally, we would advocate for industry-defined and accepted coding and testing practices early on – during initial design phases. However, many times we are brought in after these decisions are already made. Sometimes we consult with companies to help them determine what the best mobile approach will be. If a design choice seems to conflict with mobile design trends, or appears to be outdated, we graciously inform them, with proof to back up the direction that mobile design is headed. Clients always appreciate the feedback. Project meetings are usually set by the client, but we always provide consultation for any situation in which we can help.
QUESTION 15:
What kind of local device performance tests exist which test for user experience? Can you share any examples?
Melissa:
Some tests we perform check for application launch and installation time, action time (how long it takes to log in), battery drain, network usage, and thread usage. You can try to measure responsiveness (how long it takes the UI to react). We are also developing an App that measures Key Performance Indicators (KPIs) per device that would complement the back-end Performance testing. These dozen (or so) KPIs were introduced specifically to test for the User Experience on individual devices.
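Measuring a device-side KPI like action time can be sketched simply. In the code below, `do_login` is a placeholder for a real UI interaction driven by an automation tool; the timing harness around it is the part being illustrated.

```python
# Hypothetical sketch of measuring one device-side performance KPI:
# "action time" (e.g., how long a login takes). do_login stands in for
# a real driver-scripted interaction.

import time

def do_login():
    # Placeholder for the real UI action being measured.
    time.sleep(0.05)

def measure_action_time(action, runs=5):
    """Time an action several times and report the median, which is less
    sensitive to one-off hiccups (GC pauses, network retries) than the mean."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

median_login = measure_action_time(do_login)
print(f"median login time: {median_login:.3f}s")
```

Tracking the same KPI per device and per release is what turns a raw timing into a user-experience signal: a login that suddenly takes twice as long on one device tier is a regression worth investigating.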
QUESTION 16:
How does a Testing Professional keep himself/herself updated in the current world of dynamic change?
Melissa:
In our company, we ensure it by building in R&D time. I set realistic expectations that each member of the team set aside time to focus on R&D tasks that align with both our vision and the individual’s career enrichment. Each week, we scrum on the R&D tasks themselves so each team member knows the others’ focus. If there is a shared area of interest, multiple team members may work toward a common goal. For our Senior and Architect team members, there are expectations to produce Technical papers and to present in related forums, which ensures they stay up to date on relevant topics.
— END —
About the Author
Melissa Tondi