Penny Allen from REI giving the first welcoming keynote speech
Recently, Portland, Oregon hosted the 35th annual Pacific Northwest Software Quality Conference. The Portland World Trade Center was filled with quality engineers, testers, developers, and managers from a wide range of backgrounds and experience levels. Among the crowd were highly talented speakers and workshop leaders who covered topics such as testing, data, development, management, and security.
While there, I was able to attend some extremely engaging and informative talks; some of my favorites covered general quality practices, automation, and accessibility. Here are four takeaways highlighted during the conference.
1. Quality belongs to everyone
Nowadays, everything from our televisions to our emergency services depends on some kind of software. A bug in our television system may not seem that important, but a bug in our emergency services could be life-threatening.
As our society’s dependence on software has increased, the quality of software has become more and more important.
Penny Allen, the director of enterprise QA at REI, made the point that while accountability for quality has traditionally fallen solely on QA, the process works better when quality is owned by the entire project team.
That especially means developers, but it also includes project managers, tech leads, designers, and strategists (you know, everyone!).
2. Study telemetry over metrics
Metrics tell us whether something works.
Product telemetry takes information that our system already collects and puts it to use. It's a constant data stream coming from the product that allows the team to see what the customer is doing.
A car makes a good example of the difference between metrics and telemetry. Metrics would tell us the car's actual weekly mileage and compare it to the ideal mileage given by the manufacturer.
The product telemetry would take this even further and tell us how often the driver uses the car, what times they use it, and what features they use, on top of what their weekly car mileage is.
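The contrast above can be sketched in code. This is a hypothetical illustration using the car analogy, not anything shown at the talk; all names (`weekly_mileage_metric`, `emit_telemetry`, the ideal-mileage figure) are invented for the example. A metric is one aggregate number compared against a target, while telemetry is a stream of events describing usage as it happens.

```python
import json
import time

IDEAL_WEEKLY_MILEAGE = 250  # illustrative manufacturer target

def weekly_mileage_metric(trips):
    """A metric: a single aggregate compared against an ideal value."""
    actual = sum(trip["miles"] for trip in trips)
    return {"actual": actual, "ideal": IDEAL_WEEKLY_MILEAGE}

def emit_telemetry(event_type, **details):
    """Telemetry: one event in a continuous stream of usage data.

    In a real product this would be sent to a collector service;
    here we just serialize it to JSON.
    """
    event = {"event": event_type, "timestamp": time.time(), **details}
    return json.dumps(event)

trips = [{"miles": 30}, {"miles": 45}]
print(weekly_mileage_metric(trips))
# Telemetry captures the richer picture: when the car was used and
# which features were in play, not just the mileage total.
print(emit_telemetry("trip_started", feature="cruise_control", hour=8))
```

The metric answers "did we hit the target?"; the telemetry stream answers "how, when, and with what features is the product actually being used?"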
Although there are downsides to using telemetry (collecting it slows you down, and storing it can be expensive), it pays for itself in priceless knowledge.
Telemetry allows us to know things in minutes or days that would typically take weeks to find out from bug reports and crashes.
For companies hesitant to make the investment, Wayne Roseberry, a principal software engineer at Microsoft, reminds us that our competitors may already be using it. If so, they will learn what users are doing long before those who are not, putting companies slow to adapt at a huge disadvantage.
Wayne Roseberry from Microsoft giving a talk on Automation
3. Prioritize accessibility
Accessibility means making information available to people who, for various reasons, might not otherwise be able to access it. When developing accessible software, Michael Larson, a senior quality assurance engineer from SocialText, described an easy guideline.
4. Tools cannot make judgment calls
Programs and tools can make our testing more efficient, but we have to realize that they cannot make a judgment call.
This is important to remember when we automate testing or use tools to determine whether our software is accessible.
Instead of using our tools to judge whether an experience is good, we can use them to determine whether something meets a requirement, to confirm the presence of something, and to assert state.
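As a hypothetical sketch of that division of labor (the checker and its names are mine, not from the talk): a tool can assert a checkable requirement, such as every `<img>` tag having an `alt` attribute, but it cannot judge whether the alt text is actually meaningful. That judgment call stays with a person.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects images that are missing an alt attribute.

    This asserts presence of a requirement; it does NOT judge
    whether the alt text that is present is any good.
    """

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "<unknown>"))

def images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(images_missing_alt(page))  # -> ['chart.png']
```

The tool flags `chart.png` because the requirement is unmet; whether "Company logo" is a useful description of the first image is a call only a human reviewer can make.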
This will allow us to maximize the effectiveness of our tools without compromising the quality of our products.

Overall, attending the PNSQC was an incredible experience in a wonderful city, and I learned a variety of new things.
Quality testing is essential to every piece of product creation at WillowTree, and I'm excited to bring these ideas to my work.
Learn more about the PNSQC here.