3 Software Testing Practices We Should All Stop

Alisha Henderson · Feb 26, 2019

The software industry has gone through many inventions and evolutions. There have been multiple models, cycles, and frameworks: waterfall, the V-model, agile and its several variants, and more.

There have been attempts to standardize testing as well, but they have been resisted by the testing community. Testers believe it's a good thing that there are no standards, and they practice many kinds of testing, including testing in production, session-based test management, exploratory testing, and test- and behavior-driven development.

But even as practices diversify, testing evolves, and it becomes clear that some concepts we are all used to are no longer relevant today. It's dangerous to keep doing something just because it's the way we have always done it, so it's important to occasionally take stock of our testing practices and cull the ones that no longer make sense--or are downright harmful.

Here are three common software testing practices it's in our best interests to stop doing.

1. Performance appraisal based on bug counts

If testers are judged by the bugs they report and get fixed, the focus is no longer on adding value to the product. People become more worried about how many bugs were found, and goal displacement comes into the picture.

Testers start filing easy-to-find bugs or creating multiple bug reports for each platform, and developers start rejecting bugs as hard to reproduce or not a bug. The tester who was spending time and effort designing a good test for one critical bug is now going after the low-hanging fruit.

The developer who was designing a long-term solution to make the product more robust is now applying short-term fixes for low-severity bugs.

Bug counts by themselves never tell the complete story. Replace the word "bug" with "idea" and the point becomes clearer: one person has several ideas and another has two. Does that mean the one with several ideas is the better tester or developer? Without understanding what the bugs are, or the difficulty of finding or fixing them, we cannot come to good conclusions.

So how do we appraise testers' performance instead?

First, it's a good idea to talk to the tester and create a plan for the skills that will help both them and the company. If the tester has never tested a mobile app and your team might need a mobile app tester in the coming months, this can be a good opportunity for everyone.

If you want to gauge testers' skills quantitatively, you can still measure them. Just be as specific as you can in listing the activities the tester should be able to perform, and assign a mutually agreed-upon timeline to each. For example, the tester should be able to find bugs at the UI, database, and storage levels, use tools like X, Y, and Z, and perform the following actions.
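Written down, such a goal list might look something like the minimal sketch below. The activities and target dates are hypothetical placeholders for illustration, not prescriptions; the point is only that each goal names a concrete, observable activity with an agreed timeline.

```python
from datetime import date

# Hypothetical appraisal goals: each entry pairs a concrete activity the tester
# and manager agreed on with a mutually agreed target date (placeholders only).
appraisal_goals = [
    {"activity": "Find and report bugs at the UI, database, and storage levels",
     "target": date(2019, 6, 30)},
    {"activity": "Run exploratory sessions using session-based test management",
     "target": date(2019, 8, 31)},
    {"activity": "Automate smoke checks for the mobile app with an agreed tool",
     "target": date(2019, 9, 30)},
]

for goal in appraisal_goals:
    print(f"- {goal['activity']} (target: {goal['target']:%b %d, %Y})")
```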

These sorts of goals focus on the tester's learning and skills rather than narrowly on a single metric. It may be more labor-intensive than just tracking bug counts, but it is more valuable and benefits both the individual and the company.

2. Test suite pass and fail percentages

Any tester who has found a few bugs knows that the majority of bugs are found when deviating from the script. Yet teams spend a lot of time scripting test cases long before interacting with the product. A single test can pass under one condition and fail under multiple other unrecorded conditions. You can even have a 95 percent pass rate and still have a hundred bugs to be fixed.
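To see why the percentage alone misleads, consider a quick back-of-the-envelope calculation. The suite size below is an assumed number, used only to show that a "healthy" 95 percent still leaves a pile of failures whose severity and root causes the percentage says nothing about.

```python
# Hypothetical numbers: why a pass percentage says little on its own.
scripted_checks = 2000          # assumed size of a scripted regression suite
pass_rate = 0.95                # the "healthy-looking" 95 percent pass rate

failing_checks = round(scripted_checks * (1 - pass_rate))
print(f"Failing checks: {failing_checks}")   # 100 failing checks

# Those 100 failures could trace back to one root cause or a hundred distinct
# bugs, and the figure says nothing about defects outside the scripted conditions.
```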

Writing out all the test cases and having a test case for each and every bug is not a good idea. The funnier part is when decisions are based on the pass/fail percentage alone. Teams rely on these percentages, but it's a false hope.

We would do better if we got rid of the pass/fail percentage and concentrated on the confidence reported by the teams. We can use low-tech dashboards, make subjective assessments in terms of coverage (blocked, sanity, deep) and confidence (low, average, good), and debrief based on the combinations. We have to move away from asking how many tests pass and how many fail and start asking, "Is there a problem here?"

I've started sharing my dashboard with a list of features and the problems that exist in each feature. This lets us all know what's stopping us from releasing a feature and whether we have covered the critical use cases.
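A minimal sketch of what such a low-tech dashboard could look like is below. The feature names, coverage levels, confidence labels, and problems are made up for illustration; the shape of the data is the point, not the specifics.

```python
# Hypothetical low-tech dashboard: features mapped to subjective coverage and
# confidence assessments plus the known problems blocking each area.
dashboard = {
    "Checkout":   {"coverage": "deep",    "confidence": "good",    "problems": []},
    "Search":     {"coverage": "sanity",  "confidence": "average", "problems": ["Slow results on large catalogs"]},
    "User login": {"coverage": "blocked", "confidence": "low",     "problems": ["Test environment missing SSO config"]},
}

for feature, status in dashboard.items():
    blockers = "; ".join(status["problems"]) or "none known"
    print(f"{feature}: coverage={status['coverage']}, "
          f"confidence={status['confidence']}, problems={blockers}")
```

A debrief can then focus on the combinations: a feature marked "sanity coverage, low confidence" invites the question "Is there a problem here?" far more usefully than a suite-wide pass percentage does.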

3. Signoff by the testing team

It is well known that quality is everyone's responsibility. Testers can attest to their own share of the quality, but they cannot assure quality for the whole product. Still, the project team expects the testing team to give a signoff after the testing cycle.

There are multiple ways the quality of the product can be assured, but they require a whole-team approach:

  • Everyone agrees on a common definition of acceptable quality and has measurable checkpoints to ensure it.

  • Every team is accountable for its share of the quality and takes measures to ensure that the deliverable meets quality standards.

  • The barriers between roles are broken down, and specialists work together as one team, helping each other.

  • Everyone's unique strengths are used to create a winning team culture, and everybody owns quality.