Classic Software Testing Mistakes | Planning, Personnel Issues, Automation and Code Coverage



The role of testing
  • Believing that the test team alone is responsible for assuring software quality.
  • Thinking that the only purpose of testing is to find bugs.
  • Missing the important bugs while chasing trivial ones.
  • Not reporting usability problems.
  • No focus on estimating quality (and on the quality of that estimate).
  • Reporting bug data without putting it into context.
  • Starting testing too late (bug detection, not bug reduction).
Planning the complete testing effort
  • A testing effort biased toward functional testing.
  • Under-emphasizing installation testing.
  • Putting load and stress testing off until the last minute.
  • Not testing the documentation.
  • Not testing the configuration process.
  • An over-reliance on beta testing (user acceptance testing).
  • Finishing one testing task completely before moving on to the next.
  • Failing to identify the truly risky areas.
  • Sticking stubbornly to the test plan.
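Load and stress testing need not wait until the last minute. A minimal sketch of an early load test in Python is shown below; the `increment` target and the thread/call counts are illustrative assumptions, not part of the original list:

```python
import threading
import time

def _worker(target, n_calls, errors):
    """Call the target repeatedly, recording any failures."""
    for _ in range(n_calls):
        try:
            target()
        except Exception as exc:
            errors.append(exc)

def simple_load_test(target, n_threads=10, n_calls=100):
    """Run `target` from many threads at once and report failures."""
    errors = []
    threads = [threading.Thread(target=_worker, args=(target, n_calls, errors))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return {"calls": n_threads * n_calls, "errors": len(errors), "seconds": elapsed}

# Example target: a shared counter protected by a lock.
counter = {"value": 0}
lock = threading.Lock()

def increment():
    with lock:
        counter["value"] += 1

report = simple_load_test(increment)
```

Even a crude harness like this, run early, can expose race conditions and throughput cliffs that a last-minute stress pass would find too late to fix.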
Personnel issues in testing:
  • Using testing as a transitional job for new programmers.
  • Recruiting test engineers from the ranks of failed developers.
  • Test engineers who are not domain experts.
  • Not seeking candidates from the customer service or technical writing staff.
  • Insisting that test engineers be able to program.
  • A test team that lacks variety.
  • A physical separation between test engineers and developers.
  • Believing that developers can't test their own code.
  • Developers who are neither trained nor motivated to test.
The Test Engineer at work
  • Paying more attention to running tests than to designing them.
  • Leaving test designs unreviewed.
  • Being too specific about test inputs and procedures.
  • Not exploring and noticing the "irrelevant" oddities.
  • Verifying that the product does what it's supposed to do, but not that it doesn't do what it isn't supposed to do.
  • Test suites that are understandable only by their owners.
  • Testing only through the user-visible interface.
  • Poor bug reporting.
  • Adding regression tests only where bugs have already been found.
  • Failing to take notes for the next testing effort.
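The point about verifying what the product should *not* do can be made concrete. In this sketch, `parse_age` is a hypothetical function invented for illustration: one positive test confirms valid input is accepted, and several negative tests confirm invalid input is rejected:

```python
def parse_age(text):
    """Parse a non-negative integer age, rejecting anything else."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Positive test: the function does what it's supposed to do.
assert parse_age("42") == 42

# Negative tests: the function refuses what it isn't supposed to accept.
for bad in ["-1", "999", "forty", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: invalid input must be rejected
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```

A suite that only exercises the happy path can pass while the product silently accepts garbage; the negative cases are what catch that class of bug.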
Test automation:
  • Attempting to automate all tests.
  • Expecting automated tests simply to replay manual tests.
  • Using GUI capture/replay tools to reduce the cost of test creation.
  • Expecting regression tests to find a high proportion of new bugs.
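Rather than capturing and replaying GUI interactions, an automated regression test can target the code directly. The example below is entirely hypothetical: it records the fix for an imagined off-by-one bug so the suite will catch that specific bug if it ever returns:

```python
def summarize(values):
    """Return min, max, and count of a non-empty list of numbers."""
    # Earlier (hypothetical) bug: max was computed over values[:-1],
    # silently ignoring the last element of the list.
    return {"min": min(values), "max": max(values), "count": len(values)}

def test_summarize_includes_last_element():
    """Regression test added when the off-by-one bug was fixed."""
    result = summarize([3, 1, 7])
    assert result["max"] == 7    # would fail against the buggy version
    assert result["count"] == 3

test_summarize_includes_last_element()
```

A test like this rarely finds *new* bugs, which is exactly the last point above: regression suites mostly guard against old bugs reappearing, not against fresh ones.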
Code coverage:
  • Embracing code coverage with the devotion that only simple numbers can inspire.
  • Removing tests from a regression suite just because they don't add coverage.
  • Using coverage as a performance goal for test engineers.
  • Abandoning coverage entirely.
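Why simple coverage numbers mislead: full line and branch coverage does not imply correctness. In this small sketch (a made-up absolute-value function), two calls cover every branch, yet the bug in the negative branch survives:

```python
def absolute(x):
    """Intended to return |x|, but has a sign bug for negative inputs."""
    if x >= 0:
        return x
    return x  # bug: should be -x

# One call per branch gives 100% line and branch coverage...
assert absolute(5) == 5           # covers the x >= 0 branch
covered_negative = absolute(-5)   # covers the x < 0 branch
# ...yet the result from the covered negative branch is still wrong:
assert covered_negative != 5      # the bug survives full coverage
```

Coverage is useful for finding code the tests never touch, but treating the percentage itself as the goal rewards exactly this kind of test: one that executes the buggy line without checking its result.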




