5 Reasons Why Automated Testing Solutions Fail
The Quality Assurance (QA) function helps align the software being built with the requirements the client presented. QA also serves as a check on the development team, ensuring that standards and best practices are followed so a presentable end product gets delivered.
All of us have read that attempts to automate testing are more likely to fail than to succeed. But why do they fail? One would think that changes QA makes to improve its own process would naturally succeed, and everyone in the process wants them to. The problem is that automation does not just get added in. Adding a topping to a pizza really is just adding it in (plus possibly a slightly longer bake, and a little more moisture in your pizza). Baking automation into your software QA process involves a much greater level of complexity.
The Wrong Tool.
If all you have is a hammer, everything in your world looks like a nail; a hammer is no use for turning screws, fixing window panes, or painting a wall. Not every tool can do everything, and the one that can probably carries a high licensing fee. Tool selection may mean committing to long-term licensing costs, requiring strong coding skills in QA, needing a separate reporting or logging plug-in, or adjusting to quirks that can only be learned through experience. Problems during ramp-up may also pollute your initial results. Some facet of testing (stress testing or multi-threaded testing, say) may not be covered by the tool at all and still require a manual solution. And you will probably need both front-end and back-end actions to execute in the same automated test script, such as a GUI interaction followed by a SQL query.
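As a rough sketch of what that front-end-plus-back-end pairing can look like, here is a minimal example, assuming Python with Selenium and a SQLite backing store; the URL, element IDs, and table name are hypothetical:

```python
# Sketch: one test drives the GUI, then reaches into the data layer
# to verify the result. The URL, element IDs, database file, and
# table/column names below are all hypothetical.
import sqlite3

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_signup_persists_user():
    driver = webdriver.Chrome()
    try:
        # Front-end: walk the GUI through a sign-up flow.
        driver.get("https://example.test/signup")
        driver.find_element(By.ID, "email").send_keys("qa@example.test")
        driver.find_element(By.ID, "submit").click()

        # Back-end: confirm the same action landed in the database.
        conn = sqlite3.connect("app.db")
        row = conn.execute(
            "SELECT COUNT(*) FROM users WHERE email = ?",
            ("qa@example.test",),
        ).fetchone()
        conn.close()
        assert row[0] == 1
    finally:
        driver.quit()
```

The point is not these particular libraries; it is that whatever tool you pick must let a single script do both halves of the verification.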
Not a Magic Wand.
Automation can help with testing speed and with faithfully following the steps a manual tester would have taken. But it cannot help with problems outside of QA. If projects are always late because of management, inter-department communication problems, or how other departments function (Development or Business, for example), the time and expense of automation may seem misspent. There’s a joke: “A wife tells her programmer husband to buy a loaf of bread at the store, and if they have eggs, buy a dozen. When he gets back the wife says, ‘Why did you buy 13 loaves of bread?!?’ and he says, ‘They had eggs.’” The point is, if the needs of Business are not understood, the follow-through will be unhelpful. If QA does not understand the new feature or the critical risks, the test cases will not verify what they need to verify, and may report errors where none exist. Also, realize that some scenarios simply do not make sense to automate; some things may still need to be done manually. Do not treat anything short of a fully automated testing strategy as failure.
Bad Scripting.
We’ve all seen movies with bad scripts, especially late at night on TV. That’s not what we’re talking about here, despite some similarities. Good scripting should flow smoothly and accomplish what it sets out to do. It should also use naming conventions and comments that let it be edited easily in the future, quite possibly by someone other than yourself. You may even want to add notes for features slated for future releases when you write your scripts, to make the next person’s job easy and manageable. Remember, a newer feature may add a new value to a parameter used in an older regression set, and it should not be hard to figure out how to make that update properly.
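For instance, here is a minimal sketch of the kind of naming, commenting, and centralized parameters that make such an update a one-line change (pytest is assumed here; the plan names and seat limits are hypothetical):

```python
# Sketch: shared test parameters are named and kept in one place, so
# the next maintainer updates them without hunting through scripts.
import pytest

# Plan tiers accepted by the checkout flow. A release that adds a new
# tier only needs entries added to these two tables.
# NOTE: an "enterprise" tier is slated for a future release.
SUPPORTED_PLANS = ["free", "basic", "pro"]
PLAN_SEAT_LIMITS = {"free": 1, "basic": 5, "pro": 50}


def max_seats(plan: str) -> int:
    """Stand-in for the real application call under test."""
    return PLAN_SEAT_LIMITS[plan]


@pytest.mark.parametrize("plan", SUPPORTED_PLANS)
def test_seat_limit_matches_plan(plan):
    # New plans added to the tables above flow into this regression
    # test automatically; no other script edits are needed.
    assert max_seats(plan) == PLAN_SEAT_LIMITS[plan]
```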
Edge Cases.
This seems like a subset of “Not a Magic Wand”, except it is purely QA’s fault unless it stems from misunderstanding Business and/or Development. These less-likely scenarios should still be checked every time, so it helps to automate them alongside the more commonplace test cases. Call it deep testing, call it monkey testing, call it whatever you want: value is lost if these cases are checked only once, manually, during a release. Record them, so the knowledge of each scenario is not lost and they get verified every time the automated batch runs.
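One low-effort way to keep that knowledge in the suite is to record the edge inputs right next to the everyday ones. A sketch (again assuming pytest; normalize_username is a hypothetical function standing in for the code under test):

```python
# Sketch: edge cases live beside the common cases, so they run on
# every automated pass instead of being checked once by hand.
import pytest


def normalize_username(raw: str) -> str:
    """Stand-in for the real application code."""
    return raw.strip().lower()


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Alice", "alice"),        # the everyday case
        ("  alice  ", "alice"),    # edge: surrounding whitespace
        ("ALICE", "alice"),        # edge: all caps
        ("a" * 255, "a" * 255),    # edge: maximum-length input
        ("", ""),                  # edge: empty string
    ],
)
def test_normalize_username(raw, expected):
    assert normalize_username(raw) == expected
```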
Regress to the Future!
It would be nice if your test cases became your regression set, and your next set of test cases were then cleanly added on top. But the world is not always nice. Newer features may invalidate, limit, or expand some of the older test cases in the regression set. The edge cases may never have been added to it. Mistakes in the test scripts may not have been fixed before being copied into the next regression set (or the mistakes may simply have been ignored once they were recognized as misreported errors). However it happens, the regression set in the test suite gets progressively more polluted. Misreported errors breed distrust in the regression set, and bad updates let bugs slip through.
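One way to stop known misreports from quietly eroding that trust is to flag them explicitly rather than ignore them. A sketch using pytest’s xfail marker (the failing export and the ticket reference are hypothetical):

```python
# Sketch: a known misreported failure stays in the suite but is
# flagged, so it shows up in every run report instead of silently
# eroding trust in the regression set.
import pytest


def export_totals() -> int:
    """Stand-in for the real export under test."""
    return 99  # pretend the export currently misreports the total


@pytest.mark.xfail(reason="Known misreport since 2.3 -- tracked as QA-1428")
def test_report_export_totals():
    assert export_totals() == 100
```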
So, there you have it. Good intentions can yield sloppy results that keep corrupting your suite, all because of bad original research, unreasonable expectations, improper learning, or lax implementation. It is predictable, and it is easy to ward off with better planning. Please take these suggestions to heart, so you can succeed in your future efforts.
This guest post was written by Scott Andery, an expert marketer, author and consultant who specializes in software testing tools and resources.