‘Why Didn’t This Get Tested?’ End-to-End Testing with Live User Data

Erik Fogg

Erik Fogg is the co-founder and chief revenue officer at ProdPerfect, where he oversees revenue and growth. Having initially studied mechanical engineering and political science at MIT, Erik moved from a consulting role at Stroud International to an operations and sales executive position at the startup HelmetHub before co-founding ProdPerfect in 2018. In his free time, Erik moonlights as a political author, business-book ghostwriter, and consultant for private equity and biotech firms. He aims to use artificial intelligence to help large organizations and societies consistently distinguish truth from falsehood and make better fact-based decisions.

Insufficient or inconsistent quality assurance test coverage leads to bugs in production, which negatively impact cost, revenue, customer satisfaction and, ultimately, brand value. Bugs in production can cost up to seven times more to resolve than they would have in the testing stage.

However, because end-to-end (E2E) testing is expensive in terms of both money and runtime, QA teams must prioritize what gets tested. Typically, the team gets together in a room, uses its knowledge and experience to essentially guess what should be tested, devises the tests, and then uses a toolset to build and maintain the test automation.
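To make the cost concrete, here is a minimal sketch of the kind of hand-written E2E test such a team ends up building and maintaining, assuming Playwright's Python API; the URL, selectors, and checkout flow are hypothetical, not taken from any real application.

from playwright.sync_api import sync_playwright

def test_checkout_happy_path():
    """One hand-scripted guess at how a user reaches checkout."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://shop.example.com")        # hypothetical application
        page.click("text=Add to cart")               # selectors are illustrative
        page.click("text=Checkout")
        page.fill("#email", "qa@example.com")
        page.fill("#card-number", "4242424242424242")
        page.click("button#place-order")
        assert "Thank you" in page.inner_text(".confirmation")
        browser.close()

Every test like this has to be written, run, and kept in sync with the UI by hand, which is exactly why coverage decisions become guesses about which few dozen paths deserve the effort.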

Unfortunately, these tests are subjective, based on the QA teams’ best-educated guesses. They’re stuck trying to predict what real-life user behavior will be without actual data, and they hope their tests reflect how users behave. Sometimes this might hit the mark, but often it does not.

This gap between what is predicted and what users actually do means that test coverage models will fail to cover certain workflows key to the user experience, and bugs that were never screened by a test can reach production. For example: an analytics application stops displaying data when a user modifies the data manually; a sensors application creates false alerts when a user builds the alert before hooking up the device; a CRM won’t let you add a customer after previously adding 14 of them; an ATS won’t let you post a job if it’s associated with a certain team. A bug could also disrupt the checkout process, preventing users from converting, buying a product, or otherwise paying for goods or services.

Failures like these could mean huge losses in revenue or retention for every minute of downtime, so when they do make it into production, the entire organization goes into panic mode. Engineers drop everything they’re doing. They literally go into a “war room” to collectively hunt down the bug and fix it, and then ship a “hotfix” as soon as possible, hoping the fix doesn’t cause something else to malfunction. Development grinds to a halt so that there are as few changing variables as possible.

Subsequently, QA teams must deal with the post-mortem “blame-storm.” Whose fault was it, and why? How many times have you heard, “Why didn’t this get tested?” A cycle of blame-storming can lead to a toxic environment of finger-pointing and passing the buck, and if it happens often enough, the engineering team cannot cultivate a culture of excellence. Highly talented engineers leave for better organizations, while less talented people take their place. Bug frequency increases, and the effect snowballs.

On top of all this, engineering teams often can’t update these large, unwieldy tests at the same pace as features change, leading to flaky, unmaintained test suites. Combined with gaps in coverage from best-guess test building, these suites fail so often that teams stop paying attention to them and they end up providing zero value (this “boy-who-cried-wolf” phenomenon probably sounds familiar to many engineering teams).

Stay out of the War Room

At the end of the day, none of this is the fault of your QA team. Just try to imagine a brute-forced random walk through your application—you’re looking at hundreds of thousands or even millions of unique pathways. QA teams must prioritize, and they typically rely on input from product and engineering teams to do that. Often, when a costly bug does occur, it’s because users are using the application in a way that engineering and product teams didn’t expect; for example, they took a different path to get to checkout, or they filtered products by a set of filters in a way that wasn’t tested (because there are 10,000 possible filter combinations). QA teams have no data informing what tests they should be building.
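A quick back-of-the-envelope calculation shows how fast that path count grows; the filter names and option counts below are invented for illustration, not drawn from any particular application.

from math import prod

# Hypothetical product-listing filters and their option counts.
filter_options = {
    "category": 12,
    "brand": 25,
    "price_band": 6,
    "rating": 5,
    "in_stock": 2,
}

# Each filter can also be left unset, so it contributes (options + 1) choices.
states = prod(n + 1 for n in filter_options.values())
print(f"{states:,} distinct filter states")  # 42,588 states from just five filters

Layer navigation order, device types, and account states on top of that, and an exhaustive test suite is plainly out of reach; prioritization is unavoidable.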

But what if your users are already telling you precisely how to test your application?

Your QA team can give itself a massive advantage by bringing this data to bear, analyzing real traffic patterns and user behavior on your app, then using that analysis to prioritize and build tests. With production analytics data telling you precisely what to test based on actual user behavior, you’ll catch more bugs that actually matter to your users, and you’ll catch them much sooner. Harnessing data to guide your testing priorities allows you to build a test suite that better protects the quality of your application, without creating unnecessary tests that clog up JIRA boards and deployment pipelines.
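As a minimal sketch of that idea, assuming you can export session-level clickstream events from your product analytics (the event names and data here are illustrative, not ProdPerfect's actual pipeline), you can rank the paths users actually walk and point your E2E coverage at the most-traveled ones:

from collections import Counter

# Hypothetical clickstream export: (session_id, step), already ordered per session.
events = [
    ("s1", "home"), ("s1", "search"), ("s1", "product"), ("s1", "checkout"),
    ("s2", "home"), ("s2", "product"), ("s2", "checkout"),
    ("s3", "home"), ("s3", "search"), ("s3", "product"),
    ("s4", "home"), ("s4", "search"), ("s4", "product"), ("s4", "checkout"),
]

# Reassemble each session into the path it took through the app.
paths: dict[str, list[str]] = {}
for session, step in events:
    paths.setdefault(session, []).append(step)

# Rank distinct paths by how many sessions actually walked them.
for path, count in Counter(tuple(p) for p in paths.values()).most_common():
    print(f"{count} sessions: {' -> '.join(path)}")

# The top of this list is where E2E coverage pays for itself;
# the long tail can be deprioritized or left to cheaper checks.

In practice you would likely also weight paths by business impact, so a moderately trafficked but revenue-bearing flow (such as checkout) still ranks near the top.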

The implications ripple through your entire engineering organization. You’ll build and maintain fewer low-priority tests, freeing up your most valuable talent to focus on product rather than test maintenance. You’ll improve the stability and runtime of your E2E test suite, allowing it to run more frequently and provide higher-fidelity feedback to developers. You’ll prevent the bugs that matter to users from ever making it into production.

Some bugs will indeed reach production, even when using analytics to guide your test strategy. But when these bugs get out, you know the impact is low. You know they affect few users and don’t affect core workflows in your application. You now operate in a world where you can move fast and break only the unimportant stuff.

And the next time someone storms in and asks, “Why wasn’t this tested!?” you’ll have an answer: “The data told us it wasn’t a priority.” You’ll be able to demonstrate that the bug that made it to production had minimal impact, and show off the cascading benefits of running a leaner, more data-driven QA process.
