
Classical QA is dying. What replaces it

The classical approach to quality was built on a simple worldview. There are requirements, the system has logic, there’s development, there’s testing. After all that, the product is considered verified and ready for production. The core idea sounded like this:

If we test the system well enough before release, we can reasonably trust its behavior after release.

Today this works worse and worse. The object we’re trying to control has radically changed. And if we don’t rebuild our understanding of quality itself, QA risks becoming a set of rituals — creating an illusion of control while drifting further from the actual state of the system.

Where classical QA comes from

Stripped of romance, classical QA grew out of a world of relatively predictable systems. Application logic was fixed. Interfaces didn’t change daily. Input data was bounded. Releases shipped infrequently. Product behavior was determined by what the developers explicitly put into it.

In that environment, you really could build a working process around requirements, test documentation, manual and automated checks, regression, and a final release decision. The model wasn’t primitive — it worked well wherever the system was reasonably stable and its behavior could be enumerated, checked, and frozen.

But it rested on assumptions that are now breaking down:

  • The product can be treated as an isolated artifact.
  • After release, it won’t change radically without a new change cycle.
  • Behavior is determined by code, not by continuous environmental dynamics.
  • Quality can be confirmed through a predefined set of checks.

Where the assumptions collapse

The main problem isn’t even AI. AI just made the old approach’s crisis obvious. Before AI, we’d already arrived at a world where a software product almost never exists by itself. It depends on external services, cloud infrastructure, analytics platforms, third-party libraries, delivery pipelines, configuration, and constant environmental change.

Quality stopped being a property of one piece of code. It’s now defined by how the system behaves at the seams: between services, between teams, between data and business logic, between documentation and actual implementation, between user expectations and the real product. A bug appears not because some component is formally broken, but because several individually-correct parts started interacting dangerously.
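The seam problem can be made concrete with a toy contract check. Everything here — the schema, field names, and payloads — is hypothetical; the point is that each side passes its own unit tests while the integration still breaks, because the two parts disagree about what a field means:

```python
# Hypothetical consumer-side contract for an order event.
SCHEMA = {"order_id": str, "amount_cents": int}

def contract_violations(payload: dict) -> list:
    """Check an incoming payload against the consumer's expectations."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type: {field}")
    return errors

# The producer is "correct" by its own tests, but sends the amount
# in dollars as a float rather than in integer cents:
violations = contract_violations({"order_id": "A-17", "amount_cents": 12.5})
```

Neither component is formally broken — the defect only exists at the seam, which is exactly why contract checks like this run at the boundary rather than inside either service.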

Continuous delivery added to this. When a product is constantly updated, the idea of a single moment-of-truth before release loses meaning. You can’t honestly call a system “verified” if it’ll change tomorrow, get a new config the day after, and run in different conditions a week from now. Quality stops being a one-time act — either it’s maintained continuously, or it slowly degrades.

There’s a third thing — artifact desync. Code, docs, requirements, tests, monitoring, deployment scenarios, and the teams’ mental models of the system live at different speeds. The organization ends up working not with one system, but with several versions of the truth about it. Then things break even without explicit bugs — nobody actually knows what counts as normal behavior anymore.

What does AI have to do with it

AI didn’t just add another component to the product. It struck at the foundation of classical QA — the assumption that system behavior can be sufficiently described upfront and then confirmed by checks.

When you have a regular function with deterministic logic, you can build a clear verification strategy around it. But when the system contains a component whose behavior depends on data, context, the probabilistic nature of a model, retraining, or external feedback — identical inputs can produce different outputs. The model can degrade over time. Explaining the cause of a specific decision is hard even for the developers. And if the model is embedded in a decision-making path — the cost of that opacity rises sharply.
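One practical consequence: checks on such components shift from exact assertions to statistical ones. A minimal sketch of the idea, using a seeded random stand-in instead of a real model (the function, inputs, and threshold are all invented for illustration):

```python
import random

def flaky_classifier(text: str, rng: random.Random) -> str:
    """Stand-in for a probabilistic model: usually right, sometimes not."""
    return "positive" if rng.random() < 0.9 else "negative"

def eval_pass_rate(inputs, expected, runs=200, seed=0) -> float:
    """Score the component over many runs instead of asserting
    a single exact output for a single call."""
    rng = random.Random(seed)
    hits, total = 0, 0
    for _ in range(runs):
        for text, label in zip(inputs, expected):
            hits += (flaky_classifier(text, rng) == label)
            total += 1
    return hits / total

rate = eval_pass_rate(["great product"], ["positive"])
assert rate >= 0.85  # a quality threshold, not an equality check
```

The assertion is a floor on aggregate behavior, which is the only kind of promise a nondeterministic component can honestly make.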

A telling example: large language models embedded as an interface layer. On paper it looks great — the user types a query, the model helps, the company gets a better UX. But once you connect that model to internal system functions, external data sources, and APIs, entirely new classes of vulnerability appear. The unpleasant part: they often don’t reduce to a defect in any single component.

There’s also a flip side. AI doesn’t only destroy old conceptions of quality — it also provides tools for new QA: analyzing logs, finding anomalies, surfacing likely root causes, prioritizing tests, predicting defect-prone areas, syncing documentation, accelerating incident investigation. The paradox: AI simultaneously makes quality harder to ensure — and becomes a necessary tool to handle that complexity.
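The “finding anomalies” part doesn’t require heavyweight ML to start. Even a crude statistical baseline over log windows shows the shape of it — this sketch uses a plain z-score check, and the window counts are made up:

```python
import statistics

def anomalous_windows(error_counts, z_threshold=3.0):
    """Return indices of log windows whose error count deviates
    strongly from the overall baseline (simple z-score check)."""
    mean = statistics.mean(error_counts)
    stdev = statistics.pstdev(error_counts) or 1.0  # avoid division by zero
    return [i for i, count in enumerate(error_counts)
            if abs(count - mean) / stdev > z_threshold]

# Twelve quiet windows, then a spike in the last one:
spikes = anomalous_windows([5, 6, 5, 4, 5, 6, 5, 4, 5, 6, 5, 4, 80])
```

A real pipeline would replace the z-score with something drift-aware and seasonal, but the contract is the same: continuous signal in, deviations out.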

Quality as a trajectory, not a state

Modern quality is better described not as a static property, but as a trajectory of the system over time. While behavior stays within acceptable bounds, quality is being maintained. When the system drifts outside those bounds, you face a defect, a security incident, or dangerous UX degradation.

In that framing, the goal of quality isn’t to prove correctness once, but to:

  • continuously observe the system’s state,
  • notice deviations from normal,
  • bring the system back into the controlled region.

In this picture, QA becomes closer to management than to inspection. Manual testing, regression, and classical automation don’t disappear — but they stop being the central act.
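The “observe and detect” half of that loop can be encoded directly: declare the acceptable region as explicit bounds and check each snapshot of the live system against it. The metric names and limits below are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Bound:
    """An acceptable range for one observed metric."""
    metric: str
    low: float
    high: float

def drifted(snapshot: dict, bounds: list) -> list:
    """Return the metrics that have left their acceptable region.
    Metrics absent from the snapshot are treated as in-bounds."""
    return [b.metric for b in bounds
            if not (b.low <= snapshot.get(b.metric, b.low) <= b.high)]

# Hypothetical bounds and a snapshot where the error rate has drifted out:
BOUNDS = [Bound("error_rate", 0.0, 0.01),
          Bound("p95_latency_ms", 0.0, 300.0)]
out_of_bounds = drifted({"error_rate": 0.04, "p95_latency_ms": 180.0}, BOUNDS)
```

The value isn’t in the code — it’s that “normal behavior” becomes a versioned artifact the whole team can see and argue about, instead of a mental model.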

What changes in the team

Earlier you could think of QA as a separate function that arrives to check the finished result. Now quality increasingly becomes a collective responsibility. Developers, testers, security, management, and analysts need to work within a shared understanding of:

  • what constraints keep the system in an acceptable state,
  • what counts as a signal of degradation,
  • who is responsible for detection and response.

Only this way can you keep a product in acceptable shape, even when it lives in a constantly-changing world. In QA you increasingly have to work not only with tests but with system behavior in the real environment — degradation, security, observability, the impact of AI components, and the quality of decisions after release.

Bottom line

In short: classical QA didn’t “die” — it stopped being sufficient. It remains the base. Skills in test-case design, regression, and automation aren’t going anywhere. But a new layer is built on top: continuous quality management in a living system.

This isn’t a future concern. Teams still operating purely in the “test before release” model already see how things break that were never explicitly tested — because testing everything is fundamentally impossible. The only way not to drown is to change the lens. Stop seeing quality as a point. Start seeing it as a trajectory.