A Complete Feasibility Checklist Before Starting Any A/B Test
Running an A/B test without proper feasibility checks is one of the fastest ways to waste traffic, time, and stakeholder trust. Before writing a single line of experiment code or setting up a tool like Optimizely, AB Tasty, or Kameleoon, you should validate whether the test is worth running, technically possible, and statistically meaningful. This post provides a practical, real-world feasibility checklist you can follow before launching any A/B test.
1. Business Objective Clarity
Before thinking about variants or UI changes, ask:
What is the primary business goal?
- Increase conversion rate?
- Improve lead quality?
- Reduce drop‑offs?
Which metric will define success?
- Primary KPI (e.g., form submission, purchase)
- Secondary KPIs (CTR, engagement, scroll depth)
✅ Checklist
- Primary KPI is clearly defined
- KPI is measurable and already tracked
- Stakeholders agree on what “success” means
❌ Avoid tests with vague goals like “improve UX” without a measurable outcome.
2. Hypothesis Validation
Every A/B test must start with a strong hypothesis:
If we change X for users Y, then metric Z will improve because of reason R.
Example
If we move the CTA above the fold for mobile users, then form submissions will increase because users will see the CTA earlier.
✅ Checklist
- Clear cause‑effect relationship
- Backed by data (analytics, heatmaps, user feedback)
- Not based on personal opinion or stakeholder preference
3. Technical Feasibility
Technical feasibility is the most critical and most underestimated part of any A/B test. Many experiments fail not because the idea was bad, but because the website or app could not reliably support the test from a technical standpoint. This section dives deep into what must be validated before writing a single line of experiment code.
3.1. Page Type & Architecture (SPA vs MPA)
First, identify what kind of page you are testing.
Multi‑Page Application (MPA):
- Full page reloads
- Easier targeting and activation
- Fewer timing issues
Single Page Application (SPA):
- No page reloads on navigation
- URL changes via JavaScript
- Components load asynchronously
- DOM nodes can be destroyed and recreated
Why this matters:
- Default page‑load based triggers often fail in SPAs
- Experiments may never activate or may activate too early
Checklist:
- Is this route SPA‑based?
- Does navigation rely on pushState / replaceState?
- Do components render after API calls?
If yes → you must plan for SPA-safe triggers and DOM-waiting logic, as in the sketch below.
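A minimal sketch of an SPA-safe activation hook, assuming your testing tool lets you run custom JavaScript; activateExperiment() and the /pricing route are placeholders for your own entry point and target page:

function activateExperiment() {
  // Placeholder: apply the variant changes here.
}

// Re-evaluate activation on every client-side route change by wrapping the
// History API and listening for back/forward navigation.
function onSpaNavigation(callback) {
  ['pushState', 'replaceState'].forEach(function (method) {
    var original = history[method];
    history[method] = function () {
      var result = original.apply(this, arguments);
      callback(location.pathname); // route changed via JavaScript
      return result;
    };
  });
  window.addEventListener('popstate', function () {
    callback(location.pathname); // back/forward navigation
  });
  callback(location.pathname); // also evaluate the route the user landed on
}

onSpaNavigation(function (path) {
  if (path === '/pricing') activateExperiment();
});

Most testing tools also ship their own SPA or "history change" triggers; prefer those when available and treat a hook like this as a fallback.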
3.2. Element Availability & Stability
Before approving a test, confirm:
- Do target elements exist immediately?
- Are they rendered after API responses?
- Do re‑renders replace DOM nodes?
Red flags:
- Elements flicker or re‑mount
- IDs or classes change between renders
- Elements appear only after user interaction
Mitigation:
- Use a DOM observer or helper (e.g. defineOptiReady); see the sketch after this list
- Target stable attributes (data-test, data-qa)
- Avoid brittle selectors tied to styling classes
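A minimal DOM-waiting sketch along those lines, using MutationObserver; the [data-qa="signup-cta"] selector, the CTA copy, and the 10-second timeout are illustrative assumptions:

// Run the callback once the target element exists, whether it is already in the
// DOM or rendered later after an API response.
function waitForElement(selector, onFound, timeoutMs) {
  var existing = document.querySelector(selector);
  if (existing) { onFound(existing); return; }

  var observer = new MutationObserver(function () {
    var el = document.querySelector(selector);
    if (el) {
      observer.disconnect(); // stop watching once the element appears
      onFound(el);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });

  // Fail safely if the element never renders (e.g. API error).
  setTimeout(function () { observer.disconnect(); }, timeoutMs || 10000);
}

waitForElement('[data-qa="signup-cta"]', function (cta) {
  cta.textContent = 'Start your free trial'; // illustrative variant change
});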
3.3. Selector Reliability
Poor selectors cause silent experiment failures.
Validate selectors against:
- Multiple page loads
- Logged‑in vs logged‑out states
- Different user journeys
Good selectors:
- data-test, data-testid, data-qa
- Stable container‑based selectors
Bad selectors:
- Auto‑generated CSS classes
- Deep nth-child chains
- Text‑based selectors in dynamic content
If selectors are not stable → the test is not technically feasible.
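To make the distinction concrete, here is a hedged illustration; the class names and data-qa values are invented for the example:

// Brittle: auto-generated utility class plus a deep nth-child chain,
// likely to break on the next deploy.
document.querySelector('.css-1kq9lfa > div:nth-child(3) > span');

// Stable: a dedicated test attribute owned by the team.
document.querySelector('[data-qa="pricing-cta"]');

// Quick feasibility check: the selector should match exactly one node
// across page loads, logged-in/logged-out states, and key user journeys.
console.log(document.querySelectorAll('[data-qa="pricing-cta"]').length === 1);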
3.4. Re‑Execution & Duplication Risk
SPAs can trigger experiment code multiple times.
Validate:
- Does route re‑entry re‑run the experiment?
- Does DOM mutation re‑fire logic?
Mandatory safeguard:
- Global execution guards
Example:
// Use an experiment-specific flag so parallel tests do not collide; most testing
// tools run variant code inside a function wrapper, so the early return is valid.
if (window.__expApplied) return;
window.__expApplied = true;
Without this guard, duplicate executions inflate metrics and break the UI.
3.5. Interaction With Existing JavaScript
Experiments rarely run in isolation.
Check for conflicts with:
- Analytics listeners
- Form validation
- Framework lifecycle hooks
- Other active experiments
Ask:
- Does the experiment modify shared components?
- Does it interfere with existing events or state?
If yes → refactor the approach or do not launch (one safer pattern is sketched below).
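As a hedged illustration of coexisting with existing listeners rather than replacing them (the selector is invented for the example):

// Safer: attach an additional listener instead of overwriting existing handlers,
// so analytics and validation listeners on the same element keep firing.
var cta = document.querySelector('[data-qa="signup-cta"]');
if (cta) {
  cta.addEventListener('click', function () {
    // Experiment-specific behaviour goes here.
  });
}
// Risky: cta.onclick = ... overwrites handlers, and cloning or replacing the node
// silently drops every listener attached by the site's own scripts.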
3.6. Performance & Flicker Risk
Every experiment adds JavaScript.
Validate:
- Will observers watch the entire DOM?
- Will logic run repeatedly?
- Will users see content flicker?
Best practices:
- Scope observers to minimal containers
- Disconnect observers after success
- Avoid heavy synchronous DOM operations
Performance regression = failed experiment.
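One way to contain flicker, sketched under the assumption that the waitForElement helper from section 3.2 is in scope and that [data-qa="hero"] marks the targeted container: hide only that container, apply the change, then reveal, with a hard timeout so nothing stays hidden if the experiment fails.

// Hide only the targeted container (never the whole page) until the variant is applied.
var style = document.createElement('style');
style.id = 'exp-antiflicker';
style.textContent = '[data-qa="hero"] { opacity: 0 !important; }'; // illustrative container
document.head.appendChild(style);

function revealHero() {
  var s = document.getElementById('exp-antiflicker');
  if (s) s.remove();
}

// Hard timeout: never leave content hidden if the change cannot be applied.
setTimeout(revealHero, 1000);

// Assumes the waitForElement helper sketched in section 3.2.
waitForElement('[data-qa="hero"]', function (hero) {
  var headline = hero.querySelector('h1');
  if (headline) headline.textContent = 'New headline'; // illustrative change
  revealHero();
});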
3.7. Form Reuse Feasibility Checklist (Before Approving the Test)
Before using an existing form from Page X on another page:
☐ Submit the form on the original page and inspect the Network tab
☐ Confirm the submission API returns 200 / success
☐ Check if the request includes CSRF / session / page-specific tokens
☐ Verify tokens are not tied to a specific URL or route
☐ Test submission from the new page (or console simulation)
☐ Watch for console errors (auth, token, context issues)
☐ Confirm the entry is stored in the backend / CRM / database
☐ Validate hidden fields and page metadata are populated correctly
☐ Ensure attribution and reporting are not broken
☐ Get backend confirmation if token reuse is required
Rule:
If backend storage or submission fails → the test is not technically feasible.
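A console-simulation sketch for the submission check above; the selector is hypothetical, and the real endpoint, method, and tokens should be read from the Network tab on the original page:

// Submit the existing form's data from the new page and inspect the response
// before approving the test.
var form = document.querySelector('[data-qa="lead-form"]');
if (!form) throw new Error('Form not found on this page');

var payload = new FormData(form); // picks up hidden fields such as CSRF / session tokens

fetch(form.action, { method: 'POST', body: payload, credentials: 'include' })
  .then(function (response) {
    console.log('Status:', response.status); // expect 200 / success
    return response.text();
  })
  .then(function (body) { console.log('Response body:', body); })
  .catch(function (error) { console.error('Submission failed:', error); });

// A 200 response is not enough: confirm the entry also appears in the backend / CRM.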
Final Thoughts
A/B testing in modern web applications is engineering work, not just marketing experimentation.
When you:
- Respect SPA architecture
- Debug using the Network tab as the source of truth
- Manage experiments like real code
- Validate feasibility before launch
You move from guessing to decision-grade experimentation.
Good experiments don’t just change UI—they change confidence.