Are you certain your A/B testing methodology is statistically sound and unbiased? It’s the CRO Roundtable Roundup!
Thanks to Iqbal Ali, Julie Fragoso, Slobodan Manić, Shiva Manjunath, Craig Sullivan, David Swinstead, and Dewi Williams for joining us. Want to get in on the action and talk with other cool CRO people? Then you should probably…
This Week's Roundtable Topic
Emotional Intelligence in Experimentation
Session 289: March 13, 2026
Here, have some conversation starters:
- What is a good balance of empathy and data when interpreting experiment results?
- When does empathy for users justify deviating from a top-performing variant?
- How can you effectively communicate test results that may be perceived as undesirable by stakeholders?
One Sentence Takeaway
I’m a busy person; give me the TL;DR
Experimentation programs rarely stall for purely technical reasons; cultural incentives, flawed processes, weak data practices, and human bias all compete to shape outcomes.
The Notes
What we discussed this week
- Competing mental models reveal bottlenecks in experimentation programs
  - Opposing positions expose assumptions about what limits experimentation growth
  - Culture, process, and data get blamed interchangeably
  - Practitioners defend their discipline’s strengths while minimizing weaknesses elsewhere
  - Structured disagreement encourages deeper reasoning than casual discussion
- Organizational culture determines whether experimentation practices get used
  - Teams may have strong data and processes that nobody actually follows
  - Motivation and buy-in determine whether testing programs mature
  - Early experimentation wins often build momentum for broader adoption
  - Leadership behavior strongly shapes whether testing becomes embedded
- Tension between process discipline and experimentation speed
  - Governance and approvals can slow experimentation velocity
  - Agile experimentation requires bypassing rigid processes
  - Misaligned workflows create friction between teams
- Using AI as a tool for adversarial thinking
  - AI can simulate skeptical leaders reviewing proposals or decks
  - Hypotheses can be stress-tested by asking why a test failed
  - Synthetic critique surfaces blind spots teams miss
  - Even limited insights can improve prioritization
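One way to get this kind of synthetic critique is to prompt a model with a skeptical-stakeholder persona and ask it to assume the test failed. A minimal sketch in Python; the function name, persona, and prompt wording are illustrative, not a recommended template:

```python
def build_redteam_prompt(hypothesis: str, result_summary: str) -> str:
    """Compose an adversarial-review prompt to send to an LLM.

    Hypothetical helper: the persona and questions below are illustrative.
    """
    return (
        "You are a skeptical executive reviewing an experiment proposal.\n"
        f"Hypothesis: {hypothesis}\n"
        f"Observed result: {result_summary}\n"
        "Assume the test failed to deliver real value. List the three most "
        "plausible reasons why, including flaws in the hypothesis itself."
    )

prompt = build_redteam_prompt(
    hypothesis="Shorter checkout form increases completed orders",
    result_summary="+1.8% conversions, not statistically significant",
)
print(prompt)
```

Feeding a prompt like this to any chat model turns it into a cheap pre-mortem: the critique is synthetic, but it often surfaces objections the team would otherwise hear for the first time in a stakeholder review.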
- Cost realities of AI development tools challenge large model hype
  - Smaller models often perform similarly for routine tasks
  - Large models can be far more expensive with minimal gains
  - Workflow orchestration can outperform raw model power
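The cost gap is easy to sanity-check with back-of-envelope arithmetic. The prices below are invented placeholders, not real vendor rates; only the ratio matters:

```python
def monthly_token_cost(requests_per_day: int, tokens_per_request: int,
                       price_per_million_tokens: float, days: int = 30) -> float:
    """Back-of-envelope monthly spend for an LLM workload."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_million_tokens

# Placeholder prices -- NOT real vendor rates, purely illustrative.
small_model = monthly_token_cost(5_000, 2_000, price_per_million_tokens=0.50)
large_model = monthly_token_cost(5_000, 2_000, price_per_million_tokens=15.00)
print(f"small: ${small_model:,.2f}/mo, large: ${large_model:,.2f}/mo")
```

At these illustrative rates the large model costs 30x more per month; if both handle the routine task acceptably, that entire multiple is waste.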
- Why stochastic AI systems struggle to replace deterministic software
  - Many software tasks require exact, repeatable behavior
  - Replacing logic with LLM calls introduces inefficiency
  - Basic operations often need far fewer resources
  - Engineers rediscover problems solved by traditional code
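A toy illustration of the point: a routine task like pulling email addresses out of text is solved exactly and repeatably by a few lines of traditional code, whereas routing the same string through an LLM is slower, costs tokens, and may answer differently on each run:

```python
import re

# Simplified email pattern for illustration, not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text: str) -> list[str]:
    """Deterministic extraction: identical input always yields identical
    output, runs in microseconds, and needs no network call."""
    return EMAIL_RE.findall(text)

print(extract_emails("Contact ana@example.com or bob@test.org"))
# -> ['ana@example.com', 'bob@test.org']
```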
- Ethical pitfalls and legal risks in common CRO tactics
  - Resetting countdown timers may violate consumer protection laws
  - Businesses try to legitimize deceptive urgency tactics
  - Stakeholders sometimes keep patterns despite neutral results
  - Ethical CRO requires resisting manipulative designs
- Flawed ideation and prioritization methods distort experimentation
  - Brainstorming sessions often produce weak ideas
  - Dot voting rewards consensus rather than insight
  - Committee prioritization averages ideas toward mediocrity
  - Strong programs separate ideation from decisions
- Quality metrics for experimentation programs must evolve
  - Quality frameworks should highlight gaps, not act as checklists
  - Metrics work best when targeting specific weaknesses
  - Teams often misinterpret scorecards as universal best practices
  - Measurement systems should adapt as programs mature
The Quotes
Standout quips from this week
Book Club
Relevant reads recommended this week
No books this week, sorry!
CRO Link Digest
Useful and thought-provoking content shared this week
- If experimentation quality matters so much, why don’t we measure it? – LinkedIn post by Nils Stotz
Off-Topic Sidebars
Experimentation isn’t the only thing we talk about at the CRO Roundtable. There’s often a healthy dose of discussion on shared interests, personal passions, and hobbies.
- AI business economics
  - Model pricing versus operational cost realities
  - AI subscription plans losing money per user
  - Venture funding sustaining unprofitable AI services
- LinkedIn ecosystem frustrations
  - Lack of meaningful professional network competitors
  - Marketing noise overwhelming genuine industry discussion
  - Platform incentives amplifying hype cycles
- Tech industry conspiracies
  - The PayPal mafia
Sidebar Shareables
Amusing sidebar content shared this week
No Sidebar Shareables this week, sorry!