A free fortnightly email highlighting relevant tips, advice, and case studies from the world of product and engineering for the SEO community.
Hello 👋,
In this issue, the topic is experimentation and discovery.
Successful product teams use experiments and discovery sprints to get buy-in, reduce risk and continuously learn.
Stay safe and enjoy.
Adam
📰 Newsletter News
I wanted to let you know that I’m currently writing my own newsletters based on my experience in product and as an SEO (not just round-ups!).
It has been a busy year, but I'm prioritising time for writing.
I’ve calculated that even if I wrote one a week, that would be 8 months of newsletters 😯. I'll have to do some kind of course at this rate...
Anthony Rindone, Product Manager for Split, talks about how product teams can make better decisions using experiments.
Adam’s Insight: Did you think SEOs were the only ones who do experiments?
Successful product and UX teams actually spend time in discovery sprints trying to learn more about customers' needs through A/B split tests, experiments and minimum viable products (MVPs).
Testing your ideas as early as possible is very humbling: customers will quickly prove when an idea isn't great (or even useful). The most important thing about experimenting is continuous learning. This principle of building and learning quickly helps teams prove hypotheses, prioritise feature requests and mitigate risk for big (resource-heavy) ideas.
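As a rough illustration of the mechanics behind an A/B split test (this is my own sketch, not something from Anthony's webinar): most experimentation platforms deterministically bucket each user into a variant, so a visitor gets the same experience on every return. A minimal sketch in Python, with hypothetical experiment and user names:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user so they always get the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user_123", "new_snippet_layout"))
```

Hashing the user and experiment together keeps assignment stable without storing any state, and the same user is bucketed independently across different experiments.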
What I like about Anthony's webinar is that he admits what he learned from each experiment and how he'd do it differently in the future. This is exactly how product teams (in my experience) use experiments.
If you'd like to read more about general product experimentation, there's a good guide here, and a famous SEO experiment guide from Pinterest here.
Product teams use visual models to map out user journeys, tell stories and get buy-in. If you're struggling with a problem, this is yet another method that can help you discover potential customers.
If you're an SEO interested in learning a visual method to map out and discover different customers and their pain points, I recommend giving this a read.
I'd recommend using a tool like Miro to map out your opportunity trees.
This is part 3 of how Netflix uses A/B split testing, in which the Netflix engineering team talks about interpreting A/B split test results and statistical significance. You can read parts 1 and 2 below:
Adam’s Insight: Interpreting A/B split test results is tricky even for product and engineering teams.
What I enjoyed about this blog post is that it's an accessible guide to interpreting complex A/B test results. For example, they explain false negatives and false positives using cats and dogs!
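To make that concrete (again, my own sketch, not code from the Netflix post): a common way to judge whether a lift in conversion rate is real or just noise is a two-proportion z-test. The conversion counts below are hypothetical:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # conversion rate under "no difference"
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Hypothetical results: control 200/10,000 vs treatment 250/10,000 conversions.
z, p = two_proportion_z_test(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 2.38, p ≈ 0.017
```

With the usual 0.05 significance threshold, a variant with no real effect will still "win" about 5% of the time, which is exactly the false-positive idea the Netflix post illustrates with cats and dogs.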
If you're still getting your head around A/B split testing, then this series is a good introduction to how an enterprise company like Netflix runs experiments.