{"id":7652,"date":"2025-08-09T13:46:42","date_gmt":"2025-08-09T09:46:42","guid":{"rendered":"https:\/\/www.matsh.co\/en\/?p=7652"},"modified":"2025-08-09T13:46:42","modified_gmt":"2025-08-09T09:46:42","slug":"how-to-use-statistics-in-a-b-testing","status":"publish","type":"post","link":"https:\/\/matsh.co\/en\/how-to-use-statistics-in-a-b-testing\/","title":{"rendered":"How to use statistics in A\/B testing"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/storage.googleapis.com\/48877118-7272-4a4d-b302-0465d8aa4548\/d8a69ed5-48d4-411f-8a77-974817c8fa5a\/6942f55e-e881-4470-9e33-ff77458b2973.jpg\" alt=\"How to use statistics in A\/B testing\" \/><\/p>\n<p>Imagine running marketing campaigns where every choice is backed by <strong>real user behavior<\/strong> instead of hunches. Split testing (often called A\/B testing) turns this vision into reality by comparing two webpage versions to see which resonates better with audiences. It\u2019s like having a compass for optimization \u2013 pointing toward designs that deliver measurable results.<\/p>\n<p>Why does this matter? When we analyze performance through <em>controlled experiments<\/em>, we move beyond surface-level changes. Instead, we uncover patterns in <a href=\"https:\/\/www.invespcro.com\/blog\/ab-testing-statistics-made-simple\/\" target=\"_blank\" rel=\"noopener\">analytics data<\/a> that reveal what truly drives clicks, sign-ups, or purchases. This approach transforms subjective debates into objective conversations about what works.<\/p>\n<p>But here\u2019s the catch: raw numbers alone don\u2019t tell the full story. Without understanding concepts like confidence intervals or sample sizes, even promising results can mislead. That\u2019s where statistical rigor becomes our secret weapon \u2013 separating random noise from meaningful trends.<\/p>\n<h3>Key Takeaways<\/h3>\n<ul>\n<li>Split testing compares webpage variations to identify top performers using real-user data<\/li>\n<li>Statistical analysis prevents guesswork by validating results with mathematical precision<\/li>\n<li>Proper experiment design ensures tests measure actual user preferences, not random chance<\/li>\n<li>Confidence levels and error margins act as quality checks for your findings<\/li>\n<li>Actionable insights from testing can systematically improve conversion rates over time<\/li>\n<\/ul>\n<p>Throughout this guide, we\u2019ll explore how to set up experiments that answer specific business questions. You\u2019ll learn to interpret p-values like a pro and avoid common pitfalls that skew outcomes. Let\u2019s turn those maybe-this-will-work ideas into proven strategies.<\/p>\n<h2>Introduction to Statistics in A\/B Testing<\/h2>\n<p>Think of data as your compass in a sea of marketing decisions. When we compare webpage versions, we&#8217;re not just guessing \u2013 we&#8217;re building evidence for what truly connects with visitors. This evidence becomes actionable through statistical frameworks that turn raw numbers into reliable insights.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/storage.googleapis.com\/48877118-7272-4a4d-b302-0465d8aa4548\/d8a69ed5-48d4-411f-8a77-974817c8fa5a\/56c0fb5e-e2dd-49e0-ab5f-80e82ed8a249.jpg\" alt=\"statistical analysis in A\/B testing\" \/><\/p>\n<h3>The Role of Data in Our Experiments<\/h3>\n<p>Every click and scroll tells a story. Our job is to listen through careful measurement. By tracking specific metrics, we separate meaningful patterns from random noise. 
<h3>Why Numbers Need Interpretation</h3>
<p>Raw metrics can deceive. We once saw a holiday campaign show a 12% lift – until we accounted for seasonal traffic spikes. This is why understanding <a href="https://blog.analytics-toolkit.com/2022/a-b-testing-statistics-a-concise-guide/" target="_blank" rel="noopener">statistical concepts</a> matters. It helps us:</p>
<ul>
<li>Validate whether changes actually drive improvements</li>
<li>Calculate required sample sizes before launching tests</li>
<li>Determine when to trust the numbers – and when to retest</li>
</ul>
<p>Without this foundation, we risk making expensive mistakes. But with it, we transform opinions into evidence-based strategies that consistently move the needle.</p>
<h2>Fundamental Concepts and Key Terminology</h2>
<p>Picture two versions of a webpage competing for user attention. To determine which truly performs better, we need clear rules for comparison. This starts with establishing measurable goals and understanding when differences matter.</p>
<p><img src="https://storage.googleapis.com/48877118-7272-4a4d-b302-0465d8aa4548/d8a69ed5-48d4-411f-8a77-974817c8fa5a/7e4301e1-55c9-46b0-8667-055a58beb6e7.jpg" alt="hypothesis testing framework"></p>
<h3>Defining Hypotheses and Metrics</h3>
<p>Every experiment begins with two competing ideas. Our <strong>null hypothesis</strong> assumes no change from the current version. The <strong>alternative hypothesis</strong> claims our proposed variation creates improvement. For example: "Changing the button color from blue to red increases sign-ups."</p>
<p>We measure success through <em>conversion metrics</em> – the percentage of visitors completing target actions. If 500 visitors generate 25 sign-ups, our rate is 5% (25/500 × 100). These numbers become our evidence for comparing versions.</p>
<h3>Statistical Significance Explained</h3>
<p>When results show a difference, we ask: "Is this real or random?" Statistical significance answers this using probability. A <a href="https://towardsdatascience.com/a-b-testing-a-complete-guide-to-statistical-testing-e3f1db140499/" target="_blank" rel="noopener">comprehensive statistical testing guide</a> explains the convention: a p-value below 0.05 means there would be less than a 5% chance of seeing a difference this large if the variation had no real effect.</p>
<p>But significance thresholds don't guarantee business impact. A 0.1% conversion lift might be statistically confirmed with huge traffic, yet irrelevant for real-world goals. We always pair mathematical certainty with practical relevance.</p>
<p>By framing tests properly and interpreting results through both lenses, we avoid chasing insignificant fluctuations. This balanced approach turns raw data into trustworthy optimization strategies.</p>
<h2>Understanding Discrete and Continuous Metrics</h2>
<p>What if your test results depended on how you measure success? Metrics fall into two categories that shape analysis: those with yes/no answers and those tracking nuanced behaviors. Choosing the right category determines whether we spot genuine improvements or chase statistical ghosts.</p>
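<p>As a hedged illustration (the conversion probability and revenue distribution below are invented), the same per-visitor log can yield one metric of each kind:</p>
<pre><code># A sketch contrasting the two metric families on the same invented
# per-visitor data: a binary conversion flag versus revenue.
import numpy as np

rng = np.random.default_rng(seed=7)
n_visitors = 1_000

converted = rng.random(n_visitors) &lt; 0.078          # discrete: yes/no outcome
revenue = np.where(converted, rng.gamma(2.0, 25.0, n_visitors), 0.0)  # continuous

print(f"Conversion rate:  {converted.mean():.1%}")  # proportion of successes
print(f"Revenue per user: ${revenue.mean():.2f}")   # mean of a numeric range
</code></pre>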
<p><img src="https://storage.googleapis.com/48877118-7272-4a4d-b302-0465d8aa4548/d8a69ed5-48d4-411f-8a77-974817c8fa5a/943bcb9c-e12c-4cb1-89f6-6f5ab0380b78.jpg" alt="discrete vs continuous metrics"></p>
<h3>When Outcomes Are Binary</h3>
<p><strong>Discrete metrics</strong> act like light switches – either on or off. A visitor converts (1) or doesn't (0). These yes/no measurements include conversion rate and bounce rate. For example, 78 sign-ups from 1,000 visitors mean a 7.8% conversion rate.</p>
<p>Why focus here? These metrics answer direct business questions: "Did our change increase purchases?" They're perfect for initial tests because results are clear-cut. But they don't reveal <em>how much</em> value each user brings.</p>
<h3>Measuring Shades of Gray</h3>
<p><strong>Continuous metrics</strong> capture gradients of success. Average revenue per user might be $42.75 in Version A vs $38.90 in B. Session duration could range from 15 seconds to 30 minutes. These numbers show engagement depth.</p>
<p>E-commerce sites often track average order value. Content platforms monitor time spent per article. Unlike binary metrics, these require different math – comparing means rather than proportions.</p>
<table>
<tr><th>Metric Type</th><th>Key Traits</th><th>Common Examples</th><th>Analysis Method</th></tr>
<tr><td>Discrete</td><td>Binary outcomes (0/1)</td><td>Conversion rate, click-through rate</td><td>Chi-squared test</td></tr>
<tr><td>Continuous</td><td>Range of numerical values</td><td>Revenue per user, session duration</td><td>T-tests</td></tr>
</table>
<p>Mixing metric types? A <a href="https://medium.com/data-science/a-b-testing-a-complete-guide-to-statistical-testing-e3f1db140499" target="_blank" rel="noopener">comprehensive statistical testing guide</a> explains how to handle complex scenarios. Remember: your choice between these metric families dictates which tools unlock reliable insights.</p>
<h2>How to use statistics in A/B testing</h2>
<p>Ever wonder why some tests give clear answers while others leave you guessing? Human behavior naturally fluctuates – visitors might love a new layout today but ignore it tomorrow. Our challenge? Separate genuine preferences from random noise through smart experiment design.</p>
<p><img src="https://storage.googleapis.com/48877118-7272-4a4d-b302-0465d8aa4548/d8a69ed5-48d4-411f-8a77-974817c8fa5a/38fcc664-b7cf-493e-bb3f-48d1497b6a87.jpg" alt="A/B test design"></p>
<p>Natural variability affects every outcome. A 10% conversion boost might vanish next week if we ignore user mood swings. That's why our designs need built-in safeguards. We calculate required sample sizes upfront, ensuring we collect enough data without wasting resources.</p>
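<p>One common way to do this upfront calculation is a power analysis. The sketch below assumes a 5% baseline conversion rate, a one-point absolute lift worth detecting, 80% power, and a 5% significance level; all four numbers are illustrative choices, not recommendations.</p>
<pre><code># A power-analysis sketch for sizing a conversion test. The baseline,
# target lift, power, and alpha below are illustrative assumptions.
from statsmodels.stats.power import zt_ind_solve_power
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05   # current conversion rate (assumed)
target = 0.06     # smallest improved rate worth detecting (assumed)

effect = proportion_effectsize(target, baseline)  # Cohen's h
n_per_variation = zt_ind_solve_power(
    effect_size=effect,
    alpha=0.05,   # accepted false-positive risk
    power=0.80,   # chance of detecting the effect if it is real
)
print(f"Visitors needed per variation: {n_per_variation:,.0f}")
</code></pre>
<p>Note how a one-point lift on a 5% baseline demands thousands of visitors per variation: small effects are expensive to detect.</p>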
<p>Three pillars support reliable experiments:</p>
<table>
<tr><th>Focus Area</th><th>Purpose</th><th>Key Consideration</th><th>Impact</th></tr>
<tr><td>Behavior patterns</td><td>Account for user inconsistency</td><td>Weekday vs weekend traffic</td><td>Reduces false positives</td></tr>
<tr><td>Data thresholds</td><td>Determine minimum viable sample</td><td>Expected effect size</td><td>Prevents early conclusions</td></tr>
<tr><td>Confidence levels</td><td>Measure result reliability</td><td>95% industry standard</td><td>Quantifies uncertainty</td></tr>
</table>
<p>Proper planning helps avoid common traps. Setting clear goals before launch keeps us focused on measurable outcomes. We might aim for 500 participants per variation to detect 5%+ conversion changes – numbers grounded in statistical power calculations.</p>
<p>Uncertainty becomes our ally when handled correctly. Instead of fearing ambiguous results, we use confidence intervals to show possible outcome ranges. A "12-18% lift likely" statement often proves more useful than a single misleading percentage.</p>
<p>By respecting variability in our designs, we turn chaotic data into trustworthy guides. The right balance between mathematical rigor and practical insight helps teams make confident decisions that consistently improve user experiences.</p>
<h2>Choosing the Right Statistical Test</h2>
<p>Different data types demand different analytical approaches. Our selection process balances mathematical precision with practical constraints – like sample availability and distribution patterns. Let's explore methods that turn raw numbers into trustworthy conclusions.</p>
<p><img src="https://storage.googleapis.com/48877118-7272-4a4d-b302-0465d8aa4548/d8a69ed5-48d4-411f-8a77-974817c8fa5a/dc3fb5dd-b042-4ac1-86ab-38c7fc687c21.jpg" alt="choosing statistical tests"></p>
<h3>Fisher's Exact Test and Chi-Squared Test</h3>
<p><strong>Fisher's exact test</strong> shines with small samples and binary outcomes. When testing a new checkout button with 150 visitors, it calculates exact probabilities using the hypergeometric distribution. This precision prevents false conclusions in low-traffic experiments.</p>
<p>Pearson's chi-squared test handles larger datasets efficiently. For campaigns reaching 10,000+ users, it approximates results faster while maintaining accuracy. Both methods answer yes/no questions but scale differently based on data volume.</p>
<h3>T-tests, Welch's t-test, and Non-Parametric Methods</h3>
<p>Continuous metrics like purchase amounts require different tools. <em>Student's t-test</em> compares averages when group variances match. Uneven spreads? <strong>Welch's t-test</strong> adjusts the calculation automatically.</p>
<p>Non-normal distributions call for non-parametric tests. The Mann-Whitney U test ranks values without assuming specific data shapes. It's our safety net when revenue patterns skew unexpectedly.</p>
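<p>To show how little code each option takes, here is a sketch running all three tests on invented data; the 2×2 counts and the revenue distributions are ours, chosen only to exercise the SciPy APIs.</p>
<pre><code># A sketch exercising the three tests above on invented data.
import numpy as np
from scipy import stats

# Small binary sample: Fisher's exact test on a 2x2 table of
# [converted, not converted] counts for two variants.
table = [[12, 138], [21, 129]]
_, p_fisher = stats.fisher_exact(table)

# Continuous metric with unequal variances: Welch's t-test.
rng = np.random.default_rng(seed=1)
revenue_a = rng.normal(42.75, 15.0, 400)
revenue_b = rng.normal(38.90, 9.0, 400)
_, p_welch = stats.ttest_ind(revenue_a, revenue_b, equal_var=False)

# Skewed, non-normal metric: Mann-Whitney U compares ranks, not means.
_, p_mwu = stats.mannwhitneyu(revenue_a, revenue_b, alternative="two-sided")

print(f"Fisher p={p_fisher:.3f}  Welch p={p_welch:.3f}  Mann-Whitney p={p_mwu:.3f}")
</code></pre>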
It&#8217;s our safety net when revenue patterns skew unexpectedly.<\/p>\n<p>Three decision factors guide our choices:<\/p>\n<ul>\n<li><strong>Data type:<\/strong> Binary vs continuous outcomes<\/li>\n<li><strong>Sample characteristics:<\/strong> Size and variance patterns<\/li>\n<li><strong>Distribution:<\/strong> Normal curves vs irregular spreads<\/li>\n<\/ul>\n<h2>Sample Size, Variability, and Confidence Levels<\/h2>\n<p>What separates reliable test results from random noise? The answer lies in three intertwined factors: enough participants, controlled variability, and clear confidence boundaries. These elements work together like traffic signals \u2013 greenlighting trustworthy conclusions when properly aligned.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/storage.googleapis.com\/48877118-7272-4a4d-b302-0465d8aa4548\/d8a69ed5-48d4-411f-8a77-974817c8fa5a\/f0d66b0f-a97e-4f1f-898c-3a85785d061b.jpg\" alt=\"sample size calculation\" \/><\/p>\n<h3>Calculating the Appropriate Sample Size<\/h3>\n<p>Starting tests without calculating needed participants is like baking without measuring ingredients. We determine minimum requirements using:<\/p>\n<ul>\n<li>Baseline conversion rates<\/li>\n<li>Desired detectable improvement<\/li>\n<li>Statistical power thresholds<\/li>\n<\/ul>\n<p>For most businesses, 100 conversions per variation marks the entry point. However, aiming for 200-300 creates a safety net against unexpected fluctuations. Larger platforms? We recommend waiting for 1,000+ conversions before analyzing \u2013 high traffic demands higher certainty.<\/p>\n<table>\n<tr>\n<th>Website Size<\/th>\n<th>Minimum Conversions<\/th>\n<th>Recommended<\/th>\n<th>Key Benefit<\/th>\n<\/tr>\n<tr>\n<td>Small\/Medium<\/td>\n<td>100<\/td>\n<td>200-300<\/td>\n<td>Balances speed &amp; reliability<\/td>\n<\/tr>\n<tr>\n<td>Large<\/td>\n<td>1,000<\/td>\n<td>1,500+<\/td>\n<td>Reduces margin of error<\/td>\n<\/tr>\n<\/table>\n<h3>Interpreting Confidence Intervals<\/h3>\n<p>Confidence intervals act as probability-based guardrails. A 95% confidence level means repeating the test 100 times would yield similar results 95 times. Wider ranges indicate more uncertainty \u2013 narrower bands signal stronger evidence.<\/p>\n<p>More users shrink these ranges naturally. If Version A shows a 5% lift with 10,000 visitors versus 50, the interval tightens from &#8220;2-8%&#8221; to &#8220;4.8-5.2%&#8221;. This precision helps teams distinguish between temporary spikes and sustainable improvements.<\/p>\n<p>By pairing proper sample sizes with interval analysis, we transform vague guesses into calculated decisions. It\u2019s not about eliminating uncertainty \u2013 it\u2019s about measuring and managing it effectively.<\/p>\n<h2>Interpreting p-values, Confidence Intervals, and Errors<\/h2>\n<p>Think of your experiment results as courtroom evidence \u2013 compelling but needing careful scrutiny. We evaluate this evidence through three lenses: the surprise factor (p-values), error risks, and practical implications. Let&#8217;s break down how these elements work together to separate meaningful wins from statistical illusions.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/storage.googleapis.com\/48877118-7272-4a4d-b302-0465d8aa4548\/d8a69ed5-48d4-411f-8a77-974817c8fa5a\/c0affe6b-ce4c-4ff3-b9ab-c92fe9bbd14f.jpg\" alt=\"p-values and confidence intervals\" \/><\/p>\n<h3>Decoding the Surprise Factor<\/h3>\n<p>A p-value measures how eyebrow-raising our data would be if no real difference existed. 
<h3>Balancing Error Risks</h3>
<p>Two pitfalls haunt decision-making:</p>
<table>
<tr><th>Error Type</th><th>Real-World Impact</th><th>Common Causes</th><th>Mitigation Strategy</th></tr>
<tr><td>Type I (false positive)</td><td>Launching ineffective changes</td><td>Early stopping, small samples</td><td>Set stricter significance thresholds</td></tr>
<tr><td>Type II (false negative)</td><td>Missing profitable improvements</td><td>Insufficient traffic, tiny effects</td><td>Increase sample size or effect sensitivity</td></tr>
</table>
<p>Reducing one error often increases the other. A 99% confidence level slashes false positives but raises missed opportunities. The sweet spot? Most teams use 95% confidence while monitoring practical impact. For high-stakes changes like pricing, we recommend tighter intervals and larger samples.</p>
<p>Confidence intervals add crucial context. A "10-18% lift" range tells us more than a single p-value. When an interval excludes zero and clears our business-goal threshold, we gain confidence to act. Pair this with error awareness, and we make decisions that balance mathematical rigor with real-world pragmatism.</p>
<h2>Data Distribution and Testing Assumptions</h2>
<p>Real-world data rarely behaves like textbook examples. When analyzing experiment outcomes, we often face patterns that challenge traditional assumptions – especially in metrics like revenue per user.</p>
<h3>When Curves Break the Mold</h3>
<p><strong>Zero-inflated distributions</strong> dominate revenue analysis. Most visitors don't purchase, creating a spike at zero. Meanwhile, a small group spends hundreds – skewing results rightward. These patterns demand special handling beyond basic averages.</p>
<p>Multimodal distributions reveal hidden customer segments. Budget shoppers might cluster around $20 purchases, while premium buyers peak at $150. Traditional tests might miss these nuances, leading to misguided conclusions.</p>
<p>Here's the good news: <em>sample size rescues normality</em>. With roughly 40+ participants per group, the central limit theorem starts to smooth irregularities. Even skewed data produces reliably behaved averages when properly scaled. Larger samples (200+) further reduce variability concerns (see the simulation sketch at the end of this section).</p>
<p>Practical testing strategies emerge when we:</p>
<ul>
<li>Acknowledge common data quirks upfront</li>
<li>Choose robust methods for non-normal patterns</li>
<li>Validate assumptions through visual checks</li>
</ul>
<p>By embracing data's messy reality, we build tests that withstand real-world complexity. This approach turns distribution challenges into opportunities for deeper insights – helping teams make confident decisions that stand the test of time.</p>
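<p>As the promised closing sketch of the central limit theorem point: with an invented zero-inflated revenue distribution (8% of visitors buy, lognormal spend), the means of moderately sized samples already look far tamer than the raw data.</p>
<pre><code># A simulation sketch (all parameters invented): even zero-inflated,
# right-skewed revenue yields well-behaved sample means at moderate n.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_revenue(n):
    buyers = rng.random(n) &lt; 0.08                      # most visitors spend $0
    return np.where(buyers, rng.lognormal(3.5, 0.8, n), 0.0)

def skewness(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

raw = simulate_revenue(100_000)
# Distribution of the *mean* across many samples of 200 visitors each
sample_means = np.array([simulate_revenue(200).mean() for _ in range(5000)])

print(f"Skewness of raw revenue:  {skewness(raw):.1f}")          # strongly skewed
print(f"Skewness of sample means: {skewness(sample_means):.2f}")  # far smaller
</code></pre>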