Shopify Product Tags: Why They Hurt SEO & How to Fix It
Shopify product tags are a useful feature for organizing and filtering products. From an SEO perspective, however, they create a serious problem: every tag under every collection generates a separate, indexable URL. For a store with 20 collections and 15 tags each, that is 300 thin pages, all with duplicate titles, descriptions, and minimal unique content.
How Tags Create SEO Problems
When you add tags to products and enable tag filtering on collections, Shopify creates URLs like:
- `/collections/perfume/125ml`
- `/collections/perfume/womens`
- `/collections/perfume/gift-set`
Each of these pages has the same H1, same meta description, and same collection description as the parent—just showing fewer products. Search engines see these as thin, duplicate content.
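Many themes generate these filter links with Liquid's `link_to_tag` filter. A minimal sketch of the markup that produces them (the surrounding `<ul>` is illustrative, not required):

```liquid
{% comment %}
  Typical tag-filter markup in a collection template. Each generated
  link resolves to /collections/<collection-handle>/<tag>, which is
  the URL pattern that creates the extra indexable pages.
{% endcomment %}
<ul>
  {% for tag in collection.all_tags %}
    <li>{{ tag | link_to_tag: tag }}</li>
  {% endfor %}
</ul>
```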
The Compounding Problem
- Crawl budget waste. Googlebot spends time crawling tag pages instead of your important product and collection pages.
- Index bloat. Hundreds or thousands of low-value pages dilute your domain's quality signals.
- Missed optimization. Tag pages cannot be customized individually (no unique meta description or content), so they compete poorly in search.
How to Check Your Exposure
Search Google for: `site:yourwebsite.com/collections/ intitle:tagged` or `site:yourwebsite.com inurl:tagged`
If you see results, tag pages are in Google's index.
Four Fixes
1. Canonicalize Tag Pages
Point each tag page's canonical to the parent collection. This tells Google the collection is the "real" page.
```liquid
{% if template contains 'collection' and current_tags %}
  <link rel="canonical" href="{{ shop.url }}{{ collection.url }}" />
{% else %}
  <link rel="canonical" href="{{ canonical_url }}" />
{% endif %}
```
Pros: Consolidates ranking signals. Cons: Googlebot still crawls the pages.
2. Noindex Tag Pages
Add a noindex meta tag so Google drops these pages from its index:
```liquid
{% if current_tags %}
  <meta name="robots" content="noindex, follow">
{% endif %}
```
Pros: Stronger signal than canonical alone. Cons: Still uses crawl budget.
3. Block in Robots.txt
Prevent crawlers from accessing tag pages entirely:
```
Disallow: /collections/*/*
```
Pros: Saves crawl budget. Cons: Also blocks product URLs accessed through collection paths—only safe after fixing internal product links to use `/products/handle`.
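Shopify lets you customize `robots.txt` through a `robots.txt.liquid` template. A sketch, assuming you keep the default groups and append the extra rule to the wildcard (`*`) user-agent group:

```liquid
{% comment %} templates/robots.txt.liquid {% endcomment %}
{% for group in robots.default_groups %}
  {{- group.user_agent }}
  {% for rule in group.rules %}
    {{- rule }}
  {% endfor %}
  {%- if group.user_agent.value == '*' %}
    Disallow: /collections/*/*
  {%- endif %}
  {%- if group.sitemap != blank %}
    {{ group.sitemap }}
  {%- endif %}
{% endfor %}
```

Keeping the loop over `robots.default_groups` preserves Shopify's standard rules and only adds your directive on top of them.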
4. Delete Tags or Use Them as Subcategories
If tags are not providing value, remove them. Alternatively, with developer help, transform key tags into proper subcollections with unique content, titles, and descriptions.
Recommended Approach
Use a combination: canonicalize as a baseline (it is always safe), then add robots.txt blocking once you have confirmed all internal product links use canonical `/products/handle` paths. Sequence matters: after robots.txt blocks a URL, Google can no longer read the canonical tag on it, so give the canonicals time to be processed before blocking.
Important: Do Not Canonicalize and Noindex Together
These are conflicting signals. A canonical says "the real version is here." Noindex says "do not index this page." If both are present, Google will likely respect noindex. Use one or the other.
Practical Checklist
- Audit tag page exposure with a `site:` search
- Fix internal product links to use `/products/handle`
- Add canonical tags pointing to parent collections
- Add `Disallow: /collections/*/*` to robots.txt
- Monitor Google Search Console for index coverage changes
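The "fix internal product links" item usually comes down to one Liquid change: themes that pipe `product.url` through the `within` filter emit collection-scoped paths. A sketch of the corrected loop in a collection template:

```liquid
{% comment %}
  Before: {{ product.url | within: collection }}
    emits a collection-scoped URL, e.g. /collections/perfume/products/rose
  After: plain product.url emits the canonical path, e.g. /products/rose
{% endcomment %}
{% for product in collection.products %}
  <a href="{{ product.url }}">{{ product.title }}</a>
{% endfor %}
```

Dropping `within` loses the "you came from this collection" breadcrumb context on product pages, but it is what makes the robots.txt rule in fix 3 safe to deploy.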
Written by
Simbelle Team
The Simbelle team builds AI-powered tools that help Shopify merchants grow their organic visibility. With deep expertise in SEO, e-commerce, and AI search optimization, we share practical strategies that work in the real world — not just in theory.
