As the resident technical SEO Analyst here at Digital Third Coast, I'm familiar with the less-glamorous aspects of optimizing websites: indexation, site speed, tracking issues; the list goes on. Bringing internal clients, small business owners, and executives up to speed on the technical specifics of SEO requires a careful balance of analysis and consultation. However, content marketing now owns the spotlight and is ultimately the new way of earning links. Google and other search engines want to serve their users with awesome stuff, and SEO agencies have evolved to meet the new demand for engaging, shareable content.
Sometimes, technical SEOs like myself feel a little left out of the content marketing arms race. If content is king, where can we offer the most value to our clients and teams from a technical SEO perspective? Put simply, as your business or clients send their content across the web, it's critical to make sure that those content pieces are technically sound. Technical SEO should be the foundation of any content marketing effort, to ensure you get as much value as possible from every investment. Without it, small errors can be quickly amplified; opportunities for links can be missed, or worse, you can risk a Google penalty.
In 2013, Dan Petrovic (an Australian SEO expert) made a prediction:
SEO [will become] a strategic, content driven service with technical background at its foundation.
He points out that the prediction hasn't exactly manifested: many agencies offer content marketing services, but few support those services with a strong technical background. This presents SEO experts with a huge opportunity, because focusing on technical SEO fundamentals at the outset of a content marketing campaign will put you head and shoulders above the competition. To get the most out of their content and avoid these sorts of disasters, content marketers should focus on these five core areas of technical SEO:
Anyone interested in SEO has likely been warned of duplicate content issues; however, many don't realize just how easy it is to create internal duplicate content. Many CMSs (especially ASP.NET-based systems) will generate multiple versions of a page. Internal duplicate content is treated slightly differently than external duplicate content: internal duplicates don't seem to incur as much wrath from Google, largely due to Google's ability to determine which version is the preferred ("canonical") page. In addition to CMS issues that cause duplicate content, many small businesses are looking at services that "syndicate" their content, publishing a single piece of content on many other websites around the web. Content syndication is now a key tool in every content marketer's toolbox, but it must be implemented correctly. ACTION: If syndication is part of your content marketing strategy, then there are a few ways you can avoid duplicate content.
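One widely used safeguard is the cross-domain canonical tag: the syndication partner adds a line to the `<head>` of the republished page pointing search engines back to the original. A minimal sketch (the example.com URL is a placeholder for the original publisher's page):

```html
<!-- On the syndicating partner's copy of the article -->
<head>
  <link rel="canonical" href="https://www.example.com/blog/original-article/" />
</head>
```

Other common options are asking the partner to add a meta robots noindex tag to their copy, or at minimum to link back to the original post.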
Use these strategies and you'll be on your way to a healthy, duplication-free website!
As search engines become smarter, their ability to understand explicit and implicit queries is becoming more advanced. In the past, search engines viewed a search query as a string of words, simply matching and presenting content that contained the words someone put into the search box. Now, search engines view keywords as entities. This means they are starting to understand that a word may have multiple meanings, may be related to other words, or may even mean something entirely unique to a specific user. From a content marketing angle, this means that Google is crawling (some would say scraping) the text on websites and providing searchers with simple, immediate answers. Here's an example: for the query "who won the super bowl," Google presents the answer right at the top of the page, above the organic listings.
These extra bits of information, called "rich snippets," build trust and authority with potential site visitors, giving businesses the opportunity to appear as an authority in their niche. Rich snippets are a simple way to get your content noticed by searchers, which leads to more clicks and, in turn, better rankings. There are lots of different snippet types: authorship, recipe, video, event, and music, to name a few. ACTION: To get snippets, add a bit of HTML code to your page. Although the process is straightforward, each snippet type requires unique code. Check out this post from Amanda DiSilvestro for explanations of how structured data should be implemented for common snippet types.
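As one illustration, here is what structured data for an event might look like using schema.org's Event type in JSON-LD (the event name, date, and venue below are placeholder values, and your snippet type will use different properties):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Content Marketing Workshop",
  "startDate": "2015-06-01T09:00",
  "location": {
    "@type": "Place",
    "name": "Digital Third Coast",
    "address": "Chicago, IL"
  }
}
</script>
```

Whichever format you use, run your markup through Google's structured data testing tool before publishing to confirm the snippet is read correctly.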
A robots.txt file is a small text file that lives in the root directory of your server. It allows webmasters to tell search engines which pages should and shouldn't be crawled. Unfortunately, disallowing bots doesn't mean they can't get to content; it just means well-behaved crawlers honor your request. Robots.txt is useful if you have pages or documents on your site that you don't want search engines to crawl, index, and present for anyone to access in the SERPs.

Content marketers often find themselves working with a new website or CMS. Often, a CMS will come pre-loaded with directories that have little to no SEO value, such as /login, /member-area, /resources, or /tag, to name a few. These can pose some serious SEO headaches in the form of crawl errors and duplicate content. Duplicate title tags and duplicate content can come from anywhere, but in my experience there is usually one main culprit: a website's "tagging" functionality (specifically, blog tags). Many CMSs, such as WordPress, let you tag posts and organize them into categories, which creates a unique URL for each category and each tag. This becomes a problem when a single post or product shows up on multiple tag or category pages. If search engine spiders can crawl and index tag and category pages, more often than not those pages will have the same titles, meta descriptions, or content as other pages on your site. ACTION: Before launching a new content marketing campaign, check your robots.txt file with a technical SEO analyst to make sure possible duplicate-causing pages are being blocked.
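A minimal sketch of such a file, using the low-value directories mentioned above as examples (tailor the paths to your own CMS):

```
# robots.txt, served from the site root (e.g. https://www.example.com/robots.txt)
User-agent: *
Disallow: /login
Disallow: /member-area
Disallow: /tag/
```

Keep in mind that Disallow prevents crawling, not indexing: a blocked URL can still surface in results if enough other sites link to it.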
Website architecture refers to the layout and navigational structure of your digital resources. A successful website focuses on its target audience's needs and includes a navigational strategy, menu organization, and content labeling. When organizing your website's content, you must take search engines' ability to crawl and index the site into consideration. A concise, descriptive URL structure helps ensure that search engines crawl and index all pages on a website. Many websites (again, looking at you, ASP.NET-designed sites) have all sorts of phantom directories, URL casing issues, and non-canonical default documents, typically URLs ending in /default.aspx or /index.aspx. Wyzant, a popular site for finding tutors, is a good example: its URLs are both 1) mixed case (using both upper- and lowercase letters) and 2) afflicted with the trailing default document issue. These issues can dilute inbound links from content marketing efforts and should be fixed before investing in content. As a content marketer, you can create gorgeous infographics that get lots of links and shares, but if your client's website isn't structured properly, the value of those links and shares will be severely diluted. ACTION: Before throwing budget at new content ideas, make sure your website's URL structure is set up correctly. If the site is built in ASP.NET, look for URLs with mixed casing and trailing default documents.
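For the trailing default document issue specifically, one possible fix on an ASP.NET site is a 301 redirect rule in web.config. The sketch below assumes IIS with the URL Rewrite module installed; the rule name is arbitrary:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- 301-redirect /some-page/default.aspx to /some-page/ -->
      <rule name="StripDefaultDocument" stopProcessing="true">
        <match url="(.*)default\.aspx$" />
        <action type="Redirect" url="{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Permanent (301) redirects matter here because they pass link equity from the old URL to the canonical one; test rules like this on a staging server before deploying.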
An XML Sitemap is a file on your server that tells search engines exactly where to find all of your content. Whereas a robots.txt file tells search engines what they shouldn't crawl, a Sitemap tells Google, "HEY! I just created some awesome new content. Hurry up and index it, please!" Sitemaps don't just list the URL of your home page; they include the URLs of every other page on the site, the locations of images, videos, and products, and all the metadata associated with each URL. Most SEOs create one big XML Sitemap for every piece of content on a website, then update the Sitemap whenever a new product, image, video, or page is created. However, few SEOs or content marketers use multiple XML Sitemaps for unique content types, page categories, and blog posts. ACTION: First, identify which Sitemaps you need (this step-by-step guide from LunaMetrics explains how to choose your Sitemap types). Then, generate unique Sitemaps based on different content types and site structure. This will help Google and other search engines quickly crawl and index new content, leading to better rankings, more clicks, and more shares.
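The multiple-Sitemap approach typically hangs off a sitemap index file, which points to one Sitemap per content type. A sketch following the sitemaps.org protocol (the domain and filenames below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap_index.xml: one child Sitemap per content type -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-blog.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>
```

Submit the index file through Google Webmaster Tools, and consider declaring its location in robots.txt with a `Sitemap:` line so other search engines can find it too.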
With these five steps, you can prepare your content for SEO success. In addition to these guidelines, assessing a website's technical standing with a comprehensive SEO site audit is a great way to uncover potentially harmful SEO issues. Once complete, you can move forward with confidence, knowing that your website is fully optimized and ready to stand out on the web!