TLDR: Technical SEO terms can be confusing. Find the definitions for the most common ones below.
Getting your website’s technical SEO in place is the foundation of any SEO strategy. Unless the basics are taken care of, all the work you put into on-page and off-site SEO risks going to waste.
However, the world of technical SEO can be daunting to enter, especially as there is so much search engine terminology to contend with.
What is technical SEO?
When we, as humans, interact with a website, we look directly at the website design, usability, and content. Search engines aren’t able to experience websites in the same way we do and rely on non-visual information in order to keep their results relevant and useful for visitors. This is where technical SEO comes in.
Technical SEO is the series of website infrastructure optimizations that are not directly related to the website content or promotion.
With these optimizations properly in place, search engines will be able to see that your website exists and is ready to be included in their indices, which is the only way to show up in search results.
That said, simply letting Google (or other search engines) know that your website is there is not enough in itself. There are many related factors that go into good SEO foundations. For example, you’ll want to make sure Google can find every page on your website—not just the home page. Then, you’ll also need to convince Google that your content is valuable, looks good to human users, and loads quickly.
To achieve this, you must carry out a number of set steps and follow best practices in laying your SEO foundations. Once this is done, you can continue developing your on-site and off-site SEO efforts with greater confidence that they will produce results.
If you already have a grasp on search engine terminology, feel free to skip ahead and begin developing your technical SEO foundations. We have a number of posts that may help you, including:
- Improving Your Site Structure And Navigation For Enhanced Technical SEO
- Technical SEO Simplified: Crawling, Rendering and Indexing
- SEO Best Practices: Eliminate Low Value and Duplicate Content
- More coming soon
Technical search engine terminology explainer
Crawling
Crawling is the process by which a bot called a “crawler” or “spider” visits pages and collects their content. Google’s own crawler is called Googlebot. Crawlers also follow the links on those pages to discover more content, which is how Google (and other search engines) find content across the web. If your pages aren’t visible to crawlers, they essentially don’t exist on the Internet.
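To make the idea concrete, here is a toy sketch (not Googlebot’s actual code) of the core of what a crawler does: take a page’s HTML and extract the links it will queue up to visit next. The HTML is hardcoded so the example is self-contained; the paths are placeholders.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, the way a crawler discovers new URLs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A hardcoded page standing in for a fetched HTML response.
page = """
<html><body>
  <h1>Technical SEO Terms</h1>
  <a href="/blog/crawling">Crawling</a>
  <a href="/blog/indexing">Indexing</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the URLs a crawler would queue up next
```

A real crawler repeats this loop: fetch a URL, extract its links, add any unseen ones to the queue, and continue.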
Rendering
Once Googlebot has crawled your website, Google still needs to establish whether your pages are valuable to potential visitors. In terms of the code, every page exists in two states: the initial HTML sent by the server and the rendered HTML produced after scripts run. Crawling looks at the initial HTML; rendering executes the page’s code, much as a browser would, so that Google can see the rendered HTML and judge the page’s value.
Index
The index is a search engine’s database of all the pages it can show when people search for keywords.
Site structure
This is how you organize your website’s content. The way you group, link, and present your content to visitors is important for usability and also makes it easier for Google to index your URLs. Categories, tags, internal links, navigation, and breadcrumbs are all used to structure your site.
URLs
You will likely have heard of URLs, or Uniform Resource Locators: a web page’s unique location identifier. The crawler builds a list of the URLs on your website (drawing on sitemaps as well).
Sitemaps
In general, sitemaps are blueprints of your website that allow search engines to more easily crawl, index, and assess the importance of the content on your website. XML sitemaps are the most common kind and link to all your pages.
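As an illustration, a minimal XML sitemap follows the sitemaps.org format shown below; the example.com URLs and dates are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; these URLs are placeholders -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/technical-seo-terms</loc>
    <lastmod>2024-02-01</lastmod>
  </url>
</urlset>
```

The file is typically served at the site root (for example, /sitemap.xml) and submitted to search engines via their webmaster tools.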
Crawl queue
Google prioritizes the URLs on your website that need to be crawled or re-crawled in something called a crawl queue.
Crawl budget
The crawl budget is how many pages a search engine such as Google is able to crawl on your site in a given time period. Increasing your crawl budget can help more of your pages get indexed.
Processing
Processing covers the different systems that handle canonicalization, send pages to the renderer (which loads them much as a browser would), and extract further URLs to crawl from each page.
URL structure
Every URL is made up of five parts: the scheme, subdomain, second-level domain, top-level domain, and subdirectory. For example, in https://blog.example.com/guides/, “https” is the scheme, “blog” the subdomain, “example” the second-level domain, “.com” the top-level domain, and “/guides/” the subdirectory. Following a logical structure with your URLs is important in SEO.
Structured data
Structured data is a standardized format for providing information about a page and classifying its content. It helps Google understand what’s on the page.
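A common way to supply structured data is a JSON-LD snippet in the page’s head, using the schema.org vocabulary. The values below are placeholders, not real publication details.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Technical SEO Terms Explained",
  "author": { "@type": "Organization", "name": "Example Blog" },
  "datePublished": "2024-02-01"
}
</script>
```

Search engines read this block alongside the visible content to classify the page.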
Thin content
Thin content is on-page content that has little or no value to the reader. If Google determines that too much of your content is thin, it will affect your SEO.
Duplicate content
This is when the same content appears on two or more pages with unique URLs, which can affect your search performance in a number of ways.
Canonical tags
If the same content appears more than once on your website, canonical tags let Google know which version is the master (canonical) one.
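A canonical tag is a single link element placed in the head of each duplicate or variant page; the example.com URL below is a placeholder.

```html
<!-- In the <head> of every duplicate or variant page, point to the master URL -->
<link rel="canonical" href="https://www.example.com/product" />
```

Google treats the href as the preferred version, consolidating ranking signals from the duplicates onto it.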
404 pages
If a page is unavailable or doesn’t exist, the server returns a “404 Not Found” status and visitors land on a 404 error page. Linking to 404 pages on your own website can harm your SEO.
Breadcrumbs
Breadcrumbs are a secondary navigation scheme that makes it easy for people to see exactly where they are on a website. Not only do they improve the user experience, they also help Google crawl your website.
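In markup, a breadcrumb trail is typically a small navigation block near the top of the page; the labels and URLs here are placeholders.

```html
<!-- A simple breadcrumb trail; links are placeholders -->
<nav aria-label="Breadcrumb">
  <a href="/">Home</a> ›
  <a href="/blog/">Blog</a> ›
  <span aria-current="page">Technical SEO Terms</span>
</nav>
```

Each link steps one level up the site hierarchy, which is what lets both visitors and crawlers see where the current page sits.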
Orphan pages
Orphan pages are pages that aren’t linked to from anywhere else on your website. Unless a user knows the URL, they won’t be able to access them, and search engines will be unable to crawl or index them.
If you have any further questions about developing and implementing an SEO strategy, connect with us at any time.