TLDR: Technical SEO terms can be confusing. Find the definitions for the most common ones below.

Making sure your website’s technical SEO is in place is the foundation of any SEO strategy. Unless you have the basic requirements taken care of, all your work developing on-page and off-site SEO will be for nothing. 

However, the world of technical SEO can be daunting to enter, especially as there is so much search engine terminology to contend with. 

What is technical SEO?

When we, as humans, interact with a website, we look directly at the website design, usability, and content. Search engines aren’t able to experience websites in the same way we do and rely on non-visual information in order to keep their results relevant and useful for visitors. This is where technical SEO comes in.

Technical SEO is the series of website infrastructure optimizations that are not directly related to the website content or promotion.

With these optimizations properly in place, search engines will be able to see that your website exists and is ready to be included in their indices, which is the only way to show up in search results.

That said, simply letting Google (or other search engines) know that your website is there is not enough in itself. There are many related factors that go into good SEO foundations. For example, you’ll want to make sure Google can find every page on your website—not just the home page. Then, you’ll also need to convince Google that your content is valuable, looks good to human users, and loads quickly.

To achieve this, you must carry out a number of set steps and follow best practices in laying your SEO foundations. Once this is done, you can continue developing your on-site and off-site SEO efforts with greater confidence that they will produce results.

If you already have a grasp on search engine terminology, feel free to skip ahead and begin developing your technical SEO foundations. We have a number of other posts on the blog that can help you do just that.

Technical search engine terminology explainer

Crawling

Crawling is a process where a bot called a “crawler” or a “spider” searches through pages to grab content. Google’s own spider is called “Googlebot.” Crawlers also use the links on those pages to find more content. This allows Google (or other search engines) to find content on the web. If you aren’t visible to crawling spiders, you essentially don’t exist on the Internet.

Rendering

Once Googlebot crawls your website, it still needs to make sure that your website is valuable to potential visitors. In terms of code, every page exists in two states: the initial HTML and the rendered HTML. When crawling, Googlebot looks at the initial HTML. When rendering, Google runs the page’s code to see what the rendered HTML looks like, allowing the bot to establish the page’s value.
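
As a simple illustration (the element ID and text below are made up for the example), imagine a page whose visible content is filled in by JavaScript:

    <!-- Initial HTML: the container Googlebot sees while crawling is empty -->
    <div id="page-title"></div>
    <script>
      // Rendered HTML: once this script runs, the container holds the real content
      document.getElementById('page-title').textContent = 'Technical SEO Terms Explained';
    </script>

Until the page is rendered, Google has no way of knowing what that container will actually say.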

Renderer

The renderer loads pages much as a browser would, executing JavaScript and applying CSS files, so that Google can see what most users will experience.

Indexing

Indexing is the process of adding pages to a search engine’s index: the database of all the pages it can show when people search for keywords.

Site structure

This is how you organize your website’s content. The way you group, link, and present your content to the visitor is important for usability and also makes it easier for Google to index your URLs. Categories, tags, internal links, navigation, and breadcrumbs are used to structure your site. 

URLs

You will likely have heard of URLs, or Uniform Resource Locators, which are a webpage’s unique location identifiers. The crawler creates a list of the URLs on your website. (It also uses sitemaps to do this.)

XML sitemaps

In general, sitemaps are blueprints of your website that allow search engines to more easily crawl, index, and assess the importance of the content on your website. XML sitemaps are the most common kind and link to all of your pages.
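
Here is a minimal example of what an XML sitemap looks like, following the standard sitemap protocol (the example.com URLs and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2024-01-15</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/services/</loc>
      </url>
    </urlset>

Each <url> entry points to one page on your site; the optional <lastmod> date tells crawlers when that page last changed.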

Crawl queue

Google keeps a prioritized list of the URLs on your website that need to be crawled or re-crawled; this is called the crawl queue.

Crawl budget

The crawl budget is how many pages search engines such as Google are able to crawl in a specific time period. Increasing your crawl budget can help with your indexing.

Processing systems

These are the different systems that take care of canonicalization, send pages to the renderer (to load them as a browser would), and process the pages to find more URLs to crawl.

JavaScript

JavaScript is a programming language used to create the interactive aspects of websites within the web browser.

URL structure 

A URL is typically made up of five parts: the scheme, the subdomain, the second-level domain, the top-level domain, and the subdirectory. Following a logical structure with your URLs is important in SEO.
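
For example, a hypothetical URL such as https://blog.example.com/technical-seo/ breaks down like this:

    scheme:               https
    subdomain:            blog
    second-level domain:  example
    top-level domain:     .com
    subdirectory:         /technical-seo/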

Structured data

This is a standardized format to provide information about a page and classify the content. It helps Google to understand what’s on the page.
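
One common way to add structured data is a JSON-LD block using the schema.org vocabulary, placed in the page’s HTML (the values below are placeholders):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Technical SEO Terms Explained",
      "author": {
        "@type": "Person",
        "name": "Justin Korn"
      }
    }
    </script>

This tells Google explicitly that the page is an article, what its headline is, and who wrote it.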

Thin content

This is on-page content that has little or no value to the reader. If Google determines that too much of your content is thin, your search performance will suffer.

Duplicate content

This is when the same content appears on two or more pages with unique URLs, which can affect your search performance in a number of ways.

Canonical tags

If you have content that appears more than once on your website, you can use canonical tags to let Google know which is the master version.
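
A canonical tag is a single line placed in the <head> of the duplicate pages, pointing at the preferred URL (the address below is a placeholder):

    <link rel="canonical" href="https://www.example.com/services/" />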

404 pages

If a page is unavailable or doesn’t exist, the server returns a 404 (Not Found) status and visitors land on a 404 error page. Linking to a 404 page from within your own website can harm your SEO.

Breadcrumb navigation

A secondary navigation scheme that makes it easy for people to see exactly where they are on a website. Not only does it improve user experience, but it also helps Google to crawl your website.
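
In its simplest form, breadcrumb navigation is just an ordered list of links near the top of the page (the page names and paths below are placeholders):

    <nav aria-label="Breadcrumb">
      <ol>
        <li><a href="/">Home</a></li>
        <li><a href="/services/">Services</a></li>
        <li aria-current="page">Technical SEO</li>
      </ol>
    </nav>

Breadcrumbs can also be marked up with structured data so they appear in Google’s search results.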

Orphan pages 

These are pages that aren’t linked to from anywhere else on your website. Unless a user knows the URL, they won’t be able to access them, and link-following crawlers won’t be able to discover or index them.

If you have any further questions about developing and implementing an SEO strategy, connect with us at any time.

About The Author
Justin Korn

Justin is the founder of Watchdog Studio, and former Director of IT at both Wells Fargo Securities and AirTreks. A prodigy of the dotcom era, he now provides businesses in Oakland, California and the surrounding Bay Area with honest, expert website services to drive growth.