Everything You Need To Learn About The X-Robots-Tag HTTP Header

SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.

However, nearly every site has pages that you don’t want included in this crawl.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should certainly read.

But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
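For context, a minimal robots.txt might look like this (the paths and domain here are purely illustrative):

```
User-agent: *
Disallow: /internal-search/
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

The `User-agent` line says which crawlers the rules apply to, `Disallow`/`Allow` scope where they may crawl, and `Sitemap` points them at the sitemap file.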

Some meta robots tags you might use include:

  • index – tells search engines to add the page to their index.
  • noindex – tells them not to add a page to the index or show it in search results.
  • follow – instructs search engines to follow the links on a page.
  • nofollow – tells them not to follow the links on a page.

And there is a whole host of others.
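These directives live in a page’s `<head>`. For example (an illustrative snippet), a page you want kept out of the index while still letting crawlers follow its links might include:

```html
<!-- Keep this page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```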

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can implement the same directives with either the meta robots tag or the X-Robots-Tag in an HTTP response header, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are crawled and indexed.
  • You want to serve directives site-wide rather than at a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives.

Perhaps you don’t want a particular page to be cached and also want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
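A sketch of what that combination could look like in an Apache configuration (the filename and date below are made up for illustration):

```apache
<Files "annual-sale.html">
  # Serve no cached copy, and drop the page from results after this date
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```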

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
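For instance, a single regex-based rule can cover several non-HTML file types at once. An illustrative Nginx sketch (the extension list is an assumption, not from the original article):

```nginx
# Keep common document downloads out of the index in one rule
location ~* \.(doc|docx|xls|xlsx)$ {
  add_header X-Robots-Tag "noindex";
}
```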

To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet to explain:

Crawler Directives:

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer Directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or an .htaccess file.

The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we want search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<FilesMatch "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are located when crawler bots discover a URL?

If that URL is blocked from robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing those directives cannot be disallowed from crawling.

How To Check For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on the site.

The easiest way to check is to install a browser extension that will tell you X-Robots-Tag information about the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
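You can also inspect the header from a script. Below is a minimal sketch using only Python’s standard library; the local toy server here is purely a stand-in for a real site that attaches an X-Robots-Tag to its responses, so the URL and directives are illustrative:

```python
import http.server
import threading
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    """Toy local server simulating a site that sets an X-Robots-Tag."""

    def do_GET(self):
        self.send_response(200)
        # Directives a server might attach to, e.g., a PDF response.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Type", "application/pdf")
        self.end_headers()
        self.wfile.write(b"%PDF- fake bytes")

    def log_message(self, *args):
        pass  # keep request logging quiet


def fetch_x_robots_tag(url: str):
    """Return the X-Robots-Tag header for a URL, or None if absent."""
    with urllib.request.urlopen(url) as resp:
        return resp.headers.get("X-Robots-Tag")


# Spin up the toy server on a random free port and query it once.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

tag = fetch_x_robots_tag(f"http://127.0.0.1:{server.server_port}/report.pdf")
server.shutdown()
print(tag)  # -> noindex, nofollow
```

Against a real site you would simply call `fetch_x_robots_tag` with the page’s URL; a `None` result means no X-Robots-Tag was served.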

Another method that can be used for scaling in order to pinpoint issues on sites with millions of pages is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer