The numbers look big and the words seem meaningless! So what does it all mean?!
The first thing to do is go into the ‘HTML Suggestions’ section and fix any duplicate tags and indexability issues it flags.
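As a rough illustration of the kind of duplication that report surfaces, here is a minimal sketch that flags pages sharing the same &lt;title&gt;. The URL list is hypothetical and the use of the requests library is an assumption; in practice you would work from the URLs the report lists.

```python
from collections import defaultdict
import re

import requests  # third-party: pip install requests

# Hypothetical URLs to spot-check; replace with the URLs from the report or your sitemap.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/about/",
    "https://www.example.com/contact/",
]

titles = defaultdict(list)
for url in URLS:
    html = requests.get(url, timeout=10).text
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = match.group(1).strip() if match else "(missing title)"
    titles[title].append(url)

# Any title shared by more than one URL is duplication worth fixing
for title, pages in titles.items():
    if len(pages) > 1:
        print(f"Duplicate title {title!r} on: {', '.join(pages)}")
```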
Now on to the crawl errors. Here are the reasons why fixing them should be a big priority:
- Loss of link juice if external sites are referencing dead links
- Content will take longer to index and rank if it cannot be crawled
- Crawl budget gets spent on error pages instead of working pages
One common type of crawl error is that Googlebot couldn’t access your site because the request timed out or because your site is blocking Google. As a result, Googlebot was forced to abandon the request.
Excessive page load times, leading to timeouts, can be due to the following:
- Dynamic pages taking too long to respond
- Your site’s hosting server is down, overloaded, or misconfigured
Typical reasons why Google is being blocked:
- DNS configuration issue
- Misconfigured firewall
- CMS protection system
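Before digging into server or firewall configuration, it can help to spot-check how a page responds to a crawler-style request. Below is a minimal sketch, assuming the requests library and a couple of hypothetical URLs; the user-agent string mimics Googlebot only to surface blanket user-agent blocking and doesn't prove what Googlebot itself sees.

```python
import time

import requests  # third-party: pip install requests

# Hypothetical URLs; replace with pages from your crawl errors report.
URLS = ["https://www.example.com/", "https://www.example.com/slow-page/"]

# A Googlebot-style user-agent string; a block on this UA hints at a firewall or CMS rule.
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"}

for url in URLS:
    start = time.time()
    try:
        resp = requests.get(url, headers=HEADERS, timeout=10)
        print(f"{url}: HTTP {resp.status_code} in {time.time() - start:.2f}s")
    except requests.exceptions.Timeout:
        print(f"{url}: timed out after 10 seconds")
    except requests.exceptions.RequestException as exc:
        print(f"{url}: request failed ({exc})")
```

A page that times out here, or that returns a 403 only for the Googlebot-style user agent, points at one of the causes listed above.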
Google discovers content by following links from one page to another. To crawl a page, Googlebot must be able to access it. If you’re seeing unexpected Access Denied errors, it may be for the following reasons:
- Googlebot couldn’t access a URL on your site because your site requires users to log in to view all or some of your content
- Your robots.txt file is blocking Google from accessing your whole site or individual URLs or directories (a quick check for this is sketched after this list)
- Your server requires users to authenticate using a proxy, or your hosting provider may be blocking Google from accessing your site
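If robots.txt is the suspect, Python's standard-library robot parser can confirm whether a given URL is fetchable by Googlebot. A minimal sketch, assuming a hypothetical domain and URLs:

```python
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # hypothetical; replace with your domain

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for url in [SITE + "/", SITE + "/blog/some-post/"]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'BLOCKED for Googlebot'}")
```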
These will mostly be 404 errors on your site. 404 errors can occur in a few ways:
- You delete a page on your site and don’t 301 redirect it
- You change the name of a page on your site and don’t 301 redirect the old URL (a redirect check is sketched after this list)
- You have a typo in an internal link on your site, which links to a page that doesn’t exist
- Someone on another site links to you but has a typo in their link
- You migrate a site to a new domain and the subfolders do not match up exactly
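In the first two cases the fix is a 301 redirect from the old URL to its replacement. Here is a minimal sketch for verifying those redirects are in place, assuming the requests library and a hypothetical mapping of old to new URLs:

```python
import requests  # third-party: pip install requests

# Hypothetical mapping of old URLs to where they should now 301; replace with your own.
REDIRECTS = {
    "https://www.example.com/old-page/": "https://www.example.com/new-page/",
    "https://www.example.com/deleted-page/": "https://www.example.com/",
}

for old, expected in REDIRECTS.items():
    resp = requests.get(old, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and location == expected:
        print(f"OK   {old} -> {location}")
    else:
        print(f"FIX  {old}: got HTTP {resp.status_code}, Location={location or 'none'}")
```

Anything that still returns a 404, or redirects somewhere unexpected, is a candidate for a new 301 rule in your server config or CMS.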