Free Online SEO Audit – 90+ Aspects Checklist

What is an SEO audit and what areas does it cover?

An SEO audit is an analysis of a website that covers the parameters with a direct or indirect impact on its results in the search engine. It allows you to diagnose the errors and risk areas that should be addressed during website optimization and its further promotion on Google.

The audit should be comprehensive: so many issues affect a website's standing in the search engine that it is impossible to give their exact number, so there is a long list of aspects to analyze. The SEO audit should indicate which factors are a priority and which errors should be corrected first.

When is it worth doing an online SEO audit?

An SEO audit should be performed at every stage of work on a website and repeated periodically, both to prevent errors during SEO optimization and to correct issues as they arise.

It is worth carrying out an SEO audit in the following cases:

●      You create a new website

●      You are migrating a website (therefore URL mapping, etc. is needed)

●      You want to start SEO optimization activities

●      You want to verify the optimization work you have already introduced

●      Previously performed SEO optimization had little or no effect on the website's position in Google


Tools used in the SEO audit

In order to perform a reliable SEO audit, it is usually necessary to use paid tools, e.g. versions without a limit on the number of crawled pages.

An audit can be performed using a variety of tools. Here is a list of the best-known ones, covering most SEO-related fields: technical SEO, on- and off-page optimization, link profiling and content creation. All of them are useful in SEO audits:

●      Boostsite

●      Screaming Frog

●      DeepCrawl

●      Senuto

●      Ahrefs

●      Majestic

●      SEO Surfer

●      Sitebulb

●      Semstorm

●      Keyword Finder

●      Google Search Console

●      Google Analytics

●      Google Keyword Planner

●      Google Trends

●      W3C validator

●      Ubersuggest

How much does an SEO audit cost?

The price of an SEO audit depends on the type of website to be analyzed. It is impossible to give a fixed amount, because every service is different. Another question is whether the cost of the tools needed to perform the audit is added to the price (in the right proportion, of course) – after all, the person performing the audit must have them, and therefore pay for them. We could end the discussion about price here, but let's make a rough estimate:

Company website – with or without a company blog (the final price also depends on this), it generally does not require as much work as an extensive website, but paid tools are still needed for the audit to be reliable.

Thematic portal / blog – here the technical findings will probably be repetitive, but the content guidelines can be very extensive, covering, for example, thematic clusters.

E-commerce websites require the largest investment, because the scope of analysis is incomparable to the types of websites mentioned above – everything related to products, descriptions and categories takes a lot of work to review. On top of that, an e-commerce website can also (in my opinion, even should) have a blog.

What is worth checking in an SEO audit? Checklist

Below you will find 90 aspects worth checking during an audit (in no particular order). I have divided them into a technical section, a content section and a link building section. For a quick analysis of them, I recommend our tool – Boostsite.com.

Technical section

1. Validator errors

HTML syntax errors should be checked in the W3C validator – https://validator.w3.org/. Of course, it is worth having a website without errors, and most importantly, some HTML syntax errors affect SEO. They can block the rendering of content and generate so-called critical errors, which additionally prevent the validator from checking further issues.

2. Keywords in alt attributes

Keywords should be included in the alt attributes of images – thanks to this, we increase the probability of ranking a given image in Google Images for the keyword contained in the attribute. Additionally, we increase the number of occurrences of specific keywords on the page.

3. Figcaption

Including keywords in the <figcaption> HTML element provides additional reinforcement for the images themselves. The scheme should be implemented as follows:

<figure>
<img src="image.jpg" alt="Image description">
<figcaption>Image description</figcaption>
</figure>

4. Unnecessary elements “a”

On listing pages (e.g. a blog), the "a" element should be placed on the article title rather than on phrases like "read more". We want internal linking to strengthen keywords, not phrases with no visibility potential. Additionally, if we have a separate "a" element on the image, another on the title and another on "read more", we waste Crawl Budget by sending the bot to the same address three times.
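A minimal HTML sketch of the preferred listing markup (the URL and title are placeholders):

<article>
<a href="/blog/example-article/">
<img src="cover.jpg" alt="Example article">
<h2>Example article</h2>
</a>
</article>

A single link wrapping the image and the keyword-rich title gives the bot one crawl path per article and puts the keywords in the anchor text.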

5. Redundant subpages indexed

The Google index should only include subpages that have visibility potential, so that Google's crawl budget is not wasted on rendering the rest. Usually we should exclude all kinds of login and registration pages, test products, unrelated resources, PDFs, etc.

6. Errors 404, 500, etc.

No existing, linked URL should return the 404 server response code – we should avoid any kind of broken link on our website. All such errors should be corrected or redirected to their equivalent URLs to preserve the value of incoming links. Invalid URLs also waste Crawl Budget, because Google has to visit a subpage that shows nothing, and if the 404 is not corrected, the address will eventually be dropped from the index.

7. Incorrect redirects, 302 instead of 301 – if not intended

If you are redirecting URLs – for example because you changed them and need to point old addresses to their new equivalents – use a 301 redirect. A 301 redirect is permanent and a 302 is temporary, and since we want to permanently replace the old addresses in the index with the new ones, the 301 is the one to use.

8. Analysis of exclusions

It is worth analyzing indexation exclusions in Google Search Console. This could also have an impact on Crawl Budget, and depending on errors and exclusions, also on the keyword rankings themselves.

9. Errors and indexing status

This aspect is related to the analysis of exclusions, but it is aimed at locating errors that cause the website to have problems with indexing. Errors will also be displayed in the GSC.

10. Log analysis

In order to identify places for optimization, e.g. to minimize the number of robot visits to unjustified subpages, it is worth doing a log analysis. You will learn where the robot enters and which places you should block, or at least where to reduce the frequency of visits. Various tools can be used for this, for instance Splunk, Screaming Frog Log Analyzer, etc.

11. Untapped potential of filters – e.g. creating indexed categories from color filters

Filters, mainly in e-commerce, can prove useful and have visibility potential. It is worth considering indexing them, creating separate categories from them and linking them in the right places in the website structure.

12. Errors in the console

The current Google bot is updated along with the Google Chrome browser. It is worth correcting the errors in the console in this browser, because the errors that appear to us in the browser may also affect Google’s crawlers.

13. The flow of the Pagerank

The flow of internal linking is quite an important aspect of SEO. I am referring to the frequency of occurrence of individual links contained in "a" elements. It is important that the most frequently linked addresses are not the privacy policy, contact or terms subpages; the most important subpages should receive the highest internal PageRank values. The internal linking flow can be easily checked, for example, in Screaming Frog or with the R script available on Search Engine Land – these are simple scripts that count the occurrences of individual links, and the results from R and Screaming Frog are very similar.

14. Subpages indexed and not linked

Subpages that are not linked on the site are not easily accessible to search engine robots; if they also have no incoming links and are not included in the sitemap, Googlebot will simply not be able to reach them.

15. Thematic clusters and linking between them

It is worth creating thematic clusters at the level of keyword analysis. We group similar words, create subpages from groups, and link similar groups internally to create a linking circle between them. In e-commerce cases, we can create linking in subcategories. For example, we can have the following layout:

Category

Sub-category 1

Sub-category 2

Sub-category 3

The scheme will take the following form:

Subcategory 1 links to Subcategory 2 with the exact link name

Subcategory 2 links to Subcategory 3

Subcategory 3 links to Subcategory 1 closing the circle of internal linking at the same time.

Additionally, each of the subcategories also links to the Category.

16. schema.org

Structured data markup effectively makes subpages stand out in the search results, so it is worth using it wherever possible: star ratings in articles, breadcrumbs, and in the case of e-commerce also price, product availability, address, telephone, etc.
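A minimal sketch of JSON-LD structured data for an e-commerce product page (the product name, price and rating below are placeholder values):

<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Product",
"name": "Example product",
"image": "https://example.com/images/product.webp",
"offers": {
"@type": "Offer",
"price": "49.99",
"priceCurrency": "EUR",
"availability": "https://schema.org/InStock"
},
"aggregateRating": {
"@type": "AggregateRating",
"ratingValue": "4.7",
"reviewCount": "31"
}
}
</script>

The markup can be validated with Google's Rich Results Test before deployment.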

17. Breadcrumbs and the keywords it contains

It is a good idea to put keywords in the short breadcrumb directory tree – this will usually happen naturally, as category names should already contain them, but if they do not, it is worth remembering.

18. Structure of headings

Headings should also contain keywords, and their hierarchy should be preserved, so that a lower-level heading never appears without the higher-level one before it (e.g. no h3 without a preceding h2). Additionally, the h1 heading should appear once per page, or once per section.
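An illustrative sketch of a correctly nested heading structure (the texts are placeholders):

<h1>Main keyword of the page</h1>
<h2>Subtopic containing a keyword</h2>
<h3>Detail under that subtopic</h3>
<h2>Another subtopic</h2>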

19. Correct sitemap uploaded to GSC

In order to make content indexing easier for robots, it is worth uploading a sitemap to Google Search Console.

20. Menu in content

It’s a good idea to use a short menu in e.g. articles (similar to the article you are reading). This increases the occurrence of keywords and the likelihood of short links directly in the search results.

21. Canonical attributes on color versions

A good idea for solving color-variant issues, e.g. on e-commerce websites, is the appropriate use of canonical attributes. For example, we have the following product (each version on its own page):

A – general / grouping version

A – red

A – blue

A – yellow

If the products are identical, with duplicate descriptions, and we have dozens or hundreds of such products on the website, it may be a good idea to use canonical elements on the color subpages, pointing to the general version. You should then remember to allow other color versions to be selected from the grouping subpage and to include the colors in the content / title as well.

However, this solution should not be applied everywhere, because color versions of some products may be searched for frequently, so sometimes you have to weigh whether to use canonicals or to optimize each variant separately – in the latter case, remember not to duplicate the content.
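A minimal sketch, assuming the grouping version lives at a placeholder address such as https://example.com/product-a/ – each color variant would then declare it in its <head>:

<link rel="canonical" href="https://example.com/product-a/">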

22. 404 to imaginary addresses

An important issue is to automatically return the 404 server response code (or appropriate canonical elements) for addresses that do not exist on the website. Take the address domain.com/abcd – if that page does not exist, it should not return response code 200 and be indexable, because competitors can exploit this: they can create a million such addresses and push Google to index them (which is relatively simple to do). As a result, duplicates of the website may appear in the index, effectively lowering the domain's positions.

23. Personalized error pages

If we serve a 404, it is good practice to create a personalized error page so that even this subpage can bring conversions: suggested products, featured articles, etc. – everything that could interest a user who lands on such an address.

24. Friendly addresses (no parameters, id etc.)

URLs should not contain parameters, IDs, underscores, capital letters, punctuation marks, etc. Punctuation marks and underscores may not be interpreted correctly by Google – it may, for example, treat them as missing, which results in keywords being concatenated and reduces their influence on the ranking.

25. Star ratings in articles

I have already written about this issue briefly above – it is worth placing stars in the article that will allow users to rate a given entry / product. Thanks to this, stars will appear in the search results, which will effectively increase the CTR, and thus – organic traffic.

These stars should be marked up with the appropriate structured data – on WordPress we can use, for example, the Yet Another Stars Rating plugin.

26. Optimized images + lazy loading (appropriate size, format – WEBP)

You should also make sure that images are not too large, that they load quickly and, where possible, use modern formats such as WebP (with a conditional fallback to older formats for browsers that do not yet support them – over time this fallback will probably no longer be needed, just as happened with HTML5 tags).

Images have an impact on page loading speed, so it is worth optimizing them and loading them only when they are needed (lazy loading), rather than as soon as a given subpage is opened.
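A minimal HTML sketch combining a WebP source with a fallback and native lazy loading (file names and dimensions are placeholders):

<picture>
<source srcset="photo.webp" type="image/webp">
<img src="photo.jpg" alt="Image description" loading="lazy" width="800" height="600">
</picture>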

Site speed affects site rankings, so keep this in mind.

27. No unnecessary elements in the code, such as empty characters, inline styles or redundant text / JavaScript fragments

Each character is a byte, and each byte is a longer page load time. When a subpage has a lot of redundant characters, slower loading times may become noticeable.

28. Optimization of outbound links

Remember that subpages should not have too many outgoing links, because the value of the subpage is divided between them, and so we can lower its value in Google's algorithm.

29. JavaScript at the end of the body

It is worth placing JavaScript at the end of the body, because some scripts may delay or even block the rendering of the site by search engine robots. As long as a script is not required to sit higher in the structure (because, for example, it would stop working after being moved), I recommend doing this.
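A minimal sketch (the script path is a placeholder); the defer attribute is an alternative for scripts that must be declared in the <head>:

<body>
<!-- page content ... -->
<script src="/js/app.js"></script>
</body>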

30. Missing flash / iframe

Google does not cope with these very well, so I do not recommend using such elements. In the case of an iframe, Google may sometimes treat the content as a duplicate – which is incorrect, because an iframe can show a view of a different subpage rather than the same one, but I have seen Google interpret iframes this way. Overall, I do not recommend them, except in insignificant places.

31. Incomplete encryption / SSL certificate

All subpages should be fully encrypted. This can be broken by, for example, resources loaded from unencrypted sources (i.e. without https://).

Resources loaded from both inside and outside the website should be served over an encrypted connection, e.g. by placing them on the server hosting the main domain.

32. Subpages blocked in robots.txt

Although Google does not always respect the directives in robots.txt (it officially reports what is blocked there, but a look at the logs shows this is not entirely true in every case), it is still not worth listing indexable subpages or resources that are in use in this file, because the directives are respected often enough to cause problems.

33. Do not use meta refresh redirection

This type of redirect is worse for many reasons, e.g. because a redirect declared in .htaccess sits one level higher: the Google crawler processes .htaccess first and only then parses the page code, so a meta refresh redirect effectively wastes Crawl Budget. In addition, such redirects can be interpreted differently by Google, not to mention the negative experience for users, whose already loaded page suddenly reloads.

34. Appropriate noindex, nofollow instead of noindex, follow – without intentional use

If you are not intentionally using noindex, follow, use noindex, nofollow. Noindex removes the page from the index; nofollow tells Google not to follow the links it contains.
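A minimal sketch of the corresponding meta tag in the <head> of the subpage:

<meta name="robots" content="noindex, nofollow">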

35. URLs without Polish characters, underscores, capital letters, spaces, ASCII etc.

I wrote about this a little higher in the context of friendly URLs, which search engine bots handle better, but it is also worth making sure that addresses contain no characters that could work against you. Use hyphens instead of underscores, lowercase instead of capital letters, and simply drop punctuation marks and filler fragments like "-i-" altogether.

36. Site speed

Website speed has a big impact on organic results. Make sure that the speed of both the mobile and desktop versions is as high as possible (a score of at least around 80 in PageSpeed Insights).

37. Correct hreflang attributes for language versions

If the website has language versions, it is worth implementing appropriate hreflang elements that inform Google about them. They should also be declared in the .xml sitemap. I recommend reading Google's instructions on marking up language versions correctly.
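A minimal sketch for a site with English and Polish versions (the URLs are placeholders); the same annotations can also be declared in the XML sitemap:

<link rel="alternate" hreflang="en" href="https://example.com/en/">
<link rel="alternate" hreflang="pl" href="https://example.com/pl/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">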

38. Excessive subpage size

Unnecessary characters, oversized resources and everything else that increases the weight of a subpage should be optimized. When a lot of this accumulates, the slowdown of the website becomes noticeable, so it is worth keeping the code free of unnecessary elements.

39. Optimization of server queries (e.g. external .css files)

In addition to removing unnecessary parts of the code and inline styles, it is also worth reducing the number of resources loaded from external files, so as not to generate additional requests to the server.

40. Redirect to the encrypted version in a single hop – with or without www

Here, also to limit requests to the server, you should redirect every possible variant of the domain (with www, without www, http with or without www, etc.) to the target, encrypted version with a single redirect.
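A minimal sketch for Apache mod_rewrite in .htaccess, assuming the target version is https://www.example.com (adjust the host and conditions to your own hosting setup):

RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]

Any request over plain http, or over https without www, goes straight to the target version in a single 301, so no redirect chain is created.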

41. The redirect chain

Every redirect beyond the first one generates a completely unnecessary request to the server. Redirect chains should be avoided; to check for them, simply crawl the site with any tool that detects redirects (e.g. boostsite.com).

42. Ability to crawl resources (not blocked in robots.txt etc.)

It is worth checking whether the resources used on subpages are not blocked, e.g. in robots.txt. When Googlebot, while rendering the website, encounters a resource that is blocked for it, this can disturb the interpretation of that subpage.

43. Address mapping if there was a change of page / addresses

When we move to a new website and URLs change, it is very important to create a complete URL mapping. Each address from the previous version of the site should have an equivalent in the new version and be redirected to it via a permanent redirect, i.e. 301.

If we drop some subpages and a given address has no direct equivalent, it should still be redirected to the closest matching subpage (e.g. an unavailable product redirected to the category it belonged to).

44. 404 redirects to counterparts to keep external linking

If there are 404 errors on the site, those addresses need to be redirected to the closest equivalent or best-matching subpage. This matters because such an address may have incoming links, and once it returns the 404 response code it will eventually be deindexed and the links will bring no value to the domain.

45. Sitemap without 301s, 404s or noindexed addresses – only indexable 200s

A sitemap can to some extent make things easier for indexing bots, so remember not to include non-indexable, incorrect or redirected addresses in it, as this can unnecessarily hurt Crawl Budget.

46. Internal links only returning the 200 server response code

The website should not contain links pointing to anything other than the target version of a given subpage. A link to a redirect, for example, forces Googlebot to make an unnecessary hop (when the target version is already in the index), wasting Crawl Budget. Broken links should not be present either.

47. Crawl Errors (GSC)

It is worth taking a look at the Index Coverage report in Google Search Console. Apart from the analysis of exclusions, it often provides good hints for fixing errors on the website.

48. Improving outbound and internal links so that they are not wrong

Every address linked on the website should return the 200 response code, i.e. be the valid, target version of itself. Otherwise we waste Crawl Budget, and in the case of broken addresses returning 404 – also the value of the subpage itself.

49. Correct mobile version with products, headers etc. visible in the 1st view.

In the era of the Mobile First Index, make sure the mobile version is properly optimized as well. The header and, for e-commerce, some products should be visible without scrolling. We want to avoid a situation where, at first glance after opening the mobile version, there is no heading defining the subpage and no products on which the user could convert.

50. Menu structure

It is good to ensure that the menu passes the highest PageRank values to the most important subpages. Therefore, it is not worth including the privacy policy and similar pages in the main menu. Likewise, it is not worth putting the entire catalog structure in the menu, because this lowers the value of every subpage on which such a menu appears (de facto every subpage of the site).

51. Pagination subpages

Admittedly, rel="prev"/"next" is reportedly no longer used by Google, but it is always worth remembering that pagination pages should have separate URLs and be marked in the title.

52. Indexing of unnecessary resources by x-robots

In addition to removing unnecessary subpages from the index, it is also worth removing unnecessary resources such as PDFs (only if they really are unnecessary, of course). Since meta robots tags cannot be placed inside .pdf files, you can use the X-Robots-Tag header instead, e.g. set in the .htaccess file.
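A minimal sketch for Apache (.htaccess, requires mod_headers) that marks all PDF files as noindex:

<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>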

53. Sitemaps for resources, photos, pictures etc.

To speed up and streamline the indexing of resources, it is good to create a separate .xml sitemap just for them and then submit it in Google Search Console.
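A minimal sketch of an image sitemap entry using the image sitemap extension namespace (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
<url>
<loc>https://example.com/product/</loc>
<image:image>
<image:loc>https://example.com/images/product.webp</image:loc>
</image:image>
</url>
</urlset>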

54. Pagination 1 – 2 – 3 etc. rel next / prev

Generally speaking, I am skeptical about announcements from Google, so I still implement rel="prev"/"next" tags on my websites and add a self-referencing canonical on pagination subpages. In theory Google should cope with pagination without this, but I still implement this "unused" tag (I leave it to your own judgment, because I have no evidence either way that it affects the website, although a large number of SEO specialists still use it).

55. The page without JS shows the content

As above – I have no hard evidence, but I do have experience with dozens of websites with this problem. When a website did not display its content without JavaScript (despite a theoretically well-implemented "JS for SEO" setup), the number of keywords ranking organically was much smaller than after fixing this aspect. It may well be that the "JS for SEO" implementation was simply not correct, but each time it was repaired (content delivered in plain HTML), the website's situation improved drastically.

56. Adaptation to social networks

The website should be adapted to social networks, e.g. through Open Graph tags. If you have WordPress, you can use one of the free plugins, or implement this yourself – it is simple, quick and, importantly, useful.
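A minimal sketch of Open Graph tags in the <head> (all values are placeholders):

<meta property="og:title" content="Free Online SEO Audit – 90+ Aspects">
<meta property="og:type" content="article">
<meta property="og:url" content="https://example.com/seo-audit/">
<meta property="og:image" content="https://example.com/images/cover.webp">
<meta property="og:description" content="A checklist of aspects worth checking during an SEO audit.">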

57. The language tag – html lang etc.

This aspect is of great importance for language versions. When the Google robot sees the page, it should also be told the language of the site, e.g. through the lang attribute on the html element (<html lang="en">).

59. Link depth

A website structure organized so that reaching individual sections requires too many clicks has a negative effect on both the bot and the user. A negative impact on the user also means a negative impact on traffic, and indirectly on the website's positions themselves.

60. HTML5 tags like <article> or <nav>

It's a good idea to use HTML5 tags. Google treats them quite well (I remember a situation where I wanted to force links to show directly in the search results, and only placing the ul/li list inside a nav section helped).

61. Setting up a business card on Google My Business

A very important aspect of local activities, about which I wrote a little more in the article Local SEO in Google; it also includes instructions on how to configure such a listing – I encourage you to read it.

62. Valid addresses not included in the XML map

If the address is correct (returns server 200 response code) and is indexed and indexable, it should be included in the .xml sitemap.

64. Non-indexable a-element filters, a waste of Crawl Budget

If you have filters on your site that you do not want indexed, they should not be "a" elements. Otherwise Googlebot will unnecessarily crawl them, wasting the site's budget, and there may not be enough of it left for the target categories (I have had several cases of this in e-commerce websites).

65. Blocked possibility of indexing the search results

Similar to filters: internal search results should generally not be indexed, as they can negatively affect the indexing state. Occasionally it is worth indexing some search results, but only in justified cases (for example, on websites where users enter codes or numbers that themselves have search demand, such as VIN-lookup sites), and even then rather not all of them.

66. Keyword of a given category in the name and alt attribute of products on the listing

In addition to the positive impact on ranking factors themselves, this increases the number of keyword occurrences on a given subpage, which may improve the positions of, among others, the category's main keyword.

67. Hidden duplicate menus, eg for the mobile version

The menu should be responsive. Deploying a hidden duplicate copy specifically for the mobile version is not a good idea (I mention it because I still see this solution very often).

68. Alt attribute for the logo and linking the logo to the home page

Logo = link to the home page. Additionally, if there is an image in the logo, it should have the alt attribute with the main keyword ranking on the home page.

69. External links in the main menu

I strongly advise against this, although I come across it very often: the main menu should contain only internal links.

70. A clickable title of entries instead of “read more” (or 1 link)

When linking to a given article, the title should be the link, because it usually contains keywords and we want to strengthen them through internal linking. "Read more" phrases should generally not be linked – at most as part of a single link together with the post image, or via an external JS script that is additionally disallowed in robots.txt so that the bot does not interpret it. However, I recommend one link on the title, or one link covering the whole teaser.

71. Redirects to invalid addresses

I also encounter it very often – redirection should occur only to the correct versions of addresses.

72. Links in search results

You can try to influence the click-through rate from the search results by implementing a ul/li list of internal links inside a nav element. There is then a slightly higher chance that Google will display these addresses directly in the SERPs.

73. Double elements eg description, canonical etc.

Google can handle duplicate declarations, but they are redundant code that unnecessarily increases the weight of the subpage. These mistakes should be avoided, and I come across them very often.

74. Invalid HTML elements

This includes typos like "lt" instead of "alt" and similar HTML syntax errors. I mention it only because I have recently come across a dozen or so such cases. Typos in the code, especially in element and attribute names, can simply make the code inoperative. Watch out for these cases.

75. Redirect from index.html and index.php

To prevent duplication, get rid of index.html, index.php etc. addresses, for example by redirecting 301 to the home page.

Content section

Keywords in headings, titles and meta description

Titles and headings are among the most important on-site aspects with a direct impact on the website's positions. Therefore, it is worth including the most important, most-searched keywords in them.

CTA and emoji in meta description

Appropriate saturation of the meta description with keywords, Call To Action phrases and emoji characters effectively makes the subpage stand out in the search results, increasing the click-through rate (CTR) and, with it, traffic to the website.

Optimization of existing subpages (GSC)

This is worth doing every now and then: check in Google Search Console which keywords a given subpage ranks for and saturate the existing text with related keywords that are not yet present in it. This way you can gain new traffic from long-tail keywords.

Questions in the body

A good number of Google queries are formulated as questions, and questions also bring new traffic from long-tail keywords. Add to this Google's ability to capture context, and it is clearly worth supplementing the content with questions about a given topic.

Content saturation with keywords

For each entry, I recommend that you first create a keyword analysis to list the keywords that you will use in a given article. Thanks to this, it will be possible to obtain more organic traffic on a given subpage.

Placing keywords from Google suggestions

Keywords displayed in Google's autocomplete suggestions are a good way to round out the analysis. Such keywords are generated, for example, by the Ubersuggest tool, although there are many other ways to obtain them automatically.

LSI – semantic keywords

As I briefly mentioned above, Google is able to capture the context of a text, so it is worth enriching the content with semantically related keywords. Thanks to this, you can rank on Google for keywords that you did not even mention in the content.

Title length / description

Some SEO specialists follow this aspect very strictly – in my opinion completely unnecessarily, and even to the detriment of the website. If shortening the title means removing keywords, I do not recommend doing it. Google will interpret the whole title anyway, even one that is too long (I have tested this many times), and the only drawback I see is that overly long titles are not fully displayed in the search results. Obviously, the further a keyword sits in the title, the less value it carries for the algorithm, but it still carries some, and in my opinion a longer title with the keywords is better than a shorter one missing some of the searched phrases.

Duplication of content within the site and the Internet

Duplication can cause cannibalization, i.e. subpages competing against each other for the same keywords. It is definitely undesirable, so make sure there is no duplicate content, primarily within the website but also across the Internet – in e-commerce, for example, I do not recommend copying product descriptions from the manufacturer. Unfortunately, we have little influence over others copying content from us.

Update old posts

This works on the Internet just like on TV – old, well-known series get new seasons to revive the old interest, or at least part of it. Refreshing old entries by updating their content can have a similar effect, and we can additionally gain traffic from the keywords contained in the added content.

Elimination of empty content and thin content (enrich with content or deindex)

In addition to redundant subpages, the Google index should also not contain empty subpages or subpages with a negligible amount of content. A good way to solve this problem is to enrich such entries with content or to deindex them (depending on the specific case).

Proper keyword analysis, also taking into account long tail, blog keywords and transactional keywords

When creating a keyword analysis, it is worth dividing keywords into generic, transactional, blog, etc. It is then much simpler to build thematic clusters and prepare the right content to gain as much organic traffic as possible.

Numbers in the titles

This has an impact on the click-through rate (I used it in this article as well). Lists of all kinds are attractive to users because they know exactly what to expect, how much of it, and what matters most. Titles like "TOP 10 SEO factors influencing …" or "15 best-earning actors …" effectively increase CTR, and with it traffic to the website.

Questions in the entries (to the user)

Questions are a kind of call to action – a reason for the user to leave a comment. This is good because, with indexable comments (the default in WordPress, for example), their content can influence, among other things, positions, long-tail keywords and traffic to the site. So let's create opportunities for users to contribute.

Eliminate cannibalization in titles, headers and urls

Black glass

A glass – what is it and how to use it

A glass – what it is

Two of the titles in the list above potentially cannibalize each other. Sometimes Google will decide that all three do, but treating the first keyword, "black glass", as the same topic would be a Google error in this case. This has happened to me a dozen or so times, though it was a few years ago. Now, despite everything, I try not to give Google the opportunity to make mistakes, even though such situations should theoretically no longer occur.

To sum up: potential cannibalization of the word "glass" occurs when several pages (in this case two) deal with the same aspects. It should also be remembered that URLs should be visibly unique (i.e. the words themselves differ, rather than just an ordinal number distinguishing the address).

Content downloaded from the producer (copy / duplication)

I mentioned this a little above – the idea is not to generate duplication across the Internet. By copying content from someone else (whether product descriptions taken from the manufacturer or any other type of content), our subpage may be rated worse by Google's algorithm, leading to worse organic results. The content should be unique – then the risk of duplication does not arise.

Keywords in internal links

Internal links can effectively help you boost your keyword rankings. When linking to a given subpage, it is good to use the main keywords related to it. For example, if I want to strengthen the keyword "seo optimization" because it is important to me, I use exactly this anchor (link text) when linking to the relevant URL.

Local keywords on subpages (e.g. cities)

This aspect is very important in local SEO. Subpages should be optimized as much as possible for a specific location. I wrote more about this in the article Local SEO in Google.

Strengthening internal linking from blog entries

When looking for places to link keywords internally, a blog (if the site has one) is a good source of such links. In entries touching on a given topic, it is worth linking to the relevant subpage, using the main keywords of the linked entry, category or product in the anchor.

Link building section

Contrary to appearances, this aspect is treated very “superficially” in many companies. Often, any (just any) links from “whitepress” or something of that kind are obtained for clients, without much thought as to whether such links are worth attention.

Choosing the right links is very important and can make a really big difference – link building and SEO should go hand in hand. What matters is the language of the page we link from, the surrounding content (is it thematically related?), the number and quality of domains linking to that page, its thematic relevance, and so on. Choosing the right strategy for acquiring external links, and then actually acquiring them, is an important aspect that should not be overlooked (and contrary to appearances, I see it neglected very often).

One good way to enrich your link acquisition strategy is to build your own network of pages on expired domains (a PBN – Private Blog Network). Properly selected expired domains have high value for Google, because they are often authentic websites that someone cared for in the past but, for various reasons, abandoned, and the domain lapsed. Such a domain is valuable and worth putting to good use.

Framework for a basic SEO audit

The framework for a basic SEO audit should look like this:


Technical section

The aspects that should be included in this part of the SEO audit include:

●      Website speed optimization

●      Website structure analysis and optimization

●      Optimizing the flow of internal linking

●      Analysis of potential cannibalizations

●      Guidelines for saturating repetitive elements with keywords (e.g. implementing keywords in the “alt” attributes of images – mainly e-commerce)

●      Analysis and optimization of Google indexed results

●      Analysis and optimization of language versions, if any (aspects related to the correct implementation of hreflang, creating URLs, etc.)

●      Fixing errors in the browser console

●      Redirect optimization (address mapping, improvement of double redirects, server response code etc.)

●      Analysis of competitors for technical solutions that would also help the analyzed website (e.g. a suggested-products widget that competitors have, the analyzed website lacks, and whose implementation would be justified)

Content section

This section should include, but is not limited to:

●      Keyword analysis

●      Planning the website structure based on the analysis of keywords

●      Planning of thematic clusters

●      Guidelines for optimizing the content of subpages (products, categories, static subpages)

●      Guidelines for meta descriptions (appropriate saturation with keywords, Call To Action phrases and emoji characters to make the subpage stand out in the search results, thereby increasing the click-through rate (CTR))

●      Guidelines for titles (appropriate keyword saturation, layout + if appropriate – creating a scheme for their creation)

●      Analysis and guidelines for optimizing the structure of headlines

Link building section

The LB section should include, inter alia, the following aspects:

●      Analysis of the link profile of the analyzed website and competition domains

●      A selection of guidelines for acquiring backlinks

●      Keyword analysis in terms of their use in articles generating backlinks
