When it comes to link building idea generation, the sky's the limit! In today's post, Rhea Drysdale offers her tips for best practices and a philosophical approach to link building that will help bring your ideas to life.
What are business attributes, and why should local businesses care? Columnist Adam Dorfman explores.
Norman Pongracz's insight:
TL;DR: Google has started asking users to confirm business attributes on Google Maps, such as "wheelchair accessibility" or "offers take-out". These attributes are becoming increasingly important in local search because they feed the "micro-moments" of the user journey - the moments that influence users' decisions the most and that often lead to an actual offline visit. Google also noted that the number of "near me" searches has increased 146 percent year over year, and that 88 percent of these "near me" searches are conducted on mobile devices. Recent Google My Business updates support changing attributes for businesses as well.
Now, when checking into places on Google Maps, you may have noticed that Google prompts you to volunteer information about the place you're visiting in order to refine its attributes. Attributes consist of descriptive content such as the services a business provides, the payment methods it accepts or the availability of free parking - details that may not apply to all businesses. Attributes are important because they can influence someone's decision to visit you.
Many publishers are trying to incentivize the addition of attributes via programs like Google's Local Guides, TripAdvisor's Badge Collections and Yelp's Elite Squad, because having complete, accurate information about locations makes each publisher more useful.
Name, address and phone number are the basic identifiers for businesses, but attributes increasingly contribute to local visibility. According to seminal research published by Google, mobile has given rise to "micro-moments", or times when consumers use mobile devices to make quick decisions about what to do, where to go or what to buy (more on micro-moments: https://www.thinkwithgoogle.com/collections/micromoments.html).
Google noted that the number of "near me" searches (searches conducted for goods and services nearby) has increased 146 percent year over year, and that 88 percent of these "near me" searches are conducted on mobile devices.
With the recently released version 3.0 of the Google My Business API, Google also gave businesses that manage offline locations a powerful competitive weapon: the ability to manage attributes directly.
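Attributes can also be surfaced in on-page markup. A minimal, hedged sketch (the business and its attribute names are hypothetical placeholders; amenityFeature and LocationFeatureSpecification are the schema.org types that carry this kind of information):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "telephone": "+44 20 7946 0000",
  "amenityFeature": [
    { "@type": "LocationFeatureSpecification", "name": "wheelchairAccessible", "value": true },
    { "@type": "LocationFeatureSpecification", "name": "offersTakeOut", "value": true }
  ]
}
</script>

Keeping such markup in sync with the attributes managed through the Google My Business API avoids sending conflicting signals.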
Norman Pongracz's insight:
1. Click-Through-Rate Affects Organic Rankings
It appears that click volume (or relative click-through-rate) does impact search rankings, based on Moz testing (http://www.slideshare.net/randfish/mad-science-experiments-in-seo-social-media/88-ExperimentsFuture).
The test: A tweet by Rand Fishkin pointed his followers to some basic instructions which asked participants to search "the buzzy pain distraction" and then click the result for sciencebasedmedicine.org. Over the 2.5-hour period following Rand's initial tweet, a total of 375 participants clicked the result. The effect was dramatic: sciencebasedmedicine.org shot up from the number ten spot to the number one spot on Google.
How do you factor CTR into your SEO strategy?
/ Craft compelling page titles and meta descriptions (see the sketch after this list).
/ An established and recognisable brand will attract more clicks.
/ Optimise for long clicks
/ Use genuine tactics – the effects of a sudden spike in CTR will only last a short time. When normal click-through rates return, so will your prior ranking position.
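Compelling titles and descriptions live in the page head. A minimal sketch (the copy is a hypothetical placeholder, not taken from the experiment):

<title>Buzzy Pain Distraction: Does It Really Work? | Example Site</title>
<meta name="description" content="We reviewed the research behind the buzzy pain distraction - here's what actually works, and what doesn't.">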
2. Mobilegeddon Was Huge
The test: In the week of April 17th, 2015 (pre-Mobilegeddon), Stone Temple pulled ranking data on the top 10 results for 15,235 search queries. They pulled data on the same 15,235 search queries again in the week of May 18th, 2015 (after Mobilegeddon). Source: https://www.stonetemple.com/mobilegeddon-may-have-been-bigger-than-we-thought/
Findings on Mobilegeddon:
/ Non-mobile-friendly web pages lost rankings dramatically.
/ In fact, nearly 50% of all non-mobile-friendly webpages dropped down the SERPs.
/ Mobile-friendly pages (overall) gained in ranking.
3. Link Echoes: How Backlinks Work Even After They Are Removed
/ It does appear that some value from links (perhaps a lot) does remain, even after the links are removed.
/ These higher rankings remained for many months after the links were removed. Source: https://moz.com/blog/link-echoes-ghosts-whiteboard-friday
/ Quality links are worth their weight in gold. Backlinks will continue to give you a good return. The ‘echo’ of a vote once cast (as proven by this test) will provide benefit even when removed.
/ The value of links DOES remain for some time. So, before you are tempted into acquiring illegitimate links, consider whether you are ready to have that footprint remain for months (or even years) ahead.
4. You Can Rank With Duplicate Content
The test: From time to time a larger, more authoritative site will overtake smaller websites in the SERPs for their own content. This is what Dan Petrovic from Dejan SEO decided to test in his now-famous SERP hijack experiment (https://dejanseo.com.au/hijack/). Using four separate webpages, he tested whether content could be 'hijacked' from the search results by copying it and placing it on a higher-PageRank page – which would then replace the original in the SERPs.
/ In all four tests the higher PR copycat page beat the original.
/ In 3 out of 4 cases the original page was removed from the SERPs
Search Console warning on hijacked content: Shortly after running the experiment, Dejan SEO received a warning message inside their Google Search Console account. The message cited the dejanseo.com.au domain as having low-quality pages, an example of which is 'copied content'. Around the same time, one of the copycat test pages also stopped showing in the SERPs for some terms. This forced Dan to remove the test pages in order to resolve the quality issue for his site. So it seems that whilst you can beat your competitors with their own content, it's definitely not a good idea to do so.
5. Rich Answers [Number One Is NOT The ‘Top’ Spot]
Rich Answers: Rich Answers are on the rise, and the results you see today are a far cry from the ten blue links of the past. Rich Answers are the 'in-search' responses to your queries that you've probably been seeing more of in recent times. They aim to answer your query without you having to click through to a website.
/ According to the study by Erez Barak at Optify (http://www.slideshare.net/optify/optify-organic-click-through-rate-webinar-slides), in 2011 the top ranking website would receive as much as 37% of total clicks. But, with the growth of Rich Answers that’s all changing. The ‘number one’ organic result is being pushed further and further down the page. Click volumes for the ‘top spot’ are falling.
The test: But just how much are Rich Answers affecting click-through rates? Confluent Forms (http://www.confluentforms.com/) got a Rich Snippet result listed for their website, and CTRs went up once the Rich Answer was added. They went down when the Rich Answer was removed. Rich Answers are intended to solve queries from within the search results. Yet they can still send additional traffic to your site, just as they did for Confluent Forms.
SEO Best Practice: Rich Answers are generally provided for question-based search queries.
/ Identify a simple question – Make sure the question is on topic. You can check this by using a Relevancy tool such as nTopic.
/ Provide a direct answer – Ensure that your answer is simple, clear and useful for both users and search engines.
/ Offer value added info – Aside from your concise response to the question, include more detail and value. Be sure not to just re-quote Wikipedia since that’ll not get you very far.
/ Make it easy for users and Google to find – this could mean sharing it with your social media followers or linking to it from your own or third-party websites.
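A minimal sketch of this question-and-answer structure in HTML (the copy reuses the citation go-live figures from later in this page purely as placeholder content):

<h2>How long does it take for a business listing to go live?</h2>
<p>Direct submissions typically go live within 4 weeks; many appear within 48-72 hours.</p>
<!-- Then follow with the value-added detail: figures by directory type, caveats, a comparison table, etc. -->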
6. Using HTTPS May Actually Harm Your Ranking
The test: The HTTPS study (https://www.stonetemple.com/how-strong-is-https-as-a-ranking-factor/) tracked rankings across 50,000 keyword searches and 218,000 domains. They monitored those rankings over time and observed which URLs in the SERPs changed from HTTP to HTTPS. Of the 218,000 domains being tracked, just 630 (0.3%) of them made the switch to HTTPS. These sites (the ones that switched to HTTPS) actually lost rankings. Later they recovered (slowly) to pretty much where they started.
Results: It appears that HTTPS (despite Google wanting to make it standard everywhere on the web) has no significant ranking benefit for now and may actually harm your rankings in the short term.
7. Robots.txt NoIndex Doesn’t (Always) Work
Intro: The common approach adopted by webmasters is to add a NoIndex directive inside the robots meta tag ON A PAGE. When the search engine spiders crawl that page, they identify the NoIndex directive in the header of the page and remove the page from the index. On the other hand, a NoIndex directive placed inside the Robots.txt file of a website should both stop the page from being indexed and stop the page from being crawled.
The test and results: However! According to tests (https://www.stonetemple.com/does-google-respect-robots-txt-noindex-and-should-you-use-it/), Google did not remove any of the pages immediately, despite the website's Robots.txt files being crawled several times per day. What we can say is that (despite Google showing support for Robots.txt NoIndex) it is slow to work, and sometimes doesn't work at all.
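To make the two approaches concrete (the path is hypothetical), the reliable page-level directive sits in the HTML head, while the directive the test covered is an unofficial line in robots.txt:

<!-- In the page's <head>: reliably removes the page from the index once crawled -->
<meta name="robots" content="noindex">

# In robots.txt: the unofficial directive tested above - honoured slowly, and sometimes not at all
User-agent: *
Noindex: /private-page.html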
8. Exact Match Anchor Text Links Trump Non Anchor Match Links
The test and results: What happens when you point 20 links at a website, all with the same keyword-rich anchor text? Your rankings skyrocket, that's what! In a series of three experiments, Rand Fishkin tested pointing 20 generic anchor links at a webpage versus 20 exact-match anchor text links. In each case the exact-match anchor text increased the ranking of the target pages significantly. And in 2 out of 3 tests the exact-match anchor text websites overtook the generic anchor text websites in the results.
9. Link To Other Websites To Lift Your Rankings
The test: Shai Aharony and the team at Reboot put this notion to the test in their outgoing links experiment. Source: http://www.rebootonline.com/blog/long-term-outgoing-link-experiment
For the experiment, Shai set up 10 websites, all with similar domain formats and structure. Each website contained a unique 300-word article which was optimised for the made-up word "phylandocic". Prior to the test, the word "phylandocic" showed zero results in Google.
In order to test the effect of outbound links, 3 followed (dofollow) links were added to 5 of the 10 domains. The links pointed to highly trusted websites:
/ Oxford University (DA 92)
/ Genome Research Institute (DA 85)
/ Cambridge University (DA 93)
Results: EVERY single website with outbound links outranked those without. This means your action step is simple. Each time you post an article to your site, make sure it includes a handful of links to relevant and trustworthy resources.
10. Nofollow Links Increase Your Ranking
The test: Another experiment from the IMEC Lab (http://www.slideshare.net/randfish/mad-science-experiments-in-seo-social-media/73-12345678910) was conducted to answer: do nofollowed links have any direct impact on rankings? Since the purpose of a "nofollow" link is to stop authority being passed, you would expect nofollow links to have no (direct) SEO value. This experiment suggests otherwise.
Results: In the first of two tests, IMEC Lab participants pointed links from pages on 55 unique domains at a page ranking #16. After all of the nofollow links were indexed, the page moved up very slightly for the competitive, low-search-volume query being measured. In both tests the websites' rankings improved after the nofollow links were received:
/ The first website increased 10 positions.
/ The second website increased 4 positions.
11. Links From Webpages With Thousands of Links Do Work
There is a belief in SEO that links from webpages with many outgoing links are not worth much: the more outbound links a page has, the less value it passes to your site. This theory is reinforced by the notion that directories and other lower-quality sites with many outbound links should not provide significant ranking benefit to the sites they link to. In turn, links from web pages with few outbound links are supposedly more valuable.
The test and results: Dan Petrovic tested this in his PageRank Split Experiment (https://dejanseo.com.au/pagerank-split-experiment/).
In his experiment, Dan set up 2 domains (A and B). Both domains were .com and both had similar characteristics and similar but unique content. The only real difference was that during the test, Website B was linked to from a site which is itself linked to from a sub-page on http://www.debian.org (PR 7) which has 4,225 external followed links.
Immediately after Website B was linked to from the PR 7 debian.org page (via the bridge website), Website B shot up the rankings, eventually reaching position 2. And, as per Dan's most recent update (3 months after the test), Website B maintained its position, held off the top spot only by a significantly more authoritative PageRank 4 page. Website A (which had not been linked to) held a steady position for a while, then dropped in ranking. So it appears that links from pages that have many outbound links are in fact extremely valuable.
12. Image Links Work (Anchor Text Proximity Experiment)
The test: The experiment (https://dejanseo.com.au/anchor-text-proximity-experiment/) was designed to test the impact of various link types (and their context) on search rankings. To conduct the test, Dan registered 4 almost identical domain names.
Each of the 4 domains was then linked to from a separate page on a well-established website. Each page targeted the same exact phrase, but had a different link type pointing to it:
001: [exact phrase] - Used the exact target keyword phrase in the anchor text of the link.
002: Surrounding text followed by the [exact phrase]: http://********002.com.au - Exact target keyword phrase inside a relevant sentence immediately followed by a raw http:// link to the target page.
003: Image link with an ALT as [exact phrase] - An image linking to the target page which used the exact target keyword phrase as the ALT text for the image.
004: Some surrounding text with [exact phrase] near the link which says click here. - This variation used the junk anchor text link “click here” and the exact target keyword phrase near to the link.
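In HTML, the four variants look roughly like this (the domains are placeholders, masked as in the source; [exact phrase] stands for the target keyword):

<!-- 001: exact-match anchor text -->
<a href="http://example001.com.au/">[exact phrase]</a>

<!-- 002: [exact phrase] in the surrounding sentence, followed by a raw URL link -->
<p>A sentence containing [exact phrase]: <a href="http://example002.com.au/">http://example002.com.au</a></p>

<!-- 003: image link with [exact phrase] as the ALT text -->
<a href="http://example003.com.au/"><img src="photo.jpg" alt="[exact phrase]"></a>

<!-- 004: junk anchor text, with [exact phrase] nearby in the copy -->
<p>Some text about [exact phrase] - <a href="http://example004.com.au/">click here</a></p>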
Results: Unsurprisingly, the exact-match anchor text link worked well. But most surprisingly, the ALT-text-based image link worked best. And what about the other two link types? The junk link ("click here") and the raw link ("http://") did not show up in the results at all.
Best practices for SEO: This is just one isolated experiment, but it's clear that image links can work really well. Consider creating image assets that you can utilise to generate backlinks. The team at Ahrefs put together a useful post about image asset link building here: https://ahrefs.com/blog/build-links-with-images/
13. Press Release Links Work
SEO Consult put Cutt’s claim to the test this by issuing a press release which linked to Matt Cutts blog, with the anchor text “Sreppleasers” (which is an anagram for 'press release'). The term is not present anywhere on Cutts website, however when searching for the term Matt's blog do show up.
Source: http://www.pr.com/press-release/466131 Discussion: https://www.seroundtable.com/google-press-release-links-16136.html
14. First Link Bias
The test: The hypothesis is that if a website is linked to twice (or more) from the same page, only the first link will affect rankings.
In order to conduct the test, SEO Scientist (http://www.seo-scientist.com/first-link-counted-rebunked.html) set up 2 websites (A and B). Website A links to Website B with two links using different anchor texts.
Test Variation 1 - The websites were set up, then after the links got indexed by Google, the rankings of site B were checked for the two phrases. Result: Site B ranked for the first phrase and not for the second phrase.
Test Variation 2 - Next, the positions of the links to Site B were switched: the second phrase now appeared above the previously first phrase on Site A, and vice versa. Once Google had indexed the change, rankings were again checked for Website B. Result: Site B disappeared from the SERPs for the new second phrase (previously first) and appeared for the new first phrase (previously second).
Test Variation 3 - To check this was not some anomaly, in the third test variation the sites were reverted back to their original state. Once the sites were re-indexed by Google, the rankings of Website B were checked again. Result: Site B reappeared for the initially first phrase and disappeared again for the initially second phrase. Nofollowing the link: SEO Scientist then made the first link "nofollow", and still the second link was not counted!
SEO Best Practices: The lesson from this experiment is clear. If you are "self-creating" links, ensure that your first link is to your most important target page.
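A minimal sketch of the pattern tested (domains are hypothetical): when Page A links to Page B twice, only the first anchor appeared to count.

<!-- Page A -->
<a href="https://site-b.example/">first phrase</a>   <!-- this anchor influenced Site B's rankings -->
...
<a href="https://site-b.example/">second phrase</a>  <!-- this anchor did not -->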
15. The Surprising Influence of Anchor Text on Page Titles
The test: A few years ago, Dejan SEO set out to test which factors Google considers when creating a document title if a title tag is not present. They tested several factors, including domain name, header tag and URLs – all of which did influence the document title shown in search results. But what about anchor text?
Dan Petrovic put it to the test in this follow-up experiment (https://dejanseo.com.au/title-rewriting-experiment/). His experiment involved several participants linking to a page on his website using the anchor text "banana". The page being linked to had the non-informative title "Untitled Document".
Results: During the test, Dan monitored three search queries for which the document title showed as "banana". The test goes to show that anchor text can influence the document title displayed by Google in search results.
16. Negative SEO: How You Can (But Shouldn't) Harm Your Competitors' Rankings
The test and results: In an attempt to harm search rankings (and prove negative SEO exists), Tasty Placement (https://www.tastyplacement.com/infographic-testing-negative-seo) purchased a large number of spam links which they pointed at their target website, Pool-Cleaning-Houston.com. The site was relatively established, and prior to the experiment it ranked well for several keyword terms, including "pool cleaning houston" and other similar terms. A total of 52 keywords' positions were tracked during the experiment.
They bought a variety of junk links for the experiment at very low cost. A batch of comment links had no effect at all. 7 days later, the forum post links were placed, which was followed by a surprising increase in the site's ranking from position 3 to position 2 – not what was expected at all. Another 7 days after that, the sidebar links were added, which resulted in an almost instant plummet down the rankings. Aside from the main keyword, a further 26 keywords also moved down noticeably.
Angular injects HTML into an already loaded page, meaning that clicking on a link doesn't reload the page. However, AngularJS can cause indexation issues on websites. Deepcrawl has advised some best practices on how to solve these indexation issues.
Norman Pongracz's insight:
Making Angular sites indexable
AngularJS is a framework for building dynamic web apps that uses HTML as a base language. Put simply, Angular injects HTML into an already loaded page, meaning that clicking on a link doesn't reload the page; it simply uses the framework to inject a new set of HTML to serve to the user. This means the page doesn't have to reload, so the website is significantly faster, and it saves the developer time, as considerably less code has to be written.
However, AngularJS causes many indexation concerns. Deepcrawl built an indexable AngularJS site for TransferWise: http://swift-bic.transferwise.com/en
Recommendations to make an Angular site indexable
1. Remove the forced hashes that Angular sets by default.
Hash bangs allow Angular to know which HTML elements to inject with JS. Hash bangs can be removed by configuring $locationProvider: in Angular, the $location service parses the URL in the address bar and makes changes to your application, and vice versa. We have to use the $locationProvider module and set html5Mode to true.
2. One might have issues with relative URLs: writing these as <a href="en/countries/united-kingdom">...</a> when they should have been <a href="/en/countries/united-kingdom">...</a>.
Although everything can seem fine to the user, when a bot crawls this, the strings in the links will be appended to whatever the URL already was. So, for example, the homepage link in the main navigation would append an extra "/en" to the URL, rather than just pointing to itself. This means that crawling the site gives you an infinite list of URLs with more and more subfolders. (Just adding this as a side note, as it is something you might want to test for.)
To link around your application using relative links, you will need to set a <base> in the <head> of your document. With HTML5 mode set to true, relative links should resolve automatically.
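A minimal sketch of both fixes (the module name "app" is a placeholder):

// app.js - enable HTML5 mode so Angular drops the hash-bang URLs
angular.module('app', [])
  .config(['$locationProvider', function($locationProvider) {
    $locationProvider.html5Mode(true);
  }]);

<!-- index.html - the <base> tag gives relative links a root to resolve against -->
<base href="/">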
If your server can’t handle the crawlers’ requests, this can result in an increase in server errors seen by the search engine. One solution to this is to pre-render in advance server-side, so that when crawlers reach the server, the page is already rendered.
Understand your users’ intents and stage in the sales funnel before you gate content.
Norman Pongracz's insight:
Should you gate content — that is, keep white papers, case studies, or e-books behind a form that becomes the gate-keeper for allowing users access?
Traditional websites have relied on heavy forms to find and convert leads, even at the high risk of losing potential customers. Gated content is particularly common on B2B sites.
There are situations, however, when people are more likely to fill in such forms. Mapping content to the user’s journey will help you determine whether or not to gate content on a case by case basis. The type of content as well as the implementation of the actual “gate” also affect the users’ willingness to go past the gate and fill in the lead-generation form.
When Not to Gate
When to Gate
If you decide to gate content, make sure you:
1 / Provide a reasonable level of content outside of the gate to demonstrate the value of your offering. Prove your worth before asking for something in return. Use the reciprocity principle to motivate engagement. Placing the gate within the content could be a viable option. For example, give people a list of tips but save the most critical ones for after the reader completes the form.
Google can reward structured data use with rich snippets. But semantic web technologies can improve search visibility in other ways too.
Norman Pongracz's insight:
The benefits of employing structured data and associated semantic web technologies extend past rich snippet generation.
OG and Schema
Structured data and eCommerce
Ecommerce-related structured data has changed this. Using schema.org, ecommerce sites can not only expose the same information in markup that is supplied in feeds; websites can now mark up more detailed information about their products, offers and services than product feeds support.
Structured data helps the search engines better understand your site. Breadcrumb-rich underlying code helps the search engines more clearly understand the hierarchy of your site and the relationships between pages.
Well-executed structured data can help provide a more consistent and positive experience for website users, whether their exposure to your content is through search results, social media or third-party applications.
Data fidelity as a trust-building measure: Consistency of data is important for search engines. In particular, as this principle applies to structured data, search engines will trust your content more if they can see that your visible content aligns with the data provided in your markup.
An additional point of data reference for the search engines is XML product feeds. The search engines will accord greater trust to ecommerce pages if the product feed, markup and visible content are in sync.
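As a hedged sketch of the kind of Product and Offer markup discussed above (all values are placeholders, and should mirror the visible page content and the product feed exactly):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example School Backpack",
  "sku": "BP-001",
  "offers": {
    "@type": "Offer",
    "price": "34.99",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
</script>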
Structured data and semantic architecture
Norman Pongracz's insight:
Implications and potential risks with solutions:
Better Flow of Link Juice
Poor Load Times
Having spent the last 6 years Client side as Head of eCommerce and agency side managing digital marketing teams, one constant has been confusion in new platform builds over what a “search engine friendly” website actually is.
Norman Pongracz's insight:
SEO requirements for setting up new eCommerce domain
Accessibility and Navigation
For many SMBs & SEOs that are new to local search, understanding citations and what’s important about them can be a bit mystifying. On the surface, local directory listings seem as plain as day – how complex can business listings on a website be right?! But once you start getting drawn into the murky world
Norman Pongracz's insight:
Citation Building Basics (Summary)
In simple SEO terms a Local Citation is simply where your company is mentioned on other websites and places found on the Internet. Local citations are used heavily in helping you to rank in local search results.
Citation sources come in 6 main shapes & sizes (see below). Some are specific to an industry or city, while some are much broader in scope and provide listings for all types of business in all towns across the country. As long as the site has some relevance to your business (e.g. offers correct category to list or covers same geographic location) and is decent in quality then it’s a goer.
How can I find out where I’m already listed?
Knowing where you’re listed gives you ½ the picture. To really bring your citation situation into focus you need to know what your business data looks like on these sites.
Where else can I get myself a listing? -The best way to work this out is to spy on your competitors and see where they’re listed. If your competitors can get a listing on a site it follows that you should – in most cases – also be able to get a listing. The same 2 tools that help you find your existing citations (CitationTracker & CitationFinder) can also be used to spy on your competitors.
What category should I use for my business? - Selecting the right category/categories to list your business on aggregator & citation sites is very important. But identifying & selecting the right category can be tricky for some businesses. Tools: https://moz.com/local/categories
How long does it take for listings to go live? If you submit listings manually, direct to sites, then go-live tends to be much faster than if you submit via a 3rd-party or aggregator service. We typically see 70% of our direct submissions go live within 4 weeks of submission, with many going live instantly or within 48-72 hours.
Most important UK Citation Sources:
Google Analytics Troubleshooting Guide & Auditing Resources - Publisher: MarketingVox
Norman Pongracz's insight:
Debugging Google Analytics Setup - Common Mistakes (Summary)
GA 101: accounts, trackers, domains:
Goals, funnels, and filters
Site search, site overlay, site speed
Asynchronous tracking, ecommerce and custom variables
Auditing and support tools
Your home page is one of the most visited pages on your website. Few people will visit your site without seeing it. But a lot of home pages suck. Read this, and make sure yours doesn’t.
Norman Pongracz's insight:
Homepage Optimisation (Summary)
- Show whatever it is you're selling. Sounds obvious, but sites often don't show off their products. The first thing users should see is whatever you're selling. It should be big, bold and beautiful.
Ecommerce SEO Tips: User-Focused SEO Strategies For Deleted Products by Linchpin SEO
Norman Pongracz's insight:
Options for handling changes in product pages
Redirect to the Deleted Product’s Category Page - this should be the category that is one level up from the product page, or if there is less than 3 products in the defined category, keep climbing the taxonomy until there is at least 3 products in a category.
A 301 redirect coupled with a noindex/follow meta tag on the search results page.
Manually Redirect to a Similar Product - manually create a 301 mapping by selecting a similar product or page from the remaining product set, so that whenever a user clicks on an external link – whether from the search results, a bookmark, a social website, or a link on another website – they are taken to the new page. Create an environment that allows the deleted item to be redirected to this newly identified page.
Redirect Based on Relevancy Value
Custom 404 Page - whenever a user arrives via an external link, serve a custom 404 page. This page should: inform the user that the product is no longer available; provide related product selections; provide a search box for the user to search the website for other products.
Permanently delete the expired product’s pages, content and URLs. When you have no closely related products to the one that’s expired, you may choose to delete the page completely using a 410 status code (gone) which notifies Google that the page has been permanently removed and will never return.
Reuse URLs. If you sell generic products where technical specifications and model numbers are not relevant, you could reuse your URLs. That way you will preserve the page’s authority and increase your chances of ranking on Google.
Some items deserve to live on. Certain products may have informational value for existing customers or others wanting to research it. Leave these pages intact. Previous buyers can get information, help and service through these pages.
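A minimal Apache sketch of the two status-code options above (the paths are hypothetical; assumes mod_alias in .htaccess or the vhost config):

# 301: permanently redirect a deleted product to its category (or a similar product)
Redirect 301 /products/blue-widget /category/widgets

# 410: tell search engines the page is gone for good and will not return
Redirect gone /products/discontinued-widget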
In case of out-of-stock items
Norman Pongracz's insight:
For page titles, Google now uses 18px Arial for the title element; previously it was 16px. Interestingly, Google is still internally truncating based on 16px, but the CSS will kick in well before their ellipsis is shown, due to the larger font size. The upshot of this change is that text is no longer truncated at word boundaries. Bolding pushes up the size of the text in pixels. We also see Google moving brand phrases to the start of a title dynamically.
For meta descriptions, the CSS truncation appears to be at around 920 pixels.
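Since the limits are in pixels rather than characters, you can sanity-check a candidate snippet in the browser console with the canvas API. A rough sketch: the 18px Arial title figure is from the note above, while the description font size below is an assumption, not a documented value.

// Measure rendered text width in a given font
const ctx = document.createElement('canvas').getContext('2d');
ctx.font = '18px Arial'; // title font, per the note above
console.log(ctx.measureText('Example Page Title - Brand').width);
ctx.font = '13px Arial'; // ASSUMPTION: the description font size is not stated above
console.log(ctx.measureText('A candidate meta description...').width, '(~920px is cited as the cut-off)');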
This guide explains, in depth, how to get over a manual penalty for inbound links, using this Excel template. DISCLAIMER: There are undoubtedly faster and shorter processes to audit your links and submit a reconsideration request. Taking shortcuts like that may work, or it may not. This process is the full-blown, no-cut-corners …
Norman Pongracz's insight:
Summary

First Steps / Basic Analysis on Pitch Level
- Did the client receive any link warning message in Google Webmaster Tools?
- Did the client experience any sharp decline in visibility via Search Metrics? (Penguin and manual penalties tend to show as a sharp decline in rankings, compared to the slow downfall of a Panda penalty.)
- Is it possible that the decline was caused by competitors' sites getting their penalties removed and regaining their rankings?
- Did the client experience any sharp decline in traffic via GA? Does this correspond with the decline in visibility?
- Is it possible that the decline in traffic was caused by tracking or some issue other than a penalty?
- Did the company lose its ranking for brand terms? (Google search "Brand term")
- Has any link removal been done beforehand? Is there any documentation on it?
- Does the client have any white/safe-list of links/domains?
- Does the domain interlink with any other domains (such as Debenhams.com and Debenhams.ie)? Did they implement HrefLang or any other solution to avoid looking spammy?

What Is a "Bad" Link?
Basically a “bad” link in Google’s eyes is anything that isn’t editorial – any link that you created for the purpose of SEO.
If someone created a random link to your website on some unrelated forum, that might be a link that we consider not great from an SEO perspective, but from a penalty perspective there’s theoretically nothing wrong with it.
However, if you discover that you have many backlinks from low-quality and unrelated domains, they may be worth removing – even if you didn't make them. Look for patterns. One link from a spam directory will not result in a penalty. Dozens of links from spam directories might.
In addition, Google seems to give the most weight, when it comes to penalties, to your most recent links (as opposed to links you made five years ago). Review your latest links to see if there is any suspicious recent activity.

Link Metrics Analysis
Link Research Tools Detox Analysis
- What risk did LRTs assign to the domain?
- Did you classify at least 80% of keywords?
- What is the risk distribution?
- Does the anchor text distribution look natural?
LRTs is not that useful for identifying the site's spam backlinks, but it provides an excellent benchmark for understanding their toxicity.
Backlink Data Collection
- Majestic (Historic index, preferably)
- OSE
- aHrefs
- Majestic and/or OSE API
- Upload the list of links to LRTs
The best practice is to review each domain (or linking page/domain) manually, but many times this isn't possible. So here are some shortcuts.
- Are any of the links already in previous disavow files?
- Do they have a whitelist of links?
- What are the top referring domains? (Worth reviewing the top domains manually.)
- Do links have suspicious domain names and/or URL paths? Spam links tend to have at least one of the following words in their URLs: SEO, link, directory (or often: dir), submit, web, site, search, Alexa, moz, domain, list, engine, bookmark, rank etc.
- Spam domains tend to have unusually long URL names (example: best-shoes-for-wedding.wordpress.com) or marketing-sounding path names and titles (example: importance-of-selling-your-stuff-online).
- Spam domains tend to have low Citation Flow and Trust Flow metrics (under 15).
- Spam domains tend to have a low number of backlinks.
- Spam domains are often created within a short period of time – i.e. low domain age.
- Do the referring domains all have unique IP addresses? Do some of the domains look like a link network?
- Are press releases and sponsored posts, as well as guest blogging guidelines, nofollowed?
- Review for high-quality directories that can be whitelisted (DMOZ, Yahoo Directory, Yell etc.).
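If the audit ends in a disavow submission, the file is plain text with one directive per line. A minimal sketch (the first domain is hypothetical; the second reuses the spam-pattern example above):

# Spam domains we could not get removed manually
domain:spam-directory-example.com
domain:best-shoes-for-wedding.wordpress.com
# Individual URLs can also be disavowed
http://spam-directory-example.com/seo-links/submit-site.html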
Crawl optimisation (ensuring search engine spiders crawl all unique content)
Norman Pongracz's insight:
The case study explores how the parameter settings tool can be best used to guide search engines in crawling and indexing all important pages. The article explains how this admin solution impacts the website's crawl budget allocation, page speed and how it helps to fix some of the general site issues.
Following an entire year of running tests across a large scale, international ecommerce website, we have made several key observations on parameter settings and the consequent impact on search engine indexation.
1. Google de-indexes duplicate pages faster than it picks up pages with unique content. Significant changes in indexation can be measured as early as 2-3 days after the configuration. Over a one-month test, sites displayed a 75% drop in duplication (over 6.2 million pages) but only an 8% increase (about 340K pages) in unique content pages.
2. Indexation rates are based on how parameters are used. Internally used parameters are impacted less than externally used ones. The actual indexation number will depend on how frequently the tracking parameter is used and how well canonicals are configured.
3. Parameter settings reduce the time spent downloading pages. While running the tests over a one-month period, the time it took Google to download pages decreased by 17%-20% on average. This resulted in improved crawl rates of at least 7% within the same time period.
4. Risks. If parameter directives are set incorrectly on top level pages, crawl and indexation will significantly change on deeper pages as well.
Parameter settings guide
Essentially, the user needs to decide whether the parameter is used for tracking or not and then tell Google how the parameter should change the page’s content. This offers users the following options to handle URLs with parameters:
/ Only crawl representative URLs – the only option available when parameters are used for tracking.
/ Crawl every URL – this is a common recommendation when parameters are used for pagination, the new pages are somewhat useful or if the parameters replace the original content with manually translated text.
/ Crawl no URLs – this is a common recommendation when parameters create pages with auto-generated translation, sub-par sorted and narrowed lists, or the URLs are simply not intended to be indexed, for instance in the case of parameters triggering pop-up windows.
/ Let Googlebot decide – this setting is a recommended solution when sorting and narrowing parameters are used inconsistently, for example when sales pages load with low-to-high price sorting by default while premium product categories are set up with high-to-low price order.
/ Only crawl URLs with value X – this is a recommended solution when sorting parameters are used consistently, such as a low-to-high price listing implemented across all category pages the same way.
Finally, if the webmaster believes there might be architectural problems on the site, “Let Googlebot decide” is the preferred option.
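To make the options concrete, here is a hedged sketch of how hypothetical parameters on an example site might map to the settings above:

https://www.example.com/shoes?utm_source=newsletter  -> tracking only, content unchanged: "Only crawl representative URLs"
https://www.example.com/shoes?page=3                 -> pagination, each page useful: "Crawl every URL"
https://www.example.com/shoes?sort=price_asc         -> sorting used consistently: "Only crawl URLs with value X" (X = price_asc)
https://www.example.com/shoes?popup=size-guide       -> not intended for the index: "Crawl no URLs"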
Norman Pongracz's insight:
Here are some of the interesting parts of Google's Page Quality Rating Guidelines. Considering that Google is trying to mimic user behaviour and rate websites based on these factors, one or more of the points below can have an impact on rankings:
- Broken links and elements can impact page rating but only if it is significant in scale (implying bad site maintenance).
- Custom 404 pages can have ratings as well, for example the following is a highly rated 404 page: https://www.raterhub.com/evaluation/result/static/a/GG/3.6.5.png
- Already-published articles don't have to be updated frequently, unless the site is encyclopaedia-style, like Wikipedia.
- Frequency of updates on websites matter. Would that mean we should recommend adding timestamps to pages?
- External ratings from Wikipedia, forums and other rating organisations can be used to rate website or page quality.
- A few negative reviews won't impact page quality.
- The amount of on-page content necessary for the page depends on the topic and purpose of the page. A High quality page on a broad topic with a lot of available information will have more content than a High quality page on a narrower topic
- On-page content behind tabs can be considered main content and can thus improve page quality.
- Product pages should feature recommender systems (i.e: "complete the look", "those who bought the product also bought")
- On copied content: Content copied from a changing source, such as a search results page or news feed. You often will not be able to find an exact matching original source if it is a copy of “dynamic” content (content which changes frequently). However, we will still consider this to be copied content. Other times the content found on the original source has changed enough that searches for sentences or phrases may no longer match the original source. For example, Wikipedia articles can change dramatically over time. Text copied from old copies may not match the current content. This will still be considered copied content
- Video quality can have impact on page rating.
Supporting information, support pages
Shopping websites can be judged on
- How detailed the contact information is
- How detailed the store policies are on payment, exchanges, and returns
- Customer service information available
- About us, contact and FAQ pages influence page rating (this includes Facebook pages and corporate websites as well!)
- Shopping cart functionality impacts page rating
These do not have to be presented on the page but need to be somehow linked.
Highest quality product page https://www.raterhub.com/evaluation/result/static/a/GG/BookPack.png - The purpose of this page is to provide information about, and allow users to buy, a specific type of school backpack. The page provides a lot of helpful product information, as well as 600 user reviews. Since the store produces this backpack, they are experts on the product, making the page on their own website authoritative. In addition, this store has a reputation for producing one of the highest quality and most popular school backpacks on the market. This page also has a high quality MC, placed on a trustworthy website and has good reputation.
Highest quality “Custom 404” page https://www.raterhub.com/evaluation/result/static/a/GG/3.6.5.png - The MC of this page is the cartoon, the caption, and the search functionality, which is specific to the content of the website. It is clear that time, effort, and talent was involved in the creation of the MC. This publication has a very positive reputation and is specifically known for its cartoons. Keep in mind that for any type of page, including pages with error messages, there may be a range of highest quality to lowest quality pages. Therefore, it’s important to evaluate the page using the same criteria as all other pages, regardless of what type of page it is. This page also has a high quality SC, placed on a trustworthy website and has good reputation.
High quality category page https://www.raterhub.com/evaluation/result/static/a/GG/BackpacksT.png - The purpose of this page is to allow users to buy a school backpack. The page provides a lot of different backpack options, and some of them have user reviews. This is a well-known, reputable merchant, with detailed Customer Service information on the site. The SC features are particularly helpful. For example, the filters allow users to show results by categories such as color, style, and price. They have satisfying amount of high quality MC, good SC and positive reputation.
High quality category page https://www.raterhub.com/evaluation/result/static/e/PQexamples/2.5.2.png - The Company sells its own line of high end, fashionable baby and children’s furniture and accessories. It has a positive reputation as well as expertise in these specific types of goods. Many products sold on the site are unique to this company. They have satisfying amount of high quality MC, expertise in given field and positive reputation.
High quality product page https://www.raterhub.com/evaluation/result/static/a/GG/GPS.png - There is a very large quantity of MC on this page. Note that the tabs on the page lead to even more information, including many customer reviews. The tabs should be considered part of the MC. They have satisfying amount of high quality MC, good SC and positive reputation.
High quality product page https://www.raterhub.com/evaluation/result/static/a/GG/StandMixerTarget.png - The page from “Target” provides the manufacturer’s product specs, as well as original product information, over 90 user reviews, shipping and returns information, multiple images of the product, etc. Note: Some of the MC is behind links on the page (“item details,” “item specifications,” “guest reviews,” etc.). Even though you have to click these links to see the content, it is still considered MC.
Medium quality Wikipedia page: https://www.raterhub.com/evaluation/result/static/e/PQexamples/2.4.8.png
Medium quality “Custom 404” page https://www.raterhub.com/evaluation/result/static/e/PQexamples/3.6.6.png This page is on a well-known merchant website with a good reputation. However, this particular page displays the bare minimum of content needed to explain the problem to users, and the only help offered is a link to the homepage.
Lowest quality category page https://www.raterhub.com/evaluation/result/static/a/GG/shoesbuy.png This page is selling Nike Air Jordan shoes. When you look at the "Contact Us" page, it does not give the name of a company or a physical address, which also cannot be found anywhere else on the website. This amount of contact information is not sufficient for a shopping website. In addition, the "Shipping and Returns" page has the name of another company that seems to be unrelated. There are also official-looking logos at the bottom of the homepage, including the Better Business Bureau logo and Google Checkout logo, that don't appear to be affiliated with the website.
"...there are often situations calling for the use of PDFs...the parameters for grading PDF files are known to be different from HTML files...in the final result- PDFs on sites can successfully compete with HTML pages and rank very high, even in first places on the organic search results."
Norman Pongracz's insight:
Ranking PDF files in the SERPs
Introduction to PDF document optimisation and accessibility

When to use PDFs instead of HTML
When it comes to on-site content, it is always preferable to place content in HTML and not PDF. However, there are often situations calling for the use of PDFs, for example- user’s guides, forms that need to be downloaded by the user etc. It is important to realize that even in such situations, usually the use of PDFs does not necessarily mean that we must give up the strategic choice to place the content authority in HTML pages.
Search engines, and Google specifically, can crawl and index PDF files. As far as the location in the search results is concerned, PDFs can and do fully compete with HTML pages.
Although not publicly published by Google, the parameters for grading PDF files are known to be different from those for HTML files, mostly due to the large textual (and therefore keyword-rich) content volume of PDFs (in comparison to an average website's HTML pages). The difference in grading is intended to allow a fair comparison between HTML and PDF versions of content; the end result is that PDFs on sites can successfully compete with HTML pages and rank very high, even in first place in the organic search results.

The Best Practices in SEO for PDF Files
Best practices for PDFs in SEO include general on-page recommendations and accessibility recommendations, both of them ensuring that the content of the PDF files can be accessed by both the search engine and the user.

SEO Recommendations

Indexation: For several technical reasons, crawling and indexing PDFs takes search engines longer than HTML (usually on the scale of hours to days, but sometimes up to a month more). Therefore, one should encourage and speed up PDF indexing by including the address of a PDF in the website's sitemap file, as with any HTML page. It is also possible to use Google Search Console to submit the PDF for crawling ("fetch as Google") and, after the crawl, to submit the results for indexing.

Size limitation: As a general rule of thumb, it is advisable to create PDFs as small as possible, and to avoid sizes larger than 2.5 MB. Specifically for Google, PDFs are temporarily transformed to HTML during the crawl, and Google will only index a maximum of 2.5 MB from the temporary HTML file. If the temporary HTML is larger than 2.5 MB, Google will usually crawl the whole file but index only 2.5 MB of data (usually the first 2.5 MB).

Title and heading markup: Google crawls and indexes titles that are stylistically marked as titles (using headings), and utilises them to improve indexing and association with keywords. Therefore, it is important to use heading markup for titles when creating PDFs.

Links within PDFs: As previously mentioned, Google can index links within PDFs, and treats them as it would links in HTML. For this purpose, links must have a standard link structure (i.e. structured as <a href="/page2.html">link to page 2</a>). As it is not possible to mark links in a PDF with "nofollow" or "noindex" tags, if it is undesirable for a specific link to transfer authority, then it must not be placed in the PDF at all.

Usage of rich media: Google will not index rich media (including pictures of any kind) placed in PDFs. It is necessary to avoid placing text in images (same as in HTML pages). If a picture is to be indexed, it is possible to place a link to the picture in the PDF, and the crawler will then follow that link and index the picture (as a separate file from the PDF, not as part of its content).

PDFs produced with text from scanned images (OCR): As previously mentioned, search engines will not index text located in a picture. However, if the text was produced through OCR, it is still considered text, and there should be no problems with indexing.

Indexing PDFs while preventing cached versions in Google: If the PDF contains temporary content, or content that changes often, it may be desirable to prevent Google from keeping and displaying cached versions of files that are outdated or no longer exist. This can be achieved by implementing the X-Robots-Tag with a "noarchive" value in the PDF's HTTP response.

Avoid using password-protected PDFs: When creating a PDF, it is sometimes possible to add a password lock to prevent unauthorised access to the file. Obviously, locking the file with a password will prevent search engines from accessing it, so if indexing is desired, password protection must not be used.

Preventing content duplication: If, under any circumstances, a PDF file is available for indexing and there is at the same time an HTML page with the same (or highly similar) content, or other PDF files with the same (or highly similar) content, it is necessary to specify the preferred version for search engines in order to avoid content duplication penalties.
This can be achieved using the canonical tag (similar to HTML). However, it's important to remember that the tag has to be implemented in the header of the PDF's HTTP response. For further details on this subject, see the following link (and specifically the example at the bottom of the page for implementing canonical for PDFs): https://support.google.com/webmasters/answer/139066?hl=en. It's important to remember that such a canonical markup will only work if the PDF is available for indexing; otherwise, the search engine will never see the canonical request.
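A minimal Apache sketch of both HTTP-response techniques mentioned above (the filenames and URL are hypothetical; assumes mod_headers):

<Files "whitepaper.pdf">
  # Point search engines at the preferred HTML version of the same content
  Header add Link '<https://www.example.com/whitepaper.html>; rel="canonical"'
  # Prevent outdated cached copies of a frequently changing document
  Header set X-Robots-Tag "noarchive"
</Files>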
Accessibility recommendations

The accessibility level of a document can be scaled from completely inaccessible to greatly accessible. The way the different page elements and text are defined is of major importance to the accessibility of a document. Assistive technologies interpret the tags of a document and render content to the user accordingly.
One standard that explores all the things that are relevant to creating accessible PDFs is ISO 14289-1 (PDF/UA-1). The official international guidelines for creating accessible web content, WCAG 2.0, also include many of these.

Types of actions needed for PDF accessibility
It is important that all text in a document is tagged, be it paragraph text, a heading, a list, or similar. Not only do you make this evident visually but also clear to all users by providing the correct tagging, as this is what assistive technologies use. This includes:
- Markup of file language
- Markup to indicate reading direction
- Captions for images and photos
- Markup of tables (column titles, direction of reading)
- Creating alternatives for tables and/or complex diagrams
- Markup and ensuring readability of folios
- Presenting the page as a "single page"
Tagging elements of the page
- Language: There should be an overall definition of the language the document is written in. Furthermore, if lines or blocks of text within the document change language, then the text should be tagged separately.
- Reading order: The sequence in which a screen reader will render the page content depends on how the document was created. Therefore it is important to ensure that a document has a sensible reading order. Most remediation tools provide the ability to check the reading order of a document.
- Images: An image can have different purposes depending on how it is used in the document. Many images have a purely decorative purpose, and this purpose needs to be conveyed. This is done by giving it a definition of 'artifact', described further on. Other images may have some sort of function or convey important information, and therefore these need an alternative text stating this.
- Tables: When data tables are used it is important to tag their structure, as a minimum defining which are the column/row headings.
- Bookmarks: For many users the easiest and most accessible way to have a table of contents is to have bookmarks provided from the headings in the document. Stating these gives the user the option to have a panel open with the document containing the bookmarks.
- Title: As a minimum, documents should include basic information such as a document title. Providing the name of the author, as well as a description and some keywords, is also a good idea.
- Security settings: Make sure that the document is not locked in a way that makes assistive technologies unable to extract content and render it to the user.

It is also important to ensure that the colour of the background and the colour of text are in sufficient contrast to each other. The Web Content Accessibility Guidelines give guidance on this in terms of recommendations for text sizes and compliance level. Also avoid using references to content and information based solely on location: some users will receive the content in one long sequence, so for instance a 'box on the right' does not exist to them. Make sure to supplement this by also referring to a heading, for instance. Make sure that documents can be zoomed to enlarge text without it becoming difficult to read; one example of failure is when text becomes very pixelated. Images of text should be avoided, as they don't work well for several user groups, such as those with reading difficulties. Text in an image can be identified by the fact that the text cannot be highlighted.
What files need complex PDF accessibility?
- Company newsletters
- Communication documents
- Catalogues and brochures
- Annual reports
- Sustainable development reports

Checking the accessibility of your PDF file
Check the readability of your PDF file using JAWS (text or image). You can visually and aurally compare them to see if all of the information has been conveyed:
- Check that you have used an appropriate font
- Check that any words in capitals have the right accents on them
- Check the heading levels using JAWS (heading levels 1, 2, 3, etc.)
- Check the alternatives behind the photos, images, etc.
- Check that any diagrams that have been marked up as tables or lists of bullet points have been correctly understood
- Check that the links are clear and well indicated with JAWS
- Check the accessibility of the forms (explicit fields, possibility of filling in the fields and validating them; check that the tab key moves the cursor through the fields correctly)
- Check the readability of the cells in the tables and that they relate to a specific header
- Check the file language
- Check any changes in language (for example an English word in a French document); JAWS should be able to change language

Tools to export accessible PDFs and to test accessibility:
- Microsoft Word
- Adobe InDesign
- Quark Xpress
- JAWS screen reading software: freedomscientific.com/Downloads/JAWS - Using JAWS: http://webaim.org/articles/jaws/
- PAC 2.0, a free accessibility checker
- CommonLook PDF GlobalAccess, a commercial accessibility validator and remediation tool
References and guidelines
- Hubspot and Siteimprove white paper: How to create accessible PDFs: https://cdn2.hubspot.net/hubfs/321800/Content/eBooks_Guides_Whitepapers/EN_How-to-create-accessible-pdfs.pdf
- ISO 14289-1:2014 Document management applications: Electronic document file format enhancement for accessibility, Part 1: Use of ISO 32000-1: http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=64599&ICS1=35&ICS2=240&ICS3=30
- How to Meet WCAG 2.0: w3.org/WAI/WCAG20/quickref/Overview.php
- The Matterhorn Protocol 1.0: http://www.pdfa.org/2013/08/the-matterhorn-protocol-1-0/
- WebAIM: PDF Accessibility - Converting Documents to PDF: http://webaim.org/techniques/acrobat/converting
- Adobe: Creating accessible Adobe PDF files - A guide for document authors: https://www.adobe.com/enterprise/accessibility/pdfs/acro6_pg_ue.pdf
- PDF Accessibility: http://www.pdf-accessibility.com/accessibility-document-pdf-accessible/
- Royi Vaknin: PDF Files: SEO and Accessibility (Carmelon Digital Marketing): http://www.carmelon-digital.com/articles/seo_pdf_files/
- Reid Bandremer: SEO for PDFs (Lunametrics): http://www.lunametrics.com/blog/2013/01/10/seo-pdfs/
Structured data can help you to send the right signals to search engines about your business and content. But where do you start? Columnist Tony Edward has some suggestions.
Norman Pongracz's insight:
Google's John Mueller recently stated that the search engine giant may add structured data markup as a ranking factor (more info: http://searchengineland.com/google-may-add-structured-markup-data-to-ranking-algorithm-230402), so it is definitely worth the effort to implement schema markup on your website as it becomes more important to Google. Useful markup types are listed below (a sample JSON-LD snippet follows the list).
Organization Schema Markup https://schema.org/Organization
WebSite Schema Markup https://schema.org/website
Breadcrumbs Markup https://schema.org/BreadcrumbList
Site Navigation Schema Markup http://schema.org/SiteNavigationElement
Video Schema Markup https://schema.org/VideoObject
Schema Software Application Markup https://schema.org/SoftwareApplication
Schema Product & Offer Markup http://schema.org/Product and http://schema.org/Offer
Schema Rating Markup https://schema.org/Rating
Schema Article Markup https://schema.org/NewsArticle and https://schema.org/BlogPosting
Schema Local Business Markup https://schema.org/LocalBusiness and https://schema.org/PostalAddress
Event Schema Markup https://schema.org/Event
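To make the above concrete, here is a minimal JSON-LD sketch of the Organization markup; the company name, URL, logo path and social profiles are hypothetical placeholders, and JSON-LD is only one of the formats Google accepts (microdata and RDFa also work).

```html
<!-- Minimal Organization markup sketch; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Ltd",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/images/logo.png",
  "sameAs": [
    "https://www.facebook.com/example",
    "https://twitter.com/example"
  ]
}
</script>
```

Place the snippet in the page's head and validate it with Google's Structured Data Testing Tool before rolling it out.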
Google Webmaster Tools (GWT) is always a good place to start when optimising a site for search engines. It can give you an indication of site health, as it highlights any possible issues or problems your site might have. One important indicator is the number of pages you have indexed in Google …
Norman Pongracz's insight:
There have been numerous case studies showing how XML sitemaps benefit different sites with increased visits and revenue, simply by allowing crawlers to find the important pages faster.
Google itself published a study years ago called "Sitemaps: Above and Beyond the Crawl of Duty" (http://www2009.eprints.org/100/1/p991.pdf) that analysed multiple domains, such as Amazon and CNN, to demonstrate the effectiveness of sitemaps.
According to these statistics, Google finds over 70% of the important URLs through XML sitemaps and only about 30% of the URLs through crawling the interlinking structure (often referred to as "discovery"). We at Forward3D consider XML sitemaps an essential part of every website, and these studies show that Google relies significantly on the URLs and information listed in these documents. Hence the need for a careful XML sitemap implementation: crawlers will only trust well-defined directives.
XML sitemaps have been tested across multiple industries and domains of different sizes: Moz's case study on the UK-based Razoo (http://moz.com/blog/multiple-xml-sitemaps-increased-indexation-and-traffic) indicated that after the XML sitemap implementation, the number of ranking pages increased from 486 to 1,240, while the keywords sending organic search traffic more than doubled (from 548 to 1,347).
A case study on a large site (http://www.bruceclay.com/blog/how-to-improve-your-site-indexation-xml-sitemaps-case-study/) showed an increase in page indexation from 24% to 68% in the initial period, right after the XML sitemap was implemented; this resulted in significant improvements in SEO traffic.
Also, there have been smaller-scale experiments: reports showed that Google's crawl rate on the domain "Scaling Bits" (www.scalingbits.com/web/sitemaps) increased by 100% after the XML sitemap was implemented, driving up traffic by 30%. Finally, TechCrunch was the subject of a case study on an extensive SEO project that included an XML sitemap implementation, with very similar results: 30% additional traffic across the site (http://neilpatel.com/case-study-techcrunch/). A minimal sitemap sketch follows at the end of this note.
When it comes to New Look, we expect similar results. If pages are indexed properly, traffic can increase by as much as 30%, with a potential uplift in revenue of at least 5.25% (based on our initial forecast).
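For reference, here is a minimal XML sitemap sketch; the URLs, dates and priorities are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sitemap sketch; all values below are placeholders -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2016-01-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/dresses/red-midi-dress</loc>
    <lastmod>2016-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Reference the file with a "Sitemap: https://www.example.com/sitemap.xml" line in robots.txt and submit it in Google Webmaster Tools so indexation can be monitored.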
How do people really do local search? We looked at auto insurance in the USA, and found some surprising things about local SEO that folks in every industry should know.
Norman Pongracz's insight:
Case study on American auto insurance local queries by GetStat
The auto insurance industry was chosen because it offers localised products and services but isn't dependent on brick-and-mortar sales, and it includes one of the biggest American brands, Geico.
The study was based on syntax variants of 33 basic short-tail keywords related to auto insurance, each with a search volume greater than 100 per month.
Query Structure Matters
Every Query is Local
Big players have weaknesses as well
Google alerted webmasters late yesterday that it will let smartphone searchers know if it thinks a website has a “faulty redirect” in place that sends the searcher to your home page, not the page they clicked on.
Norman Pongracz's insight:
We’d like to spare users the frustration of landing on irrelevant pages and help webmasters fix the faulty redirects. Starting today in our English search results in the US, whenever we detect that smartphone users are redirected to a homepage instead of the page they asked for, we may note it below the result. If you still wish to proceed to the page, you can click “Try anyway.”
But Google’s not just warning searchers; there’s also help for webmasters. The “Crawl Errors” section of Webmaster Tools will offer specific information about faulty redirects affecting smartphone crawling.
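As an illustration of the fix, here is a hedged client-side sketch that sends smartphone users to the equivalent page on a separate mobile host instead of the homepage; the m.example.com host and the user-agent test are assumptions for a separate-mobile-URL setup, and a server-side 302 redirect is generally preferable to JavaScript.

```html
<!-- Sketch: preserve the requested path when redirecting smartphones,
     so users never land on the bare homepage by mistake. -->
<script>
  var isSmartphone = /Android.*Mobile|iPhone|iPod/i.test(navigator.userAgent);
  if (isSmartphone) {
    window.location.replace(
      'http://m.example.com' + window.location.pathname + window.location.search
    );
  }
</script>
```

Whatever the mechanism, the key point is a 1:1 mapping: every desktop URL should redirect to its mobile equivalent, and pages without an equivalent should simply not redirect.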
An overview of debugging tracking code, with information on common debugging tools and a deeper look into Chrome GA Debugger
Norman Pongracz's insight:
Debugging GA with the Chrome GA Debugger and other tools, without code access (summary)
Common Debugging Tools
Chrome GA Debugger / ga_debug.js (see the sketch after this list)
Fiddler, the browser-independent HTTP debugger and manipulator
Rewrite the HTML with FiddlerScript
Additional tools: FiddlerScript add-ons, such as the "Syntax-Highlighting Add-Ons", available from http://www.telerik.com/fiddler/add-ons
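To show the debug-build idea in practice, here is a minimal sketch using the standard Universal Analytics loader pointed at analytics_debug.js (classic ga.js has an equivalent ga_debug.js build); UA-XXXXX-Y is a placeholder property ID.

```html
<script>
  // Standard analytics.js loader, pointed at analytics_debug.js so every
  // hit and its payload are traced to the browser console.
  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
  })(window,document,'script','//www.google-analytics.com/analytics_debug.js','ga');

  window.ga_debug = { trace: true };  // extra-verbose output, like the GA Debugger extension
  ga('create', 'UA-XXXXX-Y', 'auto'); // placeholder property ID
  ga('send', 'pageview');
</script>
```

The Chrome GA Debugger extension achieves the same result without touching the page, which is what makes it useful when you have no code access.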
One of the most prevalent design flaws in B2B websites is the use of carousels (or sliders) on the homepage. Carousels are an ineffective way to target user personas, which ends up hurting the site’s SEO and usability. In fact, at the recent Conversion Conference in Chicago, about 25% of the speakers mentioned carousels — […]
Norman Pongracz's insight:
Problem with B2B Homepage Sliders (Summary)
With B2B websites, carousels seem to only be used for one of three reasons: branding, thought leadership or product/service promotion.
Alternating headings - In many of the carousels, the headings in the slider were wrapped in an h1 tag. Basic SEO best practice states that there should be only one h1 tag per page, and it should appear before any other heading tag. The problem with using an h1 (or any heading tag) in the carousel is that every time the slide changes, the h1 changes. A page with five slides in the carousel will have five h1 tags, which greatly dilutes the keyword relevance.
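A minimal markup sketch of the fix (the headings and class names are invented for illustration): keep a single page-level h1 and demote the slide captions to non-heading elements.

```html
<!-- Sketch: one h1 per page; slide captions are not headings. -->
<h1>Acme B2B Analytics Platform</h1>

<div class="carousel">
  <div class="slide">
    <p class="slide-caption">Trusted by 500+ enterprises</p>
  </div>
  <div class="slide">
    <p class="slide-caption">Read our latest whitepaper</p>
  </div>
</div>
```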
Norman Pongracz's insight:
Infinite scroll search-friendly recommendations
With infinite scroll, crawlers cannot always emulate manual user behaviour - like scrolling or clicking a button to load more items - so they don't always access all individual items in the feed or gallery. To make sure that search engines can crawl individual items linked from an infinite scroll page, your content management system should produce paginated series (component pages) to go along with your infinite scroll.
Implement replaceState/pushState on the infinite scroll page so that the URL in the address bar is updated as the user moves through the component pages (whether to use one or both is up to you and your site's user behaviour). A sketch follows below.
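Here is a hedged JavaScript sketch of the pushState part, assuming the component pages live at ?page=N; the URL scheme and the onNewPageLoaded hook are invented for illustration.

```html
<script>
  // Sketch: when the infinite scroll has loaded the items of component
  // page N, update the address bar to the matching paginated URL.
  function onNewPageLoaded(pageNumber) {        // hypothetical hook in your scroll code
    var componentUrl = '?page=' + pageNumber;   // assumed paginated URL scheme
    history.pushState({ page: pageNumber }, '', componentUrl);
  }

  // Let the back/forward buttons step through previously seen component pages.
  window.addEventListener('popstate', function (event) {
    if (event.state && event.state.page) {
      // Re-render or scroll to the items of event.state.page here.
    }
  });
</script>
```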
Beginner's Guide to Universal Analytics. Learn some quick tips to get started, including how to create custom dimensions and custom metrics.
Norman Pongracz's insight:
Difference between Universal Analytics (UA) and Google Analytics (GA) - Excerpt
Data collection and integration - UA provides more ways to collect and integrate different types of data (across multiple devices and platforms) than GA, giving a better understanding of the relationship between online and offline marketing channels.
Data processing - UA is visitor-based instead of visit-based.
Custom dimensions and metrics - UA allows 'custom dimensions' and 'custom metrics' to collect the type of data GA does not automatically collect (like phone call data, CRM data, etc.); GA uses 'custom variables' instead. Custom variables are still available in UA (though not sure for how long). The user interface only changes when custom dimensions and metrics are used (see the sketch after this list).
Tracking code - UA uses the analytics.js tracking code, whereas GA uses ga.js.
Remarketing - UA does not support remarketing yet.
Referrals processing - in UA, returning referrals are counted as two web sessions.
Cookies - while GA can use up to 4 cookies (_utma, _utmb, _utmz and _utmv) to collect visitors' usage data, UA uses only 1 cookie (called _ga).
Privacy and data usage - you need to give your end users proper notice and get consent about what data you will collect via UA, and give them the opportunity to opt out of being tracked. That means you may need to change your privacy and data usage policies. Google recommends the Google Analytics opt-out browser add-on for users who want to block Google Analytics.
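For the custom dimensions point above, a minimal analytics.js sketch; the dimension index, its value and UA-XXXXX-Y are placeholders, and the dimension must first be registered in the UA property's admin settings.

```html
<script>
  // Assumes the standard analytics.js loader snippet is already on the page.
  ga('create', 'UA-XXXXX-Y', 'auto');
  ga('set', 'dimension1', 'logged-in');  // hypothetical user-status dimension
  ga('send', 'pageview');                // the dimension is sent with this hit
</script>
```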
Theme building through directory-based and virtual link silos.
Norman Pongracz's insight:
Possible Alternatives to Eliminate Excessive Navigation or Cross Linking (Excerpt)
When it is impossible to remove menus that contradict subject-relevant categories, use technology instead to block search engine spiders from indexing those specific elements, in order to maintain strong subject relevance.
IFRAMEs: If you have repetitive elements, add an IFRAME to isolate the object in one location and eliminate the subject confusion caused by interlinking. The contents of an IFRAME are an external element that is not part of the displaying page, or of any page other than the HTML file holding the IFRAME contents itself. As such, IFRAME contents do not count as part of any page displaying the IFRAME code (a markup sketch follows below).
Ajax: Content included dynamically in a web page via Ajax cannot be indexed by search engines, providing a convenient haven for content, menus and other widgets intended for users' eyes only.
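A minimal markup sketch of the IFRAME approach (the file name is invented): the repeated menu lives in its own standalone HTML file and is pulled in wherever needed, so its links are not counted as part of the host pages.

```html
<!-- Sketch: isolate a repetitive cross-linking menu in its own document.
     /widgets/global-menu.html is a hypothetical standalone file. -->
<iframe src="/widgets/global-menu.html"
        title="Site-wide menu"
        width="200" height="400">
</iframe>
```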