Technical and on-page SEO
33 Link Building Questions Answered

When it comes to link building idea generation, the sky's the limit! In today's post, Rhea Drysdale offers her tips for best practices and a philosophical approach to link building that will help bring your ideas to life.
Norman Pongracz's insight:

Excellent collection of follow-up questions from a Mozinar related to link building. The most interesting ideas are:

- Search engines do not devalue blog platforms just based on the quantity of spam present on them. The algorithms rate each website on its own merits and devalue it only if necessary.

- If a company does not produce content, you can look at its target audience to identify potential partnerships, testimonials, case studies, product reviews, etc. If the company is purely promotional, you could arrange interviews for the founder(s), have them speak locally or nationally, or invest in an online customer service platform for the products that builds up product-specific content and long-tail queries.

- When working in a heavily spammed niche, it is still worth focusing on white-hat techniques in the long term, due to OODA loops (the observe-orient-decide-act adaptation process).

 


Why Google is mining local business attributes

What are business attributes, and why should local businesses care? Columnist Adam Dorfman explores.
Norman Pongracz's insight:
TL;DR: Google has started asking its users to confirm business attributes on Google Maps, such as "wheelchair accessible" or "offers take-out". These attributes are becoming increasingly important in local search because they trigger the "micro-moments" of the user journey - the moments that influence users' decisions the most - which often lead to actual offline visits. Google also noted that the number of "near me" searches has increased 146 percent year over year, and that 88 percent of these "near me" searches are conducted on mobile devices. The recent Google My Business update also supports changing attributes for businesses.

-- 

Now, when checking into places on Google Maps, you may have noticed that Google prompts you to volunteer information about the place you're visiting to refine its attributes. Attributes consist of descriptive content such as the services a business provides, the payment methods accepted or the availability of free parking - details that may not apply to all businesses. Attributes are important because they can influence someone's decision to visit you.

Many publishers are trying to incentivise the adding of attributes via programs like Google's Local Guides, TripAdvisor's Badge Collections and Yelp's Elite Squad, because having complete, accurate information about locations makes each publisher more useful.

Basic info such as name, address and phone number are the core identifiers for businesses, but attributes increasingly contribute to local visibility. According to seminal research published by Google, mobile has given rise to "micro-moments", or times when consumers use mobile devices to make quick decisions about what to do, where to go or what to buy. (More on micro-moments: https://www.thinkwithgoogle.com/collections/micromoments.html)

Google noted that the number of "near me" searches (searches conducted for goods and services nearby) has increased 146 percent year over year, and 88 percent of these "near me" searches are conducted on mobile devices.

With the recently released Google My Business API update to version 3.0, Google also gave businesses that manage offline locations a powerful competitive weapon: the ability to manage attributes directly.

16 SEO Experiments And Their (Surprising) Results

Norman Pongracz's insight:
------------ 
 1. Click-Through-Rate Affects Organic Rankings 

It appears that click volume (or relative click-through rate) does impact search ranking, based on Moz testing (http://www.slideshare.net/randfish/mad-science-experiments-in-seo-social-media/88-ExperimentsFuture).

The test: A tweet by Rand Fishkin pointed his followers to some basic instructions which asked participants to search "the buzzy pain distraction" and then click the result for sciencebasedmedicine.org. Over the 2.5-hour period following Rand's initial tweet, a total of 375 participants clicked the result. The effect was dramatic: sciencebasedmedicine.org shot up from number ten to the number one spot on Google.

How do you factor CTR into your SEO strategy?
/ Craft compelling Page Titles and Meta Descriptions. 
/ An established and recognisable brand will attract more clicks. 
/ Optimise for long clicks 
/ Use genuine tactics – the effects of a sudden spike in CTR will only last a short time. When normal click-through rates return, so will your prior ranking position.

------------

2. Mobilegeddon Was Huge 

The test: In the week of April 17th, 2015 (pre-Mobilegeddon), Stone Temple pulled ranking data on the top 10 results for 15,235 search queries. They pulled data on the same 15,235 search queries again in the week of May 18th, 2015 (after Mobilegeddon). Source: https://www.stonetemple.com/mobilegeddon-may-have-been-bigger-than-we-thought/

Findings on Mobilegeddon:
/ Non-mobile-friendly web pages lost rankings dramatically. In fact, nearly 50% of all non-mobile-friendly webpages dropped down the SERPs.
/ Mobile-friendly pages (overall) gained in ranking.

 ------------ 

 3. Link Echoes: How Backlinks Work Even After They Are Removed 

/ It does appear that some value from links (perhaps a lot) does remain, even after the links are removed.
/ These higher rankings remained for many months after the links were removed. Source: https://moz.com/blog/link-echoes-ghosts-whiteboard-friday

Implications 
/ Quality links are worth their weight in gold. Backlinks will continue to give you a good return. The 'echo' of a vote once cast (as shown by this test) will provide benefit even after the link is removed.
/ The value of links DOES remain for some time. So, before you get tempted into acquiring illegitimate links, consider whether you're ready to have them remain a footprint for months (or even years) ahead.

------------ 

4. You Can Rank With Duplicate Content 

The test: From time to time a larger, more authoritative site will overtake smaller websites in the SERPs for their own content. This is what Dan Petrovic from Dejan SEO decided to test in his now-famous SERP hijack experiment (https://dejanseo.com.au/hijack/). Using four separate webpages, he tested whether content could be 'hijacked' from the search results by copying it and placing it on a higher-PageRank page, which would then replace the original in the SERPs.

Results: 
/ In all four tests the higher PR copycat page beat the original. 
/ In 3 out of 4 cases the original page was removed from the SERPs 

Search Console warning on hijacked content: Shortly after running the experiment, Dejan SEO received a warning message inside their Google Search Console account. The message cited the dejanseo.com.au domain as having low-quality pages, an example of which is 'copied content'. Around the same time, one of the copycat test pages also stopped showing in the SERPs for some terms. This forced Dan to remove the test pages in order to resolve the quality issue for his site. So it seems that whilst you can beat your competitors with their own content, it's definitely not a good idea to do so.

------------

5. Rich Answers [Number One Is NOT The ‘Top’ Spot] 

Rich Answers: Rich Answers are on the rise and the results you see today are a far cry from the ten blue links of the past. Rich Answers are the ‘in search’ responses to your queries you’ve probably been seeing more of in recent times. They aim to answer your query without you having to click through to a website. 

/ According to the study by Erez Barak at Optify (http://www.slideshare.net/optify/optify-organic-click-through-rate-webinar-slides), in 2011 the top ranking website would receive as much as 37% of total clicks. But, with the growth of Rich Answers that’s all changing. The ‘number one’ organic result is being pushed further and further down the page. Click volumes for the ‘top spot’ are falling. 

The test: But just how much are Rich Answers affecting click-through rates? Confluent Forms (http://www.confluentforms.com/) got a Rich Snippet result listed for their website, and CTRs went up once the Rich Answer was added. They went down when the Rich Answer was removed. Rich Answers are intended to solve queries from within the search results, yet they can still send additional traffic to your site, just as they did for Confluent Forms.

SEO best practice: Rich Answers are generally provided for question-based search queries (a minimal markup sketch follows the list below).
/ Identify a simple question – Make sure the question is on topic. You can check this by using a Relevancy tool such as nTopic. 
/ Provide a direct answer – Ensure that your answer is simple, clear and useful for both users and search engines. 
/ Offer value added info – Aside from your concise response to the question, include more detail and value. Be sure not to just re-quote Wikipedia since that’ll not get you very far.
/ Make it easy for users and Google to find – this could mean sharing it with your social media followers or linking to it from your own or third-party websites.
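As an illustration of the question/answer pattern above (the question, product name and copy are invented placeholders), a page section aimed at a Rich Answer might be structured like this:

<h2>How long does the Example Widget take to install?</h2>
<p>The Example Widget installs in around 15 minutes using only the bundled hex key.</p>
<p>Step-by-step detail, common pitfalls and links to deeper resources follow the direct answer here.</p>

The second paragraph is where the value-added information from the checklist above would live.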

------------ 

6. Using HTTPS May Actually Harm Your Ranking

The test: The HTTPS study (https://www.stonetemple.com/how-strong-is-https-as-a-ranking-factor/) tracked rankings across 50,000 keyword searches and 218,000 domains. They monitored those rankings over time and observed which URLs in the SERPs changed from HTTP to HTTPS. Of the 218,000 domains being tracked, just 630 (0.3%) of them made the switch to HTTPS. These sites (the ones that switched to HTTPS) actually lost rankings at first. Later they recovered (slowly) to pretty much where they started.

Results: It appears that HTTPS (despite Google wanting to make it standard everywhere on the web) has no significant ranking benefit for now and may actually harm your rankings in the short term. 

------------ 

7. Robots.txt NoIndex Doesn’t (Always) Work 

Intro: The common approach adopted by webmasters is to add a noindex directive inside the robots meta tag on a page. When search engine spiders crawl that page, they identify the noindex directive in the head of the page and remove the page from the index. By contrast, a Noindex directive placed inside the robots.txt file of a website should both stop the page from being indexed and stop the page from being crawled.

The test and results: However, according to tests (https://www.stonetemple.com/does-google-respect-robots-txt-noindex-and-should-you-use-it/), Google did not remove any of the pages immediately, despite the websites' robots.txt files being crawled several times per day. What we can say is that (despite Google showing some support for robots.txt noindex) it is slow to work, and sometimes doesn't work at all.
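For reference, the two approaches look like this (the page path is a placeholder). The reliable method is the noindex directive in the on-page robots meta tag:

<head>
  <meta name="robots" content="noindex">
</head>

The variant tested above is a plain directive line in the robots.txt file itself, e.g. "User-agent: *" followed by "Noindex: /example-page/", which is the form that proved slow and unreliable.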

------------ 

8. Exact Match Anchor Text Links Trump Non Anchor Match Links 

The test and results: What happens when you point 20 links at a website, all with the same keyword-rich anchor text? Your rankings skyrocket, that's what! In a series of three experiments, Rand Fishkin tested pointing 20 generic anchor text links at a webpage versus 20 exact match anchor text links. In each case the exact match anchor text increased the ranking of the target pages significantly, and in 2 out of 3 tests the exact match anchor text websites overtook the generic anchor text websites in the results.


 ------------ 
9. Link To Other Websites To Lift Your Rankings 

The test: Shai Aharony and the team at Reboot put this notion to the test in their outgoing links experiment. Source: http://www.rebootonline.com/blog/long-term-outgoing-link-experiment

For the experiment, Shai set up 10 websites, all with similar domain formats and structure. Each website contained a unique 300-word article which was optimised for the made-up word "phylandocic". Prior to the test, the word "phylandocic" showed zero results in Google.

In order to test the effect of outbound links, 3 followed (dofollow) links were added to 5 of the 10 domains. The links pointed to highly trusted websites:
/ Oxford University (DA 92) 
/ Genome Research Institute (DA 85)
/ Cambridge University (DA 93) 

Results: EVERY single website with outbound links outranked those without. This means your action step is simple. Each time you post an article to your site, make sure it includes a handful of links to relevant and trustworthy resources. 

------------ 
10. Nofollow Links Increase Your Ranking 

The test: Another experiment from the IMEC Lab (http://www.slideshare.net/randfish/mad-science-experiments-in-seo-social-media/73-12345678910) was conducted to answer the question: do nofollowed links have any direct impact on rankings? Since the purpose of using a "nofollow" link is to stop authority being passed, you would expect nofollow links to have no (direct) SEO value. This experiment suggests otherwise.

Results: In the first of two tests IMEC Lab participants pointed links from pages on 55 unique domains at a page ranking #16. After all of the nofollow links were indexed, the page moved up very slightly for the competitive low search volume query being measured. In both tests the websites improved their rankings significantly when nofollow links were received. 
/ The first website increased 10 positions. 
/ The second website increased 4 positions. 

------------ 

11. Links From Webpages With Thousands of Links Do Work

There is a belief in SEO that links from webpages with many outgoing links are not worth much: the more outbound links a page has, the less value it passes to your site. This theory is reinforced by the notion that directories and other lower-quality sites with many outbound links should not provide significant ranking benefit to the sites they link to. In turn, links from web pages with few outbound links are supposedly more valuable.

The test and results: Dan Petrovic tested this in his PageRank Split Experiment (https://dejanseo.com.au/pagerank-split-experiment/).

In his experiment Dan set up 2 domains (A and B). Both domains were .com and both had similar characteristics and similar but unique content. The only real difference was that during the test Website B was linked to from a site which is linked to from a sub-page on http://www.debian.org (PR 7) which has 4,225 external followed links.

Immediately after Website B was linked to from the PR 7 debian.org page (via the bridge website), Website B shot up in the rankings, eventually reaching position 2. And, as per Dan's most recent update (3 months after the test), Website B maintained its position, held off the top spot only by a significantly more authoritative PageRank 4 page. Website A (which had not been linked to) held a steady position for a while, then dropped in ranking. So it appears that links from pages that have many outbound links are in fact extremely valuable.

------------ 

12. Image Links Work (Anchor Text Proximity Experiment) 

The test: The experiment (https://dejanseo.com.au/anchor-text-proximity-experiment/) was designed to test the impact of various link types (and their context) on search rankings. To conduct the test Dan registered 4 almost identical domain names: 
http://********001.com.au 
http://********002.com.au 
http://********003.com.au 
http://********004.com.au

Each of the 4 domains was then linked to from a separate page on a well-established website. Each page targeted the same exact phrase, but had a different link type pointing to it: 

001: [exact phrase] - Used the exact target keyword phrase in the anchor text of the link. 
002: Surrounding text followed by the [exact phrase]: http://********002.com.au - Exact target keyword phrase inside a relevant sentence immediately followed by a raw http:// link to the target page. 
003: Image link with an ALT as [exact phrase] - An image linking to the target page which used the exact target keyword phrase as the ALT text for the image. 
004: Some surrounding text with [exact phrase] near the link which says click here. - This variation used the junk anchor text link “click here” and the exact target keyword phrase near to the link. 

Results: Unsurprisingly, the exact match anchor text link worked well. But most surprisingly, the ALT text based image link worked best. And what about the other two link types? The junk anchor ("click here") and the raw link ("http://...") results did not show up at all.

Best practices for SEO: This is just one isolated experiment, but it suggests that image links work really well. Consider creating image assets that you can use to generate backlinks. The team at Ahrefs put together a useful post about image asset link building here: https://ahrefs.com/blog/build-links-with-images/
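To make the winning link type concrete, an image link that carries the target phrase in its ALT attribute looks something like this (the URL, file name and phrase are placeholders):

<a href="http://www.example.com/target-page/">
  <img src="/assets/keyword-infographic.png" alt="exact target keyword phrase">
</a>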

------------ 

13. Press Release Links Work

SEO Consult put Cutts' claim to the test by issuing a press release which linked to Matt Cutts' blog with the anchor text "Sreppleasers" (an anagram of 'press release'). The term is not present anywhere on Cutts' website; however, when searching for the term, Matt's blog does show up.


------------ 

14. First Link Bias 

The test: The hypothesis is that if a website is linked to twice (or more) from the same page, only the first link will affect rankings.

In order to conduct the test SEO Scientist (http://www.seo-scientist.com/first-link-counted-rebunked.html) set up 2 websites (A and B). Website A links to website B with two links using different anchor texts. 

Test Variation 1 - The websites were set up, then after the links got indexed by Google, the rankings of site B were checked for the two phrases. Result: Site B ranked for the first phrase and not for the second phrase. 

Test Variation 2 - Next, the positions of the links to site B were switched. Now the second phrase appears above the previously first phrase on site A, and vice versa. Once Google had indexed the change, rankings were again checked for website B. Result: Site B disappeared from the SERPs for the new second phrase (previously first) and appeared for the new first phrase (previously second).

Test Variation 3 - To check this was not some anomaly, in the third test variation the sites were reverted back to their original state. Once the sites were re-indexed by Google, the rankings of website B were checked again. Result: Site B reappeared for the initially first phrase and disappeared again for the initially second phrase. Nofollowing the link: SEO Scientist then made the first link "nofollow", and still the second link was not counted!

SEO best practice: The lesson from this experiment is clear. If you are "self-creating" links, ensure that your first link is to your most important target page.

------------ 

15. The Surprising Influence of Anchor Text on Page Titles

The test: A few years ago Dejan SEO set out to test what factors Google considers when creating a document title where a title tag is not present. They tested several factors including domain name, header tags and URLs – all of which did influence the document title shown in search results. But what about anchor text?

Dan Petrovic put it to the test in this follow up experiment (https://dejanseo.com.au/title-rewriting-experiment/). His experiment involved several participants linking to a page on his website using the anchor text “banana”. The page being linked to had the non-informative title “Untitled Document”. 

Results: During the test Dan monitored three search queries, for which the document title miraculously showed as "banana". The test goes to show that anchor text can influence the document title shown by Google in search results.

 ------------ 

16. Negative SEO: How You Can (But Shouldn't) Harm Your Competitors' Rankings

The test and results: In an attempt to harm search rankings (and prove negative SEO exists), Tasty Placement (https://www.tastyplacement.com/infographic-testing-negative-seo) purchased a large number of spam links which they pointed at their target website, Pool-Cleaning-Houston.com. The site was relatively established and, prior to the experiment, ranked well for several keyword terms including "pool cleaning houston" and other similar terms. A total of 52 keywords' positions were tracked during the experiment.

They bought a variety of junk links for the experiment at very low cost: a batch of comment links, which had no effect at all; 7 days later, forum post links, which were followed by a surprising increase in the site's ranking from position 3 to position 2 (not what was expected at all); and another 7 days after that, sidebar links, which resulted in an almost instant plummet down the rankings. Aside from the main keyword, a further 26 keywords also moved down noticeably.

Making Angular sites indexable


Angular injects HTML into an already loaded page, meaning that clicking on a link doesn't reload the page. However, AngularJS causes some indexation issues on websites. Deepcrawl has advised on some best practices for solving these indexation issues.

Norman Pongracz's insight:

Making Angular sites indexable

 

Angular JS is a framework for building dynamic web apps that uses HTML as a base language. Put simply, Angular injects HTML into an already loaded page, meaning that clicking on a link doesn't reload the page; the framework simply injects a new set of HTML to serve to the user. This means the page doesn't have to reload, the website is significantly faster, and the developer saves time as considerably less code has to be written.

 

However, Angular JS causes many indexation concerns. Deepcrawl helped build an indexable Angular site for TransferWise: http://swift-bic.transferwise.com/en

 

Recommendations to make an Angular site indexable

 

1. Remove the forced hashes that Angular sets by default.
The URL structure should match the most common path that the user follows to reach a page:
- com/
- com/category/
- com/category/page/
By default, however, Angular sets your pages up as such:
- com
- com/#/category
- com/#/page

 

Hash bangs allow Angular to know which HTML elements to inject with JS. Hash bangs can be removed by configuring $locationProvider: in Angular, the $location service parses the URL in the address bar and makes changes to your application, and vice versa. We have to use the $locationProvider module and set html5Mode to true (see the combined sketch after point 3).

 

2. One might have issues with relative URLs: writing these as <a href="en/countries/united-kingdom">…</a>, when they should have been <a href="/en/countries/united-kingdom">…</a>.

Although everything can seem fine for the user, when a bot crawls this, the strings in the links will be appended to whatever the URL already was. So, for example, the homepage link in the main navigation would append an extra "/en" to the URL, rather than just pointing to itself. This means that crawling the site gives you an infinite list of URLs with more and more subfolders. (Just adding this as a side note as it is something you might want to test for.)

 

To link around your application using relative links, you will need to set a <base> in the <head> of your document. With HTML5 mode set to true, relative links should resolve automatically.

 

3. Rendering

 

If your server can’t handle the crawlers’ requests, this can result in an increase in server errors seen by the search engine. One solution to this is to pre-render in advance server-side, so that when crawlers reach the server, the page is already rendered.
An alternative method is to add the fragment meta tag to your pages. When this is present, Google passes a URL string parameter that looks like ?_escaped_fragment_, which redirects Googlebot to the server in order to fetch a pre-rendered version.
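A minimal sketch pulling points 1-3 together (the module name "app" is an assumption, not DeepCrawl's actual implementation, and the fragment meta tag is only needed if you rely on the escaped-fragment prerendering scheme):

<head>
  <base href="/">
  <meta name="fragment" content="!">
  <script>
    angular.module('app', [])
      .config(['$locationProvider', function ($locationProvider) {
        // Drops the /#/ from URLs so they match the com/category/page/ structure above
        $locationProvider.html5Mode(true);
      }]);
  </script>
</head>

Server-side, the clean URLs still need to be routed back to the application's index page, so that a direct visit (or a crawler request) to a deep URL does not return a 404.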

 

References:
- https://www.deepcrawl.com/knowledge/best-practice/angular-js-and-seo/
- https://scotch.io/quick-tips/pretty-urls-in-angularjs-removing-the-hashtag


When to Hide Content Behind Forms and When to Give Content Away

Understand your users’ intents and stage in the sales funnel before you gate content.
Norman Pongracz's insight:

Should you gate content — that is, keep white papers, case studies, or e-books behind a form that becomes the gate-keeper for allowing users access?

 

Traditional websites have relied on heavy forms to find and convert leads, even at the high risk of losing potential customers. Gated content is particularly common on B2B sites.

 

There are situations, however, when people are more likely to fill in such forms. Mapping content to the user’s journey will help you determine whether or not to gate content on a case by case basis. The type of content as well as the implementation of the actual “gate” also affect the users’ willingness to go past the gate and fill in the lead-generation form.

 

When Not to Gate
Content such as articles and blog posts should not be gated if your main goal is to establish stronger thought leadership, increase site traffic, and improve SEO. Search engines usually can’t see content behind gates, so it’s best to keep content within view if you want to make it findable.
Additionally, content that is meant to increase awareness or answer fundamental product questions should remain ungated as well. Early in the buying cycle, people need to understand what the thing does and how it benefits them.
Gating content prematurely creates tension and distrust. Many organizations make the mistake of placing items such as case studies, FAQs, and product specifications behind gates. These content assets don’t usually belong behind gates.
People in the initial stages of the buying cycle have lower commitment and a higher propensity to abandon forms than people in later stages. At low-commitment stages, one way to shine is to immediately appear transparent and courteous. This is your chance to initiate the conversation and make a good first impression. If users perceive value, they will be more inclined to move the relationship forward and provide you with their personal details later.

 

When to Gate
People are more willing to risk offering their personal information when they perceive your content as valuable and unique. Sometimes it’s appropriate to gate high-value content in resources such as research papers, webinars, and training videos. The challenge for organizations is to determine what content visitors consider valuable enough to be worth their personal information.
Site visitors are most apt to complete forms when they can’t get the information elsewhere and when the purchase intent is high. People do expect to have to answer a few questions in exchange for free trials, quotes, downloads, webinars, and consultation requests.

If you decide to gate content, make sure you:

 

1 / Provide a reasonable level of content outside of the gate to demonstrate the value of your offering. Prove your worth before asking for something in return. Use the reciprocity principle to motivate engagement. Placing the gate within the content could be a viable option. For example, give people a list of tips but save the most critical ones for after the reader completes the form.
2 / Balance SEO with lead generation. Keep in mind, locking your best content behind gates will significantly diminish your search rankings. It’s no good to have great content if no one discovers it. Landing pages and gateway pages can improve SEO and increase user engagement by reassuring users that they are in the right place and by setting proper expectations.
3 / Find the right moment in the workflow to gate content. Do it when people are ready to have a conversation with you about your services. Determine where users are in the sales funnel and tailor your communication to the buyer’s state and commitment level.
4 / Keep the questions short and targeted. Studies show that shorter forms have higher conversion rates. Only ask for essential information that you can use now and leave out questions that merely satisfy some vague curiosity: every time you cut a question from the gating form, you’ll get more responses to the remaining questions. A single question (such as a request for an email address) is low risk for users and appropriate especially during the initial phases. The less work required to access content, the more willing people are to exchange information.
5 / Consider employing progressive profiling to nurture the relationship. Rather than asking people to complete a long and tedious profile form, collect information about each prospect over time, by asking different questions that are customized to the situation and buyer’s intent.
6 / Make sure you have stellar content behind gates. Users are more willing to give personal information when they trust the content quality. Your challenge is to find out what content people value and to make it consistently remarkable.
7 / Ensure that people understand the value of your content before having to pass through the gate. If users have downloaded some of your gated content before, then their level of satisfaction on that earlier occasion will dictate whether they’ll try to do it again. For new users, you must work harder to increase their comfort. Providing a clear summary or list of benefits could lower their resistance to completing the form.
8 / Protect the user’s inbox. Once people trust you with their email address, use it respectfully.


Beyond Rich Snippets: Semantic Web Technologies for Better SEO

Google can reward structured data use with rich snippets. But semantic web technologies can improve search visibility in other ways too.
Norman Pongracz's insight:

The benefits of employing structured data and associated semantic web technologies extend past rich snippet generation.

 

Some history

 

OG and Schema
The Open Graph protocol (2010) grew out of the Facebook Platform and enabled developers to integrate web pages into the social graph (a graph of the relationships between internet users). The introduction of schema.org (2011), though, provided webmasters with the first general-purpose set of schemas that were officially sanctioned by the search engines.

 

Knowledge graph
Google introduced its Knowledge Graph in 2012, which draws on a number of structured data sources to populate this knowledge base that currently resides alongside Google's main search results. Just as Google leaned on structured data sources to build its knowledge graph, Facebook Platform technologies facilitated the release of Facebook Graph Search in 2013.

 

Data highlighter
Around the time that the Knowledge Graph events vertical started to appear, Google introduced the Data Highlighter, which allows webmasters to visually match visible website information with properties supported by the Highlighter. In other words, it provides a mechanism to provide Google with structured information about a resource without marking it up in the code.
The first (and at time of writing, only) content type supported by the Highlighter? Events.

 

Structured data and eCommerce

Ecommerce-related structured data has changed things. Using schema.org, not only can ecommerce sites expose in markup the same information that is supplied in feeds, but websites can now mark up more detailed information about their products, offers and services than is supported by product feeds.

Structured data helps the search engines better understand your site. Breadcrumb-rich underlying code, for example, helps the search engines more clearly understand the hierarchy of your site and the relationships between pages.

Well-executed structured data can help provide a more consistent and positive experience for website users, whether their exposure to your content is through search results, social media or third-party applications.
Employing structured data can help developers and optimization specialists see what underlies a resource through a data lens – that is, as related data points and data values, rather than disconnected widgets and disparate pieces of information. For example, a page's "like" button seen through this lens is not generically a widget for Facebook, but a mechanism to provide Facebook users and Open Graph consumers with precisely crafted information.

 

Data fidelity as a trust-building measure: Consistency of data is important for search engines. In particular, as this principle applies to structured data, search engines will trust your content more if it can see that your visible content aligns with the data provided in your markup.

 

http://www.seoskeptic.com/wp-content/uploads/2013/03/product-data-fidelity.png

 

An additional point of data reference for the search engines are XML product feeds. The search engines will accord greater trust to ecommerce pages if the product feed, markup and visible content are in sync.
Note: Consistency is, in fact, almost certainly one of the reasons that the search engines put their weight behind schema.org, which was designed for attribute-based markup of visible content, rather than opting for Open Graph-like invisible metadata (remember <meta> keywords?). This is also the reason Google advises against marking up non-visible content, except in situations where a very precise data type is required but is not available on the page, such as a numeric representation of review star ratings, or event durations in ISO 8601 date format.
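As a small illustration of attribute-based, visible markup (the product name and price are invented), schema.org microdata simply wraps the content users already see:

<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Example Trail Running Shoe</h1>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="priceCurrency" content="GBP">£</span><span itemprop="price" content="79.99">79.99</span>
    <link itemprop="availability" href="http://schema.org/InStock">In stock
  </div>
</div>

Because the marked-up values are the same ones rendered on the page, the visible content and the structured data stay in sync, which is exactly the data fidelity point made above.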

 

Local markup:
The true strength and ultimately the great potential of Google+ lies in its role as an interlinked network of verified, canonical, named entities (rel="publisher", Google+ profiles and Google+ pages). The role that Google+ could potentially play in determining provenance is obvious: the combination of a Google+ Profile or Page and structured data means Google can connect the dots between a web resource and the entity that produced that resource.

 

Structured data and semantic architecture
While it's perfectly possible to mark up a site with structured data after it's been built, a site that's constructed with an eye to semantic structure will fare better in the long run than one that's not.
The BBC World Cup 2010 website did not have structured data applied to it, but it was assembled with the aid of semantic web technologies. The result is a rock-solid resource that reliably meets human visitors' needs and at the same time provides search engines with explicit, utterly unambiguous data (the BBC has taken a lead role in this sort of semantic architecture).

http://bitly.com/bundles/aaranged/1


Understanding web pages better

Norman Pongracz's insight:

Google indexing JavaScript: Implications and Risks

 

The “Googlebot” finally has the ability to interpret JavaScript, the last remaining core construct used to create and manipulate content on web pages (HTML and CSS being the other two).

 

Implications and potential risks with solutions:

 

Better Flow of Link Juice

Entire navigation menus are sometimes fully reliant on JavaScript. The ability to parse these links will result in better “link juice” distribution.

 

Poor Load Times

The use of excessive JavaScript is rampant, and often a browser has to make a significant number of additional requests and spend time downloading this JavaScript. Now that Googlebot has to do this too, many sites' load times in the eyes of Google are likely to increase. To see if you're affected, log in to Google Webmaster Tools and check your "Crawl Stats" graph over the past few months. Also, if your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on Google's capability to render your pages. If you'd like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources.

 

Blocking

If resources like JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot can't retrieve them, Google's indexing systems won't be able to see your site like an average user does. Google recommends allowing Googlebot to retrieve JavaScript and CSS so that your content can be indexed better.

 

Graceful Degradation

It's always a good idea to have your site degrade gracefully. This will help users enjoy your content even if their browser doesn't have compatible JavaScript implementations. It will also help visitors with JavaScript disabled or off, as well as search engines that can't execute JavaScript yet.
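A tiny sketch of graceful degradation (the copy is a placeholder): the essential content is rendered in plain HTML, script only enhances it, and a noscript fallback covers non-JS visitors and crawlers.

<div id="key-content">
  <p>The core product copy lives here in plain HTML, visible without JavaScript.</p>
</div>
<script>
  // Enhancement only: the script adds behaviour, it does not create the core content
  document.getElementById('key-content').classList.add('enhanced');
</script>
<noscript>
  <p>Interactive features are unavailable without JavaScript, but all of the content above remains readable.</p>
</noscript>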

 

Content indexation

Some JavaScript removes content from the page rather than adding it, which prevents Google from indexing that content.

 

Additional resource: http://www.business2community.com/seo/googles-crawler-now-understands-javascript-mean-0898263


Top 20 SEO requirements for scoping your eCommerce platform

Having spent the last 6 years Client side as Head of eCommerce and agency side managing digital marketing teams, one constant has been confusion in new platform builds over what a “search engine friendly” website actually is.
Norman Pongracz's insight:

SEO requirements for setting up new eCommerce domain

 

Accessibility and Navigation
The key content is still visible to search engine spiders/bots as well as to visitors when elements like JavaScript are disabled
The site can be used in different browsers and devices
XML sitemap is generated dynamically and submitted on a regular basis
HTML sitemap is auto generated based on product catalogue and site structure
Robots.txt file is provided
Rich snippets are supported within platform
Custom 404 error page and automated report to flag error pages
Flat information architecture
Thin content pages are blocked from crawling
Site search is blocked from crawling
Flash objects are search engine friendly
PDF content is readable
Page load time to meet agreed threshold

 

Canonicalisation
301 redirects from legacy pages, non-canonical URLs and orphaned pages (non-www or no trailing slash) to preserve search engine rankings
Canonical tag used to avoid duplicate content
Hreflang directives are set up for pages that have versions in different languages and regions (illustrative tags below)
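By way of illustration (example.com and the language/region pairs are placeholders), the canonical and hreflang tags from the list above sit in the <head> of each page variant, and each variant should reference itself as well as its alternates:

<link rel="canonical" href="https://www.example.com/en-gb/category/product/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/category/product/">
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de-de/category/product/">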

 

URLs
Dynamically generated search engine friendly URLs for product and content pages
Ability to specify / edit URLs for individual pages via CMS for campaign landing pages and microsites

 

On-page Elements
Keyword-optimised heading tags within the HTML – structured use of H1 to H6 to provide a relevant hierarchy of content
Core provision for meta content (title, description, keywords) that is auto-generated
Images have appropriate ALT attributes
Keyword in the title tag (unique for each page)
An H1 with the target keyword can be found on each page (see the illustrative snippet below)
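A minimal sketch of those on-page elements for a single page (the product name and copy are invented):

<head>
  <title>Blue Widget 3000 | Example Store</title>
  <meta name="description" content="Buy the Blue Widget 3000 with free next-day delivery from Example Store.">
</head>
<body>
  <h1>Blue Widget 3000</h1>
  <img src="/images/blue-widget-3000.jpg" alt="Blue Widget 3000, front view">
</body>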

 

Content
Major headings are clear & descriptive
Critical content is above the fold
Font size/Spacing is easy to read
Clear path to company information and contact Information
Main navigation is easily identifiable
Navigation labels are clear & concise
Number of buttons/links is reasonable
Company logo is linked to the home page
Links are consistent & easy to identify
Site search is easy to access
Provide text alternatives for all non-text content
For all non-text content that is used to convey information, text alternatives identify the non-text content and convey the same information. For multimedia, provide a text-alternative that identifies the multimedia.
For non-text content that is intended to create a specific sensory experience, text alternatives at least identify the non-text content with a descriptive label (for instance: colour guide).
Captions are provided for pre-recorded multimedia.
HTML page titles are explanatory
Social media content such as blogs are hosted on your primary website domain

 

Visitor tracking
Visitor analytics (such as Google Analytics or Omniture) is implemented
Google & Bing Webmaster Tools accounts set up
Event and Goal Tracking set up

 

References:

 

Main article:
https://econsultancy.com/blog/5525-top-20-seo-requirements-for-any-ecommerce-platform

 

Supporting articles:
http://searchengineland.com/seo-checklist-for-startup-websites-170965
https://docs.google.com/file/d/0B31KfhEE-e3oZGI5NmI1NGUtZmZlZC00YTVlLTg4MWQtNTVkMTAyMzZkNjEy/edit?hl=en
http://www.w3.org/TR/2005/WD-WCAG20-20050630/checklist
http://www.usereffect.com/topic/25-point-website-usability-checklist
http://moz.com/blog/launching-a-new-website-18-steps
http://www.newmediacampaigns.com/page/the-8-minimum-requirements-for-seo-features-in-a-cms


15 Great Citation Resources for Local Search

For many SMBs & SEOs that are new to local search, understanding citations and what’s important about them can be a bit mystifying. On the surface, local directory listings seem as plain as day – how complex can business listings on a website be right?! But once you start getting drawn into the murky world
Norman Pongracz's insight:

Citation Building Basics (Summary)

 

In simple SEO terms a Local Citation is simply where your company is mentioned on other websites and places found on the Internet. Local citations are used heavily in helping you to rank in local search results.
An example of a citation could be a business directory such as Yell, Thomson Local or Brown Book where your company is mentioned explicitly by name. Local citations do not need to include a link to your site. A citation could also be where your company is mentioned, cited, referenced or spoken about on other local websites.

Citation sources come in 6 main shapes and sizes (see below). Some are specific to an industry or city, while some are much broader in scope and provide listings for all types of business in all towns across the country. As long as the site has some relevance to your business (e.g. it offers the correct category to list under or covers the same geographic location) and is decent in quality, then it's a goer.

-Local directories
-Niche or vertical directories
-General directories
-Event sites
-Social platforms
-Local news & blog sites

How can I find out where I’m already listed?
-CitationTracker (by BrightLocal)
-CitationFinder (by WhiteSpark)

Knowing where you're listed gives you half the picture. To really bring your citation situation into focus you need to know what your business data looks like on these sites.
-Do they have your business name stored correctly?
-Do they have your exact address & zipcode?
-Are they using the right local number for your business?

Tools:
-Yext Local listings scan tool
-Brightlocal SEO Check Up
-UBL Visibility Tool

Where else can I get myself a listing? -The best way to work this out is to spy on your competitors and see where they’re listed. If your competitors can get a listing on a site it follows that you should – in most cases – also be able to get a listing. The same 2 tools that help you find your existing citations (CitationTracker & CitationFinder) can also be used to spy on your competitors.

What category should I use for my business? - Selecting the right category/categories to list your business on aggregator & citation sites is very important. But identifying & selecting the right category can be tricky for some businesses. Tools: https://moz.com/local/categories

How long does it take for listings to go live? If you submit listings manually, direct to sites then the speed of go live tends to be much faster than if you submit via a 3rd party or aggregator service. We typically see 70% of our direct submissions go live within 4 weeks of submission, with many going live instantly or in 48-72 hours.

Most important UK Citation Sources:
192.com
AccessPlace.com
AgentLocal.co.uk
ApprovedBusiness.co.uk
BizWiki.co.uk
Britaine.co.uk
Brownbook.net
BTLinks.com
Business.Unbiased.co.uk
BusinessNetwork.co.uk
City-Listings.co.uk
City-Visitor.co.uk
CityLocal.co.uk
CompaniesintheUK.co.uk
Cylex-UK.co.uk
Directory.TheSun.co.uk
FindtheBest.co.uk
ForLocations.co.uk
Foursquare.com
FreeBD.co.uk
FreeIndex.co.uk
Fyple.co.uk
GoMy.co.uk
HotFrog.co.uk
InfoServe.co.uk
It2.biz
Listz.co.uk
LocalDataCompany.com
LocalDataSearch.com
LocalLife.co.uk
LocalMole.co.uk
LocalSecrets.com
LocaTrade.com
Manta.com
MarketLocation.com
MisterWhat.co.uk
MiQuando.com
My118Information.co.uk
MySheriff.co.uk
MyLocalServices.co.uk
Near.co.uk
Opendi.co.uk
Qype.co.uk
Recommendedin.co.uk
Scoot.co.uk
SmileLocal.com
TheBestof.co.uk
TheBusinessPages.co.uk
TheDirectTree.com
TheDiscDirectory.co.uk
ThomsonLocal.com
Tipped.co.uk
TouchLocal.com
UFindUs.com
UK.Uhuw.com
UK.WowCity.com
UK-Local-Search.co.uk
UK-Locate.co.uk
UKSmallBusinessDirectory.co.uk
VivaStreet.co.uk
Wampit.com
WeLoveLocal.co.uk
WheresBest.co.uk
WhoseView.co.uk
Yalwa.co.uk
Yell.com
Yelp.co.uk (verification required only for claiming listing)
Zettai.net

Reference:

Main article:
http://www.brightlocal.com/2014/05/21/15-great-citation-resources-local-search/#learn

Other articles:
http://www.localvisibilitysystem.com/definitive-local-search-citations/#uk
http://www.hallaminternet.com/2012/what-is-a-citation/


Google Analytics Troubleshooting Guide & Auditing Resources - MarketingVOX

Norman Pongracz's insight:

Debugging Google Analytics Setup - Common Mistakes (Summary)

 

GA 101: accounts, trackers, domains:
1) the tracking code is in your website's HTML source code,
2) you are using the right tracking code,
3) you are checking the right GA account in the application's settings,
4) GA is acknowledging that it is receiving data for that account (for testing this, see: http://www.webanalyticsworld.net/2012/02/debugging-google-analytics-code-ii-a-tutorial-video-on-fiddler%E2%80%99s-inspector-and-autoresponder-functions.html), and
5) there are no "rogue" sites using your UA code out there. (If someone puts your Google Analytics tracking code on their site (the same UA-#), visits to their site will show up in your Google Analytics profiles - for more details read: http://www.blastam.com/blog/index.php/2011/06/are-rogue-sites-influencing-your-google-analytics-data/)

Goals, funnels, and filters
If your goals are not being tracked (i.e. GA never reports any match) make sure your URLs are an exact match, or double-check your regular expressions, depending on how the goal rules have been written. Exact match is easier to use but less flexible and more brittle if you are going to change URLs or want to match a whole family of similar URLs.
Badly set up funnels can show incoherent data, such as everyone leaving after an early step while your goal conversion still shows conversion events further down the funnel. See http://www.lunametrics.com/blog/2008/06/25/funnel-problems-google-analytics/
When running filters, check you understand their syntax. If you are using several filters, bear in mind they are executed one after the other and "feed" into each other. Using more than one Include filter can lead to data loss and should be done with caution.

Campaign tracking
If you have already made sure your various traffic generation efforts (e.g. in email newsletters) embed the right URL parameters, the next step is to verify that the redirects work properly. In larger teams it is advisable to use a centralised online document or spreadsheet to keep track of normalised campaign parameters. https://developers.google.com/analytics/resources/articles/gaTrackingTroubleshooting?csw=1#campaignTrackingIssues

Site search, site overlay, site speed
Some GA features do not work if you rewrite URLs, so if you want to use site search or site overlay, make sure to use a separate profile from the one where you generate readable "fake" URLs. If your CMS allows it or if you can run custom server-side code (e.g. in PHP) you can also make GA believe you are using a search query parameter even if you are not.
http://www.lunametrics.com/blog/2010/08/19/site-search-without-query-parameters/
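With the classic asynchronous (ga.js) syntax, one way to do this is to push a virtual pageview carrying a fake query parameter; this is only a sketch, and the "/search/?q=" path, the parameter name and the searchTerm variable are assumptions that simply need to match whatever query parameter the GA profile is configured to read:

<script>
  // searchTerm holds whatever the visitor typed into the site's own search box
  _gaq.push(['_trackPageview', '/search/?q=' + encodeURIComponent(searchTerm)]);
</script>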

Asynchronous tracking, ecommerce and custom variables
While CMS upgrades and migrations are a leading cause of analytics problems on accounts that used to work, moving to the newer async GA code has to be done with similar caution. Among specific problems with the latest generation of GA scripting:
-Stick to the exact spelling and casing of method names (e.g. _trackPageview) - they are case sensitive.
-Be careful to not have leading or trailing whitespace when you're pushing the tracking code.
-Pass along strings within quotes, but do not otherwise use quotes for other value types such as booleans.
When coming from the older (synchronous) syntax, make sure you have converted everything, including the ecommerce integration. Speaking of which, check that you do not have improperly escaped special characters or apostrophes getting in the way.
If you are using custom variables, verify that you are following Google's guidelines. Mixing page, session and visitor-level variables in the same slot is not recommended. Migration from the deprecated _setVar method to _setCustomVar should be done carefully. And while the dreaded "%20" bug was finally fixed in May 2011, this means filters need to be rewritten. Also, know that custom variables typically take longer to appear in GA than "regular" data – it can take up to 48 hours on very large sites.
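The casing and quoting warnings above are easiest to see against the classic async snippet itself (the UA number, variable name and slot are placeholders):

<script>
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXX-1']);                // method names are case-sensitive
  _gaq.push(['_setCustomVar', 1, 'MemberType', 'free', 2]); // slot 1, session-level scope (2); strings quoted, numbers not
  _gaq.push(['_trackPageview']);
</script>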

Auditing and support tools
With a mix of a JavaScript console and an HTTP live-header sniffer, it is possible to know instantly what's going on with your code. To that effect, the Google Analytics Tracking Code Debugger is a very convenient Chrome browser extension.

 

Reference:

 

Main article:
http://www.marketingvox.com/google-analytics-troubleshoot-tools-049532/

 

Other articles:
http://www.blastam.com/blog/index.php/2011/06/are-rogue-sites-influencing-your-google-analytics-data/
http://psgrep.com/
http://www.lunametrics.com/blog/2008/06/25/funnel-problems-google-analytics/
http://www.cardinalpath.com/the-math-behind-web-analytics-mean-trend-min-max-standard-deviation/
http://www.webanalyticsworld.net/2012/02/debugging-google-analytics-code-ii-a-tutorial-video-on-fiddler%E2%80%99s-inspector-and-autoresponder-functions.html


There's no place like home (page)

Your home page is one of the most visited pages on your website. Few people will visit your site without seeing it. But a lot of home pages suck. Read this, and make sure yours doesn’t.
Norman Pongracz's insight:

Homepage Optimisation (Summary)

 

-Show whatever it is you’re selling - sounds obvious but often sites don’t show off their products. The first thing users should see is whatever you’re selling. It should be big, bold and beautiful.
-If you sell a service rather than a physical product (see Mailchimp), try to encapsulate what you do in the simplest, shortest way you can. So if you're a lawyer, you could say 'No-nonsense legal advice'; an accountant, 'Tax returns the easy way!'
-Keep it short, punchy and use real language. And try to include (where appropriate) words of quality like ‘easy’, ‘simple’, ‘discover’, ‘free’ as these are the words that people tend to respond to. Visitors then know what they’re getting and if they’re interested, they’ll stick around.
-Use really eye-catching images. For instance, ASOS draws in your eyes so you can't help but look at 'SHOP MEN' and 'SHOP WOMEN'.
-The key here is that all your products need to be “above the fold”. That is to say, you shouldn’t need to scroll down to see them.
-Make the next step clear - make sure it’s blindingly obvious. It’s sometimes difficult to summarize what your business does as succinctly as this. But try, because it will make your website much more compelling. Retailers can’t use the home page to sell individual products (that’s what product pages are for), but they can, should and do, sell themselves. (see: ASOS)
-Use a disruptive homepage design! A very linear, blocky site, where everything is aligned and there are lots of right angles, might be clear and even, but it’s very difficult to make anything stand out. This means users’ attention won’t be channelled towards your ‘call to action’. Use elements that break up alignments and hierarchies, and push your users’ eyes to where you want them to go. The more something sticks out, the more people will click on it.
-Testify! As a webmaster, you need to do all you can to reassure your customers that you’re trustworthy. One of the easiest ways is with testimonials, quotes from people who have used, and liked, your service. A lot of companies ask for feedback automatically after a purchase is complete.
-Videos. You don’t need to produce the next blockbuster, but you should turn your hand to making a video or two. Videos, as well as being great content that search engines love, reassure your customers.
-Remember the visual hierarchy - Some of the pages on your website will make you lots of money, others won’t. Navigation should always be easy and intuitive, but you can still nudge your users in the right direction. Don’t feel you have to treat all pages equally. You can make some pages easier to find than others.
-Consider making the social icons bigger, to make your content easier to share. (And try to put the icons on the right-hand side of the page, because more people will click them.)


Ecommerce SEO Tips: User-Focused SEO Strategies For Deleted Products | Linchpin SEO

Norman Pongracz's insight:

Options for handling deleted product pages

 

 

Redirect to the Deleted Product's Category Page - this should be the category that is one level up from the product page; if there are fewer than 3 products in that category, keep climbing the taxonomy until you reach a category with at least 3 products.
Once this category is defined, 301 redirect the old product page to this category.
Pros:
-Good for seasonal products
-Pushes ranking value into the category page.
-Allows the ranking value to be split between remaining products in that category
-Gives users the ability to find other relevant products that could fit their needs
-Lowers the risk of users going back to the search results page and visiting a competing website
-Once the search engine re-crawls the page and finds the 301 redirect, the product will be removed from the search engine's index.
Cons:
-Possible user confusion. This risk can be mitigated by serving a small JavaScript overlay on the category page (which must not interfere with the search engine's ability to crawl the category page) explaining that the previous item is not available, but that these alternatives might be helpful. (A minimal sketch of such an overlay follows below.)
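A minimal sketch of such an overlay, assuming the 301 redirect appends a marker parameter to the category URL (the parameter name and copy are invented):

<div id="removed-product-notice" hidden>
  <p>Sorry, that product is no longer available. Here are some similar items from this category.</p>
</div>
<script>
  // Show the notice only when the visitor arrived via a redirect from a removed product
  if (window.location.search.indexOf('from=removed-product') !== -1) {
    document.getElementById('removed-product-notice').hidden = false;
  }
</script>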

A 301 Redirect to a Search Results Page (with a noindex/follow meta tag) - whenever a user clicks on an external link to the deleted product, return a search results page to the user that includes similar products.
Pros:
-Keeps users engaged with the website.
-Using the noindex/follow meta tag allows ranking metrics to flow through the internal links in the search results set, but keeps the search results page out of the Google index.
-Allows for discovery of similar products
-Once the search engine re-crawls the page and finds the 301 redirect, the product will be removed from the search engine's index.
Cons:
-Possible user confusion. This risk can be mitigated by serving a small JavaScript overlay on the search results page (this can’t interfere with the search engine’s ability to crawl the page) explaining that the previous item is not available, but that these results might be helpful.
-Leaves the product selection up to the user. Thus, the website owner can’t control the outcome of the user journey or directly match/recommend a single product that best matches their intent.

Manually Redirect to a Similar Product - Manually create a 301 mapping by selecting a similar product or page from the remaining product set, so that whenever a user clicks on an external link – whether from the search results, a bookmark, a social website, or a link on another website – they are taken to the new page. Create an environment that allows the deleted item to be redirected to this newly identified page.
Pros:
-Ability to easily match relevancy based on user need.
-Ability to redirect to a similar product that has a high conversion rate – or even a new product that has a high relevancy to the deleted product.
-Keeps users engaged within the website.
-Allows for direct flow of ranking and social metrics from one product to another.
-Once the search engine re-crawls the page and finds the 301 redirect, the product will be removed from the search engine’s index.
Cons:
-Possible user confusion. This risk can be mitigated by serving a small JavaScript overlay on the new product page (this can’t interfere with the search engine’s ability to crawl the page) explaining that the previous item is not available, but that this product might be a good alternative.
-This is done manually and can be time consuming for large ecommerce websites.

Redirect Based on Relevancy Value - If title relevancy is high enough, redirect directly to the related product.
Whenever a user clicks on an external link and it is detected that a 404 error would occur, dynamically run a search on the back end using the title of the deleted product.
-If there is a product that matches at a high enough relevancy (the threshold will be defined based on the product set), send the user directly to that product.
-If the relevancy of the matched products is not high enough, send the user to a search results page with a group of related products.
(A rough sketch of this logic follows the pros and cons below.)
Pros:
-This combines the options above, serving the best choice to the user based on value and relevancy.
-Keeps users engaged within the website.
-Keeps ranking and social metrics flowing throughout the website.
-Once the search engine re-crawls the page and finds the 301 redirect, the product will be removed from the search engine’s index.
Cons:
-Possible user confusion. This risk can be mitigated by serving a small JavaScript overlay on the destination page (this can’t interfere with the search engine’s ability to crawl the page) explaining that the previous item is not available, but that these might be helpful.
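A rough sketch of that relevancy check, assuming a simple token-overlap score and an arbitrary threshold (both the scoring method and the 0.6 cut-off are placeholders you would tune against your own product set):

```javascript
// score two titles by shared tokens (very naive; a proper search index would normally do this)
function titleSimilarity(a, b) {
  const tokens = s => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const ta = tokens(a), tb = tokens(b);
  const shared = [...ta].filter(t => tb.has(t)).length;
  return shared / Math.max(ta.size, tb.size);
}

// decide where the 301 for a deleted product should point
function redirectTarget(deletedTitle, liveProducts, threshold = 0.6) {
  let best = null;
  let bestScore = 0;
  for (const product of liveProducts) {
    const score = titleSimilarity(deletedTitle, product.title);
    if (score > bestScore) { best = product; bestScore = score; }
  }
  // strong match: go straight to the product; weak match: fall back to a search results page
  return bestScore >= threshold
    ? best.url
    : '/search?q=' + encodeURIComponent(deletedTitle);
}

console.log(redirectTarget('Blue Canvas Backpack', [
  { title: 'Blue Canvas Rucksack', url: '/products/blue-canvas-rucksack' },
  { title: 'Leather Wallet', url: '/products/leather-wallet' },
]));
```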

Custom 404 Page - Whenever a user clicks on an external link to a deleted product, serve a custom 404 page. This page should: inform the user that the product is no longer available; provide related product selections; and provide a search box so the user can search the website for other products.
Pros:
-Directly informs the user that the product is no longer available
-Once the search engine re-crawls the page and finds the 404 (or 410) status code, the product will eventually be removed from the search engine’s index.
Cons:
-Loss of ranking or social value that the deleted item had built.
-Higher risk the user will hit the back button and go to a competitor of yours who still has the product.

Permanently delete the expired product’s pages, content and URLs. When you have no closely related products to the one that’s expired, you may choose to delete the page completely using a 410 status code (gone) which notifies Google that the page has been permanently removed and will never return.
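A minimal sketch of returning 410 for permanently retired URLs, again assuming Express and a hypothetical list of gone paths:

```javascript
const express = require('express');
const app = express();

// hypothetical set of product URLs that are gone for good and have no close substitute
const goneForGood = new Set(['/products/discontinued-widget']);

app.use((req, res, next) => {
  if (goneForGood.has(req.path)) {
    // 410 tells crawlers the removal is permanent, which tends to drop the URL
    // from the index faster than a plain 404
    return res.status(410).send('This product has been permanently removed.');
  }
  next();
});

app.listen(3000);
```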

Reuse URLs. If you sell generic products where technical specifications and model numbers are not relevant, you could reuse your URLs. That way you will preserve the page’s authority and increase your chances of ranking on Google.

Some items deserve to live on. Certain products may have informational value for existing customers or others wanting to research it. Leave these pages intact. Previous buyers can get information, help and service through these pages.

In case of out-of-stock items
-Leave the pages up. If the items will be in stock later, leave pages up just the way they are. Don’t delete, hide or replace them. Don’t add another product to them or redirect visitors to other pages.
-Inform users when it will return. Always offer an expected date when the product will be back in stock so visitors will know when to come back and buy.
-Offer to backorder the product. Let them order and promise to have it sent out to them as soon as fresh supplies arrive. Prospective buyers who really want the product won’t mind waiting a few extra days for it.

References: 
http://www.linchpinseo.com/ecommerce-seo-tips-user-focused-seo-strategies-for-deleted-products
http://searchengineland.com/best-practices-in-e-commerce-seo-176921
http://moz.com/blog/how-should-you-handle-expired-content


Page Title & Meta Description By Pixel Width In SERP Snippet | Screaming Frog

Norman Pongracz's insight:

For page titles Google now uses 18px Arial for the title element; previously it was 16px. Interestingly, Google still truncates internally based on 16px, but the CSS will kick in well before their ellipsis is shown due to the larger font size. The upshot of this change is that text is no longer truncated at word boundaries. Bolding pushes up the pixel width of the text. We also see Google moving brand phrases to the start of a title dynamically.

For meta descriptions, the CSS truncation appears to kick in at around 920 pixels.


How to Remove a Manual Penalty | World of Search

This guide explains, in depth, how to get over a manual penalty for inbound links and uses: This Excel template DISCLAIMER: There are undoubtedly faster and shorter processes to audit your links and submit a reconsideration request. Taking shortcuts like that may work, or it may not. This process is the full-blown, no cut corners …
Norman Pongracz's insight:
Summary

First Steps / Basic Analysis on Pitch Level
-Did the client receive any link warning message in Google Webmaster Tools?
-Did the client experience any sharp decline in visibility via Search Metrics? (Penguin and manual penalties tend to show as a sharp decline in rankings, compared to the slow downfall of a Panda penalty)
-Is it possible that the decline was caused by competitors' sites getting their penalty removed and regaining their rankings?
-Did the client experience any sharp decline in traffic via GA? Does this correspond with the decline in visibility?
-Is it possible that the decline in traffic was caused by tracking or another issue rather than a penalty?
-Did the company lose its ranking for brand terms? (Google search “Brand term”)
-Has there been any link removal done beforehand? Is there any documentation on it?
-Does the client have any white/safe-list of links/domains?
-Does the domain interlink with any other domains (such as Debenhams.com and Debenhams.ie)? Did they implement hreflang or any other solution to prevent looking spammy?

What Is a “Bad” Link?

Basically a “bad” link in Google’s eyes is anything that isn’t editorial – any link that you created for the purpose of SEO.

If someone created a random link to your website on some unrelated forum, that might be a link that we consider not great from an SEO perspective, but from a penalty perspective there’s theoretically nothing wrong with it.

However, if you discover that you have many backlinks from low quality and unrelated domains, they may be worth removing – even if you didn’t make them. Look for patterns. One link from a spam directory will not result in a penalty. Dozens of links from spam directories might.

In addition, when it comes to penalties Google seems to give the most weight to your most recent links (as opposed to links you made five years ago). Pull your “latest links” report and look around to see if there is any suspicious recent activity.

Link Metrics Analysis

Link Research Tools Detox Analysis

-What risk did LRTs assign to the domain?
-Did you classify at least 80% of keywords?
-What is the risk distribution?
-Does anchor text distribution look natural?

LRTs is not that useful to determine the site’s spam backlinks but provides an excellent benchmark to understand the toxicity.

Backlink Data Collection

-Majestic (Historic preferably)
-OSE
-aHrefs
-Majestic and/or OSE API
-Upload the list of links to LRTs

Spreadsheet analysis

The best practice is to review each domain (or a linking page/domain) manually, but many times this isn't possible. So here are some shortcuts.

-Are any of the links appearing in previous disavow files?
-Do they have a white list of links?
-What are the top referring domains? (Worth reviewing the top domains manually)
-Do links have suspicious domain names and/or URL paths? Spam links tend to have at least one of the following words in their URLs: SEO, link, directory (or many times: dir), submit, web, site, search, Alexa, moz, domain, list, engine, bookmark, rank etc.
-Spam domains tend to have unusually long URL names (example: best-shoes-for-wedding.wordpress.com) or marketing-sounding path names and titles (example: importance-of-selling-your-stuff-online)
-Spam domains tend to have low Citation Flow and Trust Flow metrics (under 15)
-Spam domains tend to have a low number of backlinks
-Spam domains are often created within a short period of time – i.e. low domain age.
-Do the referring domains all have unique IP addresses? Do some of the domains look like a link network?
-Are press releases, sponsored posts and guest blogging guidelines no-followed?
-Review for high quality directories that can be whitelisted (DMOZ, Yahoo Directory, Yell etc.)
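A rough triage sketch of how a backlink export could be pre-filtered against these heuristics before manual review (the field names, word list and thresholds are illustrative assumptions, not a substitute for reviewing domains by hand):

```javascript
// flag referring domains that trip two or more of the spam heuristics above
const SPAM_WORDS = /\b(seo|links?|director(y|ies)|dir|submit|bookmark|rank|engine|search)\b/i;

function triageDomain(d) {
  const reasons = [];
  if (SPAM_WORDS.test(d.domain) || SPAM_WORDS.test(d.samplePath || '')) {
    reasons.push('spammy words in URL');
  }
  if (d.domain.length > 40) reasons.push('unusually long domain name');
  if (d.citationFlow < 15 && d.trustFlow < 15) reasons.push('low Citation/Trust Flow');
  if (d.backlinks < 10) reasons.push('very few backlinks');
  if (d.domainAgeYears < 1) reasons.push('very young domain');
  return { domain: d.domain, needsReview: reasons.length >= 2, reasons };
}

console.log(triageDomain({
  domain: 'best-shoes-directory-submit.example',
  samplePath: '/seo-links/importance-of-selling-your-stuff-online',
  citationFlow: 4, trustFlow: 2, backlinks: 3, domainAgeYears: 0.5,
}));
```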

Search console parameter settings and the impact on indexation

Crawl optimisation (ensuring search engine spiders crawl all unique content
across a domain) is one of the most complex aspects of technical SEO. The
following article and research explores how Google Search Console parameter
settings (one of several potential methods) is being used by webmasters and
SEO experts to guide how bots crawl their domain’s URLs.

 

There are other methods for controlling crawl budget including canonicals,
XML sitemaps and robot directives, but in this article we are going to
specifically focus on understanding how Search Console parameter settings
impact on search engine crawl.

 

The launch of parameter settings


Google introduced the latest version of parameter settings in Webmaster Tools in mid-2011 (Webmaster Tools was rebranded to Search Console on May 5th 2015). According to Google, the parameter handling “helps you control which URLs on your site should be crawled by Googlebot, depending on the parameters that appear in these URLs.”


Kamila Primke, the author of Google’s article explaining the change, listed the following benefits of using parameter handling in Search Console:

 

* It is a simple way to prevent crawling duplicate content
* It reduces the domain’s bandwidth usage
* It likely allows more unique content from the site to be indexed
* It improves crawl coverage of content on site

 

While Primke’s piece highlighted the most important benefits and features,
it did not go into detail about how different variations of parameters can
be configured. To clear up some of the confusion, Maile Ohye later recorded a video providing guidance for common cases when configuring URL Parameters.

 

 

We have prepared the following visual guide based on Ohye’s video and our
own experience.

Essentially, the user needs to decide whether the parameter is used for
tracking or not and then tell Google how the parameter should change the
page’s content. This offers users the following options to handle URLs with
parameters:

 

* Only crawl representative URLs – the only option available when
parameters are used for tracking.
* Crawl every URL – this is a common recommendation when parameters are
used for pagination, the new pages are somewhat useful or if the
parameters replace the original content with manually translated
text.
* Crawl no URLs – this is a common recommendation when parameters
create pages with auto-generated translation, sub-par sorted and
narrowed lists, or the URLs are simply not intended to be indexed,
for instance in the case of parameters triggering pop-up windows.
* Let Googlebot decide – this setting is a recommended solution when
sorting and narrowing parameters are used inconsistently, for example
when sales pages load with low-to-high price sorting by default while
premium product categories are setup with high-to-low price order.
* Only crawl URLs with value X – this one is a recommended solution
when sorting parameters that are used consistently, such as
low-to-high price listing implemented across all category pages the
same way.

 

If the webmaster believes there might be architectural problems on the
site, “Let Googlebot decide” is the preferred option.
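Where Search Console settings are not enough (or a page-level signal is wanted as well), the canonical hint mentioned earlier is the usual companion. A minimal sketch, assuming an Express app, of sending a canonical that ignores tracking and sorting parameters via the HTTP Link header (Google also accepts the equivalent <link rel="canonical"> element in the page head); the route and hostname are placeholders:

```javascript
const express = require('express');
const app = express();

app.get('/category/:slug', (req, res) => {
  // the canonical is built from the path only, so ?utm_source=…, ?sort=… and
  // similar parameterised variants all point back at one representative URL
  const canonical = `https://www.example.com/category/${req.params.slug}`;
  res.set('Link', `<${canonical}>; rel="canonical"`);
  res.send('…category page markup…');
});

app.listen(3000);
```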

 

Our Findings


Following an entire year of running tests across a large scale,
international ecommerce website, we have made several key observations on
parameter settings and the consequent impact on search engine indexation.

 


 
Norman Pongracz's insight:
The case study explores how the parameter settings tool can be best used to guide search engines in crawling and indexing all important pages. The article explains how this admin solution impacts the website's crawl budget allocation, page speed and how it helps to fix some of the general site issues.



Measured impact
Following an entire year of running tests across a large scale, international ecommerce website, we have made several key observations on parameter settings and the consequent impact on search engine indexation.

1. Google de-indexes duplicate pages faster than it picks up pages with unique content. Significant changes in indexation can already be measured 2-3 days after the configuration. Over a one month test, sites displayed a 75% drop in duplication (over 6.2 million pages) but only showcased an 8% increase (about 340K pages) in unique content pages.

2. Indexation rates are based on how parameters are used. Internally used parameters are impacted less than externally used ones. The actual indexation number will depend on how frequently the tracking parameter is used and how well canonicals are configured.

3. Parameter settings reduce the time spent downloading pages. While running the tests over a one-month period, the time it took Google to download pages decreased by 17%-20% on average. This resulted in improved crawl rates of at least 7% within the same time period.

4. Risks. If parameter directives are set incorrectly on top level pages, crawl and indexation will significantly change on deeper pages as well.




Interesting bits of the Page Quality Rating Guideline

Norman Pongracz's insight:
Here are some of the interesting parts from Google's Page Quality Rating Guideline. Considering that Google is trying to mimic user behaviour and rate websites based on these factors, one or more of the factors below can have an impact on rankings:

Broken pages
- Broken links and elements can impact page rating but only if it is significant in scale (implying bad site maintenance).
- Custom 404 pages can have ratings as well, for example the following is a highly rated 404 page: https://www.raterhub.com/evaluation/result/static/a/GG/3.6.5.png
 
Update frequency
- Already published articles don't have to be updated frequently except if it is encyclopaedia style like Wikipedia
- Frequency of updates on websites matter. Would that mean we should recommend adding timestamps to pages?
 
External ratings
- External ratings such as Wikipedia, forums and other rating organisations can be used to rate websites or page quality
- Few negative reviews won't impact page quality 

On-page content
- The amount of on-page content necessary for the page depends on the topic and purpose of the page. A High quality page on a broad topic with a lot of available information will have more content than a High quality page on a narrower topic
- On-page content behind tabs can be considered main content and can thus improve page quality.
- Product pages should feature recommender systems (i.e: "complete the look", "those who bought the product also bought")
- On copied content: Content copied from a changing source, such as a search results page or news feed. You often will not be able to find an exact matching original source if it is a copy of “dynamic” content (content which changes frequently). However, we will still consider this to be copied content. Other times the content found on the original source has changed enough that searches for sentences or phrases may no longer match the original source. For example, Wikipedia articles can change dramatically over time. Text copied from old copies may not match the current content. This will still be considered copied content
- Video quality can have impact on page rating.
 
Supporting information, support pages
Shopping websites can be judged on 
- How detailed the contact information is
- How detailed the store policies are on payment, exchanges, and returns
- Customer service information available
- About us, contact and FAQ pages influence page rating (this includes Facebook pages and corporate websites as well!)
- Shopping cart functionality impacts page rating
These do not have to be presented on the page but need to be somehow linked.
 
Examples
 
Highest quality product page https://www.raterhub.com/evaluation/result/static/a/GG/BookPack.png - The purpose of this page is to provide information about, and allow users to buy, a specific type of school backpack. The page provides a lot of helpful product information, as well as 600 user reviews. Since the store produces this backpack, they are experts on the product, making the page on their own website authoritative. In addition, this store has a reputation for producing one of the highest quality and most popular school backpacks on the market. This page also has a high quality MC, placed on a trustworthy website and has good reputation.
 
Highest quality “Custom 404” page https://www.raterhub.com/evaluation/result/static/a/GG/3.6.5.png - The MC of this page is the cartoon, the caption, and the search functionality, which is specific to the content of the website. It is clear that time, effort, and talent was involved in the creation of the MC. This publication has a very positive reputation and is specifically known for its cartoons. Keep in mind that for any type of page, including pages with error messages, there may be a range of highest quality to lowest quality pages. Therefore, it’s important to evaluate the page using the same criteria as all other pages, regardless of what type of page it is. This page also has a high quality SC, placed on a trustworthy website and has good reputation. 

High quality category page https://www.raterhub.com/evaluation/result/static/a/GG/BackpacksT.png - The purpose of this page is to allow users to buy a school backpack. The page provides a lot of different backpack options, and some of them have user reviews. This is a well-known, reputable merchant, with detailed Customer Service information on the site. The SC features are particularly helpful. For example, the filters allow users to show results by categories such as color, style, and price. They have satisfying amount of high quality MC, good SC and positive reputation.
 
High quality category page https://www.raterhub.com/evaluation/result/static/e/PQexamples/2.5.2.png - The Company sells its own line of high end, fashionable baby and children’s furniture and accessories. It has a positive reputation as well as expertise in these specific types of goods. Many products sold on the site are unique to this company. They have satisfying amount of high quality MC, expertise in given field and positive reputation.
 
High quality product page https://www.raterhub.com/evaluation/result/static/a/GG/GPS.png - There is a very large quantity of MC on this page. Note that the tabs on the page lead to even more information, including many customer reviews. The tabs should be considered part of the MC. They have satisfying amount of high quality MC, good SC and positive reputation.
 
High quality product page https://www.raterhub.com/evaluation/result/static/a/GG/StandMixerTarget.png - The page from “Target” provides the manufacturer’s product specs, as well as original product information, over 90 user reviews, shipping and returns information, multiple images of the product, etc. Note: Some of the MC is behind links on the page (“item details,” “item specifications,” “guest reviews,” etc.). Even though you have to click these links to see the content, it is still considered MC.
 
 
Medium quality “Custom 404” page https://www.raterhub.com/evaluation/result/static/e/PQexamples/3.6.6.png This page is on a well-known merchant website with a good reputation. However, this particular page displays the bare minimum of content needed to explain the problem to users, and the only help offered is a link to the homepage.
 
Lowest quality category page https://www.raterhub.com/evaluation/result/static/a/GG/shoesbuy.png This page is selling Nike Air Jordan shoes. When you look at the “Contact Us” page, it does not give the name of a company or a physical address, which also cannot be found anywhere else on the website. This amount of contact information is not sufficient enough for a shopping website. In addition, the “Shipping and Returns” page has the name of another company that seems to be unrelated. There are also official looking logos at the bottom of the homepage, including the Better Business Bureau logo and Google Checkout logo, that don’t appear to be affiliated with the website.

PDF files:SEO and Accessibility | Carmelon


"...there are often situations calling for the use of PDFs...the parameters for grading PDF files are known to be different from HTML files...in the final result- PDFs on sites can successfully compete with HTML pages and rank very high, even in first places on the organic search results."

Norman Pongracz's insight:

Ranking PDF files in the SERPs

Introduction to PDF document optimisation and accessibility

When to use PDFs instead of HTML

When it comes to on-site content, it is always preferable to place content in HTML and not PDF. However, there are often situations calling for the use of PDFs, for example- user’s guides, forms that need to be downloaded by the user etc. It is important to realize that even in such situations, usually the use of PDFs does not necessarily mean that we must give up the strategic choice to place the content authority in HTML pages.

Search engines and Google specifically, can crawl and index PDF files. As far as the location in the search results is concerned, PDF’s can and do fully compete with HTML pages.

Although not publicly published by Google, the parameters for grading PDF files are known to be different from HTML files, mostly due to the large textual (and therefore keyword-rich) content volume of PDFs (in comparison to the average website’s HTML pages). The difference in grading exists in order to allow a correct comparison between HTML and PDF versions of content, and as a result PDFs on sites can successfully compete with HTML pages and rank very high, even in first place in the organic search results.

The Best Practices in SEO for PDF Files

Best practices for PDFs in SEO include general on-page recommendations and accessibility recommendations, both of them ensuring that the content of the PDF files can be accessed by both the search engine and the user.

SEO Recommendations
-Indexation: For several technical reasons, crawling and indexing PDFs takes search engines longer than HTML does (usually on the scale of hours to days, but sometimes up to a month more). Therefore, one should encourage and speed up PDF indexing by listing the address of a PDF in the website’s sitemap file, as with any HTML page. It is also possible to use Google Search Console to submit the PDF for crawling (“fetch as Google”), and after the crawl to submit the results for indexing.
-Size limitation: as a general rule of thumb, it is advisable to create PDFs as small as possible, and to avoid sizes larger than 2.5 MB. Specifically for Google, PDFs are temporarily transformed to HTML during the crawl, and Google will only index a maximum of 2.5 MB from the temporary HTML file. If the temporary HTML is larger than 2.5 MB, Google will usually crawl the whole file, but index only 2.5 MB of data (usually the first 2.5 MB).
-Title and Heading markup: Google crawls and indexes titles that are stylistically marked as titles (using headings), and utilizes them to improve the indexing and association with keywords. Therefore, it is important to use heading markup for titles when creating PDFs.
-Links within PDFs: As previously mentioned, Google can index links within PDFs, and treats them as it would links in HTML. For this purpose, links must have a standard link structure (i.e. structured as <a href="/page2.html">link to page 2</a>). As it is not possible to mark links in a PDF with “nofollow” or “noindex”, if it is undesired that a specific link transfer authority, it must not be placed in the PDF.
-Usage of Rich Media: Google will not index rich media (including pictures of any kind) placed in PDFs. It is necessary to avoid placing texts in images (same as in HTML pages). If a picture is to be indexed, it is possible to place a link to the picture in the PDF, and then the crawler will follow that link and index the picture (as a separate file from the PDF and not as part of its content).
-PDFs produced with text from scanned images of texts (OCR): As previously mentioned, search engines will not index text located in a picture. However, if the text was produced through OCR, it is still considered text, and there should be no problems with indexing.
-Indexing PDFs but preventing cached versions from being displayed in Google: if the PDF contains temporary content, or content that changes often, it may be desirable to prevent Google from keeping and displaying cached versions of files that are outdated or don’t exist anymore. This can be achieved by implementing the X-Robots-Tag with a “noarchive” directive in the PDF’s HTTP response.
-Avoid using password-protected PDFs: when creating a PDF, it is sometimes possible to add a password lock to it, to prevent unauthorized access to the file. Obviously, locking the file with a password will prevent search engines from accessing it, so if indexing is desired, password protection must not be used.
-Preventing content duplication: If, under any circumstances, there is a PDF file available for indexing and at the same time an HTML page with the same (or highly similar) content, or other PDF files with the same (or highly similar) content, it is necessary to specify the preferred version for search engines in order to avoid content duplication penalties. This can be achieved using the canonical tag (similar to HTML). However, it’s important to remember that the tag has to be implemented in the header of the PDF’s HTTP response (see the sketch below).
For further details on this subject, see the following link (and specifically the example at the bottom of the page for implementing canonical in PDFs): https://support.google.com/webmasters/answer/139066?hl=en. It’s important to remember that such a canonical markup will only work if the PDF is available for indexing – otherwise, the search engine will never see the canonical request.
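A minimal sketch of those two response headers, assuming the PDF is served through an Express app (the file path and target HTML URL are placeholders):

```javascript
const express = require('express');
const app = express();

app.get('/downloads/user-guide.pdf', (req, res) => {
  // point search engines at the preferred HTML version of this content
  res.set('Link', '<https://www.example.com/user-guide>; rel="canonical"');
  // allow indexing but stop Google from keeping a cached copy of the PDF
  res.set('X-Robots-Tag', 'noarchive');
  res.sendFile('/var/www/files/user-guide.pdf');
});

app.listen(3000);
```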

Accessibility recommendations

The accessibility level of a document can be scaled from completely inaccessible to greatly accessible. The way the different page elements and text are defined is of major importance to the accessibility of a document. Assistive technologies interpret the tags of a document and render content to the user accordingly.

One standard that explores all the things that are relevant to creating accessible PDFs is ISO 14289-1 (PDF/UA-1). The official international guidelines for creating accessible web content, WCAG 2.0, also include many of these.

Types of actions needed for PDF accessibility

It is important that all text in a document is tagged, be it a paragraph text, a heading, a list, or similar. Not only do you make this evident visually but also clear to all users by providing the correct tagging, as this is what assistive technologies use. This includes:

-Markup of file language
-Markup to indicate reading direction
-Captions for images and photos
-Markup of tables (column titles, direction of reading)
-Creating alternatives for tables and/or complex diagrams
-Markup and ensuring readability of folios
-Presenting the page as a “single page”
 
Tagging elements of the page
 
-Language: There should be an overall definition of the language the document is written in. Furthermore, if lines or blocks of text within the document change language, then that text should be tagged separately.
-Reading order: The sequence in which a screen reader will render the page content depends on how the document was created. Therefore it is important to ensure that a document has a sensible reading order. Most remediation tools provide the ability to check the reading order of a document.
-Images: An image can have different purposes depending on how it is used in the document. Many images have a purely decorative purpose and this purpose needs to be conveyed. This is done by giving it a definition of ‘artifact’, described further on. Other images may have some sort of function or convey important information, and therefore these need an alternative text stating this.
-Tables: When data tables are used it is important to tag the structure of these, as a minimum defining which are the column / row headings.
-Bookmarks: For many users the easiest and most accessible way to have a table of contents is to have bookmarks provided from the headings in the document. Stating these gives the user the option to open a panel containing the document's bookmarks.
-Title: As a minimum, documents should include basic information such as a document title. Providing the name of the author, as well as a description and some keywords, is also a good idea.
-Security settings: Make sure that the document is not locked in a way that makes assistive technologies unable to extract content and render it to the user.
-It is also important to ensure that the color of the background and the color of the text are in sufficient contrast to each other. The ‘Web Content Accessibility Guidelines’ give guidance on this in terms of recommendations for text sizes and compliance level.
-Also avoid using references to content and information based solely on a location. Some users will receive the content in one long sequence, so for instance a ‘box on the right’ does not exist to them. Make sure to supplement this by also referring to a heading, for instance.
-Make sure that the documents can be zoomed to enlarge text without it becoming difficult to read. One example is when text becomes very pixelated.
-Images of text should be avoided as they don’t work well for several user groups, such as those with reading difficulties. Text in an image can be identified by the fact that the text cannot be highlighted.
 
What files need complex PDF accessibility?
-Company newsletters
-Communication documents
-Catalogues and brochures
-Annual reports
-Sustainable development reports

Checking the accessibility of your PDF file

Check the readability of your PDF file using Jaws (text or image). You can visually and aurally compare them to see if all of the information has been conveyed:

-Check that you have used an appropriate font
-Check that any words in capitals have the right accents on them
-Check the heading levels using Jaws (heading levels 1, 2, 3, etc.)
-Check the alternatives behind the photos, images, etc.
-Check that any diagrams that have been marked up as tables or lists of bullet points have been correctly understood
-Check that the links are clear and well indicated with Jaws
-Check the accessibility of the forms (explicit fields, possibility of filling in the fields and validating them, check that the tab key moves the cursor along through the fields correctly)
-Check the readability of the cells in the tables and that they relate to a specific header
-Check the file language
-Check any changes in language (for example an English word in a French document); Jaws should be able to change language

Tools to export accessible PDFs and to test accessibility
-Microsoft Word
-Adobe InDesign
-Quark Xpress
-JAWS Screen Reading Software: freedomscientific.com/Downloads/JAWS – Using JAWS: http://webaim.org/articles/jaws/
-PAC 2.0, a free accessibility checker
-CommonLook PDF GlobalAccess, a commercial accessibility validator and remediation tool
References and guidelines
-Hubspot and Siteimprove white paper: How to create accessible PDFs: https://cdn2.hubspot.net/hubfs/321800/Content/eBooks_Guides_Whitepapers/EN_How-to-create-accessible-pdfs.pdf
-ISO 14289-1:2014 Document management applications: Electronic document file format enhancement for accessibility: Part 1: Use of ISO 32000-1: http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=64599&ICS1=35&ICS2=240&ICS3=30
-How to Meet WCAG 2.0: w3.org/WAI/WCAG20/quickref/Overview.php
-The Matterhorn Protocol 1.0: http://www.pdfa.org/2013/08/the-matterhorn-protocol-1-0/
-WebAIM PDF Accessibility - Converting Documents to PDF: http://webaim.org/techniques/acrobat/converting
-Adobe: Creating accessible Adobe PDF files – A guide for document authors: https://www.adobe.com/enterprise/accessibility/pdfs/acro6_pg_ue.pdf
-PDF Accessibility: http://www.pdf-accessibility.com/accessibility-document-pdf-accessible/
-Royi Vaknin: PDF Files: SEO and Accessibility (Carmelon Digital Marketing): http://www.carmelon-digital.com/articles/seo_pdf_files/
-Reid Bandremer: SEO for PDFs (Lunametrics): http://www.lunametrics.com/blog/2013/01/10/seo-pdfs/

A Guide To Schema Markup & Structured Data SEO Opportunities By Site Type

Structured data can help you to send the right signals to search engines about your business and content. But where do you start? Columnist Tony Edward has some suggestions.
Norman Pongracz's insight:

Google’s John Mueller recently stated that the search engine giant may add structured data markup as a ranking factor (more info: http://searchengineland.com/google-may-add-structured-markup-data-to-ranking-algorithm-230402). So it is definitely worth the effort to implement schema markup on your website, as this is becoming more important to Google.

----------------------
All Sites
----------------------

Organization Schema Markup https://schema.org/Organization
/ The organization schema markup helps generate brand signals which can enhance your Knowledge Graph entry and website snippet presence in the search engine results pages (SERPs).
/ Be sure to specify your logo, social profile links and corporate contact information.
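For reference, a minimal Organization markup in JSON-LD (one of the formats Google accepts) might look like the sketch below; the names, URLs and phone number are placeholders, and the markup would more commonly be rendered server-side rather than injected with JavaScript as shown here:

```javascript
const organizationSchema = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  name: 'Example Ltd',
  url: 'https://www.example.com',
  logo: 'https://www.example.com/images/logo.png',
  sameAs: [
    'https://twitter.com/example',
    'https://www.facebook.com/example',
  ],
  contactPoint: [{
    '@type': 'ContactPoint',
    telephone: '+44-20-0000-0000',
    contactType: 'customer service',
  }],
};

// inject the markup as an application/ld+json script element
const script = document.createElement('script');
script.type = 'application/ld+json';
script.text = JSON.stringify(organizationSchema);
document.head.appendChild(script);
```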

WebSite Schema Markup https://schema.org/website
/ The WebSite schema markup helps generate the Sitelinks Search Box feature for brand SERPs and can help your site name to appear in search results. You must, of course, have an existing site search on your website to enable the Sitelinks Search Box element.

Breadcrumbs Markup https://schema.org/BreadcrumbList
/ The BreadcrumbList schema allows you to mark up the breadcrumbs on your site to generate breadcrumb rich snippets for your pages in the SERPs.

Site Navigation Schema Markup http://schema.org/SiteNavigationElement
/ The SiteNavigationElement markup can help increase search engines’ understanding of your site structure and navigation and can be used to influence organic sitelinks.

Video Schema Markup https://schema.org/VideoObject
/ A site with embedded or hosted video content can leverage the VideoObject schema. Google primarily displays video rich snippets for YouTube videos, but this will help video rich snippets to appear for your Web pages in Google Video Search.

Schema Software Application Markup https://schema.org/SoftwareApplication
/ Leverage the SoftwareApplication markup on your software apps to enable app rich snippets.

----------------------
E-Commerce sites
----------------------

Schema Product & Offer Markup http://schema.org/Product and http://schema.org/Offer
/ Used together, the Product and Offer markups can help product information to appear in the SERPs, including price and status information. Note that the Offer markup is required in order for the price to appear in Google SERPs.

Schema Rating Markup https://schema.org/Rating
/ The Rating schema is primarily used on e-commerce sites but can also be used for a local business site, such as a restaurant. When an item has multiple ratings that have been averaged together to produce an aggregate rating, you’ll want to use the AggregateRating schema.
/ Note: Google assumes that you use a five-point scale, with 1 being the worst and 5 being the best. If you use anything other than a 1–5 scale, you’ll need to indicate the highest possible rating with the “bestRating” property. These markups will help generate star rating rich snippets in the SERPs.
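A combined sketch of the Product, Offer and AggregateRating types in JSON-LD (the values are placeholders, and the same caveat about server-side rendering applies):

```javascript
const productSchema = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'School Backpack',
  image: 'https://www.example.com/images/backpack.jpg',
  aggregateRating: {
    '@type': 'AggregateRating',
    ratingValue: '4.4',   // assumed 1-5 scale, so bestRating is not needed
    reviewCount: '89',
  },
  offers: {
    '@type': 'Offer',
    priceCurrency: 'GBP',
    price: '34.99',
    availability: 'https://schema.org/InStock',
  },
};

const tag = document.createElement('script');
tag.type = 'application/ld+json';
tag.text = JSON.stringify(productSchema);
document.head.appendChild(tag);
```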

----------------------
Publisher Sites
----------------------

Schema Article Markup https://schema.org/NewsArticle and https://schema.org/BlogPosting
/ If you’re a publisher website, the NewsArticle or BlogPosting schemas are recommended (choose one or the other, depending on your site/content).
/ Leveraging these markups accordingly can help your content to appear in Google News and in-depth articles search suggestions.

----------------------
Local Business Sites
----------------------

Schema Local Business Markup https://schema.org/LocalBusiness and https://schema.org/PostalAddress
/ You can leverage LocalBusiness and PostalAddress schema markup to impact your local listing. This markup can be implemented on sites with brick-and-mortar locations. The schemas can be used to indicate your physical address, opening hours, payment types accepted and more.
/ Keep in mind that there are also industry-specific schemas, such as AutomotiveBusiness, SelfStorage, TravelAgency and many more.

----------------------
Event Sites
----------------------

Event Schema Markup https://schema.org/Event
The Event markup can be used for sites that organize events, musical concerts or art festivals to generate event rich snippets.


SEO - How to improve your site indexation – XML Sitemaps Case Study - Bruce Clay

Google Webmaster Tools (GWTs) is always a good place to start when optimising a site for Search Engines (SEs). In fact this can give you an indication of site health as it highlights any possible issues or problems your site might have. One important indicator is the number of pages you have indexed in Google …
Norman Pongracz's insight:

There have been numerous case studies on how XML sitemaps benefit different sites with increased visits, revenues by just allowing the crawlers to find the important pages faster.

 

Google itself produced a case study years ago called "Sitemaps: Above and Beyond the Crawl of Duty" (http://www2009.eprints.org/100/1/p991.pdf) that analysed multiple domains, such as Amazon and CNN, to demonstrate the effectiveness of sitemaps.

 

According to these statistics, Google finds over 70% of the important URLs through XML sitemaps and only about 30% of the URLs through crawling the interlinking structure (often referred to as “Discovery”). We, at Forward3D, consider XML sitemaps an essential part of every website, and these studies show that Google significantly relies on the URLs and information listed in these documents – hence the complexity of the XML sitemap implementation; crawlers will only trust well-defined directives.
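For reference, a sitemap only earns that trust if it is well formed; the sketch below writes a minimal one with Node's standard library (the URLs are placeholders, and large sites would normally generate and split these files automatically):

```javascript
const fs = require('fs');

const urls = [
  'https://www.example.com/',
  'https://www.example.com/category/backpacks',
  'https://www.example.com/products/blue-canvas-backpack',
];

// build one <url> entry per page
const entries = urls
  .map(loc => `  <url>\n    <loc>${loc}</loc>\n    <changefreq>daily</changefreq>\n  </url>`)
  .join('\n');

const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${entries}
</urlset>
`;

fs.writeFileSync('sitemap.xml', sitemap);
```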

 

XML sitemaps has been tested across multiple industries and domain of different sizes: MOZ’s case study on the UK based Razoo (http://moz.com/blog/multiple-xml-sitemaps-increased-indexation-and-traffic) indicated that after the XML sitemap implementation, the number of pages ranking were increased from 486 to 1240, while the keywords sending organic search traffic were more than doubled (from 548 to 1347).

 

A case study on a large site (http://www.bruceclay.com/blog/how-to-improve-your-site-indexation-xml-sitemaps-case-study/) showed an increase in page indexation from 24% to 68% in the initial period, right after the XML sitemap was implemented; this resulted in significant improvements in SEO traffic.

 

Also, there have been some smaller scale experiments; reports showed that Google’s crawl rate on the domain “Scaling Bits” (www.scalingbits.com/web/sitemaps) has increased by 100% after the XML sitemap was implemented driving up traffic by 30%. Finally, TechCrunch had a case study on an extensive SEO project that included an XML sitemap implementation with very similar results: 30% additional traffic across site (http://neilpatel.com/case-study-techcrunch/).

 

When it comes to New Look, we expect similar results. If pages are indexed properly, traffic can increase by as much as 30%, with a potential increase in revenues of at least 5.25% (based on our initial forecast).

 

Whitepaper: Taking on big competitors with local SEO | STAT Search Analytics

How do people really do local search? We looked at auto insurance in the USA, and found some surprising things about local SEO that folks in every industry should know.
Norman Pongracz's insight:

Case study on American auto insurance local queries by GetStat

 

The auto insurance industry was chosen because it offers localized products and services but isn’t dependent on brick-and-mortar sales, and because it includes one of the biggest American brands, Geico.

 

The study was based on syntax variants of 33 basic short-tail keywords related to auto insurance, each with a search volume greater than 100 per month.

 

Findings

 

Query Structure Matters
As we expected, when you only look at search volume, the short-tail unmodified queries win hands-down. A strong majority of desktop searchers do not geo-modify (geo-modify: add location to query) queries for local services that are not tied to brick-and-mortar locations. Instead, they are using simple short-tail keywords like [auto insurance quotes] and letting Google geo-locate (show localised result for any keyword query) them.

 

Every Query is Local
Even if your sales aren’t tied to brick-and-mortar locations, you still need to be looking at the local picture. The reality is that every search is now local. Google routinely modifies desktop and mobile search results based on location — even for queries that do not explicitly include geo-modifiers. That means if you’re not tracking keywords on a local level, you’re missing the majority of the competitive landscape.

 

Big players have weaknesses as well
Things can seem hopeless looking at the national picture. But by going deeper with localized ranking data and analysis, you’ll find that even the biggest brands aren’t dominating the SERPS in every local market. These are opportunities that you can identify and strategically exploit in any industry. And it’s not just about large regions like states or provinces. You can dig down to the level of individual cities or even postal codes and ZIP codes to micro-target your SEO strategy.


Google To Warn Searchers When A Mobile URL Redirects To The Homepage


Google alerted webmasters late yesterday that it will let smartphone searchers know if it thinks a website has a “faulty redirect” in place that sends the searcher to your home page, not the page they clicked on.


Via Bonnie Burns
Norman Pongracz's insight:

We’d like to spare users the frustration of landing on irrelevant pages and help webmasters fix the faulty redirects. Starting today in our English search results in the US, whenever we detect that smartphone users are redirected to a homepage instead of the page they asked for, we may note it below the result. If you still wish to proceed to the page, you can click “Try anyway.”


But Google’s not just warning searchers; there’s also help for webmasters. The “Crawl Errors” section of Webmaster Tools will offer specific information about faulty redirects affecting smartphone crawling.
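The fix on the site side is simply to send smartphone users to the equivalent mobile URL instead of the homepage. A minimal sketch, assuming an Express site with a separate m.example.com host (a responsive design avoids the problem altogether):

```javascript
const express = require('express');
const app = express();

app.use((req, res, next) => {
  const userAgent = req.get('User-Agent') || '';
  const isSmartphone = /Mobile|Android|iPhone/i.test(userAgent);
  if (isSmartphone && req.hostname === 'www.example.com') {
    // preserve the requested path and query string rather than sending
    // every mobile visitor to the homepage
    return res.redirect(302, 'https://m.example.com' + req.originalUrl);
  }
  next();
});

app.listen(3000);
```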


Basics of Debugging Google Analytics Code: GA Chrome Debugger and other tools

An overview of debugging tracking code, with information on common debugging tools and a deeper look into Chrome GA Debugger
Norman Pongracz's insight:

Debugging GA with GA Chrome Debugger and other tools without code access (summary)

 

Common Debugging Tools
-Fiddler2 makes it easy to view every request to Google Analytics (and any other javascript-based Web Analytics tool). You can even try out changes to your tracking code on your live system before releasing them to everyone, the tool is independent from browsers – and it is free!
-Web Analytics Solution Profiler (WASP)
-Charles Debugger
-Firebug/Chrome Developer Console https://www.youtube.com/watch?v=nOEw9iiopwI
The function I use most of the time apart from the Console (where errors are being logged) is the Network Tab. It can tell us if the tracking beacon has been sent to Google Analytics successfully. To find out, look for the __utm.gif request. If it displays a “200 OK” status code (see the green light in the screen shot), you know that Google Analytics has received the current Pageview or Event. You can take a look what is inside that request in the “Headers” tab (Cardinal Path’s Kent Clark’s marvelous “Cheat Sheet” helps interpreting the values). http://www.cardinalpath.com/wp-content/uploads/ruga_cheat_sheet.pdf

Chrome GA Debugger / ga_debug.js
Google’s recommended debugging tool for Google Analytics is Chrome’s Add-On “GA Debugger”. It is basically a form of using the “ga_debug.js” script without having to alter your page’s code at all (if you use ga_debug.js, you will have to change ga.js into /u/ga_debug.js on every page you want to debug). Chrome GA Debugger is a nice and easy-to-use tool that logs every Pageview and Event that you send to Google Analytics in your Chrome Developer Console (right-click on any part of the page => “Inspect Element” → go to tab “Console”):
Chrome GA Debugger shows you in an easy-to-read format what is being sent to Google Analytics without having to understand or inspect cookie variables or the Network Tab of your Console. It gives you hints like:
-Does my visit have the correct source/medium/campaign?
-Are there pages that accidentally override those sources?
-Are there pages where conflicting JavaScript or other reasons hinder the Tracking Code from being executed?
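As the article notes, the add-on is equivalent to swapping ga.js for the debug build by hand. A sketch of what that manual swap looks like with the classic asynchronous snippet (the UA-XXXXX-Y property ID is a placeholder):

```javascript
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-Y']);
_gaq.push(['_trackPageview']);

(function () {
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;
  // '/u/ga_debug.js' instead of '/ga.js' logs every hit to the browser console
  ga.src = ('https:' === document.location.protocol ? 'https://ssl' : 'http://www') +
           '.google-analytics.com/u/ga_debug.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
})();
```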

Fiddler, the browser-independent HTTP debugger and manipulator
With Fiddler, you can even debug your iPhone apps or anything else that does not run through a classic browser.

https://www.youtube.com/watch?v=jlQYf1DiA3U
-The filters to capture only the requests you need (e.g. the Google/Adobe/Webtrends Analytics HTTP requests)
-The inspector tab where you can investigate all the request’s parameters under “Web Forms”
-The AutoResponder that allows you to “kill” specific files or replace one (JavaScript or other) file by another one on your computer or somewhere else

Rewrite the HTML with FiddlerScript
With the AutoResponder, you can easily have your browser load the file you want instead of the default one. So if the code you want to debug is in that specific analytics_code.js file, you just download that file, change it the way you think it could work, save it to your hard drive as “my_analytics_test_code.js”, and then tell the AutoResponder that whenever it encounters “analytics_code.js”, it shall replace that file by the test file on your PC.
But there are some cases where you really have to alter the very HTML code of the page, not simply replace an entire file. Examples when you need this are:
-You want to change a line (or more) of the tracking code inside of your HTML, e.g. add a Webtrends HTML meta tag or a custom variable for Adobe Analytics or Google Analytics
-You want to rewrite an inline Event Tracking call (“inline” are those “onclick” handlers that are inside the link, e.g. < a onclick=”yourcall” >Link< /a >. You should avoid them by the way to keep JavaScript and HTML separate so it is less likely that you have to resort to the method I am describing here)
-You want to test-drive another tool, for example a Tag Management System, a heatmap tool, some conversion tracking for your email marketing tool, or whatever else that requires you to insert code into the web page – which would usually mean working yourself through your development release cycle first (and wait months, ask for budget etc…)
-You want to check whether it is the Tag Management System’s fault that a tag is not working. I had this case recently with a tag in Google Tag Manager (GTM). When I inserted it “the traditional way” by rewriting the HTML via Fiddler, I saw that the tag fired correctly. I realized that it was one of those tags that have to load synchronously, but GTM can only fire tags asynchronously (see some more drawbacks of Google Tag Manager). After some more hours of re-coding the tag, I finally got it working via GTM as well.

Additional tools: AddOn “FiddlerScript” and execute: http://www.telerik.com/fiddler/add-ons => “Syntax-Highlighting Add-Ons”

...


References:

Main article:
http://www.webanalyticsworld.net/2012/01/basics-of-debugging-google-analytics-code-ga-chrome-debugger-and-other-tools.html
http://www.webanalyticsworld.net/2014/05/advanced-analytics-debugging-no-code-access-no-problem.html

Additional articles:
http://www.cardinalpath.com/wp-content/uploads/ruga_cheat_sheet.pdf


Homepage Sliders: Bad For SEO, Bad For Usability

One of the most prevalent design flaws in B2B websites is the use of carousels (or sliders) on the homepage. Carousels are an ineffective way to target user personas, which ends up hurting the site’s SEO and usability. In fact, at the recent Conversion Conference in Chicago, about 25% of the speakers mentioned carousels — […]
Norman Pongracz's insight:

Problem with B2B Homepage Sliders (Summary)

 

With B2B websites, carousels seem to only be used for one of three reasons: branding, thought leadership or product/service promotion.

Problems:

Alternating Headings - In many of the carousels, the headings in the slider were wrapped in an h1 tag. Basic SEO best practices state that there should only be one h1 tag per page, and it should appear before any other heading tag. The problem with using an h1 (or any heading tag) in the carousel is that every time the slide changes, the h1 tag changes. A page with five slides in the carousel will have 5 h1 tags, which greatly devalues the keyword relevance.

SEO issues
-Flash Usage - A few of the websites serve up slider content using Flash. Avoiding Flash to serve up content is SEO 101.
-Poor Performance - As with any website, the more you complicate and add things, the slower the page loading speed. A few sites featuring full-width carousels packed with high resolution images greatly impacts the page load speed.
-Content Replacement - As stated earlier, carousels are used as an ineffective method of targeting user personas. Many websites take this to an extreme by using shallow content on the page.

Usability Issues
-Nobody Clicks On The Carousels
-Content Is Pushed Below The Fold
-The Megaphone Effect - When a user lands on a page, his or her attention is drawn to the carousel because it has revolving content, alternating text, colour changes, and all sorts of other attention-stealing features.
-Confusing Objectives - When a carousel is used, the user will assume the page talks about whatever heading is used in the carousel.


Official Google Webmaster Central Blog: Infinite scroll search-friendly recommendations

Norman Pongracz's insight:

Infinite scroll search-friendly recommendations

With infinite scroll, crawlers cannot always emulate manual user behaviour - like scrolling or clicking a button to load more items - so they don't always access all individual items in the feed or gallery. To make sure that search engines can crawl individual items linked from an infinite scroll page, your content management system should produce paginated series (component pages) to go along with your infinite scroll.
-Chunk your infinite-scroll page content into component pages that can be accessed when JavaScript is disabled.
-Determine how much content to include on each page while maintaining reasonable page load time.
-Divide content so that there’s no overlap between component pages in the series

Implement replaceState/pushState on the infinite scroll page (the decision to use one or both is up to you and your site’s user behaviour) for the following:
-Any user action that resembles a click or actively turning a page.
-To provide users with the ability to serially backup through the most recently paginated content.
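A minimal sketch of that pattern, assuming the gallery exposes each component page as a crawlable URL like /gallery?page=2 and a fragment endpoint for the items themselves (the URLs and element IDs are placeholders):

```javascript
// load one component page of the series and append its items
async function loadChunk(page) {
  const response = await fetch('/gallery?page=' + page + '&fragment=1');
  const html = await response.text();
  document.querySelector('#items').insertAdjacentHTML('beforeend', html);
}

// called when the user scrolls far enough (or clicks "load more") to need the next chunk
function onChunkNeeded(page) {
  loadChunk(page);
  // pushState for click-like actions; replaceState would suit passive scrolling
  history.pushState({ page: page }, '', '/gallery?page=' + page);
}

// let the back button walk serially through previously viewed component pages
window.addEventListener('popstate', (event) => {
  if (event.state && event.state.page) {
    loadChunk(event.state.page);
  }
});
```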


Beginners Guide to Universal Analytics - Creating Custom Dimensions & Metrics

Beginners Guide to Universal Analytics. Learn some quick tips to get started. Learn to create custom dimensions and custom metrics.
Norman Pongracz's insight:

Difference between Universal Analytics (UA) and Google Analytics (GA) - Excerpt

 

Data Collection and integration - UA provides more ways to collect and integrate different types of data (across multiple devices and platforms) than Google Analytics (GA). UA provides better understanding of relationship between online and offline marketing channels.

Data Processing - UA is visitor based instead of visit based.

Custom Dimensions and metrics - UA allows 'custom dimensions' and ‘custom metrics’ to collect the type of data GA does not automatically collect (like phone call data, CRM data etc). GA uses 'Custom variables' instead. Custom variables are still available in UA (not sure for how long). The user interface only changes when using custom dimensions and metrics.
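A minimal sketch of setting a custom dimension with the analytics.js library that UA uses (the property ID, dimension index and value are placeholders, and the dimension must first be registered in the property's admin settings):

```javascript
// standard analytics.js loader
(function (i, s, o, g, r, a, m) {
  i['GoogleAnalyticsObject'] = r; i[r] = i[r] || function () {
    (i[r].q = i[r].q || []).push(arguments);
  }; i[r].l = 1 * new Date(); a = s.createElement(o);
  m = s.getElementsByTagName(o)[0]; a.async = 1; a.src = g;
  m.parentNode.insertBefore(a, m);
})(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'dimension1', 'logged-in');   // hypothetical visitor-level dimension
ga('send', 'pageview');
```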

Javascript library - UA uses ‘analytics.js’ JS library whereas GA uses ‘ga.js’.

Tracking Code - GA uses UA tracking code

Remarketing - UA does not support ‘Re-marketing’ yet.

Referrals Processing - in UA, returning referrals are considered to be two web sessions.

Cookies - While GA can use up to 4 cookies (_utma,_utmb,_utmz and _utmv) to collect visitors’ usage data, UA uses only 1 cookie (called _ga).

Privacy and data usage - You need to give your end users proper notice and get consent about what data you will collect via UA. You also need to give your end users the opportunity to ‘opt out’ from being tracked. That means you need to make changes in your privacy and data usage policies. Google recommends using Google Analytics opt out browser add on if you want to block Google Analytics.


Bruce Clay EU - Theming Through Siloing

Theme building through directory based and virtual link silos. Europe
Norman Pongracz's insight:

Possible Alternatives to Eliminate Excessive Navigation or Cross Linking (Excerpt)


When it is impossible to remove menus that contradict subject relevant categories, instead use technology to block the search engine spider's indexing of those specific elements to maintain quality subject relevance.

 

IFRAMEs: If you have repetitive elements, add an IFRAME to isolate the object to one location and eliminate subject confusion from interlinking. The contents of an IFRAME are an external element that is not a part of the page, or of any page except the HTML of the IFRAME contents file itself. As such, IFRAME contents do not count as a part of any page displaying the IFRAME code.

 

Ajax: Ajax code included dynamically into a web page cannot be indexed in search engines, providing the perfect haven for content, menus and other widgets for user's eyes only.
