In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites had copied content from one another and benefited in search engine rankings by doing so; Google's new system, however, punishes sites whose content is not unique.[36] The 2012 Google Penguin update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[37] Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[38] by gauging the quality of the sites the links come from. The 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages. Hummingbird's language processing system falls under the newly recognized term of 'conversational search', where the system pays more attention to each word in the query in order to match pages to the meaning of the whole query rather than to a few individual words.[39] As far as changes to search engine optimization are concerned, Hummingbird is intended to resolve issues for content publishers and writers by getting rid of irrelevant content and spam, allowing Google to surface high-quality content and rely on its creators as 'trusted' authors.
Brian hello! First off I want to THANK YOU for this fantastic post. I can’t emphasize that enough. I have this bookmarked and keep going through it to help boost our blog. I totally nerded out on this, especially the LSI keywords which made my day. I know, pathetic, right? But when so much changes in SEO all the time, these kinds of posts are so helpful. So thanks for this. So no question – just praise, hope that’s ok 😁
Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn't feasible. In this case, you could automatically generate description meta tags based on each page's content.
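If you do go the automated route, the goal is still a unique, readable summary for each page rather than a keyword dump. Here is a minimal Python sketch of the idea, assuming your pages are available as (URL, body text) pairs; make_meta_description and the sample pages are hypothetical names used only for illustration.

```python
# A minimal sketch of programmatically generated description meta tags.
import html
import re

def make_meta_description(body_text: str, max_len: int = 155) -> str:
    """Build a description value from the opening text of a page."""
    # Collapse whitespace so the snippet reads as one clean sentence.
    text = re.sub(r"\s+", " ", body_text).strip()
    # Truncate on a word boundary so the description does not cut mid-word.
    if len(text) > max_len:
        text = text[:max_len].rsplit(" ", 1)[0] + "…"
    # Escape quotes and angle brackets before placing the text in an attribute.
    return html.escape(text, quote=True)

pages = [
    ("/blue-widgets", "Our blue widgets are hand-made in small batches and ship worldwide within two days."),
    ("/red-widgets", "Red widgets come in three sizes and include a two-year warranty on all parts."),
]

for url, body in pages:
    print(f'{url}: <meta name="description" content="{make_meta_description(body)}">')
```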
Sure, we did keyword research, and we recommended partnerships, widgets, and architecture improvements. But we didn’t step back and take a good look at our target audiences, at which sites were meeting their specific needs in the search results, and at what we specifically could build into the product that would be far more desirable than anything else out there (ideally something no one had even thought of yet), to make sure our entire site was superior and would inevitably pull search traffic away from our competitors.
Users will occasionally come to a page that doesn't exist on your site, either by following a broken link or typing in the wrong URL. Having a custom 404 page that kindly guides users back to a working page on your site can greatly improve a user's experience. Your 404 page should probably have a link back to your root page and could also provide links to popular or related content on your site. You can use Google Search Console to find the sources of URLs causing "not found" errors.
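For example, if your site runs on a Python web framework, wiring up a friendly 404 handler takes only a few lines. The sketch below uses Flask purely as an illustration; the 404.html template (with links back to the homepage and to popular content) is an assumed file, and the key detail is returning a real 404 status code so search engines don't treat the error page as normal, indexable content.

```python
# A minimal sketch of a custom 404 page in Flask; "404.html" is a hypothetical
# template containing a link to the homepage and to popular content.
from flask import Flask, render_template

app = Flask(__name__)

@app.errorhandler(404)
def page_not_found(error):
    # Serve the friendly page, but keep the 404 status code so the response
    # is not mistaken for a normal page by search engines.
    return render_template("404.html"), 404
```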
This toolbar is based on the LRT Power*Trust metric that we’ve been using to identify spammy and great links in LinkResearchTools and Link Detox since 2012; the free browser extension was only recently launched. It helps you quickly evaluate the power and trustworthiness of a website or page as you browse, far more precisely than Google PageRank ever did.
After you have identified your target keywords, you need to create a page targeting each keyword. This is known as SEO content. In many cases, it makes sense to publish a blog post targeting the keyword. However, you need to make that decision based on search intent. If your target keyword phrase is “buy black Nike shoes”, then it doesn’t make sense to create a long-form piece of content.
Well, the age of print media is coming to a close. But there’s no reason why some enterprising blogger couldn’t use the same tactic to get new subscribers. Let’s say you have a lifestyle blog targeting people in San Francisco. You could promote the giveaway through local media, posters, and many other tactics (we’ll get into these methods shortly).
Hey Brian, love your site + content. Really awesome stuff! I have a question about dead link building on Wikipedia. I actually got a “user talk” message from someone moderating a Wikipedia page I replaced a dead link on. They claimed that “Wikipedia uses nofollow tags” so “additions of links to Wikipedia will not alter search engine rankings.” Any thoughts here?
As I had a teacher at school who was always really picky about how to draw conclusions, I must say that the conclusions you drew for your health situation might be true, but dangerous. For example: if slightly more women than men suffer from health diseases, it could be wise to write the information toward women. But if you take search behaviour into account, things could look a lot different: it might turn out that men search more than women, or that (senior) men are more present on the net than women.

In addition to optimizing these six areas of your site, analyze your competitors and see what they are doing in terms of on-page optimization, off-page optimization (competitive link analysis) and social media. While you may be doing a lot of the same things they are, it’s incredibly important to think outside the box to get a leg up on the competition.

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[47]
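As an illustration of how a well-behaved crawler interprets these rules, the short Python sketch below uses only the standard library's urllib.robotparser; the domain, paths, and robots.txt contents are hypothetical.

```python
# Suppose https://www.example.com/robots.txt contains:
#   User-agent: *
#   Disallow: /search
#   Disallow: /cart/
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the file, just as a crawler would on arrival

# Internal search results and cart pages are disallowed, so these return False.
print(rp.can_fetch("*", "https://www.example.com/search?q=widgets"))  # False
print(rp.can_fetch("*", "https://www.example.com/cart/checkout"))     # False
print(rp.can_fetch("*", "https://www.example.com/blog/some-post"))    # True
```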
Google Analytics is free to use, and the insights gleaned from it can help you to drive further traffic to your website. Use tracked links for your marketing campaigns and regularly check your website analytics. This will enable you to identify which strategies and types of content work, which ones need improvement, and which ones you should not waste your time on.
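Tracked links usually just mean appending UTM parameters (utm_source, utm_medium, utm_campaign) so Google Analytics can attribute each visit to a specific campaign. Below is a small Python sketch of building such links with the standard library; the base URL and campaign values are placeholders for illustration.

```python
# A minimal sketch of building UTM-tagged campaign links.
from urllib.parse import urlencode, urlparse, urlunparse

def tag_link(url: str, source: str, medium: str, campaign: str) -> str:
    """Append utm_source/utm_medium/utm_campaign to a URL."""
    parts = urlparse(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    # Preserve any query string the URL already carries.
    query = f"{parts.query}&{utm}" if parts.query else utm
    return urlunparse(parts._replace(query=query))

print(tag_link("https://www.example.com/blog/post", "newsletter", "email", "spring_launch"))
# https://www.example.com/blog/post?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch
```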

Do not be fooled by those traffic sellers promising thousands of hits an hour. What they really do is load up your URL in a program, along with a list of proxies, and then run the program for a few hours. It looks like someone is on your site because your logs show visitors from thousands of different IPs, but in reality your website is just being pinged by the proxies; no one actually sees your site. It is a waste of money.


Provide full functionality on all devices. Mobile users expect the same functionality - such as commenting and check-out - and the same content on mobile as on every other device your website supports. In addition to textual content, make sure that all important images and videos are embedded and accessible on mobile devices. For search engines, provide all structured data and other metadata - such as titles, descriptions, link elements, and other meta tags - on all versions of the pages.
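One practical way to keep the versions in sync is to spot-check that key metadata matches across them. The rough Python sketch below compares the title and description meta tag of a desktop URL and a separate mobile URL using only the standard library; the URLs are placeholders, and real sites may need JavaScript rendering or authentication that this simple fetch cannot handle.

```python
# A rough parity check for titles and meta descriptions across page versions.
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaExtractor(HTMLParser):
    """Collect the <title> text and the description meta tag of a page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def extract(url: str) -> tuple[str, str]:
    parser = MetaExtractor()
    with urlopen(url) as response:
        parser.feed(response.read().decode("utf-8", errors="replace"))
    return parser.title.strip(), parser.description.strip()

desktop = extract("https://www.example.com/pricing")
mobile = extract("https://m.example.com/pricing")
print("Titles match:      ", desktop[0] == mobile[0])
print("Descriptions match:", desktop[1] == mobile[1])
```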