I consulted a few years ago, before Yahoo and CNET, and my clients were all small businesses, even friends' sites.  No matter the size of the project, you can still try to get some insight into your target audiences and what they need or want.  I mentioned in a previous comment that I once used search to gauge sentiment on a site versus its competitors by searching for a feature the site and its competitors all had, along with "like", "love", "hate", "wish", etc.  I also took note of who the people were who said those things and where they were talking (forums, Twitter, etc.).  It's a hacked-together manual approach, and although nowhere near the quality of a good market research report, at least I have a little bit of insight before going out and making site recommendations based solely on tags and links.  If you're recommending that the site build things people want (and fix or remove things they don't), you're more likely to gain links and traffic naturally.
In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting through use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the use of nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus still permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash and JavaScript.[32]
Regarding internal linking, I believe that when two links point to the same internal page, and one of them belongs to the group I mentioned above, only the one which feeds the algorithm with more information will be counted. In sites that have the menu before the content, that will be the second link. I think that’s the smart way for them to analyse all the links and better understand the destination page’s content. And they are smart 😉 .

As I had a teacher at school who was always really picky about how to draw conclusions, I must say that the conclusions you drew for your health situation might be true, but they are dangerous. For example: if slightly more women than men suffer from health diseases, it could be wise to write the information toward women. But if you take search behaviour into account, things could look a lot different: it might turn out that men search more than women, or that (senior) men are more present on the net than women.

To prevent some users from linking to one version of a URL and others from linking to a different version (which could split the reputation of that content between the URLs), focus on using and referring to one URL in the structure and internal linking of your pages. If you do find that people are accessing the same content through multiple URLs, setting up a 301 redirect from non-preferred URLs to the dominant URL is a good solution. If you cannot redirect, you may also use the rel="canonical" link element to indicate the preferred URL.
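To make the redirect idea concrete, here is a minimal sketch of a 301 redirect in Python with Flask (the framework, route and target URL are illustrative assumptions; your own server or CMS will have its own redirect mechanism):

```python
# Minimal sketch: permanently redirecting a non-preferred URL to the
# dominant URL so link reputation consolidates on a single address.
# The route and domain below are placeholders.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/old-page.html")
def old_url():
    # 301 tells browsers and crawlers the move is permanent.
    return redirect("https://www.example.com/new-page/", code=301)
```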


Hey Brian, this article is really, really awesome. Seriously, you have covered all 21 points, which I've never read anywhere else on the internet. Everyone shares the basics, but here you have shared awesome info, especially the Facebook keyword research, the 1000+ words per post, and the Wikipedia ideas, which are really good and helpful. Learned many things from this article. Keep sharing this kind of info, thanks!
You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine's search results. If you do want to prevent search engines from crawling your pages, Google Search Console has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you'll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest the Webmaster Help Center guide on using robots.txt files.
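For a rough sense of how a crawler interprets these rules, here is a small sketch using Python's standard urllib.robotparser module (the domain, paths and user-agent string are placeholders):

```python
# Sketch: checking whether a path may be crawled according to robots.txt.
# The domain and paths below are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the robots.txt file

# A well-behaved crawler asks before fetching each URL.
print(rp.can_fetch("Googlebot", "https://www.example.com/public-page.html"))
print(rp.can_fetch("Googlebot", "https://www.example.com/private/"))
```

Remember that, as noted above, a robots.txt file only covers the host it lives on, so each subdomain needs its own file.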
People love to learn, and webinars are an excellent way to impart your wisdom to your eagerly waiting audience. Combined with an effective social promotion campaign, webinars are a great way to increase traffic to your website. Send out an email a week or so ahead of time, as well as a “last chance to register” reminder the day before the webinar. Make sure to archive the presentation for later viewing, and promote your webinars widely through social media. If you're wondering how to do a webinar, click the link for some tips.
So many great tips! There are a couple of things I’ve implemented recently to try and boost traffic. One is to make a PDF version of my post that people can download. It’s a great way to build a list :) Another is to make a podcast out of my post; I can then take a snippet of it and place it on my Facebook page as well as syndicate it. As far as video goes, I’ve started to create a video with just a few key points from the post. The suggestion about going back to past articles is a tip I am definitely going to use, especially since long-form content is so important. Thanks!

Search engines use complex mathematical algorithms to interpret which websites a user seeks. If each bubble in a link diagram represents a website, programs sometimes called spiders examine which sites link to which other sites, with arrows representing those links. Websites receiving more inbound links, or stronger links, are presumed to be more important and more likely to be what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links "carry through": website C, even though it only has one inbound link, has an inbound link from a highly popular site (B), while site E does not.
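As a rough sketch of how inbound links "carry through", here is a toy PageRank-style iteration over a small, made-up link graph labelled A–E (the graph and damping factor are illustrative assumptions; real ranking algorithms use many more signals):

```python
# Toy PageRank-style iteration over a small, made-up link graph.
# Keys are pages; values are the pages each one links to.
links = {
    "A": ["B"],
    "C": ["B"],
    "D": ["B", "E"],
    "E": ["B"],
    "B": ["C"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for page in pages:
        # Each page passes its rank, split evenly, to the pages it links to.
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

# B, with many inbound links, ends up highest; C benefits from B's single link.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```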
