This philosophy is beautiful in its simplicity, and it serves to correct the “more, more, more” mentality of link building. We only want links from relevant sources. Often, this means that in order to scale our link-building efforts beyond the obvious tactics, we need to create something that deserves links. You have links where it makes sense for you to have links. Simple.
Instead, in this instance, we started at the wireframe stage, plopping in keywords and meta tags. Of course, the site really needed those things, and although it launched technically “optimized,” that wasn’t enough to make it a better product than our top competitors: a product that people want to visit, revisit, email to friends, share on social networks, and link to more than the alternatives. It wasn’t even enough to move up in the rankings.
Search engines use complex mathematical algorithms to interpret which websites a user seeks. In this diagram, each bubble represents a website, and programs sometimes called spiders examine which sites link to which others, with arrows representing these links. Websites that receive more inbound links, or stronger links, are presumed to be more important and more likely to be what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links “carry through”: website C, even though it has only one inbound link, receives that link from a highly popular site (B) while site E does not, so C ranks more highly than E.
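The “importance flows through links” idea described above is the intuition behind PageRank-style scoring. Below is a toy power-iteration sketch in Python; the link graph and node names are illustrative assumptions, not the article’s actual diagram:

```python
# Toy PageRank via power iteration over a tiny link graph.
# The graph below is a made-up example for illustration.

links = {            # page -> pages it links out to
    "A": ["B"],
    "B": ["C"],
    "C": ["B"],
    "D": ["B"],
    "E": ["B", "D"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with equal scores
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest of its score along its outbound links.
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(links)
```

Because B collects the most inbound links it ends up with the highest score, and C places second largely because its single inbound link comes from the highly ranked B — the “carry through” effect the passage describes.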
Yesterday I was redoing our process for ideas, and Alltop was a part of it. Now, I’ve always known it was a bit spammy (some of my grey sites are featured), but now it seems way too bad. You have places like The New York Times next to random AdSense blog X. Guy Kawasaki really needs to start giving some sort of influence ranking, or at least culling the total crap ones.
SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat"). The search engines attempt to minimize the effect of the latter, which includes spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned, either temporarily or permanently, once the search engines discover what they are doing.
Content gaps – take an inventory of the site’s key content assets: is the site missing any foundational/cornerstone content pieces, content types it doesn’t yet offer, or relevant topic areas that haven’t been covered? What topics or content do your competitors cover that you don’t? Can you beat your competitors’ information-rich content assets? Useful guides on Content Gap Analysis:
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results. On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing, Caffeine changed the way Google updated its index so that new content would show up in results faster than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..." Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.
Great post. I know that with most of this stuff, experienced people read it and think “I know that already”… but actually we tend to forget a lot of things even though we know them, so it’s always good to read posts like this. What I liked most was the broken-link solution: not only creating a substitute for the broken link, but actually going beyond that. I know some people do this purely as an SEO technique, but it’s also genuinely useful for the internet, since you repair broken links that others would otherwise keep finding.
If you haven't seen it already, check out the links in shor's comment below - there are some great resources in there. In some cases you can also consider surveying your current audience or customers through email, on-site surveys, or SurveyMonkey. Be sure to ask for some profiling information that you can use for determining specific persona needs, like age, sex, location, etc. (Probably best not to make it sound like a creepy text chat like I just did, though...) :)
For some reason I had to delete some pages. These pages use the .html suffix, so I blocked them in robots.txt with Disallow: /*.html, but it’s been almost a year and I’ve found that Google’s robot still often crawls these pages. How can I quickly get Google to remove these pages completely? I have also removed these URLs from Google Webmaster Tools via Google Index → Remove URLs, but Google still crawls them.
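A note on the behavior described above: a robots.txt Disallow only blocks crawling; it does not remove URLs that are already in the index, and a blocked page can never show Google a removal signal, because crawlers are forbidden to fetch it. The commonly documented fix (an editorial addition here, not part of the original comment) is to lift the robots.txt block and have each deleted page serve a noindex directive, or return a 404/410 status:

```html
<!-- Served on each page that should drop out of the index.
     The robots.txt block must be removed first, or crawlers
     will never see this directive. -->
<meta name="robots" content="noindex">
```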
It’s not enough to produce great content and hope that people find it – you have to be proactive. One of the best ways to increase traffic to your website is to use social media channels to promote your content. Twitter is ideal for short, snappy (and tempting) links, whereas Google+ promotion can help your site show up in personalized search results and seems especially effective in B2B niches. If you’re a B2C product company, you might find great traction with image-heavy social sites like Pinterest and Instagram. Here's more advice on making the most of social media marketing.
You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine's search results. If you do want to prevent search engines from crawling your pages, Google Search Console has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you'll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files.
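As a minimal sketch of what such a file looks like (the domain and paths below are hypothetical), robots.txt sits at the root of each host:

```
# https://www.example.com/robots.txt
User-agent: *
Disallow: /internal-search/
Disallow: /checkout/
```

A file at https://blog.example.com/robots.txt would be fetched and applied separately; rules never carry across subdomains, which is why each subdomain needs its own copy.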
Dedicate some time to brainstorm all the different ways you can attract inbound links to your website. Start small: maybe share your links with other local businesses in exchange for links to their sites. Write a few blog posts and share them on Twitter, Facebook, Google+, and LinkedIn. Consider approaching other bloggers for guest blogging opportunities through which you can link back to your website.
Brian, I’ve drunk your Kool-Aid! Thank you for the honesty and transparency – it really gives me hope. Quick question: I am beyond passionate about a niche (UFOs, extraterrestrials, free energy) and know in my bones that an authority site is a long-term opportunity. The problem today is that not many products are attached to this niche, so it becomes a subscriber/info-product play. However, after 25+ years as an entrepreneur with a financial background and a marketing MBA, am I Internet-naive to believe that my passion and creativity will win profitability in the end? The target audience is highly passionate too. Feedback?
Provide full functionality on all devices. Mobile users expect the same functionality - such as commenting and check-out - and the same content on mobile as on all other devices that your website supports. In addition to textual content, make sure that all important images and videos are embedded and accessible on mobile devices. For search engines, provide all structured data and other metadata - such as titles, descriptions, link elements, and other meta tags - on all versions of the pages.
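When mobile content lives on separate URLs, the widely documented way to tell search engines that the two versions carry the same content and metadata is a pair of link elements (the domains below are placeholders, not from the original text):

```html
<!-- On the desktop page, https://www.example.com/page -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page, https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">
```

The alternate/canonical pair keeps ranking signals consolidated on one URL while still letting mobile users be served the mobile version.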