May 31st, 2012
Google says it changes its search algorithm to deliver more relevant results to users. Everyone knows why the search giant does this – but have you ever wondered how it’s done? To answer that question, we’ll turn to a talk given earlier this month by Google Fellow Amit Singhal at SMX London.
When it comes to his work, Singhal really is "the human search engine." He's been working on unpaid search at Google for more than a decade. He dreams of creating a technology similar to the computer on the starship Enterprise, one that can answer questions such as "How did Alfred Nobel make his money?" At the moment, that's beyond the capability of modern search engines…but we're getting closer.
Trial and error plays a huge role in the process. Consider this: Google’s search algorithm simultaneously considers more than 200 factors when ranking results. The company launched more than 500 changes to that algorithm last year alone. But according to Singhal, “Concurrently we have approximately 100 ideas floating around that people are testing – we test thousands in a year. Last year we ran around 20,000 experiments. Clearly they don’t all make it out there but we run the process very scientifically.”
Each algorithm change goes through the same process: build, evaluate, launch, learn, improve, repeat. A change is proposed, tested on a small scale, evaluated, scaled up, tested again, evaluated, and so forth. Vanessa Fox at Search Engine Land described a five-step process:
First, a Google engineer comes up with an idea for a signal to introduce or adjust to improve the relevance of the search engine’s results.
Second, developers run the algorithm change on test data. If it runs smoothly, human raters enter the equation in a sort of blind A/B test. They take a look at search results for a wide range of queries, but aren’t told which results were compiled before incorporating the change, and which came after the change. The human raters state in their reports what percentage of results became more relevant, and what percentage became less relevant.
Third, engineers tweak the algorithm and repeat step two several times to reduce the percentage of queries that become less relevant after the change. Until the ratings from human testers show that the tweak makes the results better overall, Google doesn’t move this change on to the next step.
Fourth, the algorithm change goes into wider testing. Google rolls it out at one of its many data centers, and rolls out modified results to perhaps one percent of the queries that hit the center. What do searchers think of these new results? Singhal notes that searcher behavior answers that question. Searchers clicking on higher ranked pages likely means that the top results are more relevant.
Finally, Google gets a statistical analysis of the results from an independent analyst. These reports get presented once a week at search quality meetings. At these meetings, engineers examine the data, discuss the effects of the changes, and debate whether or not to roll them out more widely. Does the change improve the quality of search results overall? Is it good for the web? Can the internal systems handle it without becoming overly taxed? In these typically hour-long discussions, participants answer these questions for a number of proposed changes; some will roll out, while others will get held back for more research, development, and improvement. Some ideas might even be shelved indefinitely.
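To make step two a bit more concrete, here is a minimal sketch of how that kind of blind rating data might be aggregated. The sample verdicts and the sign test are illustrative assumptions, not Google's actual tooling; they simply show the sort of "percent better / percent worse" summary and statistical sanity check the process describes.

```python
from math import comb

# Hypothetical blind side-by-side ratings: for each sampled query, a human
# rater marks whether the result set built WITH the proposed change was
# better, worse, or about the same as the one built without it.
ratings = ["better", "same", "better", "worse", "better", "same",
           "better", "worse", "better", "better", "same", "better"]

better = ratings.count("better")
worse = ratings.count("worse")
decisive = better + worse

print(f"More relevant:  {better / len(ratings):.0%} of queries")
print(f"Less relevant:  {worse / len(ratings):.0%} of queries")

# Two-sided sign test on the decisive ratings: how likely is a split at
# least this lopsided if the change actually made no difference?
k = min(better, worse)
p_value = 2 * sum(comb(decisive, i) for i in range(k + 1)) / 2 ** decisive
print(f"Sign-test p-value: {min(p_value, 1.0):.3f}")
```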
It’s worth noting, by the way, that never in this process does Google consider how a change to its unpaid search algorithm will affect its revenue. Indeed, Singhal insisted, in response to a question posed after he finished his keynote address, that “no revenue measurement is included in our evaluation of a rankings change.” For Google, relevance trumps everything.
May 31st, 2012
Last week's Facebook IPO may go down in financial infamy as the biggest and most high-profile IPO flop of the century – and this Internet marketer's content marketing efforts may have inadvertently triggered the epic Facebook IPO meltdown. Sounds ridiculous and completely preposterous, right? Read to the end of the story and decide for yourself!
Earlier this month, my colleagues at WordStream and I were planning our monthly content marketing effort. Like most other companies, we do a lot of content marketing on our blog, but we also make a point to publish at least one high-level story that might appeal to people who aren’t expert search marketers. The goals of these efforts are to raise the general public awareness of our company, and also drive valuable editorial links to our site.
This month, with all the intense media focus on the Facebook IPO, our big idea was to develop a study comparing the effectiveness of Facebook Advertising vs. the Google Display Network. For those of you who don't already know, the Google Display Network is the banner advertising component of Google's business – it consists of banner ads on Google sites like YouTube, Blogger, and Gmail, as well as on over two million other popular websites (it's roughly 25% of Google's total business). Our research compared the two biggest banner advertising venues on the Internet based on criteria such as advertising reach, supported ad formats, ad targeting options, advertising performance, etc.
Our study concluded (for now at least) that the Google Display Network offered advertisers greater value in the areas that we tested, which came as a surprise to many investors who aren't familiar with the two advertising platforms. To help people visualize our study results, we partnered with Brian Wallace at NowSourcing, who turned our research into an Infographic.
We timed the launch of our Infographic for Tuesday, May 15 at 9 AM EST, just three days before the Facebook IPO. We wrote a simple press release and a blog post, and then reached out to some of our friends in the industry to help get the word out. Early on, Rand helped us out big time by posting our article to inbound.org (thanks, Rand!).
We also got press pick-ups from Jim Edwards at Business Insider, John Letzing at the Wall Street Journal, and Laurie Sullivan at MediaPost. By lunchtime, we were thrilled with all the progress – we had nearly a dozen major news pick-ups! Little did we know that we were sitting on a story that was about to explode.
As if to validate our research, a mere six hours after we launched our study, the Wall Street Journal announced that GM was dumping all advertising on Facebook, and our newly minted research – seen as a possible explanation for GM’s move – got picked up by Mashable, ABC, CBS, Fox Business, PC Mag, PC World, the Washington Post, USA Today, AFP, CNN, The Register, Fast Company, The Economist, Forbes, etc., etc. Here’s what it looked like on Google News:
In a matter of just hours, our Facebook Advertising study got picked up in thousands of the world’s leading news publications!
Our study got picked up by all of the world's largest news networks, such as Reuters, Agence France-Presse (AFP), the Associated Press (AP), and USA Today. Content from these news networks is translated into dozens of languages in over 150 countries – so, for example, our study was picked up by small-town newspapers like my college newspaper, the Kitchener-Waterloo Record in Canada, as well as by newspapers in New Zealand, Indonesia, Turkey, and many other countries I had never even heard of!
Within hours of publishing our study, we started getting phone calls from producers at major cable TV channels asking us to appear on Fox Business, as well as from national radio programs such as The Takeaway, the BBC and NPR. Here's a picture of Ralph Folz, WordStream CEO, on Fox Business last week!
I definitely have had my share of content creation efforts that went absolutely nowhere. Each effort is a learning experience, and so I’d like to share the five key ideas that I learned this time around!
When your content goes viral, expect to get a large chunk of traffic from social media outlets. Our top referring social sites were (in order): Facebook, Twitter, Tumblr, Reddit, Pinterest, Google+ and LinkedIn. I was quite surprised to see Pinterest beating out Google+ and LinkedIn. In fact, Pinterest was our 12th-largest source of referral traffic to our study. So if you ever do an Infographic, make sure you add that Pinterest button! (If you need to make space, dump the Delicious bookmark button – that was nearly useless.)
This isn't to say that the other social networks are any less important. In fact, Twitter is as effective as it has always been. For example, we got this mention in the Guardian just by reaching out to an author via Twitter!
The news cycle is fast – it’s important to adapt your story to the prevailing narrative, by anticipating different angles for your news. When we first launched our study, our press release headline read: New Research Compares Facebook Advertising to Google Display Network: Who Comes Out on Top? It was OK, but not viral.
But when the GM news broke, we re-released a similar press release with a slightly different angle: Does Facebook Advertising Work? This was because we found that the press was now looking for reasons why GM dumped Facebook. This new angle was much more effective than our original one.
And we didn’t stop there. I wrote follow-up stories, such as: Why I Bought Facebook IPO Shares Today and Why I Dumped My Facebook IPO Shares at the Open Today, to keep this thing going.
I did an interview with The Independent at 4 AM EST on Wednesday morning. Why? Because if you're a reporter in London, that's when you typically arrive in the office. The news media cycle is an incredible force that is both incredibly powerful and incredibly short. If you happen to catch a lucky break as we did, be prepared to do whatever it takes to make the most of it in the short amount of time you're in the media spotlight.
I have found that my most successful content marketing efforts are the ones that contain an unexpected result – something unusual or contrary to conventional wisdom. I call this the "so what" factor. As in: so what, why should anyone care about this story? Our research basically concluded (for now at least) that Facebook Advertising options aren't that effective, and that was a pretty profound conclusion given that Facebook is essentially an advertising company (86% of its revenues last year came from advertising), and given all the IPO hype and that exuberant $110B+ IPO valuation.
No, I don't actually think I ruined the Facebook IPO (apologies for the sensational headline, guys!). The Facebook IPO will go down in history as one of the worst IPOs for retail investors. Why did the Facebook IPO crash and burn? Because there were more sellers than buyers at that price level.
If you're looking for someone to blame for the loss of around $20 billion in Facebook market capitalization as of today, blame the greedy bankers and Facebook management for setting such a high IPO price, or Facebook for not yet developing compelling advertising options, or possibly GM for the unusual timing of their announcement.
But here's what I can say about the impact of my Facebook advertising study.
What do you think of our most recent content marketing effort? Any additional tips to share? Do you advertise on Facebook? What have your experiences been? Let me know your thoughts in the comments below!
May 31st, 2012
As almost anyone reading this post already knows, April 24, 2012 marked a big day in the search industry. Once the initial Penguin update was rolled out (please believe me, this is only the beginning and there is much more to come), the SEO industry as we know it exploded in a flurry of fear and satisfaction.
For those of us that had sites get hit (I’ll admit I had several sites dinged by this update) what started out as anger quickly turned to fear and curiosity. Many industry publications jumped the gun and, in my opinion, began publishing tips and processes on how to ‘recover from Penguin,’ when the truth is, as mentioned by Ian Howells in his recent SEO Podcast, it’s really too soon to tell the full effects of these algorithm updates and anyone out there preaching is really just speculating. The best information I have seen thus far is from The HOTH, and that is DON’T PANIC, and BE PATIENT.
I did, however, have a real life experience where one of my sites, my own personal blog, got hit for what I am now almost sure was over optimization, and I was able to recover. What’s really interesting to me is that my site, nickeubanks.com, got hit at all… let me explain. My personal site is low traffic, low importance. I do not build links to it, I do not monetize it, it really just exists to serve as my digital resume and a place for me to openly ramble or rant when I feel like it.
Here is a screen shot of Google on Friday, May 18, 2012. After a friend of mine reached out to me to ask if I had taken my site down (of course he didn't just go and check the domain) I asked him what he was searching for. He mentioned he had typed in some words in the title from what he could remember and my name – which should be more than sufficient to generate a SERP with my post(s). It did not. Instead this is what he was seeing:
With Inbound.org starting off the list, it was page after page of places that linked to my post – but not the post itself. My immediate reaction was fear that somehow my site had been sandboxed. So, to check, I did a quick search for the full post title in quotes and there it was… what does this mean? That my site was penalized… but for what? As I mentioned before, I don't do any active linking or advertising, and the site has slim to no traffic. My first thought was that I might have been a victim of Negative SEO. I logged into Webmaster Tools and pulled down my indexed Google back-link profile, which I have put into a public Google Doc here so you can see it. Upon review you'll see this is a pretty natural back-link profile, even with some links from some pretty authoritative websites… at this point I am scrambling for answers…
I was racking my brain to think of what it could possibly be that was causing my site to be buried in the SERPs, especially for posts that have a lot of natural links and social signals, and are full of unique, well-written content (note: I didn't write most of the content in these posts).
I reached out to my buddy Mark Kennedy, as among the Philly SEO crowd he is certainly one of the most passionate SEOs I have ever met. He had the same line of thinking that I did and immediately hit up Ahrefs looking for spam links or clues. Nothing. His next suggestion was to pore over any recent changes I had made to the website. I reviewed some of the CSS changes and couldn't find any messy code or mistakes that might have caused the site to be dinged (Did I mess up my headers? Did I botch a declaration?). Nothing.
The only thing I could think of was to really take a closer look at my links, so I started inspecting each of the sites that were linking to me. During this process I stumbled across my old blog from college, 23Run.com. Here are the Google indexed links from 23Run. As you can see there are 77 of them, which, out of my total indexed link profile, is roughly 11%.
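As an aside, if you want to run the same back-of-the-envelope check against your own export, a minimal sketch along these lines will do it. The CSV file name and column names are assumptions, not the exact Webmaster Tools format, but the arithmetic – the share of links per referring domain and per anchor text – is the same calculation described above.

```python
import csv
from collections import Counter
from urllib.parse import urlparse

# Hypothetical export of the indexed back-link profile; columns assumed
# to be: source_url, anchor_text.
with open("backlinks.csv", newline="") as f:
    links = list(csv.DictReader(f))

total = len(links) or 1
domains = Counter(urlparse(link["source_url"]).netloc for link in links)
anchors = Counter(link["anchor_text"].strip().lower() for link in links)

print("Top referring domains:")
for domain, count in domains.most_common(5):
    print(f"  {domain:<30} {count:>4}  ({count / total:.1%})")

print("Top anchor texts:")
for anchor, count in anchors.most_common(5):
    print(f"  {anchor[:40]:<40} {count:>4}  ({count / total:.1%})")
```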
I went to 23Run.com to take a closer look at how my site was linked:
And there it was… right in line with the Penguin post from Microsite Masters showing sample data from their analysis, I had over 10% of my links over-optimized for anchor text. So I made this quick change:
And then I needed to gauge roughly how long it would take for Google to crawl my site and index these changes, so I took a quick peek at my average crawl rate in Webmaster Tools:
Seeing that my average crawl rate was 59 pages, but my low was 24, I decided to give it the weekend and check back on Monday, May 21, 2012. When Monday's production activity calmed down, sometime in the early afternoon, I decided to run the query again and, alas:
It is still ranking underneath Inbound.org, which is a bit strange, but it’s back!
Furthermore, the post is back to ranking for more broad terms, such as ‘fresh insights nick eubanks’ as you can see below:
Plain and simple, over-optimized anchor text can be dangerous. What was once the holy grail of SEO, getting links with your target keywords in the anchor text, is now something that requires careful planning and attention.
My advice is to develop your link profile not just to look natural, but to be natural. If your anchor text is 'over-optimized', you run the risk of being penalized, so make the effort and put in the time to naturalize your links. Try to replace anchor text links with naked URLs or, at the very least, more natural anchor text – think about these links in the same way as someone who doesn't know you finding your page or post and creating a link organically; most likely it won't be your target keywords but your name, the page/post title, or more generic link text such as "read more," "learn more," etc.
I hope my real-life example proves useful and helps, in any small way, to dispel some of the speculation out there. Thanks for reading.
May 31st, 2012
It's that time once again! Mozscape's latest index update is live as of today (and new data is in OSE, the mozBar and PRO by tomorrow). This new index is our largest yet, at 164 billion URLs; however, that comes with a few caveats. The major one is that we've got a smaller-than-normal number of domains in this index, so you may see link counts rising while unique linking root domains shrink. I asked the team why this happened, and our data scientist, Matt, provided a great explanation:
We schedule URLs to be crawled based on their PA+external mozRank to crawl the highest quality pages. Since most high PA pages are on a few large sites this naturally biases to crawling fewer domains. To enforce some domain diversity the V2 crawlers introduced a set of domain mozRank limits that limit the crawl depth on each domain. However, this doesn’t guarantee a diverse index when the crawl schedule is full (as we had for Index 52).
In this case, many lower quality domains with low PA/DA are cut from the schedule and out of the index. This is the same problem we ran into when we first switched to the V2 crawlers last year and the domain diversity dropped way down. We’ve since fixed the problem by introducing another hard constraint that always schedules a few pages from each domain, regardless of PA. This was implemented a few weeks ago and the domain numbers for Index 53 are going back up to 153 million.
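The scheduling logic Matt describes – crawl the highest-scoring pages first, cap how much any one domain can contribute, and guarantee every domain a small foothold – can be sketched roughly as follows. This is an illustration under my own assumptions (the function name, parameters, and tuple format are invented for the example), not Moz's actual crawler code.

```python
from collections import defaultdict

def schedule_crawl(pages, budget, per_domain_cap, per_domain_min):
    """Illustrative crawl scheduler: take the highest-scoring pages overall,
    cap how many any one domain contributes, and guarantee every domain a
    few slots. `pages` is a list of (url, domain, score) tuples."""
    by_domain = defaultdict(list)
    for url, domain, score in pages:
        by_domain[domain].append((score, url))

    scheduled = set()
    counts = defaultdict(int)

    # Hard floor: every known domain keeps a small foothold in the index,
    # regardless of how its pages score.
    for domain, candidates in by_domain.items():
        for _, url in sorted(candidates, reverse=True)[:per_domain_min]:
            scheduled.add(url)
            counts[domain] += 1

    # Fill the rest of the budget in score order, respecting the cap.
    for url, domain, score in sorted(pages, key=lambda p: p[2], reverse=True):
        if len(scheduled) >= budget:
            break
        if url not in scheduled and counts[domain] < per_domain_cap:
            scheduled.add(url)
            counts[domain] += 1
    return scheduled
```

Even in this toy version the trade-off is visible: a tighter per-domain cap buys diversity at the cost of leaving out high-scoring pages on big sites, while the per-domain floor is what keeps smaller domains from vanishing when the schedule fills up.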
Thankfully, the domains affected should be at the far edges of the web – those that aren’t well linked-to or important. Still, we recognize this is important and thus are focused on balancing these moving forward.
Several other points may be of interest as well:
This bit is important: Next index, we’re going back down to between 70-90 billion URLs, and focusing on getting back to much fresher updates (we’re even aiming to get to updates every 2 weeks, though this is a challenging goal, not a guarantee). The 150 billion+ page indices are an awesome milestone, but as you’ve likely noticed, the extra data does not equate with hugely better correlations nor even with massively higher amounts of data on the URLs most of our customers care about (as an example, in index 50, we had ~53 billion pages and 82.09% of URLs requested had data). That said, once our architecture is more stable, we will be aiming to get to both huge index sizes and dramatically better freshness. Look for tons of work and improvements over the summer on both fronts.
Below are the stats for Index 52:
Feedback is greatly appreciated – this index should help with Penguin link data identification substantially more than our prior one, and the next one should be even more useful for that. Do remember that since this index stopped crawling and began processing in mid-April, link additions/removals that have happened since won't be reflected. Our next index will, hopefully, be out with 5 or fewer weeks of processing, to enhance that freshness. We're excited to see how this affects correlations and data quality.
May 31st, 2012
Hi, my name's Kurtis and I'm relatively new here at Moz. My official title is "Captain of Special Projects," which means I spend a lot of time browsing strange parts of the web, assembling metrics and inputting data into Google Docs/Excel. If you walk past my desk in the Mozplex, be warned: investigating webspam is on my task list, so you may come away slightly traumatized by what you see. I ward off the demons by taking care of two cats and fondly remembering my days as a semi-professional scoundrel in Minnesota.
Let’s move on to my first public project, which came about after Google deindexed several directories a few weeks ago. This event left us wondering if there was a rhyme to their reason. So we decided to do some intensive data collection of our own and try to figure out what was really going on.
We gathered a total of 2,678 directories from lists like Directory Maximizer, Val Web Design, SEOTIPSY.com, and SEOmoz's own directory list (just the web directories were used), and the search for clues began. Out of the 2,678 directories, only 200 were banned – not too shabby. However, there were 340 additional directories that had avoided being banned, but had been penalized.
We defined banned as having no results in Google when a site:domain.com search is performed:
We defined penalized as meaning the directory did not show up for highly obvious queries including its title tag / brand name, or only appeared deep in the results, even though it remained indexed (and this could be repeated for any internal pages on the site as well):
As you can see above, the directory itself is nowhere to be found despite the exact title query, yet it’s clearly still indexed (as you can see below by performing a domain name match query):
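If you want to repeat the exercise on your own list of directories, those two definitions reduce to a simple decision rule, sketched here; the function name, the page-one threshold, and the labels are illustrative assumptions rather than a formal spec of the exact process we followed.

```python
def classify_directory(site_results_count, brand_query_positions):
    """Encode the two checks described above as a simple decision rule.
    Inputs come from manual checks (or whatever rank-checking tool you
    trust): the number of results for a site: query, and the positions at
    which the directory appears for obvious title/brand queries."""
    if site_results_count == 0:
        return "banned"       # nothing indexed for site:domain.com at all
    if not any(pos <= 10 for pos in brand_query_positions):
        return "penalized"    # indexed, but invisible on page one even for its own brand
    return "unharmed"

# Example: ~1,200 pages indexed, but the best position for obvious
# title/brand queries is 47 -> treated as penalized.
print(classify_directory(1200, [47, 63]))
```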
At first, the banned directories seemed to share one common trait – none of them had a visible toolbar PageRank. For the most part, this initial observation was fairly accurate. As we pressed on, though, the results became more sporadic. This leads me to believe that it may have been a manual update rather than an algorithmic one – or at least that no particular public metrics/patterns are clear from the directories that suffered a penalization/ban.
That is not to say the ones left unharmed are safe from a future algorithmic update. In fact, I suspect this update was intended to serve as a warning; Google will be cracking down on directories. Why? In my own humble opinion, most of the classic, “built-for-SEO-and-links” directories do not provide any benefit to users, falling under the category of non-content spam.
Some directories and link resource lists are likely going to be valuable and useful long term (e.g. CSS Beauty’s collection of great designs, the Craft Site Directory or Public Legal’s list of legal resources). These are obviously not in the same world as those “SEO directories” and thus probably don’t deserve the same classification despite the nomenclature overlap.
In the midst of the panic, a concerned individual brought to my attention that “half of our directories were deindexed” and wanted to know when we would be updating our list. If by half he meant 5 of the 228 we listed were banned and an additional 5 just penalized, then I’d have to agree. In any case, our list is now updated. Thanks for being patient!
We’ve set up two spreadsheets that show which directories were banned and/or penalized, plus a bit of data about each one. Please feel free to check them out for yourself.
Given the size and scope of the data available, we’re hoping that lots of you can jump in and perform your own analysis on these directories, and possibly find some other interesting correlations. As the process for checking for banning/penalization is very tedious and cumbersome, we likely won’t be doing an analysis on this scale again in the very near future. But we may revisit it again in 6-12 months to see if things have changed and Google’s cracking down more, letting some of the penalties/bans be lifted or making any other notable moves.
I look forward to your feedback and suggestions in the comments!
p.s. The Mozscape metrics (PA, DA, mozRank, etc) are from index 51, which rolled out at the start of May. Our new index, which was just released earlier today, will have more updated and possibly more useful/interesting data. If I have the chance, I’ll try to update the public spreadsheets using those numbers.
May 31st, 2012
With all this discussion about paid links and outing SEOs, I don’t jump on the outcry bandwagon. I try to anticipate Google’s next move. I want to focus on future-proofing our campaigns and these are great times to reconsider what Google is going to do next.
In 2005, Matt Cutts wrote about how Google considered paid links to be "outside" their guidelines. He specified "selling links muddies the quality of link-based reputation and makes it harder for many search engines (not just Google) to return relevant results." Cutts goes on to explain that buying "links purely for visitor click traffic" is best done with the nofollow value in the rel attribute.
Fine. So paid links are okay but only when the rel=nofollow attribute is present. That’s pretty clear. But what about legitimate paid links from the past where you can’t add the new nofollow command? What about changes to how non-paid links are viewed? Are you getting penalized because of previous efforts? According to Google, yes!
As an aside: a paid link is unacceptable but a sponsored link isn’t. I understand the difference, but it’s a fine line at best.
My greater concern is with the ever-changing invisible line of the organic algorithms. What’s acceptable today isn’t allowable tomorrow and there’s no way to correct 100% of your actions after the fact. Wouldn’t it be great if we could disassociate ourselves from previous links once that invisible line moves? I say, yes! Consider when negative SEO is performed against your site. What then?
Image Credit: thebuglish.com
Both the Google Panda and Google Penguin updates focus on removing pages that break quality guidelines. If your page is working to score better in the rankings by cutting corners, your strategy is a ticking time bomb.
If you’re marketing your website with a marketer’s mind, looking for every opportunity to bring in new customers while strengthening your brand, you may not realize the next guideline addition will nix all of your efforts retroactively.
Should we stop optimizing then? Do we publish content without any care as to the format or the organization of the content elements? Of course not. Start with quality optimization tips and pivot when necessary.
Some in our industry argue that “you should’ve known this was coming” when in fact some of the unacceptable practices today were a simple transition of traditional marketing to online marketing. It’s not outside the bounds of building a strong brand or increasing your visibility. It was done with good intentions!
Google’s marketing team practices many of the same things as other marketers when promoting their hundreds of brands, cross-linking sites (laugh now and cry later), sponsoring content, placing ads, etc. Nobody would tell Google’s team not to do this. It’s good for their visibility, credibility, and revenues.
I’m not expecting Google to detail every way to win in online marketing. But there’s still a huge gap between allowable techniques and webmaster guidelines.
Image Credit: Labyrinth
Because the Google algorithm is closed, SEOs have to figure out (test) against the search engines until they get what they want. As we do this, we blog about our findings and often share them with the community as “the perfect way” to rank in the search engines. But there are going to be incorrect findings along the way, or things that worked for a particular website or keyword or index. We’re going to make mistakes. We’re going to misinterpret our findings. We’re going to make some assumptions. Does that make us blackhat marketers? Nay.
Words like "always" and "never" have driven great blog posts into the fabric of our community as much as "I tried" and "you should test," and SEOs have relied upon the best information available at the time to rank websites and coach business owners on how to rank their websites. Most of the time, this information is valuable and improves the user experience.
Don’t misunderstand, I like algorithm changes. They eventually make search results better for users. Smarter algorithms with segmented data make for quicker, more personalized results, and greater relevance. Then search engines overhaul their algorithm and knock a portion of the garbage out of their index.
All I ask is that we have a way to remove ourselves from the mistakes of our past!
I’m not alone.
As time goes on, Google continues to measure credibility by tying attributes back to Google properties. Google+ is an example of this and the more connected our content is to our Google+ profile, the better chance we have of becoming an authority on the subjects we write about.
Google tells us to write great content and the rest will follow. Not true. The distinction between quality content and over-optimized content is vague at best. There are web standards that are good for users, software developers, search engines, and many others. But Google begins to break down those standards when they tell us to avoid using keywords in our title, headers, and body copy. I'm being a little sensitive here on purpose. I think Google is too vague with statements like these.
Working in SEO is very much about balancing the art and science of our efforts. Much like Danielson fending for himself, online marketers have been at a disadvantage in organic visibility since Google started earning billions through AdWords.
The scientific mind in SEO believes it's true until proven untrue. The artistic mind in SEO is where we get creative in finding more customers and drawing them into your site. This may include trading links with vendors, buying advertising, guest blogging, or any other way in which we build up our credibility. Sometimes we push the limit and have to pull back.
It’s the invisible line that gets me.
Social media is Google's next attempt to avoid webspam. It centers around author rank, or the credibility of the individual behind the content. Unfortunately, there are billions of web pages that don't warrant an author behind them and therefore can't be held up to the same standard. But when a credible author writes the same content, they will have the chance to outrank a better source because of the credibility of the author and not the quality of the content.
The latest Google Penguin updates have allowed a considerable number of irrelevant pages to rank, both on Google properties (notably Blogspot blogs, since removed) and on sites from Poland, the UK, and other country-specific domains. Although it's not technically webspam, it's still garbage to me.
For now, they've introduced less relevant results: a .co.uk site in my search for "SEO" from Bluffdale, Utah. I suspect this may have something to do with ICANN's acceptance of new gTLDs last summer, and that Google is testing how to bring worldwide domains into the .com index. Or maybe it's a glitch. Google is still working out how to introduce content from the many different segmented data sets it has available, such as the Knowledge Graph.
I don't like outing (in most cases) because marketers are often left to fend for themselves online. There are thousands of vague guidelines and a few very specific ones. Marketers are creative and find new ways to reach their customers, rely on traditional tactics, test new techniques, expand the channels in which they can reach out, improve customer relations, and generally push the limits. Once they find something that strikes a chord with their audience, they maximize it.
There are those who participate in true blackhat tactics, such as exploiting software and loading 1,000 links onto your site (true story, bro) or loading malicious code onto a website for the purpose of stealing someone's identity. These are the true exceptions that deserve an outing and/or a report to Google.
Besides, outing others is a tactic that’s as easily abused as the actions you’re reporting to Google. Not only that but why should we support Google in crowdsourcing this valuable information so they can improve their revenues at your expense?
Did I miss anything? I’d love to hear your thoughts.
Article source: http://www.seo.com/blog/links-outings-guidelines/
May 29th, 2012
Lately I’ve seen that a tight relationship between SEO and startups has been something of a foregone conclusion within the SEO industry: “Of course startups should be engaged in SEO!” Perhaps, and perhaps not. In fact, some start-up communities have taken up a stance in direct opposition to this, stigmatizing SEO as manipulative. Personally I’m just a humble consultant who has never started a company of his own, so I think it would be presumptuous of me to arbitrarily declare SEO a high priority for any start-up in any field.
If doing SEO isn’t a foregone conclusion, then, it bears further discussion as a marketing strategy. I want to approach this topic from a different angle. In any successful startup there will be someone with good business sense, someone who can look at the evidence in front of them and make their own decision about what is best for their company. So let me put some premises on the table about what SEO offers the start-up and why start-ups are uniquely positioned to leverage their position for SEO.
The end goal of SEO is to increase the number of people arriving at your site through organic search results. There are also other metrics that are intermediate factors helping you accomplish this goal, such as your actual rankings in search results. These are things that you can measure, and report successes in increasing.
In addition, there are two different sets of keywords to look at when assessing the organic search channel—branded and non-branded terms. It’s a powerful thing to demonstrate that your branded search traffic is increasing—it suggests that more people are looking for your brand online, which is a Good Thing.
On the other hand, the implications of receiving non-branded search traffic are numerous. Such traffic suggests that a site has increasing visibility for relevant search terms. If the start-up is defining new language, it suggests that users are picking up on that language. If it is increasing its share in an existing space, it suggests that the site may be cutting into the market share of competitors in organic search.
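To make that branded/non-branded split measurable rather than anecdotal, a startup could bucket its organic query report along the lines of the sketch below. The brand terms, file name, and column names are assumptions, not any particular analytics export format.

```python
import csv
from collections import defaultdict

# Hypothetical brand-name variants for an imaginary startup; swap in your own.
BRAND_TERMS = ("acmeapp", "acme app")

def is_branded(query):
    q = query.lower()
    return any(term in q for term in BRAND_TERMS)

# Hypothetical export of organic search queries; columns assumed: query, visits.
visits = defaultdict(int)
with open("organic_queries.csv", newline="") as f:
    for row in csv.DictReader(f):
        bucket = "branded" if is_branded(row["query"]) else "non-branded"
        visits[bucket] += int(row["visits"])

total = sum(visits.values()) or 1
for bucket, count in sorted(visits.items()):
    print(f"{bucket:<12} {count:>6}  ({count / total:.1%})")
```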
Optimization for SEO has the potential to enhance the effect of your other marketing strategies. If you are anticipating driving a lot of social interaction with your site, having your pages and URLs optimized for SEO will ensure that as people share your site you get the most value from that sharing. If you are link building through outreach and maintaining relationships with people in your industry, all of a sudden you have assets through which you can promote social content or editorial or anything else you might be up to.
Makes sense, right?
A startup—particularly at the outset of operation—has a great deal more flexibility than a more established competitor in the market. The obvious dichotomy here is between the large, established competitor and the start-up underdog. The start-up clearly has more agility than the established competitor, without a doubt. Consider:
But what about the 18-month-old company compared to the 2-month-old one? If you don't even have a website live, you're even more flexible than an 18-month-old company with a website up and an investment in a particular architecture or format or strategy for content publication. Oracle might be a giant awkward mess to manage from an SEO perspective, but even Dropbox by now has a more rigid infrastructure, established user expectations, and a great deal of variables which it must consider. Setting a course at the outset that includes SEO will ensure that you are well positioned to be successful in organic search when you've achieved the same size and user base.
As part of overall SEO efforts all of the elements in the following (non-exhaustive) list might need to be manipulated:
The earlier SEO can be integrated into the business model the better.
A startup is bringing something new to the market, something with novelty (though hopefully it has some staying power as well).
In fact, novelty is what linkbait is all about. It’s something new and fresh and interesting—whether it is something explicitly new or a new take on something familiar. A new product, or a new face on an old product.
I mean, check out Y Combinator start-up Matterport. They've got a little Star Trek-style scanning device that makes 3D representations of any object or environment. This is some link-worthy content if ever I've seen some. We talk in SEO all the time about content marketing, which can be an expensive or confusing marketing route for a start-up. The thing is, in the early days, if you've got an interesting product, that is your content. Ignore this at your own risk.
And again this matches up well with efforts you will be making on other fronts. If you’ve got a really interesting start-up idea, you’ll almost certainly have been getting links on TechCrunch or Mashable. If you’re minding your SEO on-site, you’ll be getting the full benefit these novelty-based links can drive to your site.
Practicing SEO is a bit like practicing meditation—full enlightenment is an ongoing journey requiring a lifetime of work, but also a little bit will go a long way. It may be impractical for your business to drop $10,000-$20,000 on a link bait campaign or plan a content strategy that reaches six months out. But if you could manage to do a quick technical audit of your site, even doing that will get you one step closer to success. SEO isn’t an all-or-nothing proposition. It is fully capable of scaling with your company.
I think it goes without saying that I think the above premises suggest that SEO is really something that start-ups ought to be engaged in. If you have come to a similar conclusion, I suggest reading this longer essay on the merits of SEO for start-ups. If after that you're feeling game, check out the following resources:
These reflect a higher-level look at the problem of SEO in the startup context. Then try to take some first steps, such as:
Thanks for reading, and good luck in your entrepreneurial endeavors! Reach me in the comments below or on Twitter.
May 29th, 2012
Last Friday, Google pushed out the first refresh of their infamous Penguin update, sending many webmasters stumbling towards their analytics, SERPs and the like – some hoping for signs of recovery from previous ranking drops, others hoping they had not just seen a precipitous drop of their own. For most, the algo refresh seemed to do little, and for good reason – Cutts informed us that it apparently only impacted .01% of search results. Not many SERPs changed, and most of the voices heard over the weekend seemed to be coming from people who were newly crushed by Google's pet Penguin.
Soon after the algorithm hit, we learned that Penguin would be refreshed periodically, as it was last weekend – making it similar to Panda in how it operates. For most Penguin-impacted webmasters, there was a lull period: they sat around, gathered the facts about the update, and then, a week or two after the event, took action towards recovery. Many removed and/or edited links – others simply moved completely away from their manipulative linking strategies. However, because of this lull between the update and their response, it's possible that moves towards recovery were not rapid enough to show their full effect, as links take time to be recrawled, new actions take time to implement, and the new refresh arrived only a month after the original update.
In that respect, it may lead some to ask, "Is a Penguin recovery even possible?" or "Should I just start over with a new domain?" These are real questions for anyone who has edited their links and strategy, watched a refresh roll out, and seen no change in their rankings. I don't have absolute answers – nobody does – just strong suggestions based on the data points we have about what survived, what didn't, and how Google has treated penalties in the past. What I do now know, though, is that a Penguin recovery is possible, and possible in a short amount of time – because I've seen a big, seemingly complete recovery from the update at the first refresh. This recovery came for a website that felt a critical impact from the first iteration of Penguin – that website being the popular WordPress portal, WPMU.org.
On April 24th, 2012, WPMU.org was hit by the Penguin Update. Traffic from Google dropped over 81% week over week, causing a real, massive hit in revenue for the business overnight. This was not the "three or four spots" Google Penguin drop; this was the "almost disappear completely" type of Penguin hit that was among the worst impacts any website felt – and for the owner, James Farmer, this came as a real, completely unexpected shock.
WPMU is a WordPress information hub with resources, plugins and more – but the most important part of its resource portfolio is its themes. WPMU's themes function like many WordPress installs usually do – they create citation footer links to declare the theme type being used, so when its popular theme packs get installed, they generate a "Powered by X" link in the footer of the site back to the theme page.
Although this made sense in the context of these blogs, and for these types of themes, it also generated a high volume of sitewide links on low-quality sites. In its most common iteration the link also used the anchor "WordPress Mu" – a somewhat valid variation of "WPMU" – but to Google, it was likely seen as an attempt to get commercial anchor text pointing at the site.
To WPMU founder James Farmer (as well as others), this was extremely frustrating. WordPress and web design installs are a unique use case that might have been caught in the crossfire of this update. It simply makes sense for these sites to have a link in the footer back to the theme and/or designer – this is definitely what users expect, and is good from a usability perspective overall. However, when looked at purely from what we normally consider “clean” link profile characteristics, its raw numbers fell outside those “good” ratios – and surely, the nature of WordPress themes and the majority of people who select them mean that a good amount will be low quality and/or spam.
However, WPMU clearly had many other things going for it. 10,700+ Facebook likes, 15,600+ Twitter followers, more than 2,500 +1s, and over 4,250 people subscribed to Feedburner in total. Its backlink profile includes links from Technorati, Ars Technica, Wired, Huffington Post, SEOBook, Business Insider, Boing Boing and more. Surely, this isn’t a site that deserves to get penalized, right? Well, apparently Google thought differently.
Post-penalty, Farmer reached out to the Sydney Morning Herald, the biggest news site in Australia, in hopes of getting coverage of the events. He got what he asked for, and the Herald, according to Farmer, got his site in front of Cutts to ask why a domain like WPMU would get hit by Penguin. Cutts replied, pointing out the links that in particular led to the penalty – for example, the below pages. Copy and paste to view.
Cutts said, according to Farmer, “that we should consider the fact that we were possibly damaged by the removal of credit from links such as these”. Sure, based on what we now know or assume about the update, this makes sense. Low quality links, and also spammy, rarely-clicked footer links with over-optimized anchor text. Right.
Although this information was helpful to Farmer, what also came from it was Google awareness of a site that potentially might not have really “fit” within what they were hoping to accomplish with this update. On top of Cutts now knowing about the changes, Farmer then went on to blog the details of the penalty on WPMU, leading to more coverage and links, tweets from Rand, and also, according to Farmer, Danny Sullivan of Search Engine Land putting the site in front of Google once again.
With the burst of awareness this created in the SEO community, many people, such as myself, ended up commenting on Farmer’s post on WPMU. The community was gracious in offering advice, suggestions, and other reasons why the site may have been penalized – and from there, what Farmer might do to recover. Based on my comments and tweets at WPMU about the subject, Farmer reached out to me about taking next steps in undoing the impacts of Penguin. I obliged, and work began.
We had two choices for WPMU – get the nofollow attribute added to the links, or simply remove them completely. The first goal was to cut down on as many of the sitewide, “WordPress MU” anchor text links as possible. I initially thought nofollowing would be the best solution because of the potential for these links to drive leads for Farmer and WPMU, but Farmer thought, to make it easy to change and correct for bloggers, the best solution was to simply ask for removal.
The most perilous piece of WPMU’s link profile came from one site – EDUblogs.org. EDU Blogs is a blogging service for people in the education space, allowing them to easily set up a subdomain blog on EDUblogs for their school-focused site – in a similar fashion to Blogspot, Typepad, or Tumblr, meaning that each subdomain is treated as a unique site in Google’s eyes. Coincidentally, this site is owned by WPMU and Farmer, and every blog on the service leverages WPMU theme packs. Each of these blogs had the “WordPress MU” anchor text in the footer, which meant a high volume of subdomains considered unique by Google all had sitewide “WordPress MU” anchor text. In what might have been a lucky moment for WPMU, this portion of their external link profile was still completely in their control because of WPMU ownership.
In what I believe is the most critical reason why WPMU made a large recovery, and did so faster than almost everyone else, Farmer instantly shut off almost 15,000 'iffy' sitewide, footer LRDs to their profile, dramatically improving their anchor text ratios, sitewide link volume, and more. They were also able to do this early in the month, quickly after the original update rolled out. A big difference between WPMU and the many people trying to "clean up their profile" is time – getting everything taken down and adjusted properly takes a while, which meant that many people simply did not see recoveries at refresh 1.1. That doesn't mean it won't happen at all if the effort persists.
Once EDUblogs got cleaned up, the majority of the link profile had been fixed. However, much of the junk still remained from independent bloggers who had put up WPMU themes. Because of time constraints, I was really unable to move at all on the link cleanup outside EDUblogs as we attempted to get an effective strategy in place for people to remove footer links, and also to avoid Memorial Day weekend for e-mailing. Despite this, we may still move forward with cleaning up the remaining "junk" links to prevent Penguin hitting again on a second iteration.
Although Penguin seems to be a link penalty, I would be remiss if I only mentioned the large link-based changes that were made to the site in the month between updates. Farmer and the WPMU team also made the following changes during that time, any, all, or none of which may have made an impact on recovery. I want to clarify, here, that these cleanups were all done by Farmer as overall quality value-adds, and were not Penguin-specific improvement suggestions made by me – though, again, some, all, or none of them may have contributed to the recovery.
These aren’t the only changes that occurred, certainly, but were the most notable in reference to the Penguin update, and may help in your own decision making in order to better recover your own website rankings in the future.
Just as I was about to start manually e-mailing the remaining blogs to remove WPMU links, a great thing happened – recovery. On Friday, May 25th, a clear return from the 1.1 refresh of Penguin occurred, bringing ranking and traffic levels to what look like the same spot they were previously. Given that it’s a holiday weekend, traffic is considerably down, so it’s hard to tell for certain – but considering what we know about traditional impacts from the holidays, it looks like WPMU has made a full recovery from its original hit from the Penguin update.
This Penguin recovery is a great sign not just for WPMU, but also other Penguin impacted webmasters as well. WPMU had a lot of things going for it that allowed for immediate and quick recovery – such as getting in front of Google (which may have caused an algorithmic adjustment for this use case), being a site that DESERVES to rank with tons of other great signals already, and also the ability to pull down tons of manipulative linking root domains instantly. However, these “quick fix” solutions that allowed WPMU to quickly come back also means that the long term fixes that you’re working on for your domain should also work – that is, if you implement them properly and move towards a longer term, higher quality site as you should be.
It should also be noted and taken very seriously that this post should not be considered a “blueprint” for recovery for your site. Read it and make your own educated decisions based on what you know about your link profile, your business, your vertical, and the Penguin Update in general.
Best of luck – and happy Penguin hunting!
Although Farmer and his team at WPMU did most of the heavy lifting in this recovery, I’ll do my best to answer any questions you might have in the comments. Feel free to also ping me on Twitter for a quicker response. Many thanks to Melissa Kowalchuk as well for her image design work on this post.
May 25th, 2012
If you read my piece last week about the Facebook initial public offering of stock, and you’ve read the news this week about the social network and others associated with the IPO getting slapped with class-action lawsuits and investigations, you know why I’m not a stockbroker. What happened? The stock dropped, and many smaller investors lost money – because they did not receive the guidance from Facebook that large institutional investors did, according to the charges.
What happened with Facebook's IPO was unprecedented, not because of the size of the offering (though it was certainly among the largest ever offered), but because of some questionable matters concerning who knew what, and when, and what they did as a result. Henry Blodget gives an excellent rundown of the events. It's particularly helpful for those of us who aren't familiar with the conventional industry practices surrounding IPOs, and therefore might not realize just how far Facebook's IPO deviated from that standard.
So what happened, exactly? Facebook held what is known as the IPO roadshow to drum up interest in the stock. In the middle of that roadshow, the analysts at Facebook’s IPO underwriters cut their estimates for how the company would perform in its second quarter. This event is so unusual that at least a couple of long-time observers wrote that they had literally never heard of it happening before. Why did they cut their estimates? According to the lawsuits, someone at Facebook verbally told them to. But the worst part of it was that these cuts in estimates did NOT get shared with everyone.
Apparently, Facebook expected a weak second quarter. But why? We got a hint as early as May 9, when Facebook made a change to its S-1 filing with the SEC. In the revision, Facebook noted that many more users are accessing its website through mobile devices. This in turn reduced what the social network could charge for ads, which hits the company right in the long-term bottom line.
Here’s the money quote from the filing: “We believe this increased usage of Facebook on mobile devices has contributed to the recent trend of our daily active users (DAUs) increasing more rapidly than the increase in the number of ads delivered. If users increasingly access Facebook mobile products as a substitute for access through personal computers, and if we are unable to successfully implement monetization strategies for our mobile users, or if we incur excessive expenses in this effort, our financial performance and ability to grow revenue would be negatively affected.”
Now any investor who knows about this might take warning from it. The lawsuits allege that Facebook specifically told its underwriters' analysts to reduce their estimates for Facebook's second quarter – and didn't tell anyone else. Blodget explained that "The information about the estimate cut was then verbally conveyed to sophisticated institutional investors who were considering buying Facebook stock, but not to smaller investors." This adds a whole new perspective to the increases in the amount of Facebook stock that large investors decided to sell when the social network raised the target price of its IPO.