Can You Increase Your Website Rank?

No one outside Google really knows its page ranking algorithm. However, Search Engine Optimization (SEO) companies try hard to get you to spend money with them to increase your page rank and your position on a Search Engine Results Page (SERP). I had a very interesting conversation the other week with Shawn Bishop, CEO of RankPay, an SEO service with a different take on ranking your website, which I found compelling. According to Shawn, the company decided to flip the model: you don’t pay if you don’t get ranked.

First, a quick primer on page and website ranking. Google’s PageRank is a score from 0 to 10; the higher the rank, the more credible your site. According to Justin Phillips, Director of Operations at RankPay, “PageRank uses the web’s vast link structure as an indicator of an individual page’s value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But Google looks at more than the sheer volume of votes, or links, a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves ‘important’ weigh more heavily and help to make other pages ‘important.’” Of course, a site can have a high PageRank and still not rank within the top 1,000 positions of a search engine for any search term. You want your URL to rank within a search engine results page as well (i.e., the results you see after searching for a keyword).
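To make the “links as votes” idea concrete, here is a minimal sketch of the power iteration at the heart of PageRank. The toy graph, damping factor, and iteration count are my own illustration; Google’s production algorithm is, of course, far more elaborate.

```python
# Minimal PageRank power iteration (a toy illustration, not Google's implementation).
# Each page splits its score evenly among the pages it links to ("votes"),
# and a damping factor models a surfer occasionally jumping to a random page.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # a page with no outlinks spreads its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Pages A and C both "vote" for B, so B comes out most important.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]}))
```

Note the effect Phillips describes: once B is “important,” its lone vote for C carries real weight, so C ends up ranked nearly as high as B.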

This is where RankPay comes in. So, how does it work? All you do is go to the RankPay homepage and enter your domain name. RankPay first shows you your Google PageRank. Next, you enter the keywords you want to be ranked on. For example, say you’re a provider of business analytics services. You might enter “data analysis,” “business analytics,” or “predictive model” as possible keywords. RankPay then uses its own algorithm to rate the SEO opportunity and tell you how much the service will cost per month, IF RankPay is successful. Here is a screen shot from the service. I input a company name (sorry about that, companyx, whatever you are) and the keyword “predictive model” into the system. RankPay tells you whether you are ranked within the top 30 on the big three search engines, rates the SEO opportunity as Good, Very Good, or Excellent, and shows what it will cost you per month if you get ranked. Here the opportunity is Excellent, and it will cost me $157 a month IF Google ranks the site in the top three results on a search engine results page.

The idea is that RankPay takes on only those sites and keywords where it thinks it can win a ranking. Otherwise, it would put in a lot of effort (think of contacting other sites for links, adding content, bookmarking, and the rest of the work involved in increasing rank) for no reward on either end. So, if you’re a new company with a low Google PageRank and you want to get ranked on popular keywords, RankPay may not take you as a client. It will, however, suggest other keyword combinations that might make you a better candidate.

The proof is in the pudding. If you go to Google and search for “SEO services,” RankPay shows up in the top three. It’s a neat little model, and it appears to be working: the service reportedly converts 1 in 20 companies that come to its site. That’s good news for the company.

Do We Need the Semantic Web?

What kinds of applications do we need a semantic web for?  Is the semantic web practical?  These questions (among others) were posed by Jamie Taylor of Metaweb Technologies to a group of panelists at the Text Analytics Summit last week. The panelists were no lightweights.  They included Vladimir Zelevinsky from Endeca, Ron Kaplan from Microsoft, and Kathleen Dahlgren from Cognition.  I found this to be one of the most engaging segments of the Summit.

First of all, many people define the semantic web as a “web of meaning” or a “web of data” that will allow computer applications to exploit the data directly. Check out the W3C webpage for more on definitions. The panelists at the Summit got into an interesting discussion about parsing data sources for the semantic web, and a few of the highlights follow. Please note that I asked some additional questions after the panel itself, so you may read information here that you didn’t hear on the panel.
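To give a flavor of what a “web of data” means in practice, here is a tiny sketch that publishes a few machine-readable statements (RDF triples) using Python’s rdflib library. The URIs and vocabulary are invented for illustration.

```python
# A tiny "web of data" example: facts expressed as RDF triples that any
# RDF-aware application can consume directly (pip install rdflib).
# The example.org URIs and the EX vocabulary are made up for illustration.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/vocab/")
g = Graph()

panel = URIRef("http://example.org/events/semantic-web-panel")
g.add((panel, RDF.type, EX.PanelDiscussion))
g.add((panel, EX.topic, Literal("the semantic web")))
g.add((panel, EX.heldAt, Literal("Text Analytics Summit")))

# Serialize as Turtle, one common format for publishing linked data.
print(g.serialize(format="turtle"))
```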

  • What kinds of applications is the semantic web good for?  It depends on what you want to know.  For example, one of the panelists pointed out that you don’t need the semantic web to find a hardware store in Boston.  However, more unusual queries might require it.  Most people have had the experience of knowing exactly what they are looking for, using a five- or six-word query, and still not finding it.  The panelists pointed out that entities (people, places, things) are relatively easy to extract; it is the relationships between the entities that are harder.  Vladimir Zelevinsky mapped information retrieval needs to information retrieval technologies like this:
  • Known Item Search -> Keyword Search (e.g., Google – where you need to find what you know exists);
  • Unknown Item Search -> Guided Navigation (e.g., Faceted search – where you need to explore the data space);
  • Unknown Relationship Search -> Semantic Web (where you are looking not for separate items in the repository, in this case the web, but for the connection(s) between them).

The semantic web could pay off in applications that require understanding the relationships between these entities. Ron Kaplan also noted that semantic web technology provides a standard way of merging data from different sources, and that will probably enable some useful new applications.
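To see why entities are considered the easy part, here is a rough sketch of entity and relation extraction using spaCy. The tool choice and the crude relation heuristic are mine, not something the panelists endorsed; real relation extraction is much harder, which was exactly the panel’s point.

```python
# Entity extraction versus relation extraction, sketched with spaCy.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jamie Taylor of Metaweb Technologies moderated a panel in Boston.")

# Entities fall out of the standard pipeline almost for free:
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g., Jamie Taylor PERSON, Boston GPE

# Relations take real work. A crude heuristic: pair each verb's subject
# with its direct object and with the objects of attached prepositions.
for token in doc:
    if token.pos_ == "VERB":
        subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        objects = [w for w in token.rights if w.dep_ == "dobj"]
        objects += [w for c in token.children if c.dep_ == "prep"
                    for w in c.children if w.dep_ == "pobj"]
        for s in subjects:
            for o in objects:
                print(s.text, token.lemma_, o.text)  # e.g., Taylor moderate panel
```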

  • Scaling the semantic web. Everyone seemed to agree that manually tagging documents is a brittle exercise. Vladimir Zelevinsky from Endeca suggested putting a parser on each machine: since people type slower than one sentence per second, semantics could be injected into a document at the moment of its creation.  Of course, it is a bit more complex than that, but it was an interesting notion. Kathleen Dahlgren from Cognition said that NLP at scale is the wave of the future. NLP is complex, but it distributes well, and with computers getting faster and cheaper it can be made fast and scalable.
  • Is it practical?  There is a huge amount of data out there, and it keeps changing. There is also a lot of duplicate information on the web.  Is it economically viable to think about parsing the web?  Ron Kaplan said he had done a back-of-the-envelope calculation using the following assumptions:

“The simple order-of-magnitude calculation goes as follows:  There are roughly 2.5M seconds in a month, so an 8-core machine gives you 20M cpu seconds.  If it takes 1 second on the average to process a sentence (an upper bound), then you can do 20M sentences per month.  If a web page has on the average 20 sentences, you get 1M pages per month per machine. So, 1000 machines can do a billion pages per month. More if 1 second over estimates, less if 20 sentence/document underestimates.”
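Spelling that arithmetic out as a short script (the figures are Kaplan’s stated assumptions, not measurements):

```python
# Kaplan's back-of-the-envelope estimate, step by step.
seconds_per_month = 2.5e6      # roughly 30 days
cores_per_machine = 8
seconds_per_sentence = 1.0     # his stated upper bound
sentences_per_page = 20        # his assumed average

cpu_seconds = seconds_per_month * cores_per_machine    # 20M cpu-seconds
sentences = cpu_seconds / seconds_per_sentence         # 20M sentences/month
pages_per_machine = sentences / sentences_per_page     # 1M pages/month

print(f"{pages_per_machine:,.0f} pages/month per machine")
print(f"{1000 * pages_per_machine:,.0f} pages/month on 1,000 machines")  # ~1 billion
```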

So this is economically feasible, if there is a need. And that remains the question: is it viable and necessary to try to find the information in the long tail?
