Two people walk into a room. They both claim to have the definitive ranking for Twitter influencers for your area of interest. One uses Klout, the other, WeFollow. And guess what? Their results differ, in some cases quite wildly.
Which do you believe?
Let’s multiply the problem. Imagine you’re dealing not just with two people who have different results, but eight. Between them they’re using WeFollow, Klout, TweetLevel, Twittergrader, Twinfluence and Twitalyzer, with two of them, bless, still using Followers and Lists. How quaint.
So you have eight people all claiming to have determined who you need to follow, or monitor, or talk to. My take on this? Let’s compare all of them with each other and see if there are any congruences – that is, if I rank according to one metric, then rank according to another, and compare the two, do any of these metrics exactly match? Or nearly match? Because if they do, then it’s probable that they’re more accurate, because we’re getting agreement between them. If not, then, well, we’re stuffed really, aren’t we?
So, let’s take a look…
Let’s choose a subject. Say, architecture. I’ve done some social media work in that field so I’m kind of familiar with it, and it’s a nicely defined sector. So, in the manner of Sir Alan Sugar telling his apprentices what they’ve got to do next to massage his over-inflated ego, you tell your eight people to find the top twenty Twitter influencers for architecture. After an hour or so, the results are in.
First off, the person who used Twinfluence goes a bit red in the face and has to admit that they didn’t actually get any results because Twinfluence was down. So, Sir Alan Sugar-like, you tell them they’re fired and they walk out of the room in a hot funk, never to reappear.
Next up, the Twittergrader person tells you that all but two of the candidates scored 100% on the Twittergrader scale. So you cannot determine rank. That’s pretty useless so again, you send them on their way.
Straight away you’re down to six usable, workable sources: WeFollow, Klout, TweetLevel, Twitalyzer, and the two dorks still using Followers and Lists. Being a fairly thorough version of Sir Alan Sugar, you decide to chuck the results into a spreadsheet to see how the various measures compare. You take WeFollow as the base for this because at least WeFollow is explicit, that is, it’s people voting for other people rather than being figured out by an algorithm. At least you understand this. So, you take the WeFollow ranking, and compare that with how you would rank results from the other sources.
This is what you get:
That’s right. None of them agree. There are really wild differences here. ArchRecord, which according to WeFollow is number one in the architecture world, would be ranked 10th if you were using Klout for this. According to Klout, DesignObserver is the top dog, which largely agrees with most of the other sources, but again, not with WeFollow. If we were to rank by Followers, casinclair would be 15th, but by TweetLevel, it would be 7th.
So we can scoff at the people still using Followers or Lists, but really, if there is very little agreement across the board, does it matter? The Followers and Lists results are kind of in the same ballpark, so even if they’re crude measures, why not use them?
But there are degrees to which they disagree. Let’s compare them to each other to see which are the closest by figuring out how much, on average, a Twitterer’s rank changes when you use each metric:
| Compared with WeFollow | Followers | Klout | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|---|---|
| Average Rank Change | 4.2 | 5.9 | 4.95 | 4.1 | 5.45 |
The table above shows us how much each Twitterer’s rank changes when we compare it with WeFollow (I’m just interested in change here, not whether it’s up or down, hence all the values are positive. I’m no statistician but this makes sense to me for some fairly ad-hoc reason right now). So if you rank ArchRecord by Followers, its position changes by four places compared to if you’d ranked by WeFollow. And if you look at the top table you can see that makes sense: it’s ranked #1 according to WeFollow, but #5 by Followers.
The average difference is simply the average of these positional differences (again, I’m not a statistician). So, on average, if you rank by Followers, compared to WeFollow, Tweeters would change position by a little over four (ie 4.2) ranking places. Look at the average ranking change for WeFollow compared to Klout: it’s nearly six (5.9)! On average, if you drew up a top 20 ranking according to Klout and compared that with WeFollow, your ranks would differ by six places. That’s not even close.
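For the programmatically inclined, the calculation above is easy to reproduce. Here’s a short Python sketch of how I’m computing “average rank change”: rank the same set of accounts by two metrics, take the absolute difference in position for each account, and average. The account names and scores below are invented placeholders for illustration, not the real data from the tables.

```python
def average_rank_change(scores_a, scores_b):
    """Rank the same accounts by two metrics and return the mean
    absolute change in position between the two rankings."""
    # Position 1 = highest score, as in the tables above.
    rank_a = {name: pos for pos, name in
              enumerate(sorted(scores_a, key=scores_a.get, reverse=True), start=1)}
    rank_b = {name: pos for pos, name in
              enumerate(sorted(scores_b, key=scores_b.get, reverse=True), start=1)}
    # Absolute change only -- we don't care whether a rank moved up or down.
    changes = [abs(rank_a[name] - rank_b[name]) for name in rank_a]
    return sum(changes) / len(changes)

# Made-up figures: four accounts ranked by follower count and by list count.
followers = {"acct1": 9000, "acct2": 7000, "acct3": 5000, "acct4": 3000}
lists     = {"acct1": 800,  "acct2": 400,  "acct3": 500,  "acct4": 100}

print(average_rank_change(followers, lists))  # acct2 and acct3 swap: 0.5
```

(Statisticians would recognise this as the Spearman footrule distance divided by the number of items; I’m just averaging positional differences.)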
Anyway, I said we’d compare everything with everything so on to the next few tables, with comments below.
| Compared with Followers | Klout | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|---|
| Average Rank Change | 3.7 | 2.75 | 1.4 | 4.45 |
No need to panic, this is doing the same thing as the previous table, but relating ranking by Followers with the other rankings (we don’t need to include WeFollow now because we already did that in the previous table). Again, we’re looking at the absolute change, regardless of whether it’s up or down, then we average those changes at the bottom.
This time the biggest change is Followers compared to Twitalyzer, at 4.45. If two people gave you rankings based on these two metrics, you’d find that on average the positions differed by between 4 and 5 places. That’s still fairly large.
The lowest here is Followers to Lists, at 1.4. In other words, ranks by Followers compared to ranks by Lists would be very similar. Do you find this surprising? I do. I think. More below.
Let’s look at how Klout rankings compare, below.
| Compared with Klout | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|
| Average Rank Change | 2.15 | 3.4 | 3.75 |
This time, comparing Klout to the remaining metrics (we don’t need to do WeFollow or Followers because we did them above, remember). Klout compared to Tweetlevel is the lowest average difference but still not as low as Followers to Lists.
Next up, TweetLevel:
| Compared with TweetLevel | Lists | Twitalyzer |
|---|---|---|
| Average Rank Change | 2.65 | 4 |
Again, I’d say these are fairly large. Even an average change in rank of 2.65 is still nearly twice that of the lowest so far, Followers:Lists, at 1.4.
And finally, List rankings:
| Compared with Lists | Twitalyzer |
|---|---|
| Average Rank Change | 4.35 |
Well done, you made it to the last table, where all we have left is ranking by Lists compared to rankings by Twitalyzer. It’s still not looking good is it? 4.35 means that rankings would change over 4 positions on average. So the person you said ranked 8th could in fact be ranked 4th, or even 12th.
I probably should create yet another table summarising all the average rank changes but I can’t be arsed. All we really need to look at are the biggest and, most importantly, lowest average differences.
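If you did want that summary table, it’s mechanical enough to script. Here’s a Python sketch that builds every pairwise comparison in one go, using the same mean-absolute-rank-change idea as before. The metric scores are invented placeholders, not the article’s actual data.

```python
from itertools import combinations

def rank_positions(scores):
    """Map each account to its rank position (1 = highest score)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {name: pos for pos, name in enumerate(order, start=1)}

def average_rank_change(scores_a, scores_b):
    """Mean absolute change in rank position between two metrics."""
    ra, rb = rank_positions(scores_a), rank_positions(scores_b)
    return sum(abs(ra[n] - rb[n]) for n in ra) / len(ra)

# Placeholder scores for three accounts under three metrics.
metrics = {
    "Followers": {"acct1": 9000, "acct2": 7000, "acct3": 5000},
    "Lists":     {"acct1": 800,  "acct2": 300,  "acct3": 400},
    "Klout":     {"acct1": 55,   "acct2": 70,   "acct3": 60},
}

# Every pair of metrics, compared once each.
for m1, m2 in combinations(metrics, 2):
    print(f"{m1} vs {m2}: {average_rank_change(metrics[m1], metrics[m2]):.2f}")
```

With real data, the lowest number printed would be your Followers-vs-Lists equivalent, and the highest your WeFollow-vs-Klout.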
The biggest is 5.9, which is when you compare how rankings would change, on average, if you rank by WeFollow compared to ranking by Klout. This implies to me that there’s something radically different behind those figures, different enough to make them mutually meaningless.
The lowest is 1.4. And guess which combination that is? It’s Followers to Lists.
Now, I’ve spent a lot of time agonising over how to calculate influence. If you do a quick search you’ll find a lot of people saying that Followers is not a good indicator of influence. Others say that perhaps Lists are better. But I don’t buy the other indicators. I don’t understand how they’re calculated and therefore I don’t understand what they mean or, importantly, what action to take. If you look at the Edelman equation for calculating Tweetlevel, it’s horrendously complicated. What does it mean? How do I improve it?
But with Followers, I get it. As an analogy with paper circulations, I can say to people that if, say, arcinect tweets about you, then around 7,000 people will see it. I get Lists too. They tell me that, for example, over 1,000 people have bothered to add architectmag to a list, which is pretty impressive, when compared to the others in the table.
So Followers vs Lists gives the lowest difference. From one angle you could say that’s just an indicator of the propensity of people to create lists – that is, roughly one list add for every 12 or so followers. But I don’t see any such consistent ratio between the follower and list counts above.
So I’m going to be a bit heinous here and go against the commonly accepted wisdom. I’m going to say, in a nicely numbered chain of inference, that:
- Followers and Lists are often dismissed as indicators of influence
- There are lots of Twitter influence metrics out there that are supposedly better
- If you take any two – or three, or four, etc – and compare them, often the differences will be fairly major
- This implies that no one metric is really any better than any other metric
- Except for Followers vs Lists which seem to tally the closest
- You can gain actionable insights from Followers and Lists which you cannot from the other metrics
- Therefore: Followers and Lists are the best indicators of influence
I’m prepared to believe that some of the super-duper pro systems out there can do this better. Really identifying who influences whom also takes time. I know that influence is cause and effect, input versus output, and so on. And this is not a scientific test – the sample isn’t large enough, for a start.
But, if you need to draw up a list without access to a pro system, this is my take on it. The supposedly more sophisticated metrics don’t cut it.
I know it’s controversial but if anyone else can provide a convincing argument otherwise, I’d like to hear it.