Thin client? What thin client?

I just spent five minutes waiting for my laptop to boot up. It’s a fairly standard spec, running the dog’s breakfast that is Vista, but still, it shouldn’t take that long. In a world of cloud computing, could we be looking ahead to instant start-up as clients get thinner? I doubt it.

I’ve documented my history with computing on this blog before, but very briefly it goes: ZX81, ZX Spectrum, IBM PC running DOS, GEM (a precursor to Windows), Windows, self-built PC, and now I’m the proud owner of four networked machines that all do their own thing in their own inimitable way (sorry about talking tech with long words, but I’ve just been listening to The Fry Chronicles and marvelling at how Stephen Fry reacted to tech in the same way I did).

In one way it’s a history of progression, from a 1KB system that didn’t have enough memory to fill the screen with characters, to being able to store all my DVDs, uncompressed, on one hard drive.

But in another, it’s not.

The ZX81 and Spectrum were so-called ‘clean machines’, that is, you just turned them on and they booted up instantly because the operating system was built into the hardware. The early DOS-based machines were similarly quick to start up, not as immediate but pretty fast. Certainly not five minutes. And, for the record, I cannot remember one instance in which a DOS-based machine crashed. Not one.

Since then, machines seem to take longer and longer and longer to start up. And despite the laptop I’m typing at right now being several orders of magnitude more powerful than the PC I built several years ago, it doesn’t strike me as much faster. It does more things, and it’s easier to use, but it’s not really faster. My netbook, if anything, is quite a lot slower.

An anecdote: I used to work for a company that delivered financial information to the City via a web browser. It was very forward-looking stuff (too forward-looking perhaps – it never made it). I remember trying to nail down the specs for running the system and in the end we just decided ‘if it can run Windows, it can run us’. This was quite neat, but still we got enquiries from people running machines that were only a year or so old that couldn’t handle it, particularly if the City was feeling quite bullish.

I remember saying to the tech manager, with heavy irony, “So they don’t have enough capacity to run our thin-client system then?” My how we laughed.

So why, if we’re putting more onto the cloud, are we still suffering? Even the technologies that surround the web – antivirus, Flash, firewalls, Java, JavaScript, Silverlight, the list goes on – are demanding more of our local processing power. That’s without even thinking about the supposedly processor-intensive stuff such as multimedia.

Here’s what I want: a completely online operating system. Something so server-based that all I have to do is switch my machine on, and everything runs online. The only thing the local machine has to look after is the web connection, maybe some security, and that’s it. I know there are versions of this – my Samsung netbook came with one that didn’t work so I uninstalled it – but I cannot name any.

But I doubt that’s going to happen. Chip designers and manufacturers will, out of necessity, produce faster and more capacious chips because they’re in a competitive market. In response, and for the same reason, software designers will use that computing power to do more. Your local machines will continue to gain weight.

So I’d really love not to have to wait for my machines to boot up, or to have to update every sodding piece of software on a daily basis. Instead, I’d like to transfer all the computing online, because it’s a much neater way to do it. I want a dumb but trim terminal, not a clever, overweight machine. I want something more akin to my clever little black cat rather than my stupid fat one.

Twitter influence: who do you believe?

Two people walk into a room. They both claim to have the definitive ranking for Twitter influencers for your area of interest. One uses Klout, the other, WeFollow. And guess what? Their results differ, in some cases quite wildly.

Which do you believe?

Let’s multiply the problem. Imagine you’re dealing not just with two people who have different results, but eight. Between them they’re using WeFollow, Klout, TweetLevel, Twittergrader, Twinfluence and Twitalyzer, with two of them, bless, still using Followers and Lists. How quaint.

So you have eight people all claiming to have determined who you need to follow, or monitor, or talk to. My take on this? Let’s compare all of them with each other and see if there are any congruences – that is, if I rank according to one metric, then rank according to another, and compare the two, do any of these metrics exactly match? Or nearly match? If they do, then it’s probable that they’re more accurate, because we’re getting agreement between them. If not, then, well, we’re stuffed really, aren’t we?

So, let’s take a look…

Let’s choose a subject. Say, architecture. I’ve done some social media work in that field so I’m kind of familiar with it, and it’s a nicely defined sector. So, in the manner of Sir Alan Sugar telling his apprentices what they’ve got to do next to massage his over-inflated ego, you tell your eight people to find the top twenty Twitter influencers for architecture. After an hour or so, the results are in.

First off, the person who used Twinfluence goes a bit red in the face and has to admit that they didn’t actually get any results because Twinfluence was down. So, Sir Alan Sugar-like, you tell them they’re fired and they walk out of the room in a hot funk, never to reappear.

Next up, the Twittergrader person tells you that all but two of the candidates scored 100% on the Twittergrader scale. So you cannot determine rank. That’s pretty useless so again, you send them on their way.

Straight away you’re down to six usable, workable sources: WeFollow, Klout, TweetLevel, Twitalyzer, and the two dorks still using Followers and Lists. Being a fairly thorough version of Sir Alan Sugar, you decide to chuck the results into a spreadsheet to see how the various measures compare. You take WeFollow as the base for this because at least WeFollow is explicit, that is, it’s people voting for other people rather than being figured out by an algorithm. At least you understand this. So, you take the WeFollow ranking, and compare that with how you would rank results from the other sources.

This is what you get:

| Account | WeFollow rank | Followers | Rank | Klout | Rank | TweetLevel | Rank | Lists | Rank | Twitalyzer | Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ArchRecord | 1 | 107,490 | 5 | 41 | 10 | 66 | 6 | 2,136 | 5 | 12.2 | 3 |
| archdaily | 2 | 18,099 | 8 | 50 | 4 | 67 | 5 | 1,763 | 7 | 8.7 | 4 |
| dwell | 3 | 107,712 | 4 | 45 | 6 | 69 | 4 | 3,326 | 4 | 0 | 11 |
| archiCentral | 4 | 15,268 | 10 | 12 | 20 | 47 | 18 | 959 | 13 | 0 | 11 |
| archinect | 5 | 6,570 | 16 | 34 | 13 | 55 | 14 | 729 | 14 | 0 | 11 |
| designmilk | 6 | 159,372 | 2 | 54 | 2 | 71 | 2 | 5,137 | 3 | 29 | 1 |
| wallpapermag | 7 | 212,487 | 1 | 44 | 8 | 71 | 2 | 6,363 | 1 | 0 | 11 |
| DesignObserver | 8 | 148,762 | 3 | 57 | 1 | 73 | 1 | 5,764 | 2 | 22.3 | 2 |
| MetropolisMag | 9 | 9,627 | 13 | 31 | 14 | 53 | 15 | 1,104 | 11 | 0 | 11 |
| architectmag | 10 | 10,225 | 11 | 42 | 9 | 59 | 9 | 1,145 | 10 | 0 | 11 |
| dornobdesign | 11 | 30,754 | 6 | 45 | 6 | 59 | 9 | 1,407 | 8 | 8.2 | 5 |
| AIANational | 12 | 6,925 | 14 | 25 | 16 | 58 | 11 | 619 | 16 | 3 | 8 |
| Interior_Design | 13 | 20,749 | 7 | 27 | 15 | 57 | 12 | 1,343 | 9 | 0 | 11 |
| blueprintmag | 14 | 5,646 | 17 | 22 | 18 | 44 | 20 | 641 | 15 | 0 | 11 |
| archimag | 15 | 2,933 | 19 | 20 | 19 | 45 | 19 | 346 | 19 | 0.8 | 10 |
| casinclair | 16 | 6,647 | 15 | 49 | 5 | 62 | 7 | 606 | 17 | 7.6 | 6 |
| mocoloco | 17 | 9,672 | 12 | 36 | 11 | 53 | 15 | 1,022 | 12 | 0 | 11 |
| designboom | 18 | 15,493 | 9 | 52 | 3 | 61 | 8 | 2,075 | 6 | 4 | 7 |
| architectderek | 19 | 4,774 | 18 | 35 | 12 | 56 | 13 | 312 | 20 | 2.5 | 9 |
| VariousArch | 20 | 2,715 | 20 | 25 | 16 | 49 | 17 | 357 | 18 | 0 | 11 |

That’s right. None of them agree. There are really wild differences here. ArchRecord, which according to WeFollow is number one in the architecture world, would be ranked 10th if you were using Klout for this. According to Klout, DesignObserver is the top dog, which largely agrees with most of the other sources, but again, not with WeFollow. If we were to rank by Followers, casinclair would be 15th, but by TweetLevel, it would be 7th.

So we can scoff at the people still using Followers or Lists, but really, if there is very little agreement across the board, does it matter? The Followers and Lists results are kind of in the same ballpark, so even if they’re crude measures, why not use them?

But there are degrees to which they disagree. Let’s compare them to each other to see which are the closest by figuring out how much, on average, a Twitterer’s rank changes when you use each metric:

Change in rank, WeFollow compared to…

| Account | Followers | Klout | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|---|---|
| ArchRecord | 4 | 9 | 5 | 4 | 2 |
| archdaily | 6 | 2 | 3 | 5 | 2 |
| dwell | 1 | 3 | 1 | 1 | 8 |
| archiCentral | 6 | 16 | 14 | 9 | 7 |
| archinect | 11 | 8 | 9 | 9 | 6 |
| designmilk | 4 | 4 | 4 | 3 | 5 |
| wallpapermag | 6 | 1 | 5 | 6 | 4 |
| DesignObserver | 5 | 7 | 7 | 6 | 6 |
| MetropolisMag | 4 | 5 | 6 | 2 | 2 |
| architectmag | 1 | 1 | 1 | 0 | 1 |
| dornobdesign | 5 | 5 | 2 | 3 | 6 |
| AIANational | 2 | 4 | 1 | 4 | 4 |
| Interior_Design | 6 | 2 | 1 | 4 | 2 |
| blueprintmag | 3 | 4 | 6 | 1 | 3 |
| archimag | 4 | 4 | 4 | 4 | 5 |
| casinclair | 1 | 11 | 9 | 1 | 10 |
| mocoloco | 5 | 6 | 2 | 5 | 6 |
| designboom | 9 | 15 | 10 | 12 | 11 |
| architectderek | 1 | 7 | 6 | 1 | 10 |
| VariousArch | 0 | 4 | 3 | 2 | 9 |
| Average rank change | 4.2 | 5.9 | 4.95 | 4.1 | 5.45 |

The table above shows how much each Twitterer’s rank changes when we compare it with WeFollow (I’m interested only in the size of the change here, not whether it’s up or down, hence all the values are positive; I’m no statistician, but taking the absolute change seems the sensible way to do it). So if you rank ArchRecord by Followers, its position changes by four places compared to if you’d ranked by WeFollow. And if you look at the top table you can see that makes sense: it’s ranked #1 according to WeFollow, but #5 by Followers.

The average difference is simply the average of these positional differences (again, I’m not a statistician). So, on average, if you rank by Followers rather than by WeFollow, Twitterers change position by a little over four (i.e. 4.2) ranking places. Look at the average ranking change for WeFollow compared to Klout: it’s nearly six (5.9)! On average, if you drew up a top 20 ranking according to Klout and compared that with WeFollow, your ranks would differ by nearly six places. That’s not even close.
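Incidentally, this “average rank change” is essentially what statisticians call the Spearman footrule distance between two rankings, divided by the number of entries. If you’d rather not do it by hand in a spreadsheet, here’s a minimal Python sketch of the calculation; the account names are made up purely for illustration:

```python
def average_rank_change(ranking_a, ranking_b):
    """Mean absolute difference in position between two rankings.

    Each ranking is a list of the same account names, best first.
    """
    position_b = {name: i for i, name in enumerate(ranking_b)}
    changes = [abs(i - position_b[name]) for i, name in enumerate(ranking_a)]
    return sum(changes) / len(changes)

# Toy example: two metrics that rank four accounts differently.
wefollow = ["alpha", "bravo", "charlie", "delta"]
klout = ["bravo", "alpha", "delta", "charlie"]
print(average_rank_change(wefollow, klout))  # every account moves 1 place -> 1.0
```

Identical rankings score 0; the bigger the number, the less the two metrics agree.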

Anyway, I said we’d compare everything with everything so on to the next few tables, with comments below.

Change in rank, Followers compared to…

| Account | Klout | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|---|
| ArchRecord | 5 | 1 | 0 | 2 |
| archdaily | 4 | 3 | 1 | 4 |
| dwell | 2 | 0 | 0 | 7 |
| archiCentral | 10 | 8 | 3 | 1 |
| archinect | 3 | 2 | 2 | 5 |
| designmilk | 0 | 0 | 1 | 1 |
| wallpapermag | 7 | 1 | 0 | 10 |
| DesignObserver | 2 | 2 | 1 | 1 |
| MetropolisMag | 1 | 2 | 2 | 2 |
| architectmag | 2 | 2 | 1 | 0 |
| dornobdesign | 0 | 3 | 2 | 1 |
| AIANational | 2 | 3 | 2 | 6 |
| Interior_Design | 8 | 5 | 2 | 4 |
| blueprintmag | 1 | 3 | 2 | 6 |
| archimag | 0 | 0 | 0 | 9 |
| casinclair | 10 | 8 | 2 | 9 |
| mocoloco | 1 | 3 | 0 | 1 |
| designboom | 6 | 1 | 3 | 2 |
| architectderek | 6 | 5 | 2 | 9 |
| VariousArch | 4 | 3 | 2 | 9 |
| Average rank change | 3.7 | 2.75 | 1.4 | 4.45 |

No need to panic, this is doing the same thing as the previous table, but relating ranking by Followers with the other rankings (we don’t need to include WeFollow now because we already did that in the previous table). Again, we’re looking at the absolute change, regardless of whether it’s up or down, then we average those changes at the bottom.

This time the biggest change is Followers compared to Twitalyzer, at 4.45. If two people gave you rankings based on these two metrics, you’d find that on average the positions differed by between 4 and 5 places. That’s still fairly large.

The lowest here is Followers to Lists, at 1.4. In other words, ranks by Followers compared to ranks by Lists would be very similar. Do you find this surprising? I do. I think. More below.

Let’s look at how Klout rankings compare, below.

Change in rank, Klout compared to…

| Account | TweetLevel | Lists | Twitalyzer |
|---|---|---|---|
| ArchRecord | 4 | 5 | 7 |
| archdaily | 1 | 3 | 0 |
| dwell | 2 | 2 | 5 |
| archiCentral | 2 | 7 | 9 |
| archinect | 1 | 1 | 2 |
| designmilk | 0 | 1 | 1 |
| wallpapermag | 6 | 7 | 3 |
| DesignObserver | 0 | 1 | 1 |
| MetropolisMag | 1 | 3 | 3 |
| architectmag | 0 | 1 | 2 |
| dornobdesign | 3 | 2 | 1 |
| AIANational | 5 | 0 | 8 |
| Interior_Design | 3 | 6 | 4 |
| blueprintmag | 2 | 3 | 7 |
| archimag | 0 | 0 | 9 |
| casinclair | 2 | 12 | 1 |
| mocoloco | 4 | 1 | 0 |
| designboom | 5 | 3 | 4 |
| architectderek | 1 | 8 | 3 |
| VariousArch | 1 | 2 | 5 |
| Average rank change | 2.15 | 3.4 | 3.75 |

This time we’re comparing Klout to the remaining metrics (we don’t need to do WeFollow or Followers because we did them above, remember). Klout compared to TweetLevel gives the lowest average difference, but still not as low as Followers to Lists.

Next up, TweetLevel:

Change in rank, TweetLevel compared to…

| Account | Lists | Twitalyzer |
|---|---|---|
| ArchRecord | 1 | 3 |
| archdaily | 2 | 1 |
| dwell | 0 | 7 |
| archiCentral | 5 | 7 |
| archinect | 0 | 3 |
| designmilk | 1 | 1 |
| wallpapermag | 1 | 9 |
| DesignObserver | 1 | 1 |
| MetropolisMag | 4 | 4 |
| architectmag | 1 | 2 |
| dornobdesign | 1 | 4 |
| AIANational | 5 | 3 |
| Interior_Design | 3 | 1 |
| blueprintmag | 5 | 9 |
| archimag | 0 | 9 |
| casinclair | 10 | 1 |
| mocoloco | 3 | 4 |
| designboom | 2 | 1 |
| architectderek | 7 | 4 |
| VariousArch | 1 | 6 |
| Average rank change | 2.65 | 4 |

Again, I’d say these are fairly large. An average change in rank of 2.65 is still nearly twice that of the lowest so far, Followers:Lists, at 1.4.

And finally, List rankings:

Change in rank, Lists compared to…

| Account | Twitalyzer |
|---|---|
| ArchRecord | 2 |
| archdaily | 3 |
| dwell | 7 |
| archiCentral | 2 |
| archinect | 3 |
| designmilk | 2 |
| wallpapermag | 10 |
| DesignObserver | 0 |
| MetropolisMag | 0 |
| architectmag | 1 |
| dornobdesign | 3 |
| AIANational | 8 |
| Interior_Design | 2 |
| blueprintmag | 4 |
| archimag | 9 |
| casinclair | 11 |
| mocoloco | 1 |
| designboom | 1 |
| architectderek | 11 |
| VariousArch | 7 |
| Average rank change | 4.35 |

Well done, you made it to the last table, where all we have left is ranking by Lists compared to rankings by Twitalyzer. It’s still not looking good is it? 4.35 means that rankings would change over 4 positions on average. So the person you said ranked 8th could in fact be ranked 4th, or even 12th.

I probably should create yet another table summarising all the average rank changes but I can’t be arsed. All we really need to look at are the biggest and, most importantly, lowest average differences.
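If you can be arsed, though, that summary is easy to automate. Here’s a hedged sketch, assuming each metric has already been boiled down to an ordered list of accounts, best first; the metric names are just illustrative dictionary keys:

```python
from itertools import combinations


def pairwise_rank_changes(rankings):
    """For every pair of metrics, the average absolute change in position.

    rankings: dict mapping metric name -> list of account names, best first.
    Returns a dict mapping (metric_a, metric_b) -> average rank change.
    """
    summary = {}
    for (name_a, rank_a), (name_b, rank_b) in combinations(rankings.items(), 2):
        position_b = {acct: i for i, acct in enumerate(rank_b)}
        changes = [abs(i - position_b[acct]) for i, acct in enumerate(rank_a)]
        summary[(name_a, name_b)] = sum(changes) / len(changes)
    return summary
```

With six metrics this produces all fifteen pairings in one go, so you can read off the biggest and smallest average differences directly rather than building table after table.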

The biggest is 5.9, which is when you compare how rankings would change, on average, if you rank by WeFollow compared to ranking by Klout. This implies to me that there’s something radically different behind those figures, different enough to make them mutually meaningless.

The lowest is 1.4. And guess which combination that is? It’s Followers to Lists.

Now, I’ve spent a lot of time agonising over how to calculate influence. If you do a quick search you’ll find a lot of people saying that Followers is not a good indicator of influence. Others say that perhaps Lists are better. But I don’t buy the other indicators. I don’t understand how they’re calculated and therefore I don’t understand what they mean or, importantly, what action to take. If you look at the Edelman equation for calculating TweetLevel, it’s horrendously complicated. What does it mean? How do I improve it?

But with Followers, I get it. By analogy with paper circulations, I can say to people that if, say, archinect tweets about you, then around 7,000 people will see it. I get Lists too. They tell me that, for example, over 1,000 people have bothered to add architectmag to a list, which is pretty impressive when compared to the others in the table.

So Followers vs Lists gives the lowest difference. From one angle you could say that’s just an indicator of the propensity of people to create lists, that is, for every 12 or so followers, one creates a list. But I don’t see any such ratio between number of followers and lists above.

So I’m going to be a bit heinous here and go against the commonly accepted wisdom. I’m going to say, in a nicely numbered chain of inference, that:

  1. Followers and Lists are often dismissed as indicators of influence
  2. There are lots of Twitter influence metrics out there that are supposedly better
  3. If you take any two – or three, or four, etc – and compare them, often the differences will be fairly major
  4. This implies that no one metric is really any better than any other metric
  5. Except for Followers vs Lists which seem to tally the closest
  6. You can gain actionable insights from Followers and Lists which you cannot from the other metrics
  7. Therefore: Followers and Lists are the best indicators of influence

I’m prepared to believe that some of the super-duper pro systems out there can do this better. Influence also needs time to really identify who influences whom. I know that influence is cause and effect, input versus output, etc. And this is not a scientific test, it doesn’t have a sufficiently large sample, etc.

But, if you need to draw up a list without access to a pro system, this is my take on it. The supposedly more sophisticated metrics don’t cut it.

I know it’s controversial but if anyone else can provide a convincing argument otherwise, I’d like to hear it.

Social media? I wouldn’t bother.

In the 18 months since I went freelance, I’ve spoken to a lot of people and worked with quite a few different companies, including a fair number of PR agencies.

And what have I learned? That the state of social media is pretty much exactly as it was when I first became a social media type, over three years ago. Except it’s worse. So, I’m going to make it all better, right here and now.

When I started there was a vague notion that something called a blog might be quite a useful communications tool. This was before Facebook and Twitter had started to loom quite so large. I told people how useful I thought blogs could be, but no one listened. I made it my job to find out about these developments and eventually moved on to pastures new, where there were tactics a-plenty but no concept of strategy, measurement or value.

Eventually I decided to go freelance so I could do things more how I felt they should be done. I’ve since developed what I would call fairly nifty ways of monitoring, measuring results and developing strategies. But time and time again I come up against the old problems:

  • You develop a strategy that considers all the angles – the people, the message, the brand, ownership – maps onto what the business does, and sets targets. You’re sure it will work. It’s beautiful. There is a lot of excited waving of hands. And that’s it. Six months down the line, it’s dead in the water. Why? Because, I think, people are too busy to be bothered with it. They got along fine before it, they’ll get along fine after it. They don’t really need it.
  • Clients make unreasonable demands of social media because they’ve heard of it. They want you to do things with it, right here, right now. You want to explain to them that it’s not a tap you just turn on. But they’re too busy to care. So you get unsatisfactory results because you’ve been using the wrong solution for the wrong problem.
  • You find yourself siloed because people don’t want to know. Part of your social media strategy is that people all look after different parts of it. But they don’t because they’re too busy. You just cannot sustain this position because social media is content-driven and you cannot be the expert on everyone else’s content.

Can you see the thread here? People are too busy. They’ve got their heads down working and social media is something they’re prepared to pay lip service to, but no more. It’s nothing malicious. They’re just too busy.

I have a very clever friend who once looked after the marketing for a prominent occupational psychology firm. When I met him recently I asked how things were going. He replied sadly “No one listens to me.” Of course they don’t. They’re too busy for marketing. So it goes, they’re too busy for social media too, it would seem.

But get this: things are worse now because a lot of people have sorta kinda heard about social media. So now they feel extremely smug when they say they’re not sure about it because they don’t know how it generates ROI.

ROI? Gimme a break! How many companies know the ROI of anything they do, let alone comms?

For example:

  • What’s the ROI of your website? How much did it cost you to put together, and how much have you made from it? If you don’t know, then why did you put one together in the first place? What would be the effect of taking it down?
  • What’s the ROI of your PR or advertising? How many leads did you make out of it? What was the value of those leads? If you just increased brand awareness/value/sentiment, how do you quantify this?
  • What’s the ROI of your intranet? Has it reduced development time? Has it reduced time to market? Has it helped retain knowledge? If so, how much do you think you’ve saved on the cost of recruiting and training new staff?

And so on.

The real problem here is that people have no idea of how their online efforts are doing because a) they don’t measure them and/or b) they never measured them so they have no benchmark. And c) they’re too busy to worry about this anyway.

So, my advice?

I once saw a programme about some men who spent time in a monastery. After several weeks one of them had what he classed as a spiritual experience. He went a bit ‘funny’ and couldn’t quite explain what was going on. The monk he told this to just said, in a very calm, soothing voice: “I wouldn’t bother.”

It felt nice. Nice and reassuring. Calming, some might say. Absolving, even.

So, if you’re worrying about social media, I wouldn’t bother. You’re too busy. It sounds cooooool but really, if I put a strategy together for you, you won’t follow it because you’re too busy. So I wouldn’t bother. If you want it to do something for you, here, now, then that won’t work because that’s not how it works, so I wouldn’t bother. And if you’re suddenly overly concerned about ROI – which you never were in the past – then, again, I wouldn’t bother because if you didn’t measure anything before, you won’t do it now.

There now. Doesn’t that feel better?

8 email statistics to use at parties

Any self-respecting marketing channel is incomplete without a set of cool numbers to go with it: numbers you can print on a t-shirt and impress your friends with (if you have the right sort of friends).

Social media has them. Now it’s email’s turn…

Email. We often forget it. I was just doing some email jiggery-pokery and started looking for stats on it. It turns out it’s pretty hard to figure out who is where in the scheme of things, as http://www.email-marketing-reports.com/metrics/email-statistics.htm points out. But I came across this page and really, it makes you realise that email dwarfs social media in terms of numbers. If numbers are what it’s really about, then email is way ahead.

I particularly like this one: In the time it takes you to read this sentence, some 20 million emails entered cyberspace. Wow.
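Out of curiosity, the arithmetic behind that sentence roughly holds up. Assuming around 294 billion emails a day (a widely quoted estimate for 2010, not a figure from this post) and about six seconds to read the sentence, a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "20 million emails per sentence" claim.
# The daily volume below is an assumed figure, not measured here.
emails_per_day = 294e9      # assumed global daily email volume, circa 2010
seconds_to_read = 6         # rough time to read one short sentence

emails_per_second = emails_per_day / (24 * 60 * 60)
emails_while_reading = emails_per_second * seconds_to_read
print(round(emails_while_reading / 1e6, 1))  # prints 20.4 (millions)
```

So on those assumptions, roughly 20 million emails really do go out while you read a sentence.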