The old web is dying and I’m not sure I like the new one

BlogPulse has no pulse

So I was playing around with dashboards and the like yesterday – as one does – and noticed that BlogPulse has disappeared. BlogPulse was not the greatest blog search engine around, but it was the only one offering anything like usable charts. So, given that Technorati charts disappeared years ago (although they still have a page claiming they’ll be back soon), and other solutions such as IceRocket don’t enable you to pass keywords to create live charts, it would appear there is no longer any blog charting widget out there.

Is this the final nail in the coffin of blogging? Are we really so uninterested in blogging activity that charts are no longer considered viable? It would seem that way, and the ‘blogging is dead’ meme is very much alive right now.

Charting generally seems to be suffering

Recently, TweetCloud disappeared, without even a whimper. It just vanished. I seemed to be the only person who noticed, but it was, like BlogPulse, the only solution that did something incredibly useful: it would create a tweetcloud for a search term on the fly. In other words, you typed in what you were looking for, and it created a tweetcloud for that search (not a tweetcloud of your own timeline, which really isn’t that much use, but is, I suspect, a lot less processor-intensive). Plus it did it quickly, and there was a widget for it, which enabled you to build dashboards giving an instant overview of the latest terms associated with any topic. It was great. And then it wasn’t. There are still sort-of alternatives, such as Visible Tweets and Twendz, but, while they’re very pretty, you can’t build them into dashboards.

And today, Trendistic, the only (again) solution for live charting of Twitter trends, is down. It was down yesterday too. Look for it on Twitter search and there are just a load of weird Polish references to it (who knows, maybe Trendistic is a Polish pop group). Surely – sssssurely – Trendistic can’t have disappeared too? And surely, again, it can’t just be me who thought it was an absolutely brilliant idea?

RSS is dying

If you’re detecting a pattern here, you’re not alone. It does seem that really great ideas are failing as the web grows bigger and faster. They just cannot keep up, it seems – or, at least, not until/unless they’re snapped up by one of the walled gardens such as Facebook. Free information – as in, really free, readily available, easily manipulated and shared across the entire web – is disappearing.

RSS was supposed to be the great hope of free information. Peel the content away from the format, and hey presto, you can share pretty much anything across any platform. But therein lies the problem: something free is not something you can fence off and charge for. It is free in every sense of the word.

So it seems RSS is suffering too. Google Reader used to be a really nice way to bring feeds together and create a static web page of the results as well as a newly aggregated feed. Not since its recent revamp, however. All the sharing features have been ported across to Google+, presumably because Google+ is a neat, walled garden whereas RSS was messy and free. Yahoo Pipes was the ultimate RSS aggregator/mash-up tool but suffered from underinvestment by Yahoo. Even after a supposed major overhaul, it’s flaky and too slow to power a dashboard (unless you’re prepared to wait for a minute or so while the results load up). Another RSS mashup tool, XFruits, died a couple of years back. Do a search for RSS aggregator tools and it’s like a graveyard. The only viable tool that I can see is called FeedRinse which, while it offers aggregation and filtering (the two most useful features of Pipes), also feels a bit overloaded and slow. And, as with TweetCloud and BlogPulse, it’s the only game in town, which leads me to believe it won’t be for much longer.
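For what it’s worth, the two Pipes features singled out above – aggregation and keyword filtering – are conceptually tiny. Here’s a minimal sketch applied to already-fetched feed entries; in real life you’d parse the feeds first (the third-party feedparser library is the usual tool), and the entries below are stand-ins I’ve made up for illustration.

```python
def aggregate(feeds, keyword):
    """Merge entries from several feeds, keep keyword matches, newest first."""
    merged = [entry for feed in feeds for entry in feed]
    matches = [e for e in merged if keyword.lower() in e["title"].lower()]
    return sorted(matches, key=lambda e: e["date"], reverse=True)

# Placeholder entries standing in for parsed RSS items
feed_a = [{"title": "RSS is dying", "date": "2011-12-01"},
          {"title": "Cat pictures", "date": "2011-12-02"}]
feed_b = [{"title": "Why RSS matters", "date": "2011-12-03"}]

for entry in aggregate([feed_a, feed_b], "rss"):
    print(entry["date"], entry["title"])
```

That’s the whole trick: merge, filter, sort. The hard part, as the graveyard of tools above suggests, is doing it at web scale, reliably, for free.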

RSS from search has been abandoned by major players too. Such as the bookmarking platform Delicious. You used to be able to search across the Delicious database and pull an RSS feed from that. Stunningly useful, as it showed you what other people considered important for any topic. Not any longer. Twitter has also demoted RSS from search: you can still do it, but you have to look around to find out how. It’s another candidate for the cull, I believe.

Mash-ups are harder

So where does this leave us if we want to create our own mash-ups or dashboards? Well we can dive into the APIs if we fancy it, and learn a smattering of HTML and javascript. But we still need reliable platforms to base our dashboards on. The familiar theme of ‘only game in town’ is revisited here, in that the only solution offering public dashboards – that is, pages that you can show to anyone without them needing to log in – is Netvibes. And every time I create a dashboard in Netvibes, I find I have to spend quite some time figuring out what still works and what doesn’t. Quite apart from discovering over the past few months that third-party sites have disappeared, I’m finding that third-party widgets in Netvibes are broken, or even that Netvibes itself is cranky. So, for example, my attempts to create a dashboard yesterday were frustrated by HTML widgets only displaying the top portions of any image or javascript output, widgets generally not staying in the same place when I refreshed the page, RSS feeds not being imported correctly, and, on recourse to their support forum, finding it full of spam.

It seems the free tools that were once so useful are now decaying or falling apart. I don’t know what ‘Web 2.0’ really meant, but I have a sense of something dying, something that was slower and smaller than the web today, that shared more freely but was doing so with less immediacy and monetary return. Whatever we’re moving towards, if it’s Web 3.0, then it’s becoming more consolidated, monetised, bigger, faster, noisier.

So the ‘roll your own’ approach is going to get harder. The smaller, innovative sites that did one thing, and one thing well, just cannot survive the double onslaught of vastly increased traffic and expectations of real-time delivery unless they can make money from it. The old, fluid, free web that comprised many islands of activity is solidifying into separate continents of influence. The game is so much harder now, that it’s only the really big players that can make sense – and money – out of it.

Nostalgia ain’t what it used to be

Me? I preferred the more innovative, dynamic environment. I liked the way that RSS could be readily shared, and smaller enterprises could create neat tools that let you do things with it, without really needing to be a developer. I guess those days are gone. Nostalgia certainly ain’t what it used to be.

Postscript: … and no sooner do I file this post than I read this Observer piece by John Naughton, entitled “Has the Internet run out of ideas already?”, on the progression of information technologies: “from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel – from open to closed system.”

I couldn’t have put it better myself. In fact, I didn’t.


Thin client? What thin client?

I just spent five minutes waiting for my laptop to boot up. It’s a fairly standard spec, running the dog’s breakfast that is Vista, but still, it shouldn’t take that long. In a world of cloud computing, could we be looking ahead to instant start-up as clients get thinner? I doubt it.

I’ve documented my history with computing on this blog before, but very briefly it goes: ZX81, ZX Spectrum, IBM PC running DOS, GEM (a precursor to Windows), Windows, self-built PC, and now I’m the proud owner of four networked machines that all do their own thing in their own inimitable way (sorry about talking tech with long words but I’ve just been listening to the Fry Chronicles and marvelling at how Stephen Fry reacted to tech in the same way I did).

In one way it’s a history of progression, from a 1KB system that didn’t have enough memory to fill the screen with characters, to being able to store all my DVDs, uncompressed, on one hard drive.

But in another, it’s not.

The ZX81 and Spectrum were so-called ‘clean machines’, that is, you just turned them on and they booted up instantly because the operating system was built into the hardware. The early DOS-based machines were similarly quick to start up, not as immediate but pretty fast. Certainly not five minutes. And, for the record, I cannot remember one instance in which a DOS-based machine crashed. Not one.

Since then, machines seem to take longer and longer and longer to start up. And despite the laptop I’m typing at right now being several orders of magnitude more powerful than the PC I built several years ago, it doesn’t strike me as much faster. It does more things, and it’s easier to use, but it’s not really faster. My netbook, if anything, is quite a lot slower.

An anecdote: I used to work for a company that delivered financial information to the City via a web browser. It was very forward-looking stuff (too forward-looking perhaps – it never made it). I remember trying to nail down the specs for running the system and in the end we just decided ‘if it can run Windows, it can run us’. This was quite neat, but still we got enquiries from people running machines that were only a year or so old that couldn’t handle it, particularly if the City was feeling quite bullish.

I remember saying to the tech manager, with heavy irony, “So they don’t have enough capacity to run our thin-client system then?” My how we laughed.

So why, if we’re putting more onto the cloud, are we still suffering? Even the web technologies themselves – Antivirus, Flash, firewall, Java, javascript, Silverlight, the list goes on – are demanding more of our local processing power. That’s without even thinking about the supposedly processor-intensive stuff such as multimedia.

Here’s what I want: a completely online operating system. Something so server-based that all I have to do is switch my machine on, and everything runs online. The only thing the local machine has to look after is the web connection, maybe some security, and that’s it. I know there are versions of this – my Samsung netbook came with one that didn’t work so I uninstalled it – but I cannot name any.

But I doubt that’s going to happen. Chip designers and manufacturers will, out of necessity, produce faster and more capacious chips because they’re in a competitive market. In response, and for the same reason, software designers will use that computing power to do more. Your local machines will continue to gain weight.

So I’d really love not to have to wait for my machines to boot up, or to have to update every sodding piece of software on a daily basis. Instead, I’d like to transfer all the computing online, because it’s a much neater way to do it. I want a dumb but trim terminal, not a clever, overweight machine. I want something more akin to my clever little black cat rather than my stupid fat one.

What’s the sum total of all the computing power in the world?

Not a wind-up, honest. Click for source.

Estimated reading time: 4 minutes

OK, so it’s the kind of question a child would ask, but I do have a child’s mind, especially on a Friday.

Actually I should have posted about this yesterday because, according to my new blog schedule Thursday is tech day, but I forgot.

Actually, that’s a lie. I was too busy to post, then I fell asleep on the sofa, woke up in a pool of dribble, and went to bed.

Enough. Back to the question. It started off as an idle thought but then the more I thought about it – and the more I discussed it with Giles, my mate who works at Realwire and often sits opposite me at the Hot Office – the more interesting it became.

So, my thought was: how much computing power does the world have?

Imagine a world in which we needed to harness all the computing power in the world to solve a particular problem because our very existence depended on it. Or imagine there were some critical point of complexity and size beyond which technology became conscious.

OK, so these are slightly ridiculous teen-level sci-fi musings, but the question still stuck in my head, so let’s imagine instead that it’s just an interesting question for its own sake.

Obviously it’s growing all the time – probably grew a lot while typing these words – and it’s probably a useless question to ask anyway. But it raises all sorts of little questions, little, tiny questions running around on the floor, occasionally pricking your shins with needles then scurrying away under the sofa.

First stop: I type the question into Google.

First hit: the top 500 supercomputing sites. Now if, like me, you grew up watching Sesame Street, you’ll have great difficulty comprehending the word ‘supercomputing’ without thinking of ‘SuperGrover’ (and the same goes for the word ‘phenomenon’). This looks promising but I don’t really understand what it’s telling me. I see a lot of numbers and start getting panicky and want to run away.

So, next up: Wikipedia. This tells me what the top 10 supercomputers are. This is more like it. I try deducing total power by cross-referencing these lists, so that, for example, if Lawrence Livermore accounts for 5.4% of the total power across all those sites, and the total power of the top ten sites, according to Wikipedia, is 7,360 teraflops, then – scaling up from those percentage shares – you could say, maybe, that the total power is around 14,719 teraflops.
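The scaling-up step is just one division. A minimal sketch, on my own assumption (not stated in either source) that the top ten machines’ percentage shares sum to roughly half of the whole list – which is what makes the jump from 7,360 to roughly 14,700 teraflops work out:

```python
# Back-of-envelope extrapolation from the Top500 figures quoted above.
top_ten_total_tflops = 7_360     # sum of the top ten, per Wikipedia (as quoted)
top_ten_share_of_list = 0.50     # assumed: top ten = ~half the whole list

estimated_total_tflops = top_ten_total_tflops / top_ten_share_of_list
print(f"Estimated total: {estimated_total_tflops:,.0f} teraflops")
```

Change the assumed share and the estimate moves proportionally, which is exactly why this is a rule of thumb rather than an answer.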

But that’s grossly simplifying an ever-increasingly intriguing question. Which is: what do we mean by ‘computing’?

So I add my laptop, PC and music PC together and I get something like about 8 GB of RAM and 10,000 MHz of processing power (I think).

But my washing machine has a processor, and probably some RAM. So does my alarm clock. And my mobile phone. Even the cats are microchipped. Are they computers now?

My car has a CD player in it. Do we class that as computing power? How about the engine management system? That’s a powerful computer, even if it is a Toyota.

Other cars – let’s face it, better cars – have their own IP address.

And suddenly this isn’t a simple question any more.

I know what you’re thinking: this is a futile question because there are different types of processor, different types of RAM, and whereas a lot of them talk to each other, not all of them do, or even can. There’s no way I’d be able to get my TV to run Excel, for example (not yet, anyhow). But my TV is a computer.

I really wanted to post an answer to this today, but Giles has noticed that I keep scratching my chin and making funny noises, which is what I always do when I’m thinking a bit too hard. So, I’m going to leave it for now and maybe try and work this one out for myself.

I envisage a huge spreadsheet with things on it. The things on the left will be ‘Total number of washing machines globally’, and the things on the right will be ‘Average processing power’ or ‘Average RAM’. Then I’ll add the whole lot up – laptops, PCs, mobile phones, supercomputers, cats, cars and washing machines – and tell you how much, roughly, we can process as a planet. Should we ever want to know, or use it.
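The spreadsheet idea boils down to multiply-and-sum per device category. A minimal sketch – every number below is a placeholder I made up purely for illustration, since filling them in properly is the whole exercise:

```python
# category: (estimated units worldwide, average processing power per unit, in MIPS)
# All figures are invented placeholders, not estimates.
devices = {
    "PCs and laptops":  (1_000_000_000, 10_000),
    "mobile phones":    (4_000_000_000, 1_000),
    "washing machines": (500_000_000, 1),
}

total_mips = sum(units * avg for units, avg in devices.values())
print(f"Planetary total: {total_mips:,} MIPS (given these made-up inputs)")
```

Notice how the washing machines barely register next to the PCs – which hints that the eventual answer will be dominated by a handful of rows on that spreadsheet.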

If anyone wants to help, feel free. If you’ve already figured this one out, even better – tell me. If you think it’s a futile or ridiculous task, let me know. And if you have anything else to add, let me know. All three of you. I’m all ears.

Just how fast does a PC need to be?

Estimated reading time: 2.5 minutes (plus 4:13 if you watch the video all the way through)

In an effort to get myself back into blogging habits, and to make sure I’m up to date with everything I need to be up to date with, I’ve decided to set myself a schedule. So, today is tech day – that is, I post about something a bit techy, a bit digital, a bit wey, a bit wah.

So, PCs. How fast do they really need to be nowadays?

I ask this because I was looking through my bookmarks the other day and found a video of some guys at Samsung putting together a PC that used Solid State Drives (SSDs).

In one respect this is a very cool thing. Hard drive storage tends to be a big bottleneck in most systems, so if you can speed up data access, you speed up the whole system. Hard drives are fairly clunky old things, in that they need to find data on the (rotating) physical medium, and this takes time. But SSDs are, as their name suggests, solid state: nothing in them moves, and the data is accessed directly, exactly as it is with your PC’s memory. Which, in fact, it pretty much is.

In another respect, it’s not a cool thing. But take a look first:

Apart from the slightly self-congratulatory nature of the video, it’s pretty good. And the speeds are astonishing (although they’ll be commonplace before too long).

But the reason I’m not sure about it is this: do we really need those sorts of speeds?

Before you accuse me of being like the guy who said we only need four computers in the world, or the other guy who said everything that could be invented has been invented, or the guy who said any given program will expand to fill all available memory (I think his name was Moore) I do have a reason for saying this.

And my reason is: everything is going into the cloud. Already I have access to a super-fast computer with super-massive storage. It’s called the web. I can run seriously complex queries online through systems like Pipes (or at least I could before it went a bit crap, and I’m hopeful that it’s going to get better soon). I can upload – and have uploaded – spreadsheets that freeze my PC but which Google Spreadsheets handles easily.

So given the choice, I’d rather have greater upload and download speed, but do I really need a faster PC? I’d argue my PC is fast enough now. I wouldn’t have argued that about five years ago when everything was local and I needed oodles of power and storage at my fingertips. Yet today, the power and storage exists ‘out there’, in the stuff of the web.

So, it’s an impressive video if you’re impressed by that sort of thing (which frankly I am – I’m never happier than when up to my elbows in bits of kit). But do we need it? Do we? Do we really? If you’re one of my three regular readers, let me know what you think.

Five cool ways to find people on Twitter

This post is probably going to get lost in the Twitter noise – and, judging by my declining stats, hardly anyone reads this blog anyway – but I still find it useful to share knowledge occasionally, not least because every day I don’t post I suffer guilt.

I’ve recently been looking around Twitter a lot, trying to find influencers. Now, there are many, many, many, many, many definitions of what influence is, and having been through most of them I’ve come to the conclusion that you can throw away your twitter rankings and your twinfluences and your twitter indices and just count the number of followers someone has. It’s quick, it’s simple, and it tells you straight away how many people you’ll reach. And, as a rule of thumb, someone with 10,000 followers is going to be more influential than someone with 1,000.

So, with that out of the way – and no, I’m not going to enter into yet another debate about it – how do you actually find these people? Well, being someone who likes to package everything he does so that other people can do it too, I’ve come across five nice ways to do this. Go through each of these and you’ll more than likely end up with a good list.

Let’s assume we’re searching for, oh I don’t know, data quality (that really is a random choice btw). So:

1. WeFollow

Go to WeFollow and type in your search term, without spaces – in this case, dataquality. And look, a nice list of people that talk about data quality, complete with follower numbers. Nice. But not exhaustive, because WeFollow doesn’t have everyone, although it is a very good first port of call to get a quick list together.

2. Replies or retweets (especially for people)

Search on Twitter for replies or retweets, especially if you’re searching for influencers associated with a person who, in turn, is associated with an issue or topic. So, from our WeFollow data quality search, we found that ocdqblog is pretty well thought of, so do a search on Twitter for replies and retweets involving ocdqblog. This shows you people who have replied to or endorsed what that person has to say in some way – and, by implication, people who have been influenced by, or talk to, that person. So it’s a fair bet that they’re in some way associated with that person. So, add ’em to your list.

3. Hashtags (especially for issues)

A hashtag is a small identifier that people use to make it easier to bring tweets together for a specific topic that they’re pretty keen on. So, if someone has used #dataquality as a hashtag, it’s a fair bet that they’re involved enough in data quality as a subject to use it as a hashtag. In this case, you’d search on Twitter for #dataquality (in a direct link you have to use the URL code %23 instead of typing in a hash symbol – don’t ask why, you just do). This method actually works really well, I’m finding. You do land some big fish this way.

4. #FollowFriday or #FF Hashtag

Search, again on Twitter, but with the #followfriday or #ff hashtag. FollowFriday is a neat little meme that people use to say “Hey, this person is worth following for this issue” – on a Friday. So if someone is doing some good work in the data quality field, it’s likely someone else somewhere has said at some point “Person X is good at data quality #ff” or suchlike. So by searching for “data quality” %23ff OR %23followfriday, you can find people who have been endorsed by other people as being authorities on, in this case, data quality.

5. Topic search

Finally, just search for people who have mentioned your search term – that is, “data quality”. This is probably what you first thought of doing and while it works, it doesn’t have any of the nice nuances of whether they’ve been endorsed on WeFollow, or replied/retweeted, or used a hashtag, especially the FollowFriday hashtag. So you might get a lot of hits this way, but not as many quality hits, that is, people who are really involved, or recognised or endorsed by people involved in this area.

So there you go. If you’re canny you’ll figure out ways of creating all these URLs on the fly, generated from just specifying your search term, so you can just copy and paste them into a browser and off you go (or just click them in Google Docs, which has a lovely new auto-click URL feature now). And you save enough time to blog about it afterwards. Not that anyone will read about it.
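The on-the-fly URL generation suggested above can be sketched in a few lines. The search URL pattern is an illustrative guess at Twitter’s query format rather than a documented API, so treat it as a starting point; the %23 trick mentioned earlier is just percent-encoding of ‘#’, which the standard library handles.

```python
from urllib.parse import quote

def search_urls(term: str) -> dict:
    """Build the search URLs for one term (illustrative URL patterns)."""
    q = quote(term)                            # 'data quality' -> 'data%20quality'
    tag = quote("#" + term.replace(" ", ""))   # '#dataquality' -> '%23dataquality'
    return {
        "topic":        f"https://twitter.com/search?q={q}",
        "hashtag":      f"https://twitter.com/search?q={tag}",
        "followfriday": f"https://twitter.com/search?q={q}%20{quote('#ff')}",
    }

for name, url in search_urls("data quality").items():
    print(name, url)
```

Change the term once and all the searches update together, which is rather the point.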

New Google Twitter search is good but not great

Speed counts. Click image for source.

When I heard via Shel Holz that Google is now indexing Twitter updates I got a bit excited for three reasons:

1. Were they going to show the total number of mentions? If so, we could count them as a crude index of popularity. Do a search for a term on Google and it gives you the number of hits. Number of hits roughly equates to popularity/ubiquity, and you can use that as a rule of thumb against similar searches. One term gets 10, another gets 1,000, and another gets 1,000,000. The million is probably more important. It’s a starting point.

2. Were they going to include RSS? Twitter searches do. As soon as RSS gets involved with real boolean searching, I get excited. I’m easily excited.

3. Would advertising be supported? If so, Twitter could get a slice and actually start making money.

The answer was ‘no’, ‘no’, and ‘no’.

So while I’m pleased that Twitter has shimmied into the mainstream, and might be getting somewhere towards a sustainable model, it’s not there yet. I cannot use the new feature for that ‘rule of thumb’ count; I cannot pull search results off into any other page/module/widget; and Twitter itself is close to making money – but no cigar, because while we can see the results, we don’t get directed to anyone else paying for us to see them.

I can see why the answer is no to all of the above. The updates just come in so thick and fast. Try doing a search for Tiger Woods for example, and you’re in another man’s inner circle of hell – except it’s outer (ie shared by the world) and very, very fast. Too fast to count, to provide meaningful RSS updates, and certainly to provide contextually meaningful ads.

So we’re still left with a conundrum. How can we measure such fast, ethereal information, and more importantly, how can Twitter monetise it? Perhaps we need some kind of ‘speed’ counter. Not how many updates, but how fast. Maybe speed of information is the new reach.

If it’s easy, it’s probably wrong

Me trying to get Pipes to work, yesterday. Click image for source.

Owing to a bit of downtime – clients disappearing instead of continuing projects, yes I agree, it’s very rude isn’t it – I’ve been fighting with Yahoo Pipes and Netvibes to try and get some sort of effective monitoring solution together.

In theory it’s pretty simple. Yahoo Pipes lets you do all sorts of fancy stuff with RSS, so you can specify keywords in just one place and do searches across every known platform that has a decent search feature allied with RSS. Netvibes is a funky front-end that lets you bring together the outputs from Yahoo Pipes, alongside nice charts, and just about anything else you care to add if you know a smattering of HTML.

In practice it’s been a nightmare.

Firstly, Yahoo Pipes. I’ve complained about it many times before but it really is the only game in town when you want to bring feeds together, split them apart, change them, filter them and so on. But it’s so clunky, and more often than not it plain old doesn’t work. When I try to save something I invariably get an error, but I’ve learned that, in a marvellous twist of irony, the error message itself is in error and usually the pipe saved ok.

But there are other issues. The interface breaks, there is a huge lag between changing something and seeing the results come through in the RSS feed, and the whole system goes down often enough to be irritating. I get a strong feeling Yahoo have decided they don’t make much money out of it so they’re happy leaving it in its semi-parlous state for now at least.

On to Netvibes. That started freaking out earlier this week, and was down today with an internal server error. Again, as with Yahoo Pipes, there are other players in the field but Netvibes is the best. In fact I love everything about Netvibes except its recent flakiness. I just hope it pulls itself together and starts working properly.

So the real issue has been Yahoo Pipes. The workaround is that I don’t use them. I just grab the RSS direct from wherever – Google Blog Search, Flickr, YouTube etc – and then at least I’m avoiding the idiosyncrasies of Pipes. But that takes ages, and if I change any of the keywords I have to set all the feeds up again from scratch. Much better to have a modular system that lets you do this once and once only, then purrs through every known platform and gathers it all up for you.

But that would be easy. And easy is seldom right, right?
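Still, the ‘do it once’ part of the workaround is at least scriptable. A minimal sketch of expanding one keyword into per-platform RSS search URLs via templates – the template strings here are hypothetical stand-ins (the real endpoints varied, and several no longer exist), so the shape is the point, not the URLs:

```python
from urllib.parse import quote

# platform: hypothetical RSS-search URL template (illustrative, not real endpoints)
FEED_TEMPLATES = {
    "blogsearch": "https://www.google.com/blogsearch/feeds?q={q}",
    "flickr":     "https://api.flickr.com/feeds/search?tags={q}",
    "youtube":    "https://www.youtube.com/feeds/search?q={q}",
}

def feeds_for(keyword: str) -> list[str]:
    """Expand one keyword into every platform's feed URL."""
    q = quote(keyword)
    return [tpl.format(q=q) for tpl in FEED_TEMPLATES.values()]

for url in feeds_for("data quality"):
    print(url)
```

Change the keyword once, regenerate every feed URL – which is the modular behaviour Pipes was supposed to provide.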

Epoch’s Hothouse is hot!

I’m the digital associate at Epoch PR, until recently CMP Communications. It’s going great so far – we’ve won business together and they’re a lovely bunch of people with some brilliant ideas. But I didn’t realise how brilliant until I attended one of their Hothouse lunches recently.

Hothouse is the name they give to their programme of ideas. Programme of ideas? Dead right. Epoch’s take on PR is that they want to know what’s happening next as well as now. Sounds great, and I thought I may as well go along, especially as it might help me get over the exertions of the previous night’s Jackenhacks.

The ‘occasion’ was the 40th anniversary of the internet so we had people with unique, informed and extremely far-reaching views of what the internet was, what it is now, and what it could be in the future.

We had the editor of Wired UK, David Rowan. He introduced ten trends in ten minutes. Nice.

They ranged from the nature of Web 2.0 to, essentially, the nature of humanity. Take this: he talked about mapping thoughts digitally and the slightly worrying possibility of introducing viruses to that mapping and re-injecting them into people’s psyches. Sounds far-fetched? Latest word in cyberspace is that you actually can have people ‘reading’ each others’ thoughts across the interweb. It could be the first faltering steps in that direction. The thought-web. The web as thought. Good God.

We also had Nico MacDonald. He talked about the importance of design in the future web, how we’ll expect to interact with it in much more intuitive ways than we currently do. For example, tactile response is important. We can already see rudiments of this with equipment such as the Nintendo Wii, and I guess his take tied in with David’s in that the ultimate interface would be, well, thought. Good grief.

Where they differed was in their view of Web 2.0. Nico’s take was that it’s purely marketing, and doesn’t really represent anything new. David’s was that there is a definite qualitative difference in Web 2.0.

I tend towards David’s view. If you want a tech definition, Web 2.0 is the application layer of the OSI model being transmitted over a network. In practice what this means is that whereas we once had networks that could just about send small amounts of data, we now have a worldwide network that is fast and reliable enough for us to share entire applications. We can run word processors and spreadsheets on other people’s machines – that’s what cloud computing is, at essence – and a by-product of this is that we can run applications that let us share stuff. That’s what social media is.

So you could think of the 2.0 in Web 2.0 as ‘two-way communication’. That’s how I explain it. In truth there’s probably some marketing in there too, as in ‘we’ve moved up a gear so give us more money’. But I do think Web 2.0 represents a change. There are many definitions of what Web 3.0 might be, and I think the best of those is that it’s mobile. Again, a definite, qualitative change. And here, the 3.0 could be ‘three dimensions’, as in, no matter where you are, you’re connected, with rich multimedia and sharing.

We also shared a table with Claire Fox. Now, I knew I recognised the name, and I sort of recognised… the voice. It wasn’t until I looked her up online afterwards that I knew why. She’s a regular on The Moral Maze on Radio 4 and made her name by laying into Michael Mansfield QC. Wow. As the director of the Institute of Ideas, she was certainly someone who brought something to the table, quite literally.

So the discussions were fascinating, and the food was great. I even managed a couple of glasses of wine after reconstituting from the Jackenhacks, on water and copious amounts of fresh air beforehand.

Why am I telling you this? To promote myself/Epoch/Hothouse? In part yes, but also because I think it’s very important to realise that what we see around us today holds the seeds of what might grow in future.

Futurology could be a load of old rowlocks – who could have predicted the rise and rise of texting, for example? I was shocked at my recent discovery that no one in PR talks about PR any more. And, on the flipside, we still don’t have jetpacks.

But it’s still worth thinking – and talking – about.

I’m no visionary but I do remember my sneaking suspicion that blogging would be important for PR about three years ago. RSS monitoring likewise. That’s why I now have blogging and monitoring as services complementing my copywriting, if you’re interested.

And for what it’s worth, I think the future of the web is going to be mobile and integration – that is, Web 3.0 will be about mobile connectivity integrating both human- and computer-generated information, so you can ‘talk’ to your IP-enabled car or call up augmented reality when getting lost in Vienna.

The ramifications for marketing communications? How about location-based advertising in your augmented reality? How about recommendations from corporate sponsors when you’re driving? If it’s done right, it’ll be unobtrusive and will actually, genuinely help you.

Peering at Google Maps on my mobile phone’s tiny LCD screen in the rain in Vienna last year, trying to figure out how to get back to my hotel, I could certainly have done with something that would just point me in the right direction and offer to find me a decent restaurant into the bargain.

And given that I’m the kind of person who likes to go ‘wow’ a lot – at talking pianos, for instance – I’m certainly looking forward to future Hothouse lunches.

The only downside is that I can’t really give you a decent link on Epoch’s fancypants website for more details. This is something we all intend to remedy. One day you’ll be able to see what went on. And hear it. And, you never know – think it too.

PS And if you think this post is too verbose, you’re right. I’ve got to start cutting down…

Once upon a time…

… I was a copywriter.

Then I became a social media planner. Then I became a digital PR senior account manager. Then I was a social media strategist. Then I decided to jack it all in and become a copywriter again.

Now, I’m finding I’m sort of all of those at the same time. Confusing, isn’t it?


On the one hand, I’m most definitely a content creator. I write stuff. I can’t help it. I’ve always written stuff.

There’s a typewriter next to me in my office which was owned by my grandfather, and I used to type stuff on it when I was young. Anything. Everything. Mostly ridiculous poems.

It’s got huge, black, Bakelite keys that you can really punch down, and when you do, the hammers hit the paper and don’t so much type as emboss.

When you get to the end of a line the whole carriage whacks across and nearly carries the typewriter across the room with it.

It’s got a fantastic bell that, if you recorded it and slowed it down, would probably give Big Ben a run for its money.

And, best of all, it has a large stain across the front, probably caused by some correction fluid. Now my grandfather liked things to be ‘just so’. He cleaned records thoroughly before he put them on ‘the gram’. He would spend hours cleaning his pipe. He did 10,000 piece jigsaw puzzles. So I cannot imagine the brouhaha that ensued when he spilled all that over his typewriter. The air must have been blue. He probably went out to shoot some rabbits just to get it out of his system. Fantastic.


It started with a ZX81 and went on from there.

People think I’m a geek, or a techy, but I’m not really. I just like dicking around with these things. Mostly I like them for the creativity they facilitate nowadays.

I have a home studio based around an old PC and it’s great fun. I should add that it’s the third PC I’ve used for this, because I trod on my first one and cracked the motherboard, then I blew the second one up about a week ago. I have a tragic combination of curiosity and technical ineptitude.

Social media

I’ll never forget when I first posted on this blog, subscribed to it to see what would happen, and then a few minutes later saw it appear on Google Reader. I was hooked. Have been ever since.

Now, as a freelancer, talking to people who really need to get themselves seen and heard (and read and talked about), I’m really starting to appreciate what social media can do, from large corporations right through to single-person enterprises.

So I’m back in the trade, so to speak.

Yesterday I described Facebook as a TV studio with Twitter as the satellite dish beaming out the updates. Today I’ve been figuring out how best to get my Yahoo Pipes Social Media Search Engine sorted so that I can package that as a service. Tomorrow I’m working on a blog strategy for a management consultancy.

Brings me back to my grandfather. I once tried to explain to him what the ZX81 was about. “Eeeh, it’s beyond my ken”, he sighed. Then probably went out to shoot more rabbits.

So, imagine a Venn diagram with those three things (writing, technology and social media) around it, and me in the intersection. It seems that, whenever I try to move out into one or other of the bubbles, some strange gravitational pull draws me back into the middle.

That’s all. I should really talk about social media issues and news etc, but sometimes I just write… stuff.
