Thursday, May 29, 2008

The death of the campaign

A version of this piece was published in Marketing in 2008
On March 21st 1918, Erich Ludendorff launched the Spring Offensive, the massive attack that was Germany’s bid to end the First World War. In just five hours, the Germans fired over one million shells, and lightly-equipped stormtroopers cut deep swathes into British lines.

But the very speed with which the Germans advanced proved to be their undoing, as they outran their supply lines and ended up eating the very horses on which their progress depended.

Pace is everything, and a firm but flexible plan for how to deploy resources over a period of time to achieve a goal is vital. Using different assets to support each other – something Ludendorff failed to do – was as important then as it is to today’s marketer as a campaign develops.

The need to achieve cut-through in a cluttered media environment leads marketers to concentrate their resources – to focus them on the target group or time of year where their message is most likely to resonate – and to accept that at other times, and to other people, their message will remain unseen.

The production cost of TV advertising adds fuel to this, as does the belief that individual executions ‘wear out’ and so have a finite shelf-life.

So the concept of a campaign is almost hard-wired into the advertiser’s worldview. We gather our resources, make an assault on the consumer, then retreat, count our costs and regroup before having another go. After all, the advertising trade mag is even called Campaign.

But digital is challenging this approach.

The first place this was felt was search. Volumes of queries ebb and flow, driven by seasonality and publicity, but underlying demand is constant. Even though the number of people searching for ‘swimming pool’ might be lower in autumn than in spring, a pool company would still want to pick up those leads. Early activity in search followed the traditional ‘campaign’ format, but practitioners quickly realised that this was preventing them from meeting existing demand from consumers – a missed opportunity.

So search tends now to be budgeted for from the bottom up. Rather than setting an annual marketing budget and dividing it up until an amount is reached for each medium, practitioners model search volumes through the year and set aside the investment required to meet them (allowing for the extra demand created when TV activity runs).
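
As a rough illustration, the bottom-up sums might look something like the sketch below – a minimal model in which every figure (monthly query volumes, click-through, cost per click, TV uplift) is invented for the purpose:

    # A minimal sketch of bottom-up search budgeting. All figures are
    # illustrative; in practice monthly volumes would come from keyword
    # research tools and historical data.
    monthly_searches = {
        "Jan": 40_000, "Feb": 45_000, "Mar": 60_000, "Apr": 80_000,
        "May": 95_000, "Jun": 100_000, "Jul": 90_000, "Aug": 70_000,
        "Sep": 50_000, "Oct": 40_000, "Nov": 35_000, "Dec": 30_000,
    }
    click_through = 0.05          # share of searchers who click our listing
    avg_cpc = 0.30                # assumed average cost per click, in pounds
    tv_months = {"Apr", "May"}    # months when TV activity runs
    tv_uplift = 1.25              # assumed extra search demand TV creates

    budget = sum(
        searches * (tv_uplift if month in tv_months else 1.0)
        * click_through * avg_cpc
        for month, searches in monthly_searches.items()
    )
    print(f"Search budget required for the year: £{budget:,.0f}")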

Similarly, affiliate marketing doesn’t suit a campaign approach. Continuous activity is needed to build relationships with affiliates, and to reflect their outlay behind your brand – whilst they appreciate the impact of campaign-based activity on their own sales, they find it hard to build transaction volumes without investment over time.

But it’s the rise of Web 2.0 that has provided the most recent challenge to the campaign way of thinking.

Thousands of widgets have been created, alongside chatrooms, forums and even entire branded social networks. Of course, if a venture is unsuccessful, its owners risk little by shutting it down. But in all of these instances, marketers have stepped out of campaign-centred thinking and created entities that are long-term.

In many cases, consumers have been asked to contribute their time and creativity to these projects. They have introduced their friends, created avatars and uploaded photos. They’ve made playlists and scrapbooks, and chipped in with their own recipes, hints and tips – which means, of course, that these ventures can’t simply be turned off when marketing decides to move on, without creating inconvenience and resentment among users – turning a positive brand experience into a negative one.

So whilst marketers have in the past taken much of their terminology and thinking from the military, perhaps now it’s time to move on. The campaign approach never really reflected how consumers behave, only the constraints of operationalising marketing in traditional media. Digital changes this – and the consequence of this change could be the death of the campaign.

Thursday, May 22, 2008

SEO: make your own luck...

A version of this piece was published in Marketing in 2008


Richard Wiseman at the University of Hertfordshire has spent eight years looking at what makes people lucky. We’re not talking rabbits’ feet and avoiding ladders here – he’s devised four principles that determine a person’s likelihood of success.

Some of these veer a little towards the obvious; ‘Maximise your chances of something good happening by creating, noticing and acting on opportunities’, he says – which seems a little like saying you can avoid the misfortune of sinking by swimming.

But at least we now know that we really do make our own luck. And nowhere is this more true than in natural search.

Search marketing has become a huge business in the UK. We’re Google’s second biggest market, and search alone will represent just under 10% of all UK advertising this year. This might seem big, but the real search market is five times that size.

Because 80% of traffic comes from the natural results – in Google, the listings below and to the left of the paid-for ones.

And these ‘natural’ or ‘organic’ listings can’t be bought. Instead, your position in the rankings is determined by the relevance that the search engine’s algorithm judges your site to have.

So with such a huge volume of traffic to play for, you’d think it’d attract a lot of attention from marketers.

But search engine optimisation (SEO) – the way of manipulating sites to improve their ranking – is fraught with difficulty. Traditionally the unaccountable face of search, it’s gained a reputation for impenetrable jargon (even for digital media) and unethical practice, and many sites don’t realise the influence they can have on their ranking.

Often, marketers simply aren’t aware that SEO is needed, thinking it comes ‘in the box’ when they buy their website. But the skills and preoccupations of web designers are very different from those of SEOs – designers are concerned with design, copy, usability and the like, whilst SEOs focus on metadata, tags, redirects and links, and their super-niche skills change constantly to reflect the hyperevolution of search.

In other words, websites are usually designed for people, and many ignore that other vital audience, the spiders that index sites for search engines.

These spiders see websites very differently. Animation and images are often invisible to them, they need clues to help them understand site structure and if you’re not careful they can easily misinterpret your efforts.

Take redirects. Lots of sites have both the .co.uk and the .com web address, but only one site – you type in one and get redirected to the other. Set up the wrong way, this looks to Google’s spider like two sites carrying the same content – a common technique for trying to fool the search engine into ranking you higher. So Google’s algorithm will penalise your site – pushing you down the rankings.


The solution is simple. The redirect needs to be a particular type – a 301, or ‘permanent’, redirect. Doing this makes no difference to users, but tells Google you’ve only got one site – meaning you don’t get penalised.
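
As a sketch of the mechanism (the domain name is hypothetical, and in reality this is normally configured in the web server – Apache, nginx or IIS – rather than written as application code), a 301 is just a status code and a Location header:

    # Minimal sketch: answer every request with a 301 ("moved permanently")
    # pointing at the one canonical site we want Google to index.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANONICAL = "https://www.example.co.uk"   # hypothetical canonical domain

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(301)                             # permanent move
            self.send_header("Location", CANONICAL + self.path)
            self.end_headers()

    if __name__ == "__main__":
        # Serve on port 8080; in practice this would sit on the .com address.
        HTTPServer(("", 8080), RedirectHandler).serve_forever()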

There are hundreds of techniques like this, and implementing them properly can have a huge impact, not just on the volume of traffic you get from search but on the quality of your listing – one advertiser went from 500 to 23,000 referrals a month on one keyword alone, just by implementing a proper SEO programme.

But nowadays, effective SEO also impacts on paid-for search. Google takes the quality of your landing page into account when determining your ranking in paid search, adjusting the minimum bid downwards if it deems site quality to be high. So a poorly-optimised site might have a minimum bid of 15p, whilst a well optimised site could be 10p – meaning an SEO programme could pay for itself just in savings on paid-for search.
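
The arithmetic behind that claim is straightforward. A back-of-envelope sketch, using the bid figures above and an invented annual click volume:

    # Hypothetical example: 100,000 paid clicks a year is an invented figure.
    clicks_per_year = 100_000
    bid_unoptimised = 0.15   # minimum bid for a poorly-optimised site (£)
    bid_optimised = 0.10     # minimum bid for a well-optimised site (£)

    saving = clicks_per_year * (bid_unoptimised - bid_optimised)
    print(f"Annual paid-search saving: £{saving:,.0f}")   # £5,000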

With this much value at stake, we can’t afford to let search happen to us. It’s time for sites to throw out the rabbits’ feet and start making their own luck.

Thursday, May 15, 2008

Is digital advertising recession-proof?

A version of this piece was published in Marketing in 2008


News from the sharp end of the financial sector informs us that UK banking lunch budgets have been slashed to £54 a head, whilst their German counterparts are barred from expensing trips to brothels. There are probably no more indicative measures of the climate in the City, so it’s safe to assume that it’s tough out there right now.

And whilst everybody’s assiduously avoiding mentioning the ‘R’ word, there’s no doubt that retailers are starting to worry as the credit crunch starts to bite. Share prices of many major high street retailers have halved over the last year as the City factors in expected consumer belt-tightening, and retail sales on the high street have only been propped up by deep discounting, with non-food prices falling at their fastest rate for 20 years.

Last time there was a recession, the internet took the full brunt of it. There was carnage as the dotcom bust ripped through the economy, taking hundreds of flaky web companies (and some good ones) with it.

So this time around, is the internet recession-proof, or are stock market woes going to hit digital too?

Back in 2001, many internet businesses were still in the pre-profit stages of development. Their markets lacked scale, many of the management teams lacked the experience to weather a storm, and the online advertising market (a key revenue stream for many businesses today) that year was worth just £166m.

Online retail has been the key driver of growth in online advertising, and online retail is a capital-intensive business, requiring heavy upfront investment to create a service. This means that scale is critical to businesses online, whether they’re selling airline tickets or insurance policies, because the marginal cost of sales is very low.

As the business scales, volume efficiencies develop much faster than in traditional retail where staff and premises are forced to grow in line with expansion.

The pre-profit phase of an online business is a scary place to be. You could be down a lot of money and still waiting for that tipping point to be hit. No wonder many investors pulled the plug back in 2001 – it looked then as though the world had lost confidence in the web, and there were real concerns about whether that tipping point could ever be reached.

But this is 2008, and a lot has changed.

For a start, the audience is bigger. 32m people are now online, compared with just 18m back in 2001. So any online business now has access to a potential customer base that’s almost 80% bigger – a crucial element driving scale economies into web companies.

And those people now transact more online, with 74% agreeing that credit card use online is safe (60% did in 2001). So they’re spending more – the average online shopper’s six-monthly spend is now £628, up 37% on 2001.

So there’s a bigger, more economically active audience online now, and they’re spending much more time online than before, driven by broadband penetration that ranks the UK 11th in the world.

All this has created a vigorous online ad market that’s grown 1600% since 2001, reaching £2.8bn last year. For online businesses this is a double benefit. It’s a substantial revenue stream for many, but it’s also a key sales driver.

Advertising in traditional media is often (wrongly) regarded as a cost. But the accountability that comes with both display and search advertising online has caused it to be regarded differently. Whether this is formally reflected in their P&L or not, many enterprises now see online advertising as a cost of sale – which means they can directly gauge the impact on revenue that cutting ad budgets will have.

There’s no leap of faith here – spending less generates less. So whilst the rest of the economy may be in for a torrid time over the coming months, scale, accountability and attitudes mean digital is unlikely to share the pain.

Thursday, May 8, 2008

Newspapers - adapt or die?

A version of this piece was published in Marketing in 2008


Every medium that’s ever been invented has been expected to replace those that went before. So TV was expected to kill cinema, radio to kill newspapers, newspapers to kill town criers (probably).

And for no medium has this been more true than for the internet, which has been touted as the killer of pretty much anything you can think of.

The reality though is more nuanced, and provides at least as much opportunity as it does threat.

In this context, a maxim often attributed to Charles Darwin gets it right:

“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”

Newspapers adapted to the popularity of television news by changing their approach – either focusing on gossip and features, or bringing depth and consideration to stories that TV couldn’t give time to.

And the success of cinema chains like Vue is testament to how investing in the value proposition can bring audiences flocking to a medium that, if we’d believed the pundits of the ‘50s and ‘60s, would be dead by now.

But for The Capital Times newspaper in Madison, Wisconsin, last week was a landmark in its history, as the company closed its afternoon-published newspaper and became an exclusively online proposition.

"Today marks our last edition as a traditional daily newspaper of the sort Americans knew in the 19th and 20th centuries," an editorial read. "Starting tomorrow, The Capital Times will be a daily newspaper of the sort Americans will know in the 21st century.”

So are they right? Is the future exclusively online, or is it likely to remain a mixed economy?

Of course, there is no ‘right’ answer. Whilst it is still economically viable to distribute newspapers in physical format and consumers demand them, there will still be a business – but this is obvious.

It seems likely that at some stage in the future, demand may shift to the consumption of media on portable devices. Units with roll-out colour screens that allow a highly portable but easily viewable experience are already in prototype, and with the ongoing desire of mobile networks to find a use for 3G we might not have long to wait.

Newspapers as diverse as the Sun and the Guardian have recognised this, building audiences for their online products. Their objective is to move the brand from being a ‘newspaper’ to a ‘media’ play – with the newspaper, website and mobile services being the outward manifestations of this brand.

This is smart, because it develops secondary revenue, constructs a successor to the primary vehicle should that market start to fall, and widens consumers’ expectations of the brand.

It isn’t just newspapers that face this challenge though – TV companies too are investing in their web presence with a view to achieving the same goal. Channel 4 are now regularly commissioning multimedia projects – the Big Art Mob, a four-part TV series, comes with a community website and a mobile site that let users upload images of civic art, whilst the recent Embarrassing Bodies series is accompanied by a website and online games. All this content is woven in around their 4OD online video site, where you can catch up on shows you’ve missed.


But the Capital Times isn’t completely abandoning its print past. It is hedging its bets, continuing to publish a free entertainment guide and a news digest every week.

Because in the past, when newspapers and TV stations lost their audience, they closed for ever. Now, though, a future exists for these brands on the web and in mobile – and, for stronger brands, in building further value into their relationships with audiences by enriching their output with content in these other channels.

So far from killing other media, the internet is creating new opportunities for them to evolve – and it’s their adaptability to change that will determine their success in meeting this challenge.

Thursday, May 1, 2008

Web 3.0, it's so this year

A version of this piece was published in Marketing in 2008


For a moment there, you thought you’d just about understood it. You were just getting to grips with podcasts, you’d found a use for RSS feeds, even web 2.0 seemed to be making some sense.

But here in the world of digital, we’re never satisfied once those green shoots of comprehension begin to germinate in the wider world. Frankly, everything is so last year, and the more we can have that’s new, the better.

Web 3.0 has been threatening to be that new thing now for a couple of years, and reassuringly, there’s still a lot of debate about what it actually is.

From the web evolving into a series of 3D spaces (I don’t know what this means either) to the idea of cloud computing (taking our PCs and all their word processing, email and calendar functions, and putting them on the web so they can be accessed from anywhere), there are plenty of views as to what web 3.0 could look like.

The most commonly accepted version though, is that promoted by Sir Tim Berners-Lee, who invented the web in the first place.

The Semantic Web is a term he uses to describe a web of data that can be made sense of by machines, on a global scale.

The web is a huge mishmash of information – pictures, music, text, video – and whilst search engines index some of this, they’re really just reporting back the existence of this information rather than actually comprehending it. If we could apply standardised structures to the data, though, machines could mesh it together and create new understanding from it.

So why is this important?

If machines could understand the information we put on the web, they could share knowledge with each other, and draw conclusions and make recommendations based on the information they find.

Websites would understand that the weather forecast in Barcelona is for rain on the dates we’ve just booked a flight for, and recommend clothes we can buy; events in the city on those dates could be presented, and our selections loaded automatically into our calendar and accounting software.
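
To see why structure matters, here’s a toy sketch of that scenario, assuming facts have been published as (subject, predicate, object) triples. Everything below – the data, the naming scheme, the ‘reasoning’ – is invented purely for illustration:

    # Invented triples a machine might gather from a booking site, a weather
    # service and a retailer, all using a shared (hypothetical) vocabulary.
    triples = [
        ("booking-42", "flies_to", "Barcelona"),
        ("booking-42", "on_date", "2008-06-14"),
        ("Barcelona", "forecast_2008-06-14", "rain"),
        ("raincoat", "suitable_for", "rain"),
    ]

    def lookup(subject, predicate):
        return [o for s, p, o in triples if s == subject and p == predicate]

    # Mesh the sources: booking -> city -> forecast -> suitable clothing.
    city = lookup("booking-42", "flies_to")[0]
    date = lookup("booking-42", "on_date")[0]
    weather = lookup(city, f"forecast_{date}")[0]
    packing = [s for s, p, o in triples if p == "suitable_for" and o == weather]

    print(f"{weather} forecast for {city} on {date} – consider packing: {packing}")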

The idea is a big one – it’s joined-up writing, compared to the laboured reception-class script of the WWW.

The problem, though, is a human one. When we make data available – say, an airline schedule – it will need to be published in a machine-readable, standardised format as well as a human-readable one.

As the saying goes, the great thing about standards is that there are so many of them. So even if we succeed in marrying up all the standards that will inevitably flourish, there’s still the challenge of getting people to stick to them.

Here, Cory Doctorow’s theory of Metacrap comes in useful.

People lie, he says. People are lazy, and they are stupid. We don’t know what we don’t know, and any taxonomy for data is inherently skewed by the personality of its author. Finally, there’s always more than one way to describe something – as he puts it, “I’m not watching cartoons, that’s cultural anthropology”.

Doctorow’s thesis is essentially this: since the semantic web relies on humans structuring data in a way that is consistent and error-free, it is unlikely ever to succeed, because as humans we’re fundamentally flawed.

Web 3.0 could be the basis for artificial intelligence, but would we want to turn our lives over to machines that, whilst incredibly bright, are basing their decisions on unreliable information?

For businesses transacting online, web 3.0 compliance could be a critical success factor in the future, since consumers will inevitably gravitate towards services that make their lives easier.

But given that we’re still struggling to make the 2D web a navigable proposition for everyday folk, I suspect we’re still going to have to plan our own suitcase packing for the foreseeable future.