Thursday, May 1, 2008

Web 3.0, it's so this year

A version of this piece was published in Marketing in 2008


For a moment there, you thought you’d just about understood it. You were just getting to grips with podcasts, you’d found a use for RSS feeds, even web 2.0 seemed to be making some sense.

But here in the world of digital, we’re never satisfied once those green shoots of comprehension begin to germinate in the wider world. Frankly, everything is so last year, and the more we can have that’s new, the better.

Web 3.0 has been threatening to become that new thing for a couple of years now, and, reassuringly, there’s still plenty of debate about what it actually is.

From the web evolving into a series of 3D spaces (I don’t know what this means either) to the idea of cloud computing (taking our PCs and all their word processing, email and calendar functions, and putting them on the web so they can be accessed from anywhere), there are plenty of views as to what web 3.0 could look like.

The most commonly accepted version, though, is the one promoted by Sir Tim Berners-Lee, who invented the web in the first place.

The Semantic Web is the term he uses to describe a web of data that machines can make sense of on a global scale.

The web is a huge mishmash of information – pictures, music, text, video – and whilst search engines index some of it, they’re really just reporting back that the information exists rather than actually comprehending it. If we could apply standardised structures to the data, though, machines could mesh it together and create new understanding from it.

So why is this important?

If machines could understand the information we put on the web, they could share knowledge with each other, and draw conclusions and make recommendations based on the information they find.

Websites would understand that rain is forecast in Barcelona on the dates we’ve just booked a flight for and recommend clothes we could buy, whilst events in the city on those dates could be presented to us, with our selections loaded automatically into our calendar and accounting software.
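To make that concrete, here’s a minimal sketch in Python using the rdflib library, with an invented example.org vocabulary and made-up data. It shows how a flight booking and a weather feed, published separately but structured the same way, could be meshed together and queried as one – the link between them is made by the machine, not by a person reading two web pages.

```python
from rdflib import Graph

# Two independently published datasets, both structured with the
# same (hypothetical) example.org vocabulary, in Turtle format.
booking = """
@prefix ex: <http://example.org/> .
ex:myFlight ex:destination ex:Barcelona ;
            ex:departureDate "2008-06-14" .
"""

weather = """
@prefix ex: <http://example.org/> .
ex:Barcelona ex:forecast "rain" .
"""

# Mesh the two datasets into a single graph of data.
g = Graph()
g.parse(data=booking, format="turtle")
g.parse(data=weather, format="turtle")

# Ask a question neither dataset can answer on its own:
# what is the forecast at the destination of my flight?
answer = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?forecast WHERE {
        ex:myFlight ex:destination ?city .
        ?city ex:forecast ?forecast .
    }
""")
for row in answer:
    print(row[0])  # -> rain
```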

The idea is a big one – it’s joined-up writing compared to the laboured reception-class script of the WWW.

The problem, though, is a human one. When we make data available – say, an airline schedule – it will need to be published in a machine-readable, standardised format as well as a human-readable one.
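As a rough illustration of what “machine-readable” might mean in practice – again in Python with rdflib, and again with an invented vocabulary and flight number – the same schedule entry a person reads as a timetable row could also be published as standardised statements a machine can consume:

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# One schedule entry, expressed as machine-readable statements.
flight = URIRef("http://example.org/flight/EX123")  # invented flight
g.add((flight, EX.origin, EX.LondonHeathrow))
g.add((flight, EX.destination, EX.Barcelona))
g.add((flight, EX.departureTime, Literal("2008-06-14T07:40")))

# The same facts, for two audiences: a human-readable timetable row,
# and the standardised Turtle a machine could mesh with other data.
print("EX123  London Heathrow to Barcelona, departs 07:40, 14 June 2008")
print(g.serialize(format="turtle"))
```

Same facts, two renderings – and the machine-readable one only pays off if everyone labels their schedules the same way, which brings us to standards.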

As the saying goes, the great thing about standards is that there are so many to choose from. So even if we can succeed in marrying up all the standards that will inevitably flourish, there’s still the challenge of getting people to stick to them.

Here, Cory Doctorow’s “Metacrap” essay comes in useful.

People lie, he says. People are lazy, and people are stupid. We don’t know what we don’t know, and any taxonomy for data is inherently skewed by the personality of its author. Finally, there’s always more than one way to describe something – as he puts it, “I’m not watching cartoons, that’s cultural anthropology”.

Doctorow’s thesis is essentially this: because the semantic web relies on humans structuring data in a way that is consistent and error-free, it is unlikely ever to succeed, since as humans we’re fundamentally flawed.

Web 3.0 could be the basis for artificial intelligence, but would we want to turn our lives over to machines that, whilst incredibly bright, base their decisions on unreliable information?

For businesses transacting online, web 3.0 compliance could be a critical success factor in the future, since consumers will inevitably gravitate towards services that make their lives easier.

But given that we’re still struggling to make the 2D web a navigable proposition for everyday folk, I suspect we’re still going to have to plan our own suitcase packing for the foreseeable future.