The Metaverse: 1779 more words (of definition and caution) on the Word of the Year

This piece was written for the Children’s Media Foundation Yearbook, which is being published July 5th to coincide with the Children’s Media Conference in Sheffield, UK.

Via his film La Chinoise, Jean-Luc Godard suggested we should view the work of Méliès, the first fantasy filmmaker, and the Lumière Brothers, the first documentarians, the other way round. You can argue that the Lumières, by placing their camera down and choosing what to film, were changing reality around them, whereas Méliès, in his use of special effects and trick photography, was documenting the underside of the mind. The current debate about the Metaverse reminds me of this critique of our earliest filmmakers. The first filmmakers shared common tools – cameras, film stock, projectors – much as digital makers across AR and VR today share game engines and 3D models. Remember that audiences reacted viscerally to the Lumières’ train leaving the station as though it might hit them through the screen, and perceived black and white scenes from life as the same kind of magical vision that Méliès offered with his trip to the moon. But their goals and storytelling techniques could not have been more different. We are in the equivalent of those early filmmaking days with AR and VR now, a long way from a mature medium, still to discover the Citizen Kane of VR narrative or the Sesame Street of AR early learning. Everything is shiny and new – we will go through many cool experiments, gimmicks and spectacles to uncover what sticks and is genuinely useful and joyful in these emerging media.

People regularly conflate AR and VR, which is strange to me, given that the two serve contradictory purposes. AR is anchored to physical objects in the real world, whereas the goal of VR is to remove the user from their physical surroundings and immerse them in an alternate digital reality. They are diametrically opposed. So it’s no wonder that there is even greater debate and misunderstanding over the Metaverse.

Like it or not, Metaverse has been the word of the year. I don’t much care for the word at all, but we seem to be stuck with it for now (thanks, Zuck). So let’s start with some definitions. The term Metaverse was famously coined by Neal Stephenson in his 1992 novel Snow Crash, where it refers to a single 3D virtual world mapped over the real world. Many people focus on the first element only, thinking of the Metaverse as being all about VR and online virtual worlds like Fortnite and Roblox, and the possibility that some day you might move seamlessly between such worlds with your self-defined virtual identity and resources, whether to a game session, a work meeting or a virtual marketplace. Of course, this requires an underlying infrastructure that current virtual worlds only offer within their own games.

Others use the Metaverse as a much broader catch-all for the range of technologies emerging around Web 3, 5G and the blockchain, NFTs and photogrammetry, as well as AR and VR. We are told that these developments will deliver a newly democratic 3D internet with a creator led content/experience economy and open access for all and provide the backbone for journeys across virtual worlds.

[A side note: Those of us who remember the early days of the web will note the worryingly familiar utopian magical thinking here. After all, underlying all of this technology is the power of cloud servers, machine learning and AI. And megacorporations with massive, competing interests.]

Both views above are valid, but I prefer to think of the Metaverse as a series of digital layers over our physical world, layers we might choose to access based on context.

The internet we know is built on connected devices. The Metaverse and Web 3 will also use contextual or semantic computing and spatial imaging that combine to paint our physical world with 3D digital data – whether invisible (Object recognition/mapping), translucent (AR information and customisation) or transformational (VR and virtual worlds) – and deliver content shaped by context.

Imagine some layers you can’t see that speak to machines like self-driving cars; layers that you can see in real world space – say, the archive of the BBC deployed to enhance public spaces with location-relevant AR content, or fashion designers making digital elements to enhance your appearance; fully immersive layers in VR that transform the limits of your living room walls into a spatial expanse for learning new skills or playing with friends from across the globe. All of these layers are possible in some form with today’s technology – and much more is to come.

The engines of this new era come primarily from games and 3D graphics companies – for example, Epic’s Unreal Engine, the mobile 3D engine Unity and the Pokémon Go creator, Niantic. But these game engines’ objectives aren’t just about games. Through its users’ gameplay, Niantic is mapping the world around us, scanning and ‘owning’ our landscapes. Unreal is providing deeply realistic 3D tools and environments for uses that range from training humans to designing cars and delivering visual effects, as well as fighting phantasms. Real-time virtual events can be delivered via the engine, not just in Fortnite but in custom-built environments, along with the advertising and merchandise that go with them. Unity is aiming to give digital objects the same ‘rights’ as physical ones, uniting with apps like Sketchfab to offer any physical asset its digital twin. These platforms may have their roots in gameplay, and that will persist, but the layers of the Metaverse they are building will cut across almost every use case you can imagine. Eventually.

The technology to support the Metaverse (whatever your definition) is inevitable, but hardware and software will take more time to develop than many realise. The other major question mark is over human behaviour. Do most people want to spend work, social and personal time in a virtual construct? At the moment, gamers are the early adopters of VR technology. Will the rest of us follow?

It’s worth noting that Snapchat have been determinedly focused on the present or near present. Take a look at their recent Spectacles demo video for a glimpse of what’s already possible with their AR glasses and the range of use cases they are exploring. AR glasses from Meta are coming soon, and the Lens Studio model from Snap has quickly become core to Instagram and TikTok creators. They are banking on real world experiences you can share with friends and family, enhanced or even defined by digital overlays. And in May, Google opened up its Street View maps for AR creators to anchor their work in any physical location shown.

So what does all this mean for children? Many cheerleaders promote VR for kids without fully recognising the difference between an explorable world on a computer and an immersive experience via a headset. Many parents ignored the 13-plus advice on headsets in their purchases for their families last Christmas. While playing a round of Beat Saber might pose little risk, there are few safe spaces for children in existing social VR environments and almost no coherent moderation. Devices aren’t yet being designed for a child’s physiology, and there is equally little research on the potential impact on mental development of exposure to VR experiences at an age when perception of the real world is still malleable. For all the immense potential of VR to deliver amazing edtech learning journeys, more research is urgently needed in this area. At the same time, we need to recognise that, like training a young mind to control the Force and become a Jedi, some form of the Metaverse will be a huge part of our children’s lives, so we need to let them embrace what it offers as soon as is appropriate.

AR provides an easier and safer on-ramp. In my own work, I’ve found that AR overlays mean little to very early learners – for them the world is already full of surprises and things to discover, and all five senses need to be engaged in that process. But once their brains have grasped abstractions like symbols on a page forming language they can understand and share, AR overlays on the real world can fit coherently into their mental landscape. They understand the rules and behaviour of physical objects and symbols. They have a context into which AR can be placed to support play, social interaction and learning without undermining cognition.

I’m exploring an AR use case that has potential value for kids and adults. I’m bringing the Metaverse to books. Here’s what I’ve learned.

A book is a platform for stories, images and a mental projection from author to reader – Stephen King describes writing as a way for described objects and people to travel through time and space from his desk to your living room – telepathy. But a book is much more than that. It’s a physical object which has visceral qualities and can become talismanic for a reader – we keep them on our shelves because we value them as objects as well as containers. They become badges of our identity. Books are consumed in a context – a place, a time, a state of mind that changes the way we perceive them and the role they play in our lives. We live with books, we use them, we touch them – we don’t just watch them. They trigger our own thoughts, emotions and memories as well as containing those of their authors and we see them as cherished objects as a result. As we build a bridge to digital tools from physical books we can unleash all that stored potential of emotion and interaction.

So I’m building the bridge between physical books and digital tools to empower active readers, social readers, engaged learners, puzzlers, thinkers and creators. Always remembering that by putting down your device, or taking off your smart glasses (when we have them), you can also just focus on the ink and texture of words and pictures beautifully printed on physical paper in all its time tested simplicity and glory.

I actually think that’s a microcosm of how we will feel about the Metaverse as a whole – it will be there for you when it is what you need/want. But you’ll also need to be able to switch it off, and just look for shapes in clouds, hold hands or read a good book.

Whatever your definition of the Metaverse, I suspect Neal Stephenson would agree that stories in books already spark the best Metaverse of all, owned by none but ourselves: the one in our imagination. And right now we can still imagine the Metaverse is anything we want it to be.

The importance of Public Service Media for Kids isn’t up for debate.

On July 8th, I participated in a debate at the Children’s Media Conference to discuss the future role of Public Service Media in the UK. That day, the Children’s Media Foundation published its report on the same subject, with articles by a wide range of people who have helped to shape what our kids watch, how they play and where they learn. You can read my own contribution and the whole report by downloading it here.

To watch a recording of the online session, you need to have registered for the Conference. But here’s a transcript of my opening remarks:

“As I’ve been attending sessions this week at CMC, I’ve been thinking about the phrase, you are what you eat. The original version of this phrase was from a 19th century French lawyer, and he actually said, “Tell me what you eat, and I will tell you who you are.” He was making a comment about class – how rich was your diet, what could you afford to eat – and about national characteristics – where could your ingredients grow, the food you eat being connected to the soil it grows in. It was a clever construct. Of course, our lawyer hadn’t heard of Google: “Tell me what you search and I will tell you who you are.”

But what about our media consumption? Does that tell us who we are or is it the other way round?

In every session, I’ve heard commissioners saying they are looking for content that reflects kids’ lives here in Britain and is made by diverse British voices. In the end, public service media is all about national and individual identity. And that matters more than ever.

In my piece for the CMF report, I talk about the Lean In generation we’ve been raising, proactive, activist, game-ified, used to interacting with media and each other in both digital and physical spaces. I propose that we let our audience in on our commissioning choices, making them advocates for their favourite characters, genres and ideas. Our audience trust us, and we need to trust them.

It’s this relationship with our audience, particularly our kids audience, that justifies our future.

I think any debate about the future of public service media needs to take account not just of the reality we are currently confronting in the market dominance of US streamers, but also of what’s coming next. Real-time 3D engines like Unity and Unreal are changing the way we make linear video and virtual spaces; advanced AI is capable not just of running algorithms but of generating personalised iterations of content; and a new wave of computing will move us on from mobile connected devices that let us access content when and where we choose, to contextual computing – in which physical spaces, objects and people all have their digital doppelgängers. We’re painting the physical world with data. And that data will drive increasingly personalised content based on where you are and what you are doing, as well as who you are with.

We variously call this digital context a Metaverse or a Mirrorworld. The name’s unimportant. What does matter is that we recognise how radically this wave of change will impact content. For decades, we’ve repeated the concept that content is king. This has never been entirely true, as content is a prisoner of context. Context changes content – a can of soup in a kitchen is to be eaten; in a gallery it’s to be admired. Let’s say that if content is king, context is queen. They rule together, in constant dialectic, one affecting the other. But the balance is about to shift. It’s time for Queen Context to rule.

Well, Public Service Media is all about context. I think of context as the kitchen and content types as the ingredients for feeding and nurturing our digital identities.

There’s always room for the all-you-can-eat buffet of content in our lives, but there’s also a kitchen where we meet, talk and make food together, making choices together. Do we want the algorithm to set the menu? Or do we want public service media in the kitchen?”

Why AT&T’s divestment of WarnerMedia made me think of Nam June Paik – it’s deja vu all over again.

Nam June Paik is known as the father of video art, but he was also a man with a great turn of phrase.  He coined the term ‘electronic superhighway’ well before the internet was a thing.  He was always imagining the ways technology might change human behaviour, and vice versa.  I had direct experience of this.  A mutual friend had recommended me to him as a writer.  He was preparing a live telecast called ‘Space Bridge’, related to the Seoul Olympics, and he wanted text to scroll across the screen at random moments during the event.  I asked him for a brief of what he wanted the text to say.  His answer, ironically left for me in the middle of the night as a voice mail message, was:

“Television is like telephone.  Doesn’t matter what you say.  Point is to make the call.”

As a brief for a writer, it was at once liberating and maddening.  At the time, I thought Nam June was riffing on Marshall McLuhan’s famous adage, ‘the medium is the message’.  But over the years, I’ve come to imagine that Nam June was instead channeling a board room level telephone executive, or knew something specific about the future of phone companies – they really don’t get the difference between the television and the telephone.  Or maybe they just don’t care.

I thought of all this when I read that AT&T was divesting itself of WarnerMedia, just three years after merging with it.  The move probably makes sense for both companies.  WarnerMedia and its new partner Discovery are much better matched, and have a complementary portfolio of media assets to compete in the streaming wars.  AT&T have huge challenges ahead with the roll out of 5G and will need all their resources to compete for spectrum and build infrastructure for the data painted landscape where we will all soon reside.

The real question is why do telcos keep buying media content companies at all?  

In the mid nineties, I was hired by Tele-TV, a consortium of three regional American telephone companies, to create interactive television using optical fibre instead of coaxial cable to reach American homes with video on demand, interactive shopping channels and all your regular broadcast tv.  Large press announcements were made, major Hollywood players were recruited, close to a billion dollars was spent, test neighbourhoods were wired, content was produced… and then the telcos shut it all down.

It was my first experience of the once-a-decade dance between the phone companies and the media business, imagining all those synergies that might maximise consumer relationships and shareholder value.

One thing that fascinated me was the confidence of the marketing folk at the telcos that consumers would prefer to subscribe to a phone company than a cable company for their tv content.  I was assured that was because ‘we provide much better customer service’.  I’m really not sure they understood that people don’t subscribe to content services for the reliability of the signal, but for the variety and desirability of the content.

But they were also certain that consumers would like ‘bundles’.  A new idea back then, bundling is now familiar to us all: a discount for buying phone, broadband and tv in one package from a single provider.  For the phone companies, though, a premium content service like HBO is insignificant next to the phone and data services they provide – it’s like the toy you get with the Happy Meal.  Or the bonus video on demand service you get with your one-day shipping fee from Amazon.

Here’s a link to a promo video Pacific Bell used for promotion of the VOD service we built back in 1996.  It looks old fashioned now, but then it was a genuine step change for TV.  25 years ago, EPGs were brand new.  Many of the features we designed and trialled then are now commonplace.  The idea that you could pause, rewind or bookmark a streaming video was a major leap forward for functionality and amazed consumers.  We also developed interactive advertising applications with car companies and major retailers. All these elements are standard  practice in the world of smart TVs and streaming services. Of course, my favourite innovation on the platform – a duo of animated characters (they were named Terrence and Virgil) to host the genre playlists on screen – is yet to be broadly adopted. But I’m confident that the invisible algorithms recommending shows to us behind the scenes today will one day soon be replaced by animated AIs.  Perhaps Terrence and Virgil will ride again!

The regional  US telcos lost interest in Tele-TV once new legislation allowed them to compete nationally for the emerging data services market.  The 1996 Telecommunications Act massively deregulated the phone and cable industries, freeing up the regional telcos to provide long distance and national ‘information services’.  At the time, it was expected that the bill would generate greater competition and reduce costs for the consumer – for example, with telco funded media like Tele-TV competing against cable providers.  But instead, the Act misread the role of the internet and set off a vast consolidation across the industry, with the booming business of data across phone lines requiring the full attention and investment of the phone companies.

AT&T’s decision to let go of WarnerMedia after spending the last three years merging with it feels remarkably familiar to me.  The stated reason, that ‘uptake of services based on bundling with HBO+ has been lower than anticipated’, is a sad indictment of a strategy.

So much for content being king.  In the end, the phone company wins as long as we watch, buy, play via their pipes, landlines or 5G, and the data they carry, so AT&T don’t need to care about what we watch.  Nam June Paik was right. “Doesn’t matter what you say.  Just matters that you make the call.”

I READ, YOU READ: Polarity Reversal illustrates Barack Obama’s reading of “Green Eggs and Ham” with Kinetic Typography

In honour of Children’s Book Week, I’m sharing publicly for the first time a video I made for a Polarity Reversal pitch a few years ago using kinetic typography – animated word forms.  The video is based on a recording of former US president Barack Obama reading Dr Seuss’s Green Eggs and Ham.

The project was called “I Read, You Read,” and the goal was to use kinetic typography to enhance early literacy and encourage kids to read aloud, where voice triggers word response and vice versa.  Screens should be allies of books in the path to increase a love of words, reading skills, and literacy. The current campaign to switch on subtitles for children’s TV shows is an idea that deserves widespread support, but more digital entertainment content should be proactively designed to not just incorporate, but reward reading and writing.  Reading aloud and following along while another person reads should not be limited to school books and bedtime stories.  Written words are powerful wherever we engage with them, whether on page, stage, or screen. All fiction is interactive when we read to each other.

So, why am I such an advocate specifically for Kinetic Typography? 

• Research shows that children learn to recognise word forms more effectively when the sound is accompanied by kinetic typography. 

• The word becomes a ‘character’ as well as a group of symbols. The word’s design and motion are mnemonics for meaning. 

• What’s more, spoken word with kinetic typography is just great fun – reading a story becomes ‘spoken-word karaoke’ for kids.

‘I Read, You Read’ is a project I’d still love to make, if there’s anyone out there who’s interested in helping fund the initiative.  For now, please enjoy the video and share it with children.

‘Barack Obama Reads Green Eggs and Ham’ was animated by Luis Sa, with art direction from Huw Gwillam.

Gone in a Flash: Amazing content that lives on only in memory

‘Welcome to Pine Point’ was, for me, a hallmark moment in the evolution of digital storytelling. Funded by the National Film Board of Canada and made by Michael Simons and Paul Shoebridge, this web-based factual experience told the story of Pine Point, a small town in the Canadian northwest that was purpose-built by a mining company. When the mine’s seam was exhausted, the town wasn’t just abandoned – it was packed up and erased from the landscape. All that remained were memories.

The documentary told Pine Point’s story in the form of a virtual scrapbook. The experience was intensely moving, visceral and personal, combining grainy photos, old home movies and shared memories with the author’s point of view. The work was widely acclaimed, influencing me and many other storytellers, and earned a bucketful of awards. And then one day, like the town that was its subject, it was gone.

You see, ‘Welcome to Pine Point’ was made with Flash, Adobe’s brilliant platform for interactive animation, which was not just abandoned by its corporate master but switched off entirely. Luckily for ‘Pine Point’, the NFB funded a native app iteration of the experience, so the work lives on in a revised form.  But with the end of Flash, a vast swathe of the world’s best interactive creativity has been lost, and will live on, like the town of Pine Point, only in the memory of its users, or in old screen grabs and video walkthroughs.

This was brought home to me when I went looking for examples of my work for BBC Children’s from 2008 to 2011.  Almost all of it was made in Flash.  After 2011, the platform shifted to HTML5 or Unity for interactive experiences, and the old work was decommissioned.  This is a new experience for me.  I have an archive in the back of a closet full of master tapes, floppy discs, zip drives and other formats that I can’t easily access.  I even have cans of 16mm film in there, gathering dust.  But if I was determined to find something – and no-one had already posted it online – I could probably find it with enough time and money.  But the Flash work of a whole generation of game makers, designers, animators, storytellers is just… gone.  

Loss of creative endeavour and human knowledge is hardly new.  Let’s recall the history of papyrus, the ancient Egyptians’ backup hard copy of their narratives:

It is difficult to overstate the importance of papyrus in the history and development of writing. In a way, the invention of papyrus marked the beginning of the globalization of documentation and the literary form. Before papyrus, writing was a skill reserved for a very small minority and often came in the form of at most a few sentences on a fragment of clay or piece of leather. With the papyrus scroll, the Western world gained a standard surface on which it could create and document. The scroll fostered the creation and survival of some of the world’s most influential documents, ranging from some of the first fixed law codes to the important literary works of Rome’s brightest minds.

  – Dartmouth Ancient Books Lab

Unfortunately, papyrus tended to develop mould and rot in wet conditions, or simply crumble to dust if it got too dry – never mind the unfortunate fire at the Library of Alexandria which consumed half the existing scrolls of the time.  Only the lucky few documents, made more important as much by their survival as by their content, were transferred first to parchment made of animal skin, then to paper, and now to code.  But in this modern age, where received wisdom is that nothing is lost and we need the right to be forgotten, the escalating rate of obsolescence makes it inevitable that huge quantities of work will disappear.

Is there a bright side to this?  I know there’s work I’ve done that I’d be perfectly happy for no one to see again.  But there’s more that I suspect is better than I thought it was at the time – I’d like the chance to find out.  It’s like the best photo is always the shot that got away – a phenomenon photographers of the old school will recognise from those moments when the film stock or the equipment ruined a frame.

I recently was thinking about a project I wrote many years ago.  It was an ambitious mixed media sitcom.   I pitched the concept to a studio, got a development deal, and wrote a mini bible and multiple drafts of a pilot script.  Eventually, the studio passed and the project went into turnaround, the rights reverting to me.  I put it in a drawer – the studio had pushed the drafts in a direction I didn’t  like, and I knew the concept would be expensive to produce with existing technology.  When I decided last week to take another look at it, twenty years and four house moves across two continents later, I realised I had no idea where it was.  Not the hard copies, not digital drafts, not the notes.   I could spend weeks trawling for pages and discs in boxes, but there was no guarantee I’d find it.  So I’ve decided to start afresh.  Better not to be reminded of the parts that failed, and just build on the bits that have stood the test of time in memory and continue the journey of the idea.  Certainly, Simons and Shoebridge’s journey back to Pine Point gave the memory of that community another life, and even a resurrection after digital death.  I’ll let you all know how my own journey of memory turns out.

It reminds me of Proust’s Madeleine.  What’s better – the madeleine itself or the memory of it?

I Want to Believe: The Promise of AR Glasses

“I have learned from my mistakes… and I am sure that I can repeat them exactly.”  Peter Cook’s famous adage haunts me these days, as I read breathless articles about the latest initiative for AR glasses, the roll out of 5G, and the arrival of the Metaverse.  Any day now, while Qualcomm and Huawei battle for wireless 5G bandwidth, Apple or Amazon or some as yet secret unicorn of the technocracy will announce the breakthrough product that will make it all possible. 

A single teasing image of reflections in glasses set off a frenzy of speculation in Apple’s promotion for this year’s WWDC in June. But on a recent shareholder call, Facebook’s Mark Zuckerberg poured cold water on speculation of imminent AR glasses released as consumer products, calling them “one of the hardest technical challenges of the decade”, before reassuring us that glasses will be “the next computing platform”.

I want to believe.  

Much of my creative focus over the past five years has been in exploring the potential for mixed reality storytelling, combining physical objects and environments with digital overlays to deliver immersive experiences.  I have notebooks filled with projects that can only be delivered when this glorious vision of a physical world painted with digital beauty, characters, teachers, information and community is fully realised.

But I’ve been here before and I’ve been burned.

Back in the mid nineties, I’d developed a reputation for innovative approaches to storytelling and reaching audiences.  I’d created and curated a new kind of TV based on short content for short attention spans, in a stream that was part funhouse, part laboratory. I’d developed my first interactive narratives for consumption on CD-ROMs – a film noir detective story with branching narratives.  And I was obsessed with Myst, Robyn and Rand Miller’s exquisite and infuriating game world.  I was in Barcelona, working on an animation project, when I was contacted by a headhunter for three American telcos, setting up a venture called Tele-TV.

Within days, I was in Los Angeles talking to Sandy Grushow, former head of the Fox TV network and now president of this new venture. He was laying out the vision for interactive television, video on demand, VCR controls on any programme, customised playlists, interactive story formats.  Two of the partners, Pacific Bell and Bell Atlantic, already had test communities wired for piloting the system we would build, with plans for millions of wired households by the end of 1996. We would reinvent television, delivering a Galaxy of on demand content and interactive experiences.  In fact, that’s what it would be called: the Tele-TV Galaxy.

I wanted to believe.

Twenty five years later, we live in the world Sandy pitched and I bought that day, based on his reassurance – from the engineers at the telcos – that the roll out of fibre for broadband was happening at lightning speed.  The market would be there by the time we developed the video on demand product. Our team designed and built amazing products over the next 24 months – including a number of innovations for smart TV in EPG, navigation and interaction design and format that are now standard practice.  The test bed customers in Carlsbad, California and Reston, Virginia loved the service we delivered.  But the roll out didn’t move at lightning speed. It moved at the speed of a dial up modem.  Tele-TV stuttered, froze, and died a quiet death.  I went back to developing regular old TV shows and licked my wounds.

My mistake was to glimpse the future and be so seduced by its possibilities that I believed it was already here.  It took another decade for VOD platforms to deliver on the promise with the first iterations of Netflix and BBC iPlayer, and another ten years for the plethora of streamers we now use.

And now I want to believe again.

I believe that AR glasses will give a wearer superpowers.  The ability to understand every language, to recognise anyone, to learn the stories of people and the places where they meet, to find their way through an unknown city, to discover layers of art, history and information, to play with reality itself, even to speak to ghosts.

I’ve played my part again, aiming to show the way – my AR-powered novel, ‘The Ghostkeeper’s Journal’, combines the printed page with a digital layer in the ‘Ghost-O-Matic’ app. I’ve conceived a variety of digital/physical experiences with franchises like ‘Star Wars’ and ‘Jurassic World’. I’ve worked on AR treasure hunts around town centres, and AR designed for the classroom. I’m exploring data overlays and social messaging via location.

I am still a believer.

And I hope that the AR glasses we’ve been promised, and the infrastructure to support them, will be ready any day. With interactive TV, the issue in the 90s was the speed of two-way data delivery. With AR glasses, the challenge is not only the roll-out of 5G but, more significantly, the development of suitable (and affordable) lenses. For AR overlays to work in glasses (as opposed to goggles or headsets like the HoloLens), we need to solve issues around available light, field of view, miniaturisation of the product and optical prescription variations, just to start. So the increasingly breathless reports about consumer smart glasses from Apple, Facebook, Niantic and Amazon, added to the existing efforts from Snapchat and Nreal, not to forget the ongoing ‘enterprise’ product sets from Google, Microsoft and Magic Leap, need to be read with a measure of scepticism. The Metaverse as most of us imagine it will not be delivered by the first iterations to reach market.

What they will deliver well are use cases that work within the limits of first-generation lenses and frames. This means building apps for which those limitations can actually be an opportunity – for example, experiences where AR is contained within the field of view without breaking frame, or which use 2D overlays rather than 3D objects. Smart design of early apps for smart glasses will be as important as the glasses themselves.

So I’m focusing on what I can make now, with existing technology, knowing it will pave the way for what is to come. We can start painting the physical world with digital stories, information and entertainment, even if it will be a while before we can see it all hands free.

I’m getting too old for broken promises. And I don’t want to learn the same lesson all over again.  But I still want to believe.  That’s why this time I’m keeping my expectations within the frames, even as I dream outside the box.

How to Make All of Us Commissioners of BBC Content (and win back younger audiences)

Late last year, Ofcom released its third annual report on BBC performance. Once again, the decline in younger audiences for BBC services was highlighted. According to the report, time spent with the BBC by 16–34-year-olds now stands at less than an hour a day, down 22% since 2017. The largest drop of all is among those aged 16 to 19.

This is the precise audience segment for which I was responsible ten years ago, when these young people were part of the CBBC remit of 6–12-year-olds. I was an in-house BBC executive producer, the editorial lead for CBBC’s websites and interactive content. CBBC was in its heyday, when both the channel on air and the online offering regularly topped a million users in a given week. The BBC Children’s iPlayer had been launched at the end of 2008 to great acclaim. But the research was already warning us that YouTube was becoming the most popular destination for children aged 6 to 12 in the UK, even though it was a service for 13-plus. My final task on staff at the BBC in 2014 was to launch YouTube channels for CBeebies and CBBC, in an effort to create journeys back from YouTube to BBC platforms. But as this generation has grown into potential licence-fee payers, they have drifted away from the BBC’s services to sign up for Netflix, Disney+ and other streamers.

Ten years ago, we knew we were talking to a remarkably active and activist generation – new platforms allowed them to engage with our programmes in a much more personalised and empowered fashion. We encouraged kids to make their own content with our brands, writing collaborative stories for Tracy Beaker, submitting items to Newsround, making games with our characters as well as playing them. In 2013, we even ran a competition online to select a new host for Blue Peter. We trusted kids and gave them greater control of their experiences with content. This is the bar we now have to meet to attract young audiences back to the BBC.

Ofcom’s report claimed that young adult users find the iPlayer confusingly general – the core public service concept of ‘content for everyone’ – whereas the streamers, with their more rapacious data harvesting and algorithms, deliver ‘content for me’. The BBC can’t compete on these terms because, as a public service institution, it cannot track user behaviours and preferences as closely as its competitors. The AI and algorithms that provide the tailored experience of a Netflix homepage aren’t available to the BBC at the same level of granular data detail.

But I contend that the BBC has another way to create stronger links between individuals and BBC content. And it’s through that other great area of debate – the licence fee. Many resent paying the fee. Some have bought into the false narrative that the BBC wastes public money on high salaries and overheads. But most feel, with more evidence, that the BBC doesn’t reflect their lives and interests (another theme in the Ofcom report, largely expressed by users from lower-income households or regions further from the south east). It doesn’t feel like the BBC is for them.

This has to change. After all, the BBC belongs to the public. We should have a say in what the BBC produces. For the generation the BBC is losing fastest – the activist, gamified digital natives in their late teens and twenties – this would come as naturally as liking a post. Rather than presenting us all with a binary choice – pay or don’t pay, watch or don’t watch – we need to let licence-fee payers choose how to spend their fee as members of the BBC community. We need to become commissioners of our own content.

Imagine a cross between the iPlayer and Kickstarter. Commissioners place their development slates on the site, with target ‘pledge points’ from licence-fee payers required to green-light any content. Licence-fee payers get 157.5 pledge points (equivalent to the £ amount of their fee) to pledge as they choose. You could spread your points across twenty ideas, or place them all on one. You could commit your funding to a specific genre you love – say, natural history series, comedy specials or politics podcasts. Suddenly, you are a stakeholder – your choices are reflected in the content getting made. The BBC can keep you up to date on your personal selections, with updates from production and access to early trailers. The content makers can engage with you and other pledgers – a built-in audience test group for their ideas. You can share the updates with your friends, making you an advocate for the content and helping bring more of your peers back to the BBC.
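To make the mechanics concrete, here is a minimal sketch of the pledge-point model in Python. Everything in it is hypothetical – the class names, the 300-point green-light target, and the way points are allocated are my own illustrative assumptions; only the 157.5-point annual allowance (one point per pound of the licence fee) comes from the proposal above.

```python
ANNUAL_FEE_POINTS = 157.5  # one pledge point per pound of the licence fee


class Pitch:
    """A commissioner's development idea awaiting pledges."""

    def __init__(self, title, target_points):
        self.title = title
        self.target_points = target_points  # points needed to green-light
        self.pledged = 0.0
        self.backers = set()

    def greenlit(self):
        # A pitch is commissioned once pledges meet its target
        return self.pledged >= self.target_points


class Payer:
    """A licence-fee payer with an annual allowance of pledge points."""

    def __init__(self, name):
        self.name = name
        self.remaining = ANNUAL_FEE_POINTS

    def pledge(self, pitch, points):
        # Spread points across many pitches, or stake them all on one
        if points > self.remaining:
            raise ValueError("not enough pledge points left this year")
        self.remaining -= points
        pitch.pledged += points
        pitch.backers.add(self.name)


# Illustrative usage: two payers back a natural history pitch
doc = Pitch("Natural history series", target_points=300)
alice = Payer("alice")
bob = Payer("bob")
alice.pledge(doc, 157.5)  # stakes her whole allowance on one idea
bob.pledge(doc, 150)      # keeps 7.5 points back for other pitches
```

The `backers` set doubles as the built-in audience test group described above: it is the list of people to whom production updates and early trailers would be sent.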

This system could also become a submissions platform, opening up the BBC to a new range of diverse voices and ideas.

Of course, engagement with such a system would be optional. Many won’t have the appetite for gamification of their licence-fee payment, and that’s fine. The areas of greater public need, such as Children’s, News and Learning, would need to be ring-fenced. And commissioners – people with immense curatorial expertise – still need to influence content choices, so a formula balancing input from pledgers and commissioners would need to be developed. But this kind of approach could not only re-energise younger audiences around BBC content; it could also create far greater transparency about how your licence fee is spent, and how much of it goes directly to content that you value.

As a public service protecting its users’ data, the BBC is limited in how it can compete with commercial streamers. But perhaps empowering the public is the way for the BBC to win that competition – and to remind the UK audience that it has a personal stake in a public service media system that is the envy of other nations.