This piece was written for the Children’s Media Foundation Yearbook, which is being published July 5th to coincide with the Children’s Media Conference in Sheffield, UK.
In his film La Chinoise, Jean-Luc Godard suggested we should view the work of Méliès, the first fantasy filmmaker, and the Lumière brothers, the first documentarians, the other way round. You can argue that the Lumières, by placing their camera down and choosing what to film, were changing reality around them, whereas Méliès, with his special effects and trick photography, was documenting the underside of the mind.

The current debate about the Metaverse reminds me of this critique of our earliest filmmakers. The first filmmakers shared common tools – cameras, film stock, projectors – much as digital makers across AR and VR today share game engines and 3D models. Remember that audiences reacted viscerally to the Lumières’ train arriving at the station, as though it might hit them through the screen, and perceived black-and-white scenes from life as the same kind of magical vision that Méliès offered with his trip to the moon. Yet their goals and storytelling techniques could not have been more different.

We are in the equivalent of those early filmmaking days with AR and VR now, a long way from a maturing medium, still to discover the Citizen Kane of VR narrative or the Sesame Street of AR early learning. Everything is shiny and new – we will go through many cool experiments, gimmicks and spectacles before we uncover what sticks and is genuinely useful and joyful in these emerging media.
People regularly conflate AR and VR, which is strange to me, given that the two serve contradictory purposes. AR is anchored to physical objects in the real world, whereas the goal of VR is to remove the user from their physical surroundings and immerse them in an alternate digital reality. They are diametrically opposed. So it’s no wonder that there is even greater debate and misunderstanding over the Metaverse.
Like it or not, Metaverse has been the word of the year. I don’t much care for the word at all, but we seem to be stuck with it for now (thanks, Zuck). So let’s start with some definitions. The term Metaverse was famously coined by Neal Stephenson in his 1992 novel Snow Crash to refer to a single 3D virtual world mapped over the real world. Many people focus on the first element only, thinking of the Metaverse as being all about VR and online virtual worlds like Fortnite and Roblox, and the possibility that some day you might move seamlessly between such worlds with your self-defined virtual identity and resources, whether to a game session, a work meeting or a virtual marketplace. Of course, this requires an underlying infrastructure that current virtual worlds only offer within their own games.
Others use the Metaverse as a much broader catch-all for the range of technologies emerging around Web 3 and 5G, the blockchain, NFTs and photogrammetry, as well as AR and VR. We are told that these developments will deliver a newly democratic 3D internet with a creator-led content and experience economy, open access for all, and the backbone for journeys across virtual worlds.
[A side note: Those of us who remember the early days of the web will note the worryingly familiar utopian magical thinking here. After all, underlying all of this technology is the power of cloud servers, machine learning and AI. And megacorporations with massive, competing interests.]
Both views above are valid, but I prefer to think of the Metaverse as a series of digital layers over our physical world, layers we might choose to access based on context.
The internet we know is built on connected devices. The Metaverse and Web 3 will also use contextual or semantic computing and spatial imaging that combine to paint our physical world with 3D digital data – whether invisible (Object recognition/mapping), translucent (AR information and customisation) or transformational (VR and virtual worlds) – and deliver content shaped by context.
Imagine some layers you can’t see that speak to machines like self-driving cars; layers that you can see in real-world space – say, the archive of the BBC deployed to enhance public spaces with location-relevant AR content, or fashion designers making digital elements to enhance your appearance; fully immersive layers in VR that transform the limits of your living room walls into a spatial expanse for learning new skills or playing with friends from across the globe. All of these layers are possible in some form with today’s technology – and much more is to come.
The engines of this new era come primarily from games and 3D graphics companies – for example, Epic’s Unreal Engine, the mobile 3D engine Unity and the Pokémon Go creator, Niantic. But these game engines’ objectives aren’t just about games. Through its users’ gameplay, Niantic is mapping the world around us, scanning and ‘owning’ our landscapes. Unreal is providing deeply realistic 3D tools and environments for uses that range from training humans to designing cars and delivering visual effects, as well as fighting phantasms. Real-time virtual events can be delivered via the engine, not just in Fortnite but in custom-built environments, along with the advertising and merchandise that go with them. Unity is aiming to give digital objects the same ‘rights’ as physical ones, uniting with apps like Sketchfab to offer any physical asset its digital twin. These platforms may have their roots in gameplay, and that will persist, but the layers of the Metaverse they are building will cut across almost every use case you can imagine. Eventually.
The technology to support the Metaverse (whatever your definition) is inevitable, but hardware and software will take more time to develop than many realise. The other major question mark is about human behaviour. Do most people want to spend work, social and personal time in a virtual construct? At the moment, gamers are the early adopters of VR technology. Will the rest of us follow?
It’s worth noting that Snapchat have been determinedly focused on the present or near present. Take a look at their recent Spectacles demo video for a glimpse of what’s already possible with their AR glasses and the range of use cases they are exploring. AR glasses from Meta are coming soon, and the Lens Studio model from Snap has quickly become core to Instagram and TikTok creators. They are banking on real-world experiences you can share with friends and family, enhanced or even defined by digital overlays. And in May, Google opened up its Street View maps for AR creators to anchor their work in any physical location shown.
So what does all this mean for children? Many cheerleaders promote VR for kids without fully recognising the difference between an explorable world on a computer and an immersive experience via a headset. Many parents ignored the 13+ age advice on headsets in their purchases for their families last Christmas. While playing a round of Beat Saber might pose little risk, there are few safe spaces for children in existing social VR environments and almost no coherent moderation. Devices aren’t yet being designed for a child’s physiology, and equally there is little research on the potential impact on mental development of exposure to VR experiences at an age when perception of the real world is still malleable. For all the immense potential of VR to deliver amazing edtech learning journeys, more research is urgently needed in this area. At the same time, we need to recognise that, like training a young mind to control the Force and become a Jedi, some form of the Metaverse will be a huge part of our children’s lives, so we need to let them embrace what it offers as soon as is appropriate.
AR provides an easier and safer on-ramp. In my own work, I’ve found that AR overlays mean little to very early learners – for them, the world is already full of surprises and things to discover, and all five senses need to be engaged in that process. But once their brains have grasped abstractions like symbols on a page forming language they can understand and share, AR overlays on the real world can fit coherently into their mental landscape. They understand the rules and behaviour of physical objects and symbols. They have a context into which AR can be placed to support play, social interaction and learning without undermining cognition.
I’m exploring an AR use case that has potential value for kids and adults. I’m bringing the Metaverse to books. Here’s what I’ve learned.
A book is a platform for stories, images and a mental projection from author to reader – Stephen King describes writing as a way for described objects and people to travel through time and space from his desk to your living room: telepathy. But a book is much more than that. It’s a physical object with visceral qualities that can become talismanic for a reader – we keep books on our shelves because we value them as objects as well as containers. They become badges of our identity. Books are consumed in a context – a place, a time, a state of mind – that changes the way we perceive them and the role they play in our lives. We live with books, we use them, we touch them – we don’t just watch them. They trigger our own thoughts, emotions and memories as well as containing those of their authors, and we see them as cherished objects as a result. As we build a bridge from physical books to digital tools, we can unleash all that stored potential of emotion and interaction.
So I’m building the bridge between physical books and digital tools to empower active readers, social readers, engaged learners, puzzlers, thinkers and creators. Always remembering that by putting down your device, or taking off your smart glasses (when we have them), you can also just focus on the ink and texture of words and pictures beautifully printed on physical paper in all its time tested simplicity and glory.
I actually think that’s a microcosm of how we will feel about the Metaverse as a whole – it will be there for you when it is what you need/want. But you’ll also need to be able to switch it off, and just look for shapes in clouds, hold hands or read a good book.
Whatever your definition of the Metaverse, I suspect Neal Stephenson would agree that stories in books already spark the best Metaverse of all, owned by none but ourselves: the one in our imagination. And right now we can still imagine the Metaverse is anything we want it to be.