Into the Looking Glass, where everything is not as it seems. Where a mere shift in perspective can reveal new dimensions and endless possibilities. This is the realm of the Metaweb, a world that beckons us to explore beyond the veil of Today's Web.
Pema Chodron's wise statement rings especially true for the Metaweb. We finally have the power to shape our online experience and make it reflect our values and beliefs.
Think of the Metaweb as a looking glass, a lens through which we can view the online world. It transforms unstructured online content into a construct that can be addressed and interacted with. It gives us access to a wealth of contextual information and interactions, all at our fingertips. What we see depends on our filter settings, shaped by our tribal affiliations and personal preferences.
So let us step into this world, and see what wonders and discoveries await us. Let us embrace the Metaweb and experience the hyper-dimensional web.
Recall the earlier discussion about the web cake. The bottom level is Today's Web, and the top is the Metaweb. The Metaweb is your personal lens for the online world. At the Metaweb level, you can see artifacts attached to content on webpages.
Remember how we angled the web cake like a laptop screen? Think about the bottom level of the cake – the webpage content. As your attention moves among the content, you have on-demand access to 360° of contextual information and interactions.
Welcome to the hyper-dimensional web. The Metaweb turns unstructured online content representing a claim or an idea into a construct that is addressable as a location in the Metaweb. Anyone can attach information and interactions. What someone sees depends on their tribal affiliations and personal filter settings, but the data are there – everything related to the claim or idea.
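To make "addressable" concrete, here is a minimal TypeScript sketch, assuming a hash-based addressing scheme and type names of our own invention rather than the Overweb's actual design. A snippet of content gets a stable address derived from its text, and attachments reference that address.

```typescript
import { createHash } from "crypto";

// A snippet of online content lifted into an addressable Metaweb location.
interface ContentAnchor {
  address: string; // stable identifier derived from the content itself
  url: string;     // the page where the snippet appears
  text: string;    // the claim or idea being anchored
}

// Anything a participant attaches to that location.
interface Attachment {
  anchor: string; // the address it attaches to
  author: string; // an accountable, verified participant
  kind: "note" | "poll" | "bridge" | "conversation";
  body: string;
}

// Hash the normalized text so the same claim resolves to the same address
// wherever it appears on the web.
function addressOf(text: string): string {
  return createHash("sha256").update(text.trim().toLowerCase()).digest("hex");
}
```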
We're about to throw you a curveball. Some of you may have already been expecting it. Now imagine grabbing the sides of the Metaweb level, and ripping it away from the content level of the web cake. Hold it up so that you can see through it and scan your environment. Imagine now that, through the cake, you can see 360° of contextual information and interactions associated with items you focus on in your environment. Focus your attention on something in your environment, and imagine everything that anyone might want to attach to it. The bottom level of the cake is now your physical reality.
I'm in my living room. I focus my attention out the window. The overlay tells me it is 27 degrees Celsius with winds of 8 – 16 kilometers per hour, and the time is 17:09. I toggle the display to 80.6 degrees Fahrenheit, winds of 5 – 10 miles per hour, and 5:09 p.m. I can also request historical temperatures or the forecast for the upcoming week from various sources. My sandals are near the door. I got them last year before a trip to Amazonia, and they have served me well. The overlay gives me options to see a 3D model; info about their components, the supply chain, and the manufacturer; and a list of shops that have my size available, ordered by proximity, with their distance from me.
I focus on the mango on the plate on the table. The overlay shows a 3D cross-section model of the mango and its nutritional information. I can choose options to display health benefits, an article about mangoes improving sleep, song references, recipes, and cooking shows featuring mangoes. Also NFTs with mangoes, a list of nearby places to get mangoes, a one-click order button, a poll on favorite levels of ripeness, upcoming meetings about mangoes, and more. Everywhere I focus, I see possibilities for information and interactions.
Now strap the cake onto your head like a big pair of glasses. The bottom level of our cake is now virtual reality. A DAO is performing an animated reenactment of a historical event in the Abyssinia metaverse, set in the Tigray region of Ethiopia.
As you walk through the town of Adwa, a young soldier hands you an invitation from the 19th-century Emperor Menelik II and Empress Taytu Betul. Activating the more-info icon next to Menelik displays metadata, including his battles, wives, children, and songs about him. Registrations are happening on the Overweb for a dinner conversation next week about Menelik and the Age of Princes. When you arrive at the emperor's castle, you see that metadata exists for almost everything in your visual field, even the injera on the table. The injera bridges to info from the Web about its ingredients and health benefits. It also bridges to other places in this and other multiverses. The farm that produced the teff. The processing "factory" where it became flour. The kitchen that made the injera. You can jump into experiences for the farm, the factory, and the kitchen to learn about the traditional process of making injera from start to finish. You can also focus on other objects in the environment to see what metadata and experiences they hold.
Flashback to the physical world. The bottom level of our cake becomes your physical reality. Through the Metaweb lens, you can access information and interactions associated with places, objects, and visuals in physical reality. In the grocery store, for products on the shelf, you have access to ingredients, nutritional information, personalized warnings, and dangerous combinations. In the streets, you can access metadata about murals, signs, and geolocations. Perhaps an artist's statement, a time-lapse of the artist painting it, or augmented reality enhancements. The same could happen in an art gallery.
Switch realities to a hyper-dimensional graph of the connections between semantic objects that represent concepts, including inquiries, perspectives, claims, evidence, and data. The bottom level of our cake is now the conceptual space. If you like, you can decompose the semantic objects into their constituent words and phrases. For each semantic object, you can follow bridges outward to different realities. To related content on the web. To objects and places in the metaverse. And to related objects and places in a digital twin model of physical reality. Or to another semantic object.
Think of a hyper-dimensional knowledge graph that connects between and among semantic webs of concepts, real and virtual objects, and online content. As shown in Figure 14.1, the graph anchors the conceptual realm with bridges to the online, virtual, and physical realities.
This expansive understanding of the Metaweb suggests the "web" in "Metaweb" isn't the World Wide Web. It's the spider-like web that connects realities. The four realities that affect our collective cognitive capabilities are physical, online, virtual, and conceptual. Were the Metaweb to ultimately support full expression that adequately represented our collective memories and dreams, this knowledge graph would be a virtual version of the Akashic records.
It could enable us to transcend human limitations and regenerate ourselves, our communities, and this garden planet we inherited from our grandchildren. But it could also help machines achieve artificial general intelligence (AGI), and soon thereafter, artificial super intelligence (ASI), enabling machines to transcend humans in intelligence by orders of magnitude. Humanity needs to discuss this possibility and build mechanisms that respect and protect human life.
As we discuss the next two possibilities, some may notice an impulse to get involved and help build the connective tissue of our external realities, the Metaweb. We trust some will focus on ethical analysis and preventative remedies for potential unintended consequences, and we hope that all projects will consider AI safety. Once again, the integral accident.
Figure 14.1 Connecting the realities.
One could say we understand something through how it connects to our active internal "knowledge map": what we perceive, what we understand and recall about the focus of our attention, and how these connect to our perceptual map of the world. This is our basis for sensemaking. In the future, AI will augment sensemaking, helping us make sense of our world individually and as a collective.
Engelbart protégé David Smith, CEO of Croquet, speaks of the Augmented Conversation: a discussion between participants that is "extraordinarily enhanced" with computer tools and capabilities. The computer AI is a full participant in the conversation. It allows us to discuss and explore complex systems, datasets, and simulations as easily as we talk about the weather. A guarantee of shared truth is necessary: what I see, you must see. Doing something that affects the shared simulation must change the simulation for everyone.
Otherwise, the communication channel is not trustworthy. The shared system must enable modification and extension of the system while running for all participants. Thus, we can use the system itself to extend the system, improve it, and add new capabilities. Smith suggests a new type of operating system, built from scratch, focused on shared simulation and deep security.
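To make the "shared truth" guarantee concrete, here is a toy TypeScript sketch in the spirit of replicated computation; it is our illustration, not Croquet's actual API. Every participant applies the same events in the same order to the same deterministic logic, so every replica of the simulation remains identical.

```typescript
// Toy replicated simulation: identical ordered inputs plus deterministic
// logic yield identical state on every participant's machine.
type SimEvent = { seq: number; action: "add" | "remove"; item: string };

class SharedSimulation {
  private state = new Set<string>();
  private nextSeq = 0;

  // Events arrive in a globally agreed order (e.g., stamped by a relay).
  apply(event: SimEvent): void {
    if (event.seq !== this.nextSeq) throw new Error("out-of-order event");
    this.nextSeq += 1;
    if (event.action === "add") this.state.add(event.item);
    else this.state.delete(event.item);
  }

  // Identical on every replica that has applied the same event stream.
  snapshot(): string[] {
    return [...this.state].sort();
  }
}
```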
We think the Overweb can accommodate the Augmented Conversation for online ecosystems and leverage existing online content, rather than starting from scratch.
In a future with ubiquitous context, wherever you are – online, in a virtual world, or IRL – you have access to deep layers of context. Every addressable location, object, and idea enables access to contextual information and interactions through its connections to the Universal Content Graph. The accessible context is the most relevant section of the graph – the information connected to the focus of your attention.
You use your attention (line of sight, touch, cursor), gestures, and voice to navigate and interact with a rich, interactive digital overlay. Your overlay is a composite view of the relevant portion of the Universal Content Graph filtered by your digital assistant based on your preferences, needs, and activities. Your assistant bot manages your smart filter and performs actions within the overlay on your behalf.
With linking democratized and properly incentivized, the Universal Content Graph self-assembles from bridges connecting online content snippets with relationships. This graph underlies the meta-layer above the Web. As the metaverse and spatial computing grow, the graph will bridge the virtual and physical realities with the conceptual realm, creating the Metaweb, the connective tissue of our realities.
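As a rough illustration of how such a graph might self-assemble, a bridge can be modeled as a typed edge between two addressable snippets; the relationship vocabulary and field names below are our assumptions, not a published schema.

```typescript
// A bridge: a typed, attributed edge between two content snippets.
interface Bridge {
  from: string; // address of the source snippet
  to: string;   // address of the target snippet
  relationship: "supports" | "contradicts" | "cites" | "elaborates";
  createdBy: string;     // an accountable participant
  confirmations: number; // community validation signal
}

// The graph self-assembles as participants add bridges: an adjacency
// index keyed by the source address.
const graph = new Map<string, Bridge[]>();

function addBridge(bridge: Bridge): void {
  const edges = graph.get(bridge.from) ?? [];
  edges.push(bridge);
  graph.set(bridge.from, edges);
}
```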
Graphs are incredibly efficient at retrieving information: one can find what one needs easily and quickly. No matter the size of the graph, directly connected elements are instantly retrievable, which arguably makes the graph the only viable structure for connecting the world's information. Surely, the proverbial Akashic records reflect a similar pattern. Our brain certainly does.
Thus, in the future, wherever you are – online, in the multiverse, or in the physical world – you'll have access to deep layers of context within and between realities and the conceptual realm. We call this ubiquitous context. Context is available everywhere; you just need to focus your attention.
Ubiquitous context is reminiscent of how we access information IRL: we gain more information and context by focusing our attention. That's how perception works in real life. When riding bikes, we scan the environment. If we see something to avoid, we focus our attention and change our route. If we see a mural of a jaguar,2 we may stop and admire it or take a photo. We focus to take in details and context. With photos, we can zoom in to reveal even more detail.
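Continuing the earlier sketch: with an adjacency index, everything directly connected to the focus of your attention is a single lookup away, whatever the total size of the graph.

```typescript
// Retrieval cost depends only on the focused node's own edges, not on the
// number of nodes or bridges in the overall graph.
function contextOf(focusAddress: string): Bridge[] {
  return graph.get(focusAddress) ?? []; // average-case O(1) map lookup
}
```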
But ubiquitous context is much deeper. You go as far down the rabbit hole as you desire. Or go wide with context across and within realities. Keep zooming for more context, and context for the context. The graph has no beginning or end.
Figure 14.2 shows what ubiquitous context could look like in an art gallery.
Figure 14.2 Ubiquitous context in an art gallery (courtesy of Future Text Publishing).
Imagine a future visit to an art gallery. Since you follow the genre or artist, or activated the museum tour, a digital badge hovers next to the painting you are viewing. Moving your attention to the badge – whir – a context menu appears, providing a 360° overview of the information, interactions, transactions, and experiences connected to the painting. Yes, you can jump into the painting, or at least into an AI-generated 3D virtual world based on it, if you like.
Or you can follow bridges to videos of the artist, work-in-progress photos of the painting, and reviews by art historians and local art critics. You can also access notes from the artist, historians, and critics about specific parts of the painting. Depending on your filter, you may also see text and video notes from other museum-goers. You can engage in conversations and polls related to the painting.3
You can add notes, polls, conversations, and bridges, and comment on other pieces of contextual information. Options exist to purchase a print from the museum store or to have one shipped to your home. You can navigate to paintings with similar subjects and styles, by the same artist, or from the same era. If you like, you can quickly retrieve definitions and sentence examples for unfamiliar terms used by the local art snobs.
With ubiquitous context, wherever you focus, your attention becomes a launchpad for insight, debate, and discovery. Real and virtual objects, geolocations, and visual, aural, and textual patterns become anchors for intertwingled digital overlays. These unlock massive waves of innovation, creativity, and collaboration, as well as increase humanity's capacity for shared knowledge and collective intelligence.
Doug Engelbart, a pioneer in the field of human-computer interaction, envisioned a future where computers augment human intellect and enable collaboration on a scale never before possible. With the development of Croquet, a platform that replicates interactive simulations, physics included, identically for every participant, we are a step closer to realizing this vision.
Human communication mediated by AI and Augmented Reality devices will enable us to dynamically express, share and explore new ideas with each other via live simulations as easily as we talk about the weather. This collaboration provides a "shared truth" – what you see is exactly what I see. I see you perform an action as you do it, and we both see exactly the same dynamic transformation of this shared information space. When you express an idea, the computer AI, a full participant in this conversation, instantly makes it real for both of us, enabling us to critique and negotiate its meaning. This shared virtual world will be as live, dynamic, pervasive, and visceral as the physical.
The future of collaboration and computing is not just about adding a new layer of technology over our current reality; it's about creating a seamless, multi-user reality where the physical and virtual worlds are indistinguishable. With the development of interoperable, scalable, and developer-friendly platforms, we can create a digital world that is accessible to all, regardless of device or operating system, and enable every participant to share and collaborate with every other participant in this enhanced digital world.
As we move towards this augmented reality, the virtual world will become as real as the physical world, and we will live together inside a shared information space that co-exists with and amplifies the physical plane of existence. Communication, far more than anything else, defines the true value of Augmented Reality. AR displays are transparent not just so we can see the real world with a digital overlay, but because we need to look another person in the eyes as we engage with them in this extended digital space.
The emerging AR Cloud expands the scope of how we collaborate, create, and share ideas. This next generation of computing capability will allow us to extend and annotate the real world, but more importantly, it will allow us to easily and instantly create and explore new universes that we build from scratch as part of our everyday discussions.
The Metaweb will connect to and among the digital twins of objects in physical reality. Smart tags will attach to anything: a sound, a physical object, a geolocation, a sequence of words, a product's packaging. The overlay could recognize people by their facial structure, but to protect people's privacy, the Overweb does not allow attaching smart tags to a person without their express consent.
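Here is an illustrative model of smart tag anchors mirroring the anchor types named above; the type shapes, field names, and the consent check are our assumptions about how the stated privacy rule could be enforced.

```typescript
// What a smart tag might anchor to (illustrative shapes, not a real schema).
type TagAnchor =
  | { kind: "sound"; audioFingerprint: string }
  | { kind: "object"; digitalTwinId: string }
  | { kind: "geolocation"; lat: number; lon: number; radiusMeters: number }
  | { kind: "phrase"; text: string }
  | { kind: "packaging"; barcode: string }
  | { kind: "person"; personId: string; consentProof?: string };

function attachSmartTag(anchor: TagAnchor, payload: string): void {
  // The privacy rule described above: no tagging a person without their
  // express consent.
  if (anchor.kind === "person" && !anchor.consentProof) {
    throw new Error("cannot tag a person without express consent");
  }
  // ...persist the tag and its payload to the Universal Content Graph...
}
```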
Physical objects such as books, paintings, cafés, grocery stores, products, and more can link to relevant information ecologies on the Web. Wherever you look, you can access relevant information and interactions. In the physical reality, a virtual world, the conceptual space, or online. For example, a physical book can connect with geolocations and objects in the book. It can connect to book reviews, book clubs looking for members, and drop-in conversations happening on the pages of the book. Even polls about controversial passages, AMAs with the book's author, and much more.
Focusing on content related to physical objects such as books or mangoes can display manipulable 3D models of the object. It can also display geolocations in a digital rendering of the physical reality where the object is available or prevalent.
Every discrete object, NFT, and location in the metaverse is bridgeable and interactive between and within multiverses. Geolocations, multiverse locations, and objects can have easter eggs, polls, and meetings. Similarly, bridges can connect objects and locations between and within multiverses. For example, a Sandbox land NFT could bridge to an object or NFT in a different Sandbox location. It could also bridge to an object in Horizon Worlds or any other environment with addressable objects.
As we move from a 2D to a 3D display, we see that attention triggering is natural for a virtual experience. The primary headset control is the user's line of sight, and users already navigate the extended realities with their focus. In the future, activating one's focus on an object in a virtual world could bring up relevant objects, information, and interactions from the Universal Content Graph.
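A cross-multiverse bridge might simply pair two addressable endpoints, as in the sketch below; the URI-like address scheme is our invention for illustration only.

```typescript
// A bridge between addressable endpoints in different multiverses.
interface MultiverseBridge {
  from: string; // an addressable location or object
  to: string;   // an addressable location or object elsewhere
  note?: string;
}

// Hypothetical addresses for the Sandbox example above.
const landToFountain: MultiverseBridge = {
  from: "metaverse://sandbox/land/12,34",
  to: "metaverse://horizon-worlds/object/fountain-7",
  note: "Sister build by the same community",
};
```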
Imagine a refrigerator for a cooking experience in the metaverse that connects to your home refrigerator. Rather than a digital twin of your refrigerator, this is a refrigerator experience that works with refrigerators that implement its protocols. You interact by focusing your attention via line of sight and via voice or gesture commands. Opening the refrigerator displays a 3D model of your refrigerator with its doors open and visual representations of the foods inside. You can tilt the display or switch the order of products to see items that aren't visible.
But the real fun happens when you convert the 3D model into a virtual walk-in refrigerator with the contents neatly organized on shelves. You can adjust the grouping of items depending on what you have a taste for, food categories, or even proximity to expiration. You can also ask the AI to generate recipes based on what's available but not spoken for in your refrigerator at home. Activating an item enables access to nutritional information, recipes with nutrition calculations, and ideas for combining with other items in the refrigerator into a meal. It also provides access to the supply chain history of the products, remaining amounts, and edibility likelihoods. You can display a floating array of the product's ingredients, 360° videos from the manufacturer, related virtual experiences, relevant cooking shows, purchasing options, and AmazonFresh delivery times (or whatever delivery services exist at that time).
Focusing on one of the ingredients displays its history, nutritional facts, purchasing options, and virtual experiences, for example, related to its production and environmental impact. At any point in the exploration process, relevant choices are presented. You see what you choose to explore, nothing more. Mind you, the creator of the cooking experience doesn't curate the cornucopia of experience options. Rather, the options are the product of human-AI collaboration in the Metaweb ecosystem, in which participants receive rewards based on the value of their contributions. Importantly, smart filters apply, and, of course, you only activate the Metaweb when you need it. This entire experience could be had with a headset, a VR-enabled next-generation home theater, or an office setup.
The Metaweb has the potential to become the predominant discovery mechanism in the metaverse. Focus-driven navigation is natural given that line of sight is the primary control for head-mounted displays in virtual experience systems. Navigating with bridges doesn't require typing, which is awkward with hand controls and head gesturing. Voice and gestures are useful as secondary controls. The Metaweb doesn't rely on keywords to describe multidimensional objects and can handle the long tail, which could be even more important in virtual experiences than online.
Without a doubt, the Metaweb community will develop plug-ins for the Unity 3D and Unreal engines, the two dominant virtual reality development platforms, enabling virtual experience developers to access Metaweb functionality within their development environments. The community will also develop a virtual experience navigation SDK for integration into spatial browsers associated with headsets and glasses (e.g., Oculus Rift, Magic Leap, Verses). This will enable people to explore the online and extended realities, as well as attach smart tags and build bridges between and among these domains.
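As a speculative sketch of the "refrigerator protocols" idea above: any appliance implementing an interface like the one below could back the virtual experience. All names here are our assumptions.

```typescript
// A hypothetical appliance-side protocol for the cooking experience.
interface FridgeItem {
  name: string;
  quantity: number;
  expiresOn: string;    // ISO date
  reservedFor?: string; // "spoken for" by a planned meal
}

interface RefrigeratorProtocol {
  // Everything currently inside, for the 3D and walk-in renderings.
  listContents(): Promise<FridgeItem[]>;
  // Unreserved, unexpired items, e.g., as input to AI recipe generation.
  listAvailable(): Promise<FridgeItem[]>;
}
```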
We see quantum search or focus-based navigation of the Universal Content Graph eventually becoming the predominant way to search in the metaverse, and across multiverses and realities. The participant specifies which realities are relevant. For instance, you're in the metaverse. You activate the overlay for an object of interest and filter the relevant information from the Web based on the number and strength of supporting bridges. Maybe you filter for digital twins of the object in the physical reality. Or perhaps your smart filter is using a custom ranking algorithm from a developer in the ecosystem, who gets rewarded when you use it.
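A hedged sketch of that filtering step: rank the items bridged to the focused object by the number and strength of their supporting bridges, with the scoring function swappable for a developer's custom ranking algorithm. All names below are illustrative.

```typescript
// A bridge's target plus its community validation signal.
interface SupportBridge { to: string; confirmations: number }

type Scorer = (bridges: SupportBridge[]) => number;

// Default: more bridges with more confirmations rank higher.
const defaultScorer: Scorer = (bridges) =>
  bridges.reduce((sum, b) => sum + b.confirmations, 0);

function rankContext(
  bridges: SupportBridge[],     // bridges leaving the focused object
  score: Scorer = defaultScorer // swappable custom ranking algorithm
): string[] {
  const byTarget = new Map<string, SupportBridge[]>();
  for (const b of bridges) {
    const group = byTarget.get(b.to) ?? [];
    group.push(b);
    byTarget.set(b.to, group);
  }
  return [...byTarget.entries()]
    .sort(([, a], [, b]) => score(b) - score(a))
    .map(([target]) => target);
}
```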
We think the accountability provided by the Overweb's one-person-per-account policy will raise the vibe well above today's no-accountability web, which is beset with inauthenticity and theft. If bad actors faced consequences, they'd think twice. Crime would likely diminish, persisting mainly in environments that, unlike the Overweb, allow throwaway accounts. Having people take responsibility for those they introduce makes them think twice about who they bring into the fold and could ease the removal of bad actors' accounts.
Absent proof of humanity, the metaverse is dangerous and more so as it gets hyper-realistic. Imagine placing your worst IRL screaming incident or argument into a virtual world. Then multiply it 100x. We must not cede the metaverse to trolls.
Words, thoughts, and concepts make up the conceptual realm, which includes a theoretical level of theories and models and an ideal level of idealized entities. The Universal Content Graph can map the conceptual realm with words, inquiries, perspectives, claims, evidence, and data sources as nodes. This enables argument mapping, dictionaries, thesauruses, translation tools, and more.
Conceptual argument maps could connect to online content that supports, contradicts, or cites them, as well as to digital twins, geolocations, and virtual experiences. This would enable machines to process human knowledge at a higher level of performance and intelligence.
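An illustrative fragment of such an argument map, with the node and edge vocabulary taken from the paragraphs above and the shapes themselves our assumptions:

```typescript
// Node kinds named above; the shapes are illustrative.
type NodeKind =
  | "word" | "inquiry" | "perspective" | "claim" | "evidence" | "data";

interface ConceptNode { id: string; kind: NodeKind; text: string }

interface ArgumentEdge {
  from: string; // e.g., an evidence node or a piece of online content
  to: string;   // e.g., a claim node
  relation: "supports" | "contradicts" | "cites";
}

// Net support for a claim: supporting edges minus contradicting ones.
function netSupport(claimId: string, edges: ArgumentEdge[]): number {
  return edges
    .filter((e) => e.to === claimId)
    .reduce(
      (n, e) =>
        n +
        (e.relation === "supports" ? 1 : e.relation === "contradicts" ? -1 : 0),
      0
    );
}
```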
A word of caution. Such a knowledge graph would be useful for AGI. Therefore, its conception should address this possibility and minimize the risk of such a graph being used against humanity. To be clear, someone will build the graph, because it's valuable. We can, however, take a proactive approach by instilling elements that protect humanity into its foundation and licenses.
As we embark on the next chapter, we find ourselves standing on the edge of a technological revolution.
The Metaweb, and in particular its full instantiation, the Overweb, has the potential to unlock doors that were previously closed to us. The Overweb, founded on the principles of accountability, presence, and interconnection, goes far beyond the capabilities of today's web, and it will change the way we interact and engage with the digital world.
Get ready to journey with us as we explore how the Overweb can revolutionize industries, disrupt old systems and create new opportunities, increase transparency and trust, and allow for true decentralized ownership and control of data. The Overweb is the next level of the web, and its limitless potential is waiting to be unlocked. With its potential to change the way we live, work, and play, it's a must-see for those who are looking to shape the future.
1. This section is adapted from an essay in Benjamin, D. (2020). Ubiquitous Context. In F. Hegland (Ed.), The Future of Text (p. 118). London, UK: Future Text Publishing. https://permanent.link/to/the-metaweb/future-of-text
2. We have never been to a place that celebrates an animal the way Tulum does the jaguar. Balam, the jaguar, graces murals, paintings, and the signs of businesses and organizations, both visually and in name. In Mayan cosmology, the jaguar represents the power to face one's fears and to confront one's enemies, and is associated with vision: the ability to see during the night and to look into the dark parts of the human heart. https://permanent.link/to/the-metaweb/balam-jaguar
3. A similar system exists at the Museum of Old and New Art (MONA) in Hobart, Tasmania, Australia, which offers a GPS-enabled handheld device instead of a guidebook. The "O" keeps track of which exhibits are nearby and offers a set of audio choices related to each piece, including a summary description; the curator's notes; the museum owner's thoughts on the piece; and perhaps a review from a journal or newspaper, or the artist's telling of why and how they made it.