Signposts Towards a Human Noosphere

The idea of a human noosphere, or global collective consciousness, formed much of the idealism of the Web in the late 20th century. However, as we worked towards this vision, we discovered that our human natures, both as individuals and as collective organizations, introduced speed bumps along the way. As our world becomes increasingly connected, can an updated OARS (Obligations, Acknowledgements, Responses, Safeguards) framework help us address our own individual obligations and perceived biases to build a more beneficial and benevolent future for all? PCI welcomes suggestions from members of the coalition on what signposts we may need to steer us towards a more connected and civil future. Thoughts from the still-nascent noosphere welcomed.

________________________

By Dr. David Bray

Having become a father last year, I was walking home today, carrying my toddler son who had fallen asleep against my shoulder, when I began to reflect on what kind of world I would like him to live in come 2030. This triggered thoughts on the idea of a human ‘noosphere’.

The concept of a human noosphere – a global collective consciousness on the planet, or interconnected ‘Mind Space’ – arose in the 1920s. The idea was first introduced by geologists, who suggested there were three phases of life on Earth, starting with inanimate matter (the geosphere), followed by the arrival of animated life (the biosphere), and culminating in an ultimate phase in which humans transcend their individual sense of self and their internal motivations to achieve a collective consciousness that surpasses our individual selves (the noosphere). Some philosophers, notably Teilhard de Chardin, suggested that evolution’s natural selection tended towards increasing complexity of consciousness among lifeforms.

Through my work, I’ve spoken with Vint Cerf and other Internet pioneers who talked of online chat room discussions from the late 20th century, where people expressed their hopes that the World Wide Web could help us achieve a global consciousness, or noosphere. Much of the idealism for the Web included this hope for the future. Yet looking back on the last few decades, we now see some cautionary signs on the way to such an ideal. In attempting to work towards a greater human, global consciousness, we’ve discovered that our human natures, both as individuals and as collective organizations, introduce speed bumps along the way. We have also seen recent cases where the Internet and related technologies may be creating greater homogeneity of thought – producing echo chambers online or (worse) surveillance states that directly or indirectly pressure conformity of thought and behavior, and publicly shame those who act or see the world differently. Considering both potential futures, neither a highly polarized world full of acrimonious exchanges on the Internet nor a homogeneous, highly restrictive world containing only conforming thoughts sounds like a hopeful aspiration for 2030.

I’ve written a lot over the years about the importance of positive #ChangeAgents — individuals (including any one of us willing to do so) who “illuminate the way” and manage the friction of stepping outside the status quo. More recently, I’ve written about whether the increasing polarization we’re witnessing throughout the world is a temporary or a more long-term phenomenon tied to the current nature of the Internet and the different services, including apps, websites, and social media, that operate on it.

As I walked home this morning carrying my sleeping son on my shoulder, I puzzled over ways to overcome elements of human nature — including the fact that we each see the world differently as a result of our experiences, training, and more — to work towards a more beneficial and benevolent ‘noosphere’ for all (should such a thing ever actually emerge).

The OARS Framework

The last year has seen Europe’s General Data Protection Regulation (GDPR) go into effect. In several developed nations, concerns are rising about the terms and conditions associated with online services, which are packed with legal jargon that most people don’t read through fully before they click accept. Even when individuals do read these terms and conditions thoroughly, they are still presented with only a binary proposition: accept or reject.

As an alternative to the voluminous “terms and conditions” currently provided by online services, I would like to suggest a simple 2×2 table. The table is intended to take up no more than half a page, and in it any entity that provides services in the world, whether a corporation, startup, community, or NGO, can list short bullet points covering four important elements:

  1. Top-left: Obligations in this Context – What principles the entity believes should govern its relationship with its stakeholders.
  2. Top-right: Acknowledgements in this Context – What “known unknowns” may exist tied to its transactions and relationships.
  3. Bottom-left: Responses to Obligations – What the entity will do based on its expressed Obligations.
  4. Bottom-right: Safeguards to Acknowledgements – What the entity will do based on its expressed Acknowledgements.

Imagine if the public could expect to find a short, concise 2×2 table expressing these four crucial elements for every website and app. The table for this framework evolved out of talks I’ve given on both “The Future of Work” and “The Future of Governance,” which included a case exercise asking participants to think about their obligations, perceived biases or blind spots, the proactive steps they will take, and the safeguards they will put in place for a new initiative.
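
To make this concrete, here is one rough sketch of how an entity might also publish its OARS table in a simple machine-readable form alongside the human-readable half-page version. The OARSDeclaration structure, its field names, and the sample entries below are purely illustrative assumptions on my part, not an existing standard or any organization’s actual declaration.

```python
from dataclasses import dataclass, field
from typing import List
import json


@dataclass
class OARSDeclaration:
    """One entity's half-page 2x2 table in machine-readable form (hypothetical format)."""
    entity: str
    obligations: List[str] = field(default_factory=list)       # top-left
    acknowledgements: List[str] = field(default_factory=list)  # top-right
    responses: List[str] = field(default_factory=list)         # bottom-left
    safeguards: List[str] = field(default_factory=list)        # bottom-right

    def to_json(self) -> str:
        # Serialize so people and tools alike can inspect or compare declarations.
        return json.dumps(self.__dict__, indent=2)


# Hypothetical example entries, not any real organization's declaration.
example = OARSDeclaration(
    entity="Example News App",
    obligations=["Treat readers' attention and data as a trust, not a resource to exploit"],
    acknowledgements=["Our recommendation model may reflect biases present in its training data"],
    responses=["Publish a plain-language summary of how stories are ranked for each reader"],
    safeguards=["Convene an ombuds group to review reports of unintended uses of the service"],
)

print(example.to_json())
```

A published format along these lines would also let browsers, researchers, or watchdog groups compare declarations across services at a glance, rather than parsing pages of legal jargon.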

This updated OARS framework takes that question of “perceived biases” and asks an entity to acknowledge that some biases exist in any human endeavor because of our experiences, training, background, and more. For example, an organization sponsored by a certain group may receive subtle nudges from that sponsor and should acknowledge them. An engineering firm will probably be great at engineering efforts, yet may not see other perspectives outside its expertise. Even when an organization commits to “do its best” for a certain endeavor, there may be unknown factors that impact delivery, and these should be acknowledged.

Thoughts Towards the Future

As the last few decades have shown, there will be unintended uses, both helpful and harmful, for almost any tool or technology. In an increasingly connected world, we need more rapid mechanisms that can identify third-order or fourth-order unintended uses of technology and adjust appropriately. This updated OARS framework asks any entity to think about what safeguards it might implement should a well-intended service start to be used in unintended ways. For example, an organization might consider online advertisements that radicalize certain groups towards violence against others to be an unintended use of its service.

In this example, a potential safeguard could be an “Ombuds” group to which such concerns can be reported early, so the organization can rapidly learn, adjust, and respond accordingly.
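
As a rough sketch only, here is one way the intake side of such an Ombuds safeguard might look in practice. The ConcernReport structure, the severity scale, and the triage rule are illustrative assumptions rather than a description of any existing system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class ConcernReport:
    """A single report of a suspected unintended use of a service (hypothetical)."""
    reporter: str        # who raised the concern; could be anonymous
    description: str     # what unintended use is suspected
    severity: int        # 1 (low) to 5 (urgent), as judged at intake
    received_at: datetime


class OmbudsQueue:
    """Collects concern reports and surfaces those needing rapid human review."""

    def __init__(self) -> None:
        self._reports: List[ConcernReport] = []

    def submit(self, reporter: str, description: str, severity: int) -> None:
        self._reports.append(
            ConcernReport(reporter, description, severity, datetime.now(timezone.utc))
        )

    def needs_rapid_review(self, threshold: int = 4) -> List[ConcernReport]:
        # Simple triage rule: anything at or above the threshold goes straight to
        # the ombuds group instead of waiting for the next periodic review.
        return [r for r in self._reports if r.severity >= threshold]


queue = OmbudsQueue()
queue.submit("trust-and-safety team", "Ads appear to target users with radicalizing content", 5)
for report in queue.needs_rapid_review():
    print(f"[ESCALATE] {report.received_at:%Y-%m-%d}: {report.description}")
```

The point of the sketch is the feedback loop, not the particular code: concerns are captured early, and the most serious ones reach people empowered to adjust the service quickly.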

I want to emphasize that this OARS framework is still a draft, and only one piece of the complex puzzle of how we can resolve the current challenges of biases, echo chambers, distrust, privacy concerns, and unintended uses of technologies tied to our increasingly connected world. I propose the OARS framework as just one possible signpost that might allow us to more readily recognize that corporations, startups, communities, and organizations are all composed of people with different thoughts, views, and experiences.

A significant amount of the polarization occurring in the world seems to be reinforcing existing biases in individuals, communities, and organizations, instead of triggering conversations about how we might co-exist amid a plurality of different perspectives on the world (barring hateful or injurious biases, which do need to be remedied). Additionally, when considering the arrival of AI, machine learning, and other algorithms that influence human activities, we should be mindful that such algorithms will only be as good as the diversity, representativeness, and robustness of the data fed to them. This further underscores the need to apply a similar OARS framework to AI, machine learning, and algorithmic endeavors as well.

If 2030 is to be a more benevolent future, as opposed to an acrimonious or conforming future, then recognizing we may need new signposts to steer us towards more people-centered endeavors is an important next step.

Thoughts from the still-nascent noosphere welcomed.
