Decentralized Web: meeting report

By John Ryan

This June, some of the founders of the web and the Internet called for the web to be rethought, building in new goals of openness, security, and permanence, and moving the architecture toward a decentralized model. Many of the ideas align strongly with the founding precepts of the People-Centered Internet, particularly trust (Precept 4) and user control of data (Precept 7: "Personal information in the digital environment is protected by law and controlled by the individual").

As IEEE Spectrum’s summary of the meeting asked: will the Decentralized Web Summit be looked back on like the first Hackers’ Conference or the Mother of All Demos? Many participants felt that they were at the beginning of something new, a wave to redesign the web so that it serves the people it connects, or at least shifts some balance of power toward them.

The basic problems the Decentralized Web aims to resolve are:

  • that users and their data are increasingly herded into proprietary walled-garden silos (Facebook, LinkedIn, Google apps, etc.) that offer no way to share content between them
  • that users lack control of their own data
  • that the architecture of trust needs to be updated: the original Internet and the original web assumed all users were trustworthy, and protections against bad actors now need to be built in

Technologies explored and showcased at the Decentralized Web Summit included derivatives of blockchain (some with truncated side chains to lower processing complexity and add human curation), the DAO (a Decentralized Autonomous Organization built on Ethereum, the platform behind the digital currency Ether), and distributed hash tables (Kademlia and others). Each project has at its heart the creation of a newly architected, truly peer-based web. Demonstrations included, literally, publishing a website that was pushed not to a central server but to a distributed, S3-style storage web; the URL created included a full, Bitcoin-style content hash.
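The common thread in these systems is content addressing: instead of naming the server that hosts a page, the address is derived from a cryptographic hash of the page itself, so any peer (or a distributed hash table of peers) can serve it and any client can verify what it receives. Below is a minimal sketch of that idea in Python; the function names and the URL scheme are illustrative, not drawn from any specific project shown at the summit.

    import hashlib

    def content_address(data: bytes) -> str:
        """Derive a permanent, location-independent address from the content itself."""
        return hashlib.sha256(data).hexdigest()

    def verify(data: bytes, address: str) -> bool:
        """Check that bytes fetched from any peer really match the address used to fetch them."""
        return content_address(data) == address

    # Publishing a page: the "URL" is a hash of the page, so the page can live
    # on any number of peers rather than on one central server.
    page = b"<html><body>Hello, decentralized web</body></html>"
    addr = content_address(page)
    print("web+hash://" + addr)   # illustrative scheme, not a real one
    assert verify(page, addr)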

The general view is that this set of technologies can and should enable a shift back toward user control of personal data, toward stronger privacy, and toward strong protections against malware. The unanswered questions remain: to what extent will the technologies truly end up helping the billions of users who need better privacy and control, rather than simply enabling a next-generation dark web to emerge? Will the giants who dominate today’s Internet adopt these technologies, tolerate their existence, or aim to eliminate them? And how will powerful governments act in the face of technologies that hand more power, and deeper, ubiquitous encryption, to decentralized, more anonymous users?

Some key points from the Decentralized Web meeting:

  • Web initiator Tim Berners-Lee focused on challenging the growing dominance of closed groups and walled gardens. Today’s Internet, as used by billions, revolves around silos, closed user communities: we communicate with friends on Facebook and with colleagues on LinkedIn, he noted. We have messaging clients from Skype and WhatsApp and WeChat and Line and more. Payments can be made via PayPal or Apple Pay or Android Pay and more. None of these communicate with each other.
  • Berners-Lee, Internet Archive founder Brewster Kahle, and author Cory Doctorow challenged the tradeoffs around security and privacy: we lost privacy to get free services; we lost security to get easy logins to fun sites; and now governments and commercial agencies know our every move.
  • Vint Cerf, co-creator of TCP/IP and initiator of the People-Centered Internet project, called for permanence to be built into Internet content. Today’s reality is that much of the Internet’s vast array of content is ephemeral: web sites, pieces of content within those pages, postings on Facebook, and so on disappear from view. They are edited, taken down, the links become invalid, or the content moves behind a wall for subscribers only. How, speakers asked, can we build history, our own personal history and society’s, into the actual mechanisms of creating Internet content? The Internet Archive does a decent job of caching as much content as it can (and has some remarkable, historic successes around that), but, Cerf asked, why can’t the act of publishing content create its own permanent archive? (A sketch of what that could look like follows below.)
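Taking Cerf’s question literally, and assuming content-addressed storage like that sketched earlier, "publishing creates its own archive" could mean that every revision of a page is stored under its hash and appended to an immutable history, so editing or taking a page down never erases earlier versions. The class and names below are a hypothetical illustration, not a description of any system demonstrated at the summit.

    import hashlib
    import time

    class ArchivingPublisher:
        """Toy publisher in which every act of publishing also archives the revision."""

        def __init__(self):
            self.store = {}    # content hash -> bytes, never deleted
            self.history = {}  # name -> list of (timestamp, content hash)

        def publish(self, name: str, data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            self.store[digest] = data
            self.history.setdefault(name, []).append((time.time(), digest))
            return digest

        def latest(self, name: str) -> bytes:
            _, digest = self.history[name][-1]
            return self.store[digest]

        def revisions(self, name: str) -> list:
            """Every version ever published under this name, oldest first."""
            return [digest for _, digest in self.history.get(name, [])]

    site = ArchivingPublisher()
    site.publish("pci.example/news", b"first draft")
    site.publish("pci.example/news", b"edited draft")
    print(site.revisions("pci.example/news"))  # both hashes survive the edit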