Marking the Web’s 35th Birthday: An Open Letter from Tim Berners-Lee

#web #decentralize

https://webfoundation.org/2024/03/marking-the-webs-35th-birthday-an-open-letter/

[Edit]: Due to possible legal liability from copy-pasting the full article here, I am only keeping a link to the original site.

5 Likes

I cannot believe the arrogance of this poor old fellow: to claim so emphatically that he “invented the web”. :disappointed_relieved:

He can claim that because it is true:

https://home.cern/science/computing/birth-web

https://webfoundation.org/about/vision/history-of-the-web/

1 Like

I was steeling myself for a bland, common-knowledge, predictable response like the one above. I’m a network user & admin dating from 10 years before his “invention” date, and I can attest that far more than his own personal contributions went into building the “web” we know today. That’s why his oft-repeated claim seems self-aggrandising, and the fawning admiration from decades of groupies only makes it worse.

Ooohh boy is there a good amount of stuff to unpack here.

tl;dr (because it’s DEFINITELY too long): Tim Berners-Lee is attempting to ascribe principles and idealism to an internet that no longer subscribes to either. The Solid protocol doesn’t solve a problem most internet users have, in the way Mosaic did in 1993.

I too wax nostalgic about the early days of the internet, to an extent. I actually still peruse Usenet and IRC and even a few BBSes (which have long since moved to Telnet or SSH vs. direct dial-up)…but I’d submit that there’s more to the story than “the good ol’ days before Google and Microsoft basically-monopolized e-mail”.

The internet was a much smaller place at the time. Tim might consider the first decade (i.e. 1991-2001) to be the one that “fulfilled the promise”, but there are some rose-tinted glasses in that assessment. The web went public in 1991 and Mosaic arrived in 1993, sure…but it wasn’t until 1995 that Apache was released. CERN httpd had been around since 1990, but you practically needed a NeXT workstation or a commercial Unix box to run it, because Linux was only a kernel at the time. I’m hard-pressed to find a free (let alone FOSS) way to have run a web server in 1991, and even if there had been one, hosting-grade bandwidth was prohibitively expensive. Yes, the internet of 1991-2001 was “decentralized” in comparison to the FAANG internet we have today, but the divide between those who could host content (which required expensive hardware and connectivity) and those who could only consume it (dial-up internet whose monthly fee was less than the cost of one dinner at a sit-down restaurant) was enormous. You can’t have the “decentralised spirit I originally envisioned” while also having such a huge cost disparity between consuming content and serving it. That was obvious in 1991.

The second decade was fueled by the availability of broadband internet connectivity. The demand for that bandwidth, I would argue, was fueled by the desire for media content. I submit that Napster was at the forefront of that trend, and in so doing drew the ire of the media creation and distribution industry. The nature of copyright infringement in the case of Napster is tangential to the point; the ability for an average user to get music from their computer instead of a record store was going to go one of two ways: the way it did (commercial sales of music online that led to the iTunes Music Store and later Spotify), or the total prevention of media distribution on the internet. This story largely repeated itself for the movie industry as well, with LimeWire and BitTorrent being the harbingers of Netflix and other movie streaming services. Once the content expectation of the internet went from Usenet posts and low-res photos to music and video content (i.e. things that took both time and money from teams of people to create), a paradigm shift was inevitable.

That’s just the socioeconomics of it. Consuming content was - and still is - way easier than serving it. Even today, in 2024, this is palpable the further down the stack you go. Wix and Squarespace make it extremely easy to build a ‘canned’ website where a user just brings their text, photos, and a few A/B choices to the creation process…but those arrived well into the second decade. GeoCities was really the precursor to those services, but users had to learn HTML and work within the strict limits that a free-hosting-meets-social-network service was able to provide…and you were still dependent on the centralization that GeoCities provided, for all of the socioeconomic reasons already discussed.

The interesting part, I think, is that the third decade made a decentralized internet even easier to implement. A super-inexpensive Raspberry Pi can serve a low-volume website, as can NAS appliances made by Synology or QNAP. There’s a boatload of free/FOSS software released in the third decade that can leverage a residential internet connection to host content, from Virtualmin and TurnKey Linux to the Raspberry Pi and its mountain of projects. The third decade made the decentralization of the internet possible in ways that were almost inconceivable in the ‘golden era’ of the first decade.
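
To put in perspective just how low that bar now is: below is a minimal sketch of the kind of static-file server a Raspberry Pi could run, using nothing but Python’s standard library (the `public` directory and port 8080 are placeholders I picked; a real deployment on a residential connection would still want a reverse proxy, TLS, and dynamic DNS in front of it):

```python
# Sketch: serve the ./public directory on port 8080 using only the stdlib.
# Plenty for a low-volume personal site on a Raspberry Pi; TLS, caching and
# access control would normally be handled by a reverse proxy in front of it.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="public")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```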

And yet, to his point, it’s less popular. Why? Because there’s no motivation, and because centralization is almost inevitable. Perhaps I make the best tutorial videos ever devised, but zero people will find them on a self-hosted MediaGoblin instance, vs. the thousands who would find them on YouTube. The availability of an audience is a strong motivator for most people creating content, and as much as Berners-Lee extols the virtues of a decentralized internet, it still needs to be indexed and searchable somehow. This was done with physical cards in card catalogs in the pre-internet days (at a library…centralization), with Archie and Lycos and AltaVista in the early pioneering days, and with Google and Bing today. Keeping an index is hard, and while I can type even longer dissertations than this one about the things I loathe about both Microsoft and Google, I can at least acknowledge that indexing the internet is an incredibly challenging and daunting task.
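
For what it’s worth, the core data structure behind all of those search engines (an inverted index) is almost trivially simple; the daunting part is building and maintaining one for billions of constantly-changing pages, plus crawling, ranking, and spam-fighting. A toy sketch, with made-up documents, just to show how small the kernel of the idea is:

```python
# Toy inverted index: map each term to the set of documents that contain it.
# The structure itself is simple; doing this for billions of pages, ranked
# and kept fresh, is what makes web-scale indexing so hard.
from collections import defaultdict

docs = {
    "page1": "decentralized web hosting on a raspberry pi",
    "page2": "self-hosted tutorial videos on a mediagoblin instance",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the IDs of documents containing the (single) query term."""
    return sorted(index.get(term.lower(), set()))

print(search("hosting"))  # ['page1']
```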

I could go on and on, obviously, but I’ve got two more related thoughts. The First Decade was largely defined by the enthusiasts. I’d even argue that “The Zeroeth Decade”, i.e. 1981-1991, the decade before HTTP and the web, was defined even more by the enthusiasts who made SMTP and NNTP and IRC and BBS systems and all the other alphabet soup that was post-internet but pre-web. The need to seek out internet connectivity was a barrier to entry. Like all barriers, this was a double-edged sword, as it kept out great contributors as well as troublemakers…but let’s not pretend that the number of PCs in homes in 1996 (i.e. halfway through “the first decade”), even per capita, matched the number of iPhones in active use by 2012.

The reason for this is that there will always be more consumers than creators. This has been the case since pen was put to parchment. Even here, in the /e/OS community, where the community is almost a self-selecting subset of smartphone users who are sufficiently concerned about data harvesting and sufficiently technical to install /e/OS on their phones (or willing to purchase a pre-installed phone [itself more challenging than buying an iPhone]), there are way more consumers than creators. How many confirmed /e/OS users are there, vs. the number of contributors on GitLab? I’d call even 3% optimistic, but assuming it’s that number, that means 97% of us are consumers. Any guesses on VLC? Several orders of magnitude fewer, no doubt. How about cars: how many people here who own their car perform repairs of consequence? How about restaurant staff vs. customers, or retail staff vs. customers, or people who eat food vs. people who farm it? This disparity shows up in every area of life.

…which is what brings me to my closing thought: The Solid Protocol. Mosaic and HTTP took over from the alphabet soup of other internet protocols because they allowed easy consumption. This ease of consumption led to a paradigm shift in real life. In 1991, life was still mostly analogue; in 2024, the internet permeates everything. As a result, using computers isn’t really optional, meaning that most people who use computers use them out of necessity rather than enthusiasm. Consequently, how does one feel a need to contribute when even consumption is compulsory? When that is the mindset of many computer users, Solid itself looks like a barrier rather than a benefit. Apple makes it almost impossible to avoid storing data in iCloud, Google makes it almost impossible to avoid storing data in Google Drive, and Microsoft makes it almost impossible to avoid storing data in OneDrive…and that’s just the consumer space. I have no idea what information LexisNexis or Equifax have on me, and I haven’t heard of half the trackers that AppChoices allows me to opt out of, so God only knows what personal data they have, yet Tim wants end users to embrace Solid to “decide how data is managed, used, and shared”?
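
For anyone who hasn’t looked at it: a Solid “pod” is essentially an HTTP-addressable data store that the user controls, and a public resource in it is read with an ordinary GET (anything private requires Solid-OIDC authentication, which I’m leaving out). A rough sketch, against a hypothetical pod URL, just to show the shape of the thing Tim is asking ordinary users to adopt:

```python
# Sketch: read a *public* resource from a hypothetical Solid pod over HTTPS.
# Solid data lives at ordinary web URLs the owner controls; private resources
# additionally require Solid-OIDC authentication, which is omitted here.
import urllib.request

POD_RESOURCE = "https://alice.example.org/public/profile"  # hypothetical URL

req = urllib.request.Request(POD_RESOURCE, headers={"Accept": "text/turtle"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```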

It really comes down to the fact that Mosaic solved a concrete problem and made the internet more usable by enabling consumption, while Solid does not. Solid, in broad strokes, enables what has been around for much of the Third Decade - personal ownership, accessibility, and security of one’s personal data. There has never been a greater number of avenues to accomplish this, whether Free (like XigmaNAS) or Easy (like Synology), and yet the per-capita number of internet users who take control of their data is likely lower than it was ten years ago.

Berners-Lee was, and still is, brilliant…but the problem he’s looking to solve is the wrong one. The problem isn’t a lack of technical solutions for decentralization; there is no shortage of those. The problem is that the centralized services and large companies provide “good enough” with minimal effort. Solid doesn’t seem to do that.

I value everyone who got this far.

7 Likes

Yes. DEFINITELY.
:wink:

I was concerned about that, too.

The Web Foundation’s website, unironically, connects to google.com, googletagmanager.com, and cdn-images.mailchimp.com. (About Mailchimp.) Lol.

1 Like

This topic was automatically closed after 6 days. New replies are no longer allowed.