

IPFS: The Decentralization of the Web and the Future of Blockchain

Donatella Maisto   May 17, 2020 00:00 4 Min Read


It was in 1980, while working at CERN as a software engineering consultant, that Tim Berners-Lee created, for internal use, the first software to store information using random associations, designed to make it easier to share information between CERN's different centres. 

 

This prototype, never published, formed the conceptual basis for the future development of the World Wide Web. It was not until 1989 that Tim Berners-Lee proposed a global hypertext project, which became known as the World Wide Web. Berners-Lee laid the cornerstone of the web by imagining "a decentralized information management system." 

 

On the occasion of the web's thirtieth birthday, it was Tim Berners-Lee himself who pointed to growing abuses on the internet that must be stopped, adding: "If we fail to defend the freedom of the 'open' web, we risk a digital dystopia of radical inequality and abuse of rights. We must act now."  

 

That boundless space, which has "created opportunities, given voice to marginalized groups and made our daily lives easier," is becoming increasingly ambivalent: relegated to enclosed spaces by server farms, in the hands of a few mammoth technology hubs that operate independently of one another and are driven by less than philanthropic logic. 

 

These unflattering trends are accompanied by another problem, far more technical and operational, which is linked to and reinforces what has already been described and which, if managed according to ethically correct paradigms, could lead to a new life for the web, a sort of Renaissance of the modern "arts." 

 

The communication protocols that govern the web seem to "suffer" from obsolescence. 

 

The technologies and file formats used at the dawn of computer networking are now unusable, and communications sent between the late 1960s and the early 1970s are no longer readable. 

The same problem that affected data from the past is bound to repeat itself in a few years, given the speed of change in telecommunications and computer networks. 

 

Working on archiving HTML pages, entire web portals, and multimedia content to keep them available and usable over time could be a good way forward, although not without obstacles, first of all the impossibility of capturing all the new content that appears on the net every day. 

 

Secondly, those who have attempted this kind of archiving have built it on centralized systems, which can therefore become unreachable by "web surfers" for the most disparate reasons. 

 

There is growing interest, however, in a protocol that can store the information on the web without weighing too heavily on any single technological infrastructure, in other words decentralizing it. The keyword is IPFS, the InterPlanetary File System. 

 

IPFS draws on blockchain technology and the BitTorrent peer-to-peer protocol, creating a permanent web by distributing "bits" of portals and websites among all the Internet users who decide to install the client program associated with the project on their computers. 

 

Each file, and every block within it, is assigned a unique identifier, which is a cryptographic hash. Duplicates are removed across the network and a version history is tracked for each file, which leads to permanently available content. 
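
A minimal Python sketch of this content-addressing idea (real IPFS identifiers are multihash-based CIDs rather than bare SHA-256 hex digests, so this only illustrates the principle):

```python
import hashlib

# Toy content-addressed store: keys are SHA-256 hashes of the content itself.
store = {}

def add(content: bytes) -> str:
    """Store content under its own hash and return the identifier."""
    content_id = hashlib.sha256(content).hexdigest()
    store[content_id] = content          # re-adding identical bytes reuses the same key
    return content_id

cid1 = add(b"hello decentralized web")
cid2 = add(b"hello decentralized web")   # duplicate content
assert cid1 == cid2                      # same content, same identifier
assert len(store) == 1                   # stored only once: deduplication for free
```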

 

In addition, this mechanism guarantees the authenticity of the content: when you search for a file, you essentially ask the network to find the nodes that store the content behind the unique identifying hash associated with it. 

 

Links between nodes in IPFS take the form of cryptographic hashes, thanks to its Merkle DAG (Directed Acyclic Graph) data architecture.
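
As an illustration only, here is a toy Merkle DAG in Python: the file is split into blocks, each block is identified by its hash, and a root node links to its children by those hashes, so the root identifier changes whenever any block changes. The chunk size and JSON node encoding are arbitrary choices for this sketch, not IPFS's actual formats:

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_dag(data: bytes, chunk_size: int = 4):
    """Split data into blocks and link them from a root node by their hashes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    blocks = {h(c): c for c in chunks}                       # leaf nodes keyed by their hash
    root = json.dumps({"links": [h(c) for c in chunks]}).encode()
    return h(root), root, blocks                             # the root hash identifies the whole file

root_id, root_node, blocks = build_dag(b"hello world!")
# Changing one byte of any block changes that block's hash, which changes
# the root node's links and therefore the root identifier itself.
```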

 

What are the benefits? 

 

1. Each file is assigned a unique identifier, which is a cryptographic hash

2. No duplication: files with the same content cannot be duplicated and are stored only once 

3. Tamper-proof: data is verified against its checksum, so if the hash changes, IPFS knows the data has been tampered with (see the sketch just below) 
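
The tamper-proof property can be sketched in a few lines of Python: whatever peer supplies the bytes, the requester recomputes the checksum and rejects anything that does not match the identifier it asked for. This is a simplified stand-in for IPFS's real verification of CIDs:

```python
import hashlib

def fetch_and_verify(content_id: str, peer_store: dict) -> bytes:
    """Fetch content from an (untrusted) peer and check it against the requested hash."""
    data = peer_store[content_id]                       # bytes supplied by some peer
    if hashlib.sha256(data).hexdigest() != content_id:  # recompute the checksum locally
        raise ValueError("content does not match its identifier: tampered or corrupted")
    return data

cid = hashlib.sha256(b"original").hexdigest()
honest_peer   = {cid: b"original"}
tampered_peer = {cid: b"modified"}            # same key, different bytes

fetch_and_verify(cid, honest_peer)            # returns b"original"
# fetch_and_verify(cid, tampered_peer)        # raises ValueError
```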

 

Each node stores only the content it is interested in and indexes information that lets it work out who is storing what. The IPFS framework essentially eliminates the need for centralized servers to deliver website content to users. 
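
A toy version of that "who is storing what" index might look like the following Python sketch; in real IPFS the index is itself a distributed hash table spread across the nodes, not a single dictionary, and the node names here are hypothetical:

```python
import hashlib

# Toy routing index: content hash -> the set of nodes announcing that they hold it.
provider_index: dict[str, set[str]] = {}

def announce(node_id: str, content: bytes) -> str:
    """A node tells the network it can provide this content."""
    content_id = hashlib.sha256(content).hexdigest()
    provider_index.setdefault(content_id, set()).add(node_id)
    return content_id

def find_providers(content_id: str) -> set[str]:
    """Ask the index which nodes can serve the content behind a hash."""
    return provider_index.get(content_id, set())

cid = announce("node-A", b"a page someone pinned")
announce("node-B", b"a page someone pinned")      # a second node holds the same content
print(find_providers(cid))                        # {'node-A', 'node-B'}
```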

 

With IPFS, instead of looking for locations (servers), you search for the content itself. The machine making the request does not have to trust a single server to provide the required file; it can turn to millions of computers that may be able to provide that specific file. 

 

The IPFS protocol distributes "components" of portals and websites among all users who decide to install the client program associated with the project on their computer. When the request is received, the IPFS system will automatically search for the stored pieces. 

 

If a server becomes unreachable and a portal no longer opens, the IPFS system will automatically search for the archived "pieces" corresponding to the missing ones and make them available again. 
These distributed and decentralized archives also allow you to consult previous versions of the same page or portal. 

 

The search for content is thus based on the hash identifier, so you can be sure of finding exactly what you were looking for, while also remaining safe from network malfunctions, precisely because the information underpinning the project is scattered and redundant across numerous independent nodes. 
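
Putting the toy pieces above together, a simplified retrieval loop could try independent nodes one by one and accept only content that matches the requested hash, which is roughly the resilience being described (the node names and stores here are hypothetical):

```python
import hashlib

def retrieve(content_id: str, providers: dict[str, dict[str, bytes]]) -> bytes:
    """Try each independent node in turn until one returns content matching the hash."""
    for node_id, node_store in providers.items():
        data = node_store.get(content_id)
        if data is None:                                    # node is down or lacks the block
            continue
        if hashlib.sha256(data).hexdigest() == content_id:  # verify before trusting any node
            return data
    raise LookupError("no reachable node could supply verified content")

cid = hashlib.sha256(b"archived page").hexdigest()
nodes = {
    "node-A": {},                              # unreachable / does not have the block
    "node-B": {cid: b"tampered page!!"},       # returns bad data, rejected by the check
    "node-C": {cid: b"archived page"},         # honest replica
}
print(retrieve(cid, nodes))                    # b'archived page'
```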

 

The obvious advantages of the IPFS distributed storage model are much more efficient data storage and immutable permanence, and thus: 

 

1. No reliance on servers 

2. Cost reduction 

3. Reduction of ISP censorship 

4. A lasting history of the data stored on the network 

 

The performance of the IPFS protocol is naturally measured against that of the protocol currently most used on the web, HTTP. Adopted by web browsers in 1996, it is the basic protocol of our web browsing and the backbone of the client-server paradigm. 

 

The key problems arising from the use of HTTP today are the result of the massive increase in Internet traffic: 

 

1. Inefficient content delivery, with files downloaded from a single server at a time 

2. Expensive bandwidth costs and file duplication 

3. Increased centralization of servers and providers leading to increased Internet censorship 

4. Fragile history of information stored on the Internet and short lifespans of web pages 

5. Slow connection speed 

 

This is one of the weaknesses of the current centralized web. Blocking a site simply requires finding the server on which it is hosted and asking the operator to switch it off, or asking telecom operators to block connections to the server's specific IP address. In the case of a site served by a large number of users' computers, all of those computers would have to be blocked, which greatly increases resistance to censorship. 

 

The decentralized web, therefore, seems to offer solid guarantees against censorship and internet blockages, giving users new control over their own data. Once again, the essential ingredients of this new web are cryptography and peer-to-peer technology.  

 

Centralized servers would be replaced by distributed blockchain nodes. Users connect to a dapp through a specialized or decentralized web browser, or through a plugin, and this browser interacts with the back-end logic of a software program running on a distributed network. 

 

Every step forward that technology takes brings both positive and negative aspects. The important thing is to move forward, understand, overcome, improve, and facilitate, in total transparency, reliability, and safety. 

 

Even Web 4.0 and its development horizon no longer frame the relationship between humans and machines as a dichotomy, but increasingly highlight their duality. 

 

