Storj Launches Version 3 of Its Decentralized Cloud Storage Platform

On October 30, 2018, Storj Labs released a public alpha for their version 3 (V3) platform, enabling developers and companies to test their decentralized cloud storage solution. The team also shared an updated white paper featuring their latest research on decentralized and distributed systems in cloud storage.

Decentralized Cloud Storage Refresher

Decentralized cloud storage solutions like Storj enable users to securely store their data on decentralized clouds utilizing peer-to-peer networks instead of storing their information on the servers of large corporations. This model works like an Airbnb for data; users with extra hard drive space can rent out their space as a place for other users to store their information.

The decentralized cloud storage model has several benefits over centralized cloud storage:

  • “Trustless” security: Users are the only ones who have access to their private keys, and are, therefore, the only ones who can access their files. Decentralized storage providers or hackers can’t access a user’s private information. 
  • Lightning-fast networks: In centralized cloud storage models, download speeds are limited by the capacity of a single data center. In a decentralized network, files are served from many nodes in parallel, so the more nodes sharing storage and bandwidth, the faster the network can deliver data. 
  • Open market for data storage: By creating an open market for storage, decentralized storage companies can provide lower rates than those of incumbents such as Amazon, Microsoft and Google. 

Storj Public Alpha

Starting today, the Storj public alpha allows developers and companies to access and build decentralized cloud storage applications by downloading and running the V3 test network on their local hardware.

With their latest update, the Storj team aims to set themselves apart from competitor projects like Filecoin, Sia and MaidSafe, and position themselves as leaders in the decentralized cloud storage space.

In an interview with Bitcoin Magazine, Storj Co-Founder and CSO Shawn Wilkinson shared, “The biggest reason I think we are the leaders in decentralized cloud storage is because of our experience and track record in the market. We are now on the third iteration of our network, while others haven’t conducted their initial launch after several years in development. Not only is our early team experienced, we’ve also hired new individuals with some of the best experience in the industry.”

Wilkinson also noted that the Storj team hopes to drive practical adoption of decentralized cloud storage by making it simple to rewire existing cloud storage solutions with Storj’s decentralized cloud storage platform.

For example, Storj V3 is built to be Amazon S3 compatible, meaning that integrating Storj into applications that currently use centralized cloud storage generally requires changing only a few lines of code and takes just a few minutes.
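As a rough illustration of what such a switch can look like, here is a minimal Python sketch using the boto3 S3 client pointed at an S3-compatible Storj gateway. The endpoint URL, bucket name and credentials are placeholders for illustration, not official Storj values.

    # Minimal sketch: point an existing S3 client at an S3-compatible
    # Storj gateway instead of AWS. The endpoint, bucket and credentials
    # below are placeholders, not official Storj values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:7777",    # hypothetical local gateway
        aws_access_key_id="YOUR_ACCESS_KEY",      # placeholder credentials
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # The rest of the application code is unchanged from a standard S3 setup.
    s3.upload_file("backup.tar.gz", "my-bucket", "backup.tar.gz")
    print(s3.list_objects_v2(Bucket="my-bucket").get("Contents", []))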

Storj Partners

Decentralized cloud storage could be helpful for a variety of use cases.

“Any application or company that is generating data outside of the public cloud, or has large file sizes, would be a perfect client. This is because cloud providers will often charge egress fees to transfer your data off the network. Also, because of our distributed nature, our platform is most cost effective for large files. However we work great for anyone in need of cloud storage and can lower costs for most storage use-cases,” shared Wilkinson.

Current Storj V3 partners include Couchbase, MongoDB, FileZilla, InfluxData, Kafka and Blocknify.

"We chose Storj because we shared similar values of privacy via end to end encryption and creating resilience through decentralization,” said Chris Cowles, co-founder and CEO at Blocknify, a Docusign competitor that leverages blockchain technology. “Because Storj uses S3 standard of integration, implementing Storj is familiar and easy.”

Key Storj V3 Developments

The updated Storj white paper highlights the team’s learnings from the V2 network, addresses design constraints and security deliberations, defines the Storj platform’s relationship with blockchain technology, and addresses the team’s key goals moving forward.

Scalability

While Storj V2 could scale smoothly only to about 100 petabytes of data, V3 aims to handle exabytes (and more) of data storage by scaling horizontally, allowing it to compete with incumbent cloud storage solutions. The team plans to update the alpha in early 2019 so that node operators can share their excess storage capacity with the network.

Architecture

Functions of the Storj network have been decoupled into separate components to allow developers to make changes to parts of the system without impacting the whole. The team hopes this will lead to faster development and greater open-source contribution.

Data Uploading

When files were uploaded in Storj V2, the data would be encrypted, sharded (split into different pieces), replicated and distributed. Wilkinson explained that in V3, “Files are divided into segments, which are then divided into stripes. After, stripes are organized into erasure shares and uploaded.” Storj claims that erasure shares enable video streaming and buffering functionality similar to the YouTube experience, even at 4K resolution.

At a high level, erasure codes allow the receiver to reconstruct the original data from any sufficiently large subset of the encoded pieces. For example, erasure codes are used by satellites when they transmit data because they assume some of the data will not reach its final destination.
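To make the principle concrete, the toy Python sketch below uses the simplest possible erasure code, a single XOR parity piece over two data pieces, so that any two of the three pieces are enough to rebuild the original stripe. This is only a teaching stand-in, not Storj’s actual encoding, which spreads many more shares per stripe across the network.

    # Toy erasure-code illustration: two data pieces plus one XOR parity
    # piece; any two of the three are enough to reconstruct the data.
    # A teaching stand-in, not Storj's actual encoding scheme.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    stripe = b"hello world!"                 # pretend this is one stripe
    half = len(stripe) // 2
    piece1, piece2 = stripe[:half], stripe[half:]
    parity = xor_bytes(piece1, piece2)       # the third "erasure share"

    # Suppose piece1 is lost; rebuild it from the surviving two pieces.
    recovered = xor_bytes(parity, piece2)
    assert recovered + piece2 == stripe
    print("reconstructed stripe:", (recovered + piece2).decode())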

Storj’s decision to use erasure codes for file resiliency sets it apart from most decentralized cloud storage providers, which rely on replication to maintain reliability when storage nodes fail.

In an interview with Bitcoin Magazine, JT Olio, Storj Labs director of engineering, explained the reasoning for Storj’s transition to erasure codes:

“Purely using erasure codes for resiliency is much more efficient in terms of the required storage capacity and bandwidth used to meet service level agreements. We found that our new architecture is able to achieve AWS-level resiliency with an expansion factor of 2-3, meaning for every gigabyte of data stored, we use 2-3 gigabytes of storage capacity on the network. Systems that use replication to achieve the same resiliency would require 10-16 gigabytes of storage capacity per gigabyte of data stored. In our preliminary tests, compared to our previous network (which predominantly used replication), we have greatly improved file durability while cutting the expansion factor in half.”
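For a back-of-the-envelope sense of those figures: with a k-of-n erasure code, the expansion factor is simply n divided by k, while replication multiplies storage by the number of copies kept. The parameters in the sketch below are hypothetical, chosen only to fall within the ranges Olio describes.

    # Rough arithmetic behind the expansion factors quoted above.
    # The k/n values are hypothetical examples, not Storj's real parameters.

    def expansion_factor(k: int, n: int) -> float:
        """A k-of-n erasure code stores n shares but needs only k to rebuild."""
        return n / k

    data_gb = 1.0
    erasure = expansion_factor(k=20, n=40)   # hypothetical 20-of-40 code -> 2.0x
    replication_copies = 10                  # low end of the quoted 10-16x range

    print(f"erasure coding: {data_gb * erasure:.1f} GB of capacity per GB stored")
    print(f"replication:    {data_gb * replication_copies:.1f} GB of capacity per GB stored")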

Revenue Sharing

Storj launched a program to share 10 percent of every dollar earned from clients that Storj Partners introduced to the network. The team hopes this will help open-source companies generate revenue when their users store data in the cloud.


This article originally appeared on Bitcoin Magazine.


by Erik Kuebler via Bitcoin Magazine
