Solana - 8 Key Innovations Simplified and Analogized

Note: This is a long blog, but if you want to completely understand why Solana is revolutionary, be patient and go through it.

Harsh N. Patel · Sep 29, 2021

19 min read

Solana has 8 key innovations that set it apart from other blockchains and look very promising. The first innovation is Solana's consensus mechanism, called Proof of History (PoH). PoH is a high-frequency Verifiable Delay Function; don't worry, I'll simplify Verifiable Delay Functions.

"A Verifiable Delay Function requires a specific number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified." A VDF takes a long time to execute but very little time to verify and agree on a single output (the output here would be reaching consensus, in other words creating a block).

Verifiable Delay Functions are analogous to Time-Delay loops in normal programming.

PoH, being a high-frequency VDF, is used to order events chronologically using cryptography and timestamping.

Analogy

  • Imagine you're bouncing a ball on the ground.

  • The basic set of events here would be:

  • Ball in your hand --> Ball released from your hand --> Ball hits the ground --> Ball bounces back up --> you catch the ball.

  • Obviously, a normal, sane human being would easily identify the events in their chronological order, but machines? ...nah :/ As they can't reason yet and aren't that intelligent, we need to label the events for them.

  • Now imagine a group of machines or computers (nodes) in an organisation named Solana. These computers are discussing the ordering of events in our ball example, since they don't know the workings of physics.

  • Computer-A might say: "the ball rose from the ground and the hand caught it"

  • Computer-B: "No, the ball was already in the air and then it touched your hand and reached the ground"

  • Similarly, the other computers might have different opinions and won't come to a single agreement. But they need to agree on a single decision to help the Solana organisation take actions and grow (i.e. to create a block).

  • Enter the boss Anatoly Yakovenko...

  • He begins by appointing a mentor named PoH (the Proof of History algorithm) to help these machines out.

  • PoH has already recorded the ball-bouncing event with its cryptographic camera.

  • This cryptographic camera works by taking a photograph of every event in chronological order; each image is converted into a cryptographic hash (think of it as a unique string of digits and letters) and numbered in order (a count value), creating a cryptographic "video" (not actually a video, just a metaphor).

  • The cryptographic "video" is transmitted to each member (computer), and thus they agree on a single, correct understanding of the event and take decisions and execute actions (create blocks) for Solana.

Explanation

What we understand from the example above is that PoH is a high-frequency clock that timestamps events using cryptography. The events here are transactions made to the Solana blockchain. This VDF is a recursive function: it takes in input data (transactions), generates a unique hash, and appends a count number to it; when it detects a new event (transaction), the VDF acts on it, generating a new hash completely different from the previous one and incrementing the count number.
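
The recursive loop just described can be sketched in a few lines. This is only a toy model: the function name, tick count, and event format are mine, and real PoH hashes millions of times per second, but the hash-then-count structure is the same.

```python
import hashlib

def poh_chain(events, ticks_between=3, seed=b"genesis"):
    """Toy Proof-of-History chain: repeatedly hash the previous hash,
    mix each event into the chain when it arrives, and record
    (count, hash, event) entries that anyone can later re-verify."""
    state = hashlib.sha256(seed).digest()
    count = 0
    records = []
    for event in events:
        # "empty" ticks: hashing continues even with no events,
        # which is what makes the chain act like a clock
        for _ in range(ticks_between):
            state = hashlib.sha256(state).digest()
            count += 1
        # mix the event into the running hash; every later hash now
        # depends on this event, fixing its place in time
        state = hashlib.sha256(state + event).digest()
        count += 1
        records.append((count, state.hex(), event))
    return records

records = poh_chain([b"tx: Alice->Bob 100", b"tx: Bob->Carol 40"])
for count, digest, event in records:
    print(count, digest[:16], event)
```

Notice that reordering the two events would produce completely different hashes, which is exactly how the "mentor" in the analogy proves the sequence.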

VDFs intentionally take a long time to execute and are applied where you need to run a function after a certain period of time. Proof of History is like "a clock before consensus", as the official Solana blog puts it; consensus simply means block creation after a majority of the network's nodes agree on the block to be created. Let's compare Solana's clock mechanism with Bitcoin's.

Bitcoin uses the Proof of Work mechanism, where computers (nodes) spend time solving complex mathematical computations to generate a nonce, thereby proving that they "worked" to create or mine the block. This also verifies that they spent enough time creating the block. The process is slow, which is why Bitcoin's block time is 10 minutes, whereas Solana's block time is 400ms.
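
For contrast, here is a toy Proof of Work loop. The difficulty and string encoding are arbitrary choices for illustration; real Bitcoin difficulty is vastly higher and auto-adjusts so that blocks average 10 minutes.

```python
import hashlib

def mine(block_data, difficulty=2):
    """Toy Proof of Work: grind nonces until the SHA-256 hash starts
    with `difficulty` zero hex digits, proving work was spent."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block-1")
print(nonce, digest)
```

The grinding is the "work": there is no shortcut to finding the nonce, but anyone can verify it with a single hash, which is what makes the proof cheap to check.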

So why do we need this?

Decentralised systems like blockchains have a very important requirement: to agree on time and stay synchronised across each and every node on the network.

I'll simply explain the other benefits of PoH below:

  • Input events can refer to past events, and this reference can be passed as input to the PoH algorithm to verify that the past event occurred before the current one. For example: Alice sent $100 to Bob as payment for completing a task; Bob did receive the $100 but lies that he didn't get the payment, and in fact claims that he sent $100 to Alice as a loan, similar to attempting a double-spend attack. The PoH algorithm has already seen and recorded the transaction and hashed it with a count number in the ledger, so Bob can't scam Alice: the algorithm knows the sequence of the transactions.

  • The output of the PoH algorithm can be verified in parallel. Parallel means the output can be fragmented and sent to many devices for verification.

  • Data can be inserted into the sequence by appending the hash of the data to the previously generated state. The state, input data, and count are all published. Appending the input causes all future output to change unpredictably. Generation is still impossible to parallelize, and as long as the hash function is pre-image and collision resistant, it is impossible to create an input that would generate a desired hash in the future, or to create an alternative history with the same hashes. We can prove that time passed between any two append operations.

  • The protocol would be resistant or tolerant to ASIC attacks.
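
The "verified in parallel" property from the bullets above can be illustrated with a small sketch. The names and segment sizes are my own, and real Solana verification runs over far larger slices (often on GPUs), but the asymmetry is the point: the chain must be generated tick by tick, while published checkpoints let each segment be re-checked independently.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tick(state, n):
    """Advance the PoH chain n hashes; inherently sequential."""
    for _ in range(n):
        state = hashlib.sha256(state).digest()
    return state

# The generator must produce these checkpoints one after another...
state = hashlib.sha256(b"genesis").digest()
checkpoints = [state]
for _ in range(4):
    state = tick(state, 10_000)
    checkpoints.append(state)

# ...but verification splits into independent segments that many
# cores (or machines) can re-check at the same time.
def verify_segment(segment):
    start, expected = segment
    return tick(start, 10_000) == expected

segments = list(zip(checkpoints[:-1], checkpoints[1:]))
with ThreadPoolExecutor() as pool:
    results = list(pool.map(verify_segment, segments))
print(all(results))
```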

TOWER BFT

Simplified Jargon

Tower BFT is an optimised version of BFT, or Byzantine Fault Tolerance. Assuming you already know fault tolerance, I won't explain it here. A Byzantine fault is a type of fault that occurs in decentralised systems: nodes may behave strangely, for unknown reasons or intentionally, and may send false information while communicating with other nodes, transferring different messages to different nodes and making it hard to identify the malicious or faulty node. To solve this problem, the network requires a certain number of honest nodes to come to an agreement (a consensus) regarding a faulty node, so the system can arrive at a conclusion, mitigating the problem. This kind of system is called a Byzantine Fault Tolerant system.

Practical Byzantine Fault Tolerance (PBFT) is the same as BFT, but here the consensus is achieved to decide the validity of a block.

"In bitcoin (proof of work), the block proposer is the fastest miner, whereas in proof of stake, the block proposer is the richest miner. In PBFT, the block creator may not be any special miner, but the proposed block which is committed to the chain would be the most agreed block" - coinmonks.

Going back to our previous example, the "cryptographic camera" uses the SHA-256 cryptographic hash function to hash every event. The function is recursive (loops on itself) and samples all the generated hashes into a data structure, with a count number on each sample.

This data structure preserves the chronological ordering of events and the time of each event. We can insert messages or transaction info into each iteration of the hash function to embed it in the generated hash, as discussed above in this article. The data structure is such that even a new computer joining the network can compute the state of each and every node just by decoding it, without the other nodes having to tell the newcomer what is what.

Each PoH hash uniquely identifies that fork of the ledger. When a validator votes on the PoH hash of a message, the vote is only considered valid if that hash actually exists on the ledger.

A simple analogy: validators are to Solana what miners are to Bitcoin.

Voting Analogy

  • Take a block as a candidate standing for election, except that in this case, once elected, the candidate can't be removed from power.

  • Validator nodes here are the people voting for the candidate (block) to be elected.

  • Time slots are the time period for which the election lasts.

Tower BFT is basically a voting game for the validation of blocks. The rules of this game are as follows.

It employs the PoH algorithm as a reliable clock for the block-validation procedure. The simple parts of the Tower BFT algorithm are:

1) Validator votes.

2) Time-outs: the expiry time of a particular vote.

3) Time slots: the duration in which a validator node is allowed to vote for a particular block. Since time is measured in the number of hashes PoH produces (remember, PoH runs continuously, capturing events like a time-lapse or high-speed camera and hashing them), Solana fixes the slot at the number of hashes PoH creates in 400ms, which is why Solana's block time is 400ms. Each validator node can vote only within a particular time slot.

Each validator can vote only once in a particular time slot, and once a validator votes, it cannot change its vote preference for a block (i.e. it can't say "ah shit, I voted for the wrong block, let me change my vote"), so validators have to be careful in choosing which block to vote for. Most validators choose the block where the majority of the votes go, just as the Bitcoin network chooses the longest chain when adding a new block.

A validator is allowed to vote only if it stakes a particular amount of SOL tokens. If a validator votes for the wrong block, it may lose a part of its staked SOL, and if this continues, it may lose all of its SOL and get kicked out of the network.

After each time slot, a particular vote has a tendency to be rolled back, as if the validator never voted. To buy more time and avoid getting kicked out of the network, a validator has to keep voting correctly so that its time-out duration doubles, as explained in the next point.

As a validator's vote (assuming it voted for the correct block) gets older with every time slot (400ms), its time-out duration doubles, i.e. its expiry time doubles and accumulates over each time slot that passes. Eventually the vote gets frozen on the network and becomes immutable for eternity (it is converted into a canonical form), although this requires 2/3 of the network's nodes to vote on the same block.

If a validator skips voting for a certain number of time slots, it loses the time-out duration it earned, and its previous votes may eventually die out if it doesn't start voting again.

This allows continuous streaming of blocks and block creation, so the network runs continuously and is reliable.

Once a validator's vote reaches a time-out duration of 2^32 slots, the validator becomes eligible for a reward, similar to the block reward for mining Bitcoin.
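
The doubling rule above can be sketched as a toy model. The function name and base are my own simplification of Tower BFT's vote lockouts; the point is just the exponential growth: every newer vote stacked on top doubles how long the older votes stay locked in.

```python
def lockouts_after_votes(num_votes, base_timeout=2):
    """Toy Tower BFT lockouts: for each vote (oldest first), its
    time-out is base_timeout ** (votes stacked above it + 1) slots,
    so the oldest vote's expiry doubles with every new correct vote."""
    return [base_timeout ** (num_votes - i) for i in range(num_votes)]

# After 5 consecutive correct votes, the oldest vote is locked for
# 32 slots; after 32 votes it would reach 2**32 slots and be final.
print(lockouts_after_votes(5))
```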

Here I conclude with my simplest explanation of Tower BFT.

TURBINE - BLOCK PROPAGATION PROTOCOL

Simplified explanation

Blockchain networks need to propagate large amounts of data to a large number of interconnected nodes. The simple, vanilla way would be to send the full data to each and every node of the network, but this is very slow and the network bandwidth can't accommodate all those connections. This is why we need a new way to transmit large amounts of data quickly and efficiently. Turbine sends large amounts of data by breaking a large data block into small packets and sending them over UDP through different routes.

Each validator node then retransmits the packets to a group of peer validator nodes called a neighbourhood. The main network is basically a tree of neighbourhoods.

There's a security threat to this: there can be adversarial nodes (rebel or opposing nodes) which refuse to transmit or retransmit packets.

To solve this, the leader node generates special codes called erasure codes. Erasure codes allow any validator to reconstruct the whole block without having to receive all of the packets or messages. "If the leader transmits 33% of the block's packets as erasure codes, then the network can drop any 33% of the packets without losing the block" - solana blog.
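
Turbine's actual erasure coding is Reed-Solomon; as a minimal stand-in, here is a single-parity XOR sketch (packet contents invented) that shows the core idea: add redundant packets so a lost packet can be rebuilt from the survivors.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one XOR parity packet: any single lost data packet can
    then be rebuilt from the parity and the surviving packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(coded, lost_index):
    """Rebuild the packet at lost_index by XOR-ing everything we did get."""
    survivors = [p for i, p in enumerate(coded) if i != lost_index]
    rebuilt = survivors[0]
    for p in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

packets = [b"soln", b"ana ", b"turb", b"ine!"]   # 4-byte chunks of a "block"
coded = encode(packets)
print(recover(coded, 2) == packets[2])   # drop packet 2, rebuild it
```

Real Reed-Solomon codes generalize this so that any 33% of the packets (not just one) can be dropped, as the quote above describes.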

The validators may have different specs, and the tree is constructed accordingly, with validators that have staked more placed closer to the leader node in the tree. This still leaves a security threat: rebel or malicious nodes may reposition themselves in the tree and perform a DoS (Denial of Service) attack.

To solve this problem, the Turbine algorithm generates a new stake-weighted tree for every data packet or message to be sent. As each packet is transmitted through a different path or route, the algorithm generates a new ordering of nodes; even the leader nodes change for every transmission, with the neighbourhoods and each node's closeness to the leader changing, since every data packet takes a different path and therefore passes through different nodes.

This curbs the DoS threat, or an eclipse attack, as a malicious node would require full control of the network, which is next to impossible. So this was all about the Turbine algorithm.

GULF-STREAM

I'll begin by explaining what mempools are...

Mempools are like giant pools, or records, of transactions that have been sent but haven't yet been processed by the network. Basically, the smaller the mempool, the better and more scalable the network is, assuming the network has a large number of users sending transactions. So a network tries to reduce the size of its mempool by processing transactions as fast as it can.

Analogy

Bitcoin and Ethereum have mempool sizes ranging between 20k and 100k transactions. Mempools in BTC and ETH are propagated between random nodes using a gossip protocol. Gossip protocols work exactly the way a virus spreads in an epidemic: a message travels node by node in peer-to-peer fashion until it reaches the destination node. This, as we can guess, is a slow process, since a message or packet may at times have to go through each and every node; with a large volume of transactions, the network gets much slower.
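
A rough sketch of that epidemic spread (parameters are arbitrary, and real gossip protocols are far more careful about peer selection): each informed node relays to a few random peers per round, so a message reaches the whole network only after several hops rather than going straight to its destination.

```python
import random

def gossip_rounds(num_nodes, fanout=2, seed=0):
    """Toy gossip: each round, every informed node tells `fanout`
    random peers; return how many rounds until everyone has heard."""
    random.seed(seed)
    informed = {0}          # node 0 starts with the message
    rounds = 0
    while len(informed) < num_nodes:
        for node in list(informed):
            for _ in range(fanout):
                informed.add(random.randrange(num_nodes))
        rounds += 1
    return rounds

print(gossip_rounds(1000))  # several hop-rounds of latency
```

Each round here is a network hop, so total delivery latency grows with the number of rounds; Gulf Stream's trick, described next, is to skip the hops entirely.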

Solana specifically solves this problem by increasing throughput to roughly 50,000 transactions per second.

Explanation

Here, every validator knows the order of the upcoming leader nodes and can forward a transaction request or message beforehand to the intended leader node, instead of waiting for the transaction message to spread through a broadcast as in a gossip protocol.

The way it works is: clients select a very recent block hash (simply, a recent block) that has been approved by all of the nodes and added to the network. Since the block hash is recent, it may roll back or expire, as discussed in the Tower BFT section; the transaction may either pass or fail depending on the voting strength of the validator node (the more confirmed blocks a validator has voted for, the greater its voting strength) and the amount of tokens it has staked.

Hence validators can execute transactions beforehand and drop any that fail, keeping the TPS rate high.

In addition, leaders can prioritise transaction processing based on stake weight: a validator who has staked 1,000 SOL gets its transactions processed and confirmed before those of a validator who has staked 100 SOL.

In this way transaction processing is optimized on Solana.

SEALEVEL - Smart Contract parallel processing

In Solana, smart contracts are called programs. Solana has on-chain programs built by its core developers, like the System Program, the Loader program, the Token program, etc.

Accounts are nothing but public/private key pairs. Accounts have an owner field, and when the owner field is set to a program's public key, the account is owned by that program. I won't discuss programs and accounts much further, as they're already covered in the Solana blog by Anatoly.

A transaction contains instructions; the instructions specify the program, the program's instruction data, and the accounts the program wants to read and write. Because a transaction specifies its instructions beforehand, the System Program (the main smart contract responsible for governing other contracts) assigns memory to the accounts specified in the transaction and executes the instructions to read and write that memory.

Solana basically employs the same architecture that embedded systems use: Solana is like a huge, highly optimised decentralised operating system, where the CPUs are the individual nodes which keep it running and the storage system is provided by CloudBreak, which I'll discuss later.

Solana achieves parallel processing by employing SIMD (Single Instruction, Multiple Data), which basically means that a single program instruction is executed on multiple data streams and accounts.

In Solana, each instruction specifies ahead of time the accounts it wants to read and write, so the system program allocates memory for them and the overall performance is optimized. The pending transactions are easily sorted, and non-overlapping transactions are executed in parallel.
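
The "sort and run non-overlapping transactions in parallel" idea can be sketched as a greedy batching pass. This is a simplification (Sealevel's real scheduler also distinguishes reads from writes and runs on GPUs), but it shows why declaring accounts up front matters.

```python
def parallel_batches(transactions):
    """Greedily group transactions whose declared account sets don't
    overlap; each batch can run in parallel, batches run in sequence."""
    batches = []
    for tx in transactions:
        accounts = set(tx["accounts"])
        for batch in batches:
            # safe to share a batch only if no account is touched twice
            if all(accounts.isdisjoint(set(other["accounts"])) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

txs = [
    {"id": 1, "accounts": ["alice", "bob"]},
    {"id": 2, "accounts": ["carol", "dave"]},   # disjoint from tx 1 -> same batch
    {"id": 3, "accounts": ["bob", "carol"]},    # conflicts with both -> new batch
]
batches = parallel_batches(txs)
print([[tx["id"] for tx in batch] for batch in batches])   # [[1, 2], [3]]
```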

For more details just go through the Solana blog.

CLOUDBREAK - Horizontally scaled accounts database

Simply speaking, as Solana can be compared with an operating system, CloudBreak can be considered the architecture Solana follows to store data. To increase scalability without partitioning the network itself (sharding), Solana prefers to use RAM, SSDs, and threading efficiently, as an ideal operating system would. And as the adoption of Solana grows, so do its storage requirements.

Solana uses efficient memory-handling techniques found in various software and embedded systems, following a horizontal-scaling architecture. Don't worry, I'll provide an analogy that I borrowed from the article.

Analogy

"A useful analogy for understanding this distinction is to think about scaling as if it were upgrading your car. Vertical scaling is like retiring your Toyota and buying a Ferrari when you need more horsepower. With your super-fast car, you can zoom around at high speed with the windows down and look amazing. But, while Ferraris are awesome, they’re not very practical- they’re expensive, and at the end of the day, they can only take you so far before they’re out of gas (not to mention, only two seats!). Horizontal scaling works similarly in that it gets you that added horsepower, but it doesn’t mean ditching the Toyota for the Ferrari. Instead, it’s like adding another vehicle to a fleet. You can think of horizontal scaling like several vehicles you can drive all at once. Maybe none of these machines is a Ferrari, but no one of them needs to be: across the fleet, you have all the horsepower you need".

Explanation

The official Solana blog on CloudBreak has some technical jargon, so I won't dive much deeper into it; I'll just simplify and shorten it.

  • It stores the indexes of transactions and block data in RAM.
  • Accounts are stored in memory-mapped files, i.e. files that map to a region of memory on the hardware and themselves act as memory. The maximum storage for accounts is currently about 4MB.
  • Each mapped file can only store accounts that are present in that particular block of the network.
  • The stored accounts in mapped files are distributed across all of the SSDs on the nodes of the Solana network.
  • Copy-on-write semantics are used. Imagine a lot of users want to access the same unique document: each user is given a reference to the document (here, memory-mapped files play that role), and if a user wants to make changes, the algorithm makes a copy of that document and gives the copy to the user to incorporate changes, so that the original document stays intact.
    • Writes, or changes, are appended to a random memory map in the current block whose account states are being changed.
    • Each change made also requires the indexes to be updated.
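A minimal sketch of the copy-on-write semantics described above. The class and field names are invented, and CloudBreak operates on memory-mapped files rather than Python dicts, but the share-until-written pattern is the same.

```python
class CowStore:
    """Toy copy-on-write account store: readers share the base dict;
    the first write to an account copies it before mutating."""
    def __init__(self, accounts):
        self.accounts = accounts      # shared, read-only base state
        self.modified = {}            # private copies made on first write

    def read(self, key):
        return self.modified.get(key, self.accounts[key])

    def write(self, key, field, value):
        if key not in self.modified:
            self.modified[key] = dict(self.accounts[key])  # copy on write
        self.modified[key][field] = value

base = {"alice": {"lamports": 10}}
store = CowStore(base)
store.write("alice", "lamports", 7)
print(store.read("alice")["lamports"])   # 7, from the private copy
print(base["alice"]["lamports"])         # 10, the original stays intact
```

If the block containing these writes gets rolled back, the private copies are simply discarded; the shared base was never touched, which is what makes rollback (and garbage collection, below) cheap.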

So what are the benefits of doing this?

  • This gives the network the benefits of sequential writes. For example, when we store an image on a pen drive, the drive writes the image data in sequence. Sequentially writing data to a disk takes about 30ms per MB, so sequentially writing 100MB of data takes around 3 seconds. Basically, this makes the data-storage process significantly faster.

  • It enables garbage collection of invalid accounts, for example accounts that have been rolled back.

  • Data writes across the many SSDs are horizontally scaled (explained above).
  • Data reads from those SSDs are also horizontally scaled.
  • Block integrity (i.e. that the block is not changed, corrupted, or ambiguous) is maintained by sequential reading of data. Solana uses RAM and SSDs to store accounts and account states, which obviously speeds up the process; if network traffic grows exponentially, it will lean on SSDs, which still give considerably faster read and write speeds than traditional hard drives.

PIPELINING

Pipelining is a method commonly used in CPU operation and design, wherein input data streams are handed to different hardware units for processing. The data streams or inputs are provided in sequential order for further optimisation.

Analogy

  • Take a company whose manager is assigning different tasks to different employees.
  • These tasks are assigned to employees in a specific order or sequence.
  • And then, strangely, the manager says all employees of the different departments should work at the same time and combine their results into a final result to finish the project.

Explanation

  • The manager we discussed above is the Transaction Processing Unit (TPU), which consists of CPUs, GPUs, and SSDs.

  • GPUs are the main entities that enable parallelization.

  • "The team had to develop a way to quickly validate massive blocks of transactions, while quickly replicating them across the network. To achieve this, the process of transaction validation on the Solana network makes extensive use of an optimization common in CPU design called pipelining."
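
The pipeline can be sketched as a schedule where each stage works on a different batch during the same tick. The four stage names follow Solana's TPU stages (fetch, signature verification, banking, write), but the scheduling logic here is my own simplification.

```python
stages = ["fetch", "sigverify", "banking", "write"]

def pipeline_schedule(num_batches, num_stages=len(stages)):
    """Return, per clock tick, which (stage, batch) pairs run together:
    while batch N is being executed, batch N+1 is already being
    verified and batch N+2 fetched."""
    ticks = []
    for t in range(num_batches + num_stages - 1):
        ticks.append([(stages[s], t - s)
                      for s in range(num_stages)
                      if 0 <= t - s < num_batches])
    return ticks

for t, work in enumerate(pipeline_schedule(3)):
    print(t, work)
```

Once the pipeline is full, all four units are busy every tick, so throughput approaches one batch per tick even though each batch still passes through all four stages.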

ARCHIVERS

Archivers are a group of nodes, interconnected in a network outside the main Solana network, that basically provide storage for Solana. These nodes do not participate in transaction verification for the network.

The Solana ledger, or block history, needs to be stored somewhere, because, obviously, block history is the heart of any blockchain.

Photography Contest - An Analogy

Imagine a photography contest where participants (Archiver nodes) are asked to take pictures of a certain industrial process in sequence (here, the main Solana blockchain, with blocks in sequence).

  • The participants use our well known cryptographic camera (PoH). As we know PoH is a high speed cryptographic camera that takes photos (cryptographic samples or pieces of the ledger in this case).
  • There are two sets of judges in this contest: 1) Client Judges (Replicator clients) 2) Validating Judges (Validator nodes)
  • After all the participants have taken the photos of the process, the Client judges ask the participants to display their photos for verification.
  • After going through all of the photos taken by each participant, the Client judges sign a certificate (a proof of replica) for each individual participant, attesting that they have actually taken the photos.
  • The Validator judges pass a judgement (validation) on the Client judges' certificates to verify that the judges actually signed them and are not misjudging the participants.
  • The Client judges can cross check on Validator Judges that they actually validated a correctly signed certificate.

Hence, this is how the Proof of Replica algorithm works.

Explanation

The process is simple:

  1. The archiver nodes signal the Solana network that they have a certain amount of space for storage. The Solana ledger employs the PoH sampling algorithm to sample blocks.
  2. The samples are assigned to Archiver storage for replication.
  3. The validator nodes also participate in the validation of the replicas (copies of the pieces of the ledger).
  4. For security reasons, we have to check that the sampled pieces of the blocks are actually recorded and stored in the Archiver nodes; otherwise, valuable ledger data would be lost.
  5. The Proof of Replica consensus algorithm (explained in the Analogy), made specifically for Archiver nodes, ensures this security.
  6. Our cryptographic camera then starts hashing the pieces of the blockchain (the samples).
  7. The PoH ledger (the storage of our cryptographic camera) records these hashes in the correct sequence so that the order of the blocks remains unchanged.
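
A toy model of the certificate step above. Real Proof of Replica ties each encrypted copy of a ledger segment to the archiver's own key with PoH-timed challenges; this sketch (all names invented) only shows the simpler idea that a proof is bound to one archiver's identity and checkable by anyone holding the pieces.

```python
import hashlib

def replicate(ledger_pieces, archiver_key):
    """Toy proof of replica: hash each stored piece mixed with the
    archiver's key, so proofs differ per archiver and per piece."""
    return [hashlib.sha256(archiver_key + piece).hexdigest()
            for piece in ledger_pieces]

def verify(ledger_pieces, archiver_key, proofs):
    """A judge who holds the pieces recomputes and compares."""
    return replicate(ledger_pieces, archiver_key) == proofs

pieces = [b"block-1", b"block-2"]
proofs = replicate(pieces, b"archiver-A")
print(verify(pieces, b"archiver-A", proofs))   # honest archiver passes
print(verify(pieces, b"archiver-B", proofs))   # another key can't reuse the proof
```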

With this I conclude my blog. Feel free to review it and suggest any changes; I've taken the majority of the information from the official Solana Medium blog published by Anatoly. My DMs are open on my Twitter account, and if you liked this, support me by giving me a follow.