{"id":5873,"date":"2021-06-10T17:20:45","date_gmt":"2021-06-10T17:20:45","guid":{"rendered":"https:\/\/www.geekdecoder.com\/?p=5873"},"modified":"2021-06-10T17:20:45","modified_gmt":"2021-06-10T17:20:45","slug":"setting-up-a-private-ipfs-network-with-ipfs-and-ipfs-cluster","status":"publish","type":"post","link":"https:\/\/www.qbytes.cloud\/index.php\/2021\/06\/10\/setting-up-a-private-ipfs-network-with-ipfs-and-ipfs-cluster\/","title":{"rendered":"Setting Up a Private IPFS Network with IPFS and IPFS-Cluster"},"content":{"rendered":"<p>Setting Up a Private IPFS Network with IPFS and IPFS-Cluster<br \/>\nCreate 2 New Vm&#8217;s with Debian. In this case, these are 2 kvm VM&#8217;s but you can use any ones.<\/p>\n<p>node0 bootstrap node, 192.168.0.95<br \/>\nnode1 &#8211; client node, 192.168.0.116<\/p>\n<p>Create a new user &#8220;ipfs&#8221;. Add sudo rights to the user ipfs.<\/p>\n<p>Installing IPFS through the command-line is handy if you plan on building applications and services on top of an IPFS node. This method is also useful if you\u2019re setting up a node without a user interface, usually the case with remote servers or virtual machines. 
Using IPFS through the command-line allows you to do everything that IPFS Desktop can do, but at a more granular level, since you can specify which commands to run.<\/p>\n<p>For this article, I have created a new user &#8220;ipfs&#8221;.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n# adduser ipfs\nAdding user `ipfs' ...\nAdding new group `ipfs' (1001) ...\nAdding new user `ipfs' (1001) with group `ipfs' ...\nCreating home directory `\/home\/ipfs' ...\nCopying files from `\/etc\/skel' ...\nNew password:\nRetype new password:\npasswd: password updated successfully\nChanging the user information for ipfs\nEnter the new value, or press ENTER for the default\n        Full Name &#x5B;]: IPFS\n        Room Number &#x5B;]: 1001\n        Work Phone &#x5B;]:\n        Home Phone &#x5B;]:\n        Other &#x5B;]:\nIs the information correct? &#x5B;Y\/n] y\n<\/pre>\n<p>By default sudo is not installed on Debian, but you can install it. First switch to the root user:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ su -\n<\/pre>\n<p>Install sudo by running:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\"># apt-get install sudo -y\n<\/pre>\n<p>Then grant sudo rights to the ipfs user:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\"># usermod -aG sudo ipfs\n<\/pre>\n<p>Make sure your sudoers file has the sudo group enabled. 
Run:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\"># visudo\n<\/pre>\n<p>Alternatively, add an explicit rule allowing the ipfs user to execute any command:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">ipfs   ALL=(ALL:ALL) ALL\n<\/pre>\n<p>You need to log in again or reboot the machine for the changes to take effect.<\/p>\n<h4>IPFS Install<\/h4>\n<p>Download the Linux binary from dist.ipfs.tech.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n# cd \/home\/ipfs\n# wget https:\/\/dist.ipfs.tech\/kubo\/v0.15.0\/kubo_v0.15.0_linux-amd64.tar.gz\n<\/pre>\n<p>Extract the archive:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ntar -xvzf kubo_v0.15.0_linux-amd64.tar.gz\n\n&gt; x kubo\/install.sh\n&gt; x kubo\/ipfs\n&gt; x kubo\/LICENSE\n&gt; x kubo\/LICENSE-APACHE\n&gt; x kubo\/LICENSE-MIT\n&gt; x kubo\/README.md\n<\/pre>\n<p>Move into the kubo folder and run the install script:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ncd kubo\nsudo bash install.sh\n&gt; Moved .\/ipfs to \/usr\/local\/bin\n<\/pre>\n<p>Test that IPFS has installed correctly:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nipfs --version\n&gt; ipfs version 0.15.0\n<\/pre>\n<h4>Initialize IPFS<\/h4>\n<p>For the purpose of this tutorial, we will set up two nodes: a bootstrap node and a client node. The bootstrap node is an IPFS node that other nodes connect to in order to find other peers. Since we are creating our own private network, we cannot use the bootstrap nodes of the public IPFS network, so we will change these settings later. Select one of your machines as the bootstrap node and one as the client node.<\/p>\n<p>IPFS is initialized in a hidden directory in your user home directory: ~\/.ipfs. This directory will be used to initialize the nodes. 
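Every command in this guide sets IPFS_PATH explicitly on the command line. If you prefer, you can export it once per shell session instead, so every later ipfs command picks it up automatically (a minimal sketch; ~/.ipfs matches the directory used throughout this guide):

```shell
# Optional: export IPFS_PATH once per shell instead of prefixing every command.
# The ~/.ipfs path matches the directory used throughout this guide.
export IPFS_PATH="$HOME/.ipfs"
echo "$IPFS_PATH"
```

Add the export line to ~/.bashrc if you want it to persist across sessions.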
<strong>On both machines<\/strong>, bootstrap node and client node, run the following:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">IPFS_PATH=~\/.ipfs ipfs init --profile server\n<\/pre>\n<p>Repeat the install and init steps on any additional VMs.<\/p>\n<h4>Creating a Private Network<\/h4>\n<p>To generate the swarm key there are two options: use a bash script, or install a key generator.<\/p>\n<h4>Option 1: Bash script<\/h4>\n<p>Create a swarm key.<\/p>\n<p>The swarm key creates a private network and tells peers to communicate only with those peers who share this secret key.<\/p>\n<p>This command should be run only on node0. We generate swarm.key on the bootstrap node and then copy it to the rest of the nodes. This works on Linux; on a Mac, use the key generator from Option 2.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ echo -e &quot;\/key\/swarm\/psk\/1.0.0\/\\n\/base16\/\\n`tr -dc 'a-f0-9' &lt; \/dev\/urandom | head -c64`&quot; &gt; ~\/.ipfs\/swarm.key\n<\/pre>\n<h4>Option 2: Installation of a key generator<\/h4>\n<p>The second option is to install the swarm key generator. Do this if you are on a Mac.<\/p>\n<h5>Install Go<\/h5>\n<p>Follow the instructions here &#8211; <a title=\"https:\/\/golang.org\/doc\/install\" href=\"https:\/\/golang.org\/doc\/install\" rel=\"noopener noreferrer nofollow\">https:\/\/golang.org\/doc\/install<\/a><\/p>\n<p>To install the swarm key generator we use <code>go get<\/code>, which uses git. 
If you have not installed git yet on your bootstrap node, do so with:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo apt-get install git\n<\/pre>\n<p>Run the following command to install the swarm key generator:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ go get -u github.com\/Kubuxu\/go-ipfs-swarm-key-gen\/ipfs-swarm-key-gen\n<\/pre>\n<p>Run the swarm key generator to create the swarm file in your .ipfs directory:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ .\/go\/bin\/ipfs-swarm-key-gen &gt; ~\/.ipfs\/swarm.key\n<\/pre>\n<p>Copy the generated swarm file to the .ipfs directory of all client nodes.<\/p>\n<p>From node0&#8217;s home directory:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ cd .ipfs\/\n$ cat swarm.key\n\/key\/swarm\/psk\/1.0.0\/\n\/base16\/\n25f64b1cf31f649817d495e446d4cbcc99000b8cc032a89b681e5f86f995fa28\n<\/pre>\n<p>On node1, create swarm.key in \/home\/ipfs\/.ipfs:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ nano swarm.key\n<\/pre>\n<p>Add the three lines from node0&#8217;s swarm.key to the file:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\/key\/swarm\/psk\/1.0.0\/\n\/base16\/\n25f64b1cf31f649817d495e446d4cbcc99000b8cc032a89b681e5f86f995fa28\n<\/pre>\n<h4>Bootstrap IPFS nodes<\/h4>\n<p>A bootstrap node is used by client nodes to connect to the private IPFS network; it connects clients to the other nodes available on the network. In our private network we cannot use the bootstrap nodes of the public IPFS network, so in this section we replace the existing bootstrap list with the IP address and peer identity of our bootstrap node.<\/p>\n<p>First, remove the default bootstrap entries from both the bootstrap node and the client node. 
Use this command on both machines:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">IPFS_PATH=~\/.ipfs ipfs bootstrap rm --all\n<\/pre>\n<p>Check that the bootstrap list is now empty:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">IPFS_PATH=~\/.ipfs ipfs config show | grep &quot;Bootstrap&quot;\n  &quot;Bootstrap&quot;: null,\n<\/pre>\n<p>Now add the IP address and the peer identity (hash address) of your bootstrap node to each of the nodes, including the bootstrap node itself.<\/p>\n<p>The IP address of the bootstrap node can be found with <code>hostname -I<\/code>:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ hostname -I\n192.168.0.95 2603:8081:2301:3b54:5054:ff:fe4c:c469\n<\/pre>\n<p>The peer identity was created during the initialization of IPFS and can be found with:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ IPFS_PATH=~\/.ipfs ipfs config show | grep &quot;PeerID&quot;\n    &quot;PeerID&quot;: &quot;12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3&quot;\n<\/pre>\n<p>Assemble the bootstrap add statement from these results. Example:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ IPFS_PATH=~\/.ipfs ipfs bootstrap add \/ip4\/192.168.0.95\/tcp\/4001\/ipfs\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n<\/pre>\n<p><strong>Run your statement on both the bootstrap node and the client node.<\/strong><\/p>\n<p>You should see:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ IPFS_PATH=~\/.ipfs ipfs bootstrap add \/ip4\/192.168.0.95\/tcp\/4001\/ipfs\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\nadded \/ip4\/192.168.0.95\/tcp\/4001\/ipfs\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n<\/pre>\n<h4>Start the network<\/h4>\n<p>The private network is now configured, so we can test it.<\/p>\n<p>We will use an environment variable to make sure that if there is a mistake in 
our configuration or the private network is not fully configured, the nodes don\u2019t connect to the public IPFS network and the daemons simply fail.<\/p>\n<p>The environment variable is LIBP2P_FORCE_PNET; with it set, you start each IPFS node by simply running &#8220;ipfs daemon&#8221;.<\/p>\n<p>Run this on both nodes:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ export LIBP2P_FORCE_PNET=1\n<\/pre>\n<p>To start the daemon:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ IPFS_PATH=~\/.ipfs ipfs daemon\n<\/pre>\n<p>Note the log message stating &#8220;Swarm is limited to private network of peers with the swarm key&#8221;, which means that our private network is working.<\/p>\n<p><em><strong>Note: Each console is now running the daemon in the foreground. Open two new consoles, one to node0 and one to node1.<\/strong><\/em><\/p>\n<p>Now add a file to our private network on one node and try to access it from the other node.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ echo &quot;Hello World!&quot; &gt; file1.txt\n$ ipfs add file1.txt\nadded QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG file1.txt\n 13 B \/ 13 B &#x5B;==========================================================] 100.00%\n$ ipfs cat QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG\nHello World!\n<\/pre>\n<p>Take the printed hash and try to cat the file from the client node &#8211; node1:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs cat QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG\nHello World!\n<\/pre>\n<p>You should see the contents of the file added on the first node, node0. To be sure that we have a private network, we can try to access the file by its CID from a public IPFS gateway. 
You can choose one of the public gateways from this list: <a title=\"https:\/\/ipfs.github.io\/public-gateway-checker\" href=\"https:\/\/ipfs.github.io\/public-gateway-checker\" rel=\"noopener noreferrer nofollow\">https:\/\/ipfs.github.io\/public-gateway-checker<\/a>.<\/p>\n<p>If you did everything right, the file won\u2019t be accessible. You can also run the &#8220;ipfs swarm peers&#8221; command to display a list of the peers the node is connected to. In our example, each peer sees the other one.<\/p>\n<p>From the bootstrap node &#8211; node0:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs swarm peers\n\/ip4\/192.168.0.116\/tcp\/52784\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n<\/pre>\n<p>From the client node &#8211; node1:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs swarm peers\n\/ip4\/192.168.0.95\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n<\/pre>\n<p>If the same file is added on another node, the same hash is generated, so the file is not stored twice on the network.<\/p>\n<p>To upload a complete directory, add the directory name and the <code>-r<\/code> option (recursive). The directory and the files in it are hashed:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs add directory_name -r\n<\/pre>\n<h4>Run IPFS daemon as a service in the background<\/h4>\n<p>Create a systemd service for ipfs on both nodes &#8211; node0 and node1:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo nano \/etc\/systemd\/system\/ipfs.service\n<\/pre>\n<p>Add the following (The user is &#8220;ipfs&#8221;. 
Change it here if you are using a different user):<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">&#x5B;Unit]\nDescription=IPFS Daemon\nAfter=syslog.target network.target remote-fs.target nss-lookup.target\n&#x5B;Service]\nType=simple\nExecStart=\/usr\/local\/bin\/ipfs daemon --enable-namesys-pubsub\nUser=ipfs\n&#x5B;Install]\nWantedBy=multi-user.target\n<\/pre>\n<p>Reload systemd so it finds the new service:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl daemon-reload\n<\/pre>\n<p>Enable the ipfs service so it starts on boot:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl enable ipfs\n<\/pre>\n<p>Reboot both nodes and run:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl status ipfs\n<\/pre>\n<p>You should see something like:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl status ipfs\n&#x5B;sudo] password for ipfs:\n\u25cf ipfs.service - IPFS Daemon\n   Loaded: loaded (\/etc\/systemd\/system\/ipfs.service; enabled; vendor preset: ena\n   Active: active (running) since Thu 2021-06-10 09:23:46 CDT; 2min 24s ago\n Main PID: 387 (ipfs)\n    Tasks: 9 (limit: 1149)\n   Memory: 77.8M\n   CGroup: \/system.slice\/ipfs.service\n           \u2514\u2500387 \/usr\/local\/bin\/ipfs daemon --enable-namesys-pubsub\n\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm listening on \/ip4\/192.168.0.95\/tcp\/4001\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm listening on \/ip6\/::1\/tcp\/4001\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm listening on \/p2p-circuit\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm announcing \/ip4\/127.0.0.1\/tcp\/4001\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm announcing \/ip4\/192.168.0.95\/tcp\/4001\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Swarm announcing \/ip6\/::1\/tcp\/4001\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: API server listening on \/ip4\/127.0.0.1\/tcp\/5001\nJun 10 09:23:46 ipfs3 
ipfs&#x5B;387]: WebUI: http:\/\/127.0.0.1:5001\/webui\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Gateway (readonly) server listening on \/ip4\/127\nJun 10 09:23:46 ipfs3 ipfs&#x5B;387]: Daemon is ready\n<\/pre>\n<p>Try adding a file on one node and accessing it from the other, as above.<\/p>\n<p>On node0:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ echo IPFS Rocks! &gt; rocks.txt\n$ ipfs add rocks.txt\nadded QmQCzFx1YUpBjDStPczthtzKEoQY3gGDvSx1RJiz33abcR rocks.txt\n 12 B \/ 12 B &#x5B;=========================================================] 100.00%\n<\/pre>\n<p>On node1, check for the file:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs cat QmQCzFx1YUpBjDStPczthtzKEoQY3gGDvSx1RJiz33abcR\nIPFS Rocks!\n<\/pre>\n<p>We have now completed the first part: creating a private IPFS network and running its daemons as services. At this phase, you should have two IPFS nodes (node0 and node1) organized in one private network.<\/p>\n<p>Let\u2019s create our IPFS-Cluster for data replication.<\/p>\n<h4>Deploying IPFS-Cluster<\/h4>\n<p>After creating a private IPFS network, we can deploy IPFS-Cluster on top of IPFS for automated data replication and better management of our data.<\/p>\n<p>There are two ways to organize an IPFS cluster: set a fixed peerset (you will not be able to add more peers after creation), or bootstrap nodes (you can add new peers after the cluster is created). 
In this case we will be bootstrapping nodes.<\/p>\n<p>IPFS-Cluster includes two components:<\/p>\n<ul>\n<li><strong>ipfs-cluster-service<\/strong> initializes a cluster peer and runs its daemon<\/li>\n<li><strong>ipfs-cluster-ctl<\/strong> manages nodes and data across the cluster<\/li>\n<\/ul>\n<p>Check for new versions at:<br \/>\nhttps:\/\/dist.ipfs.tech\/#ipfs-cluster-service<br \/>\nhttps:\/\/dist.ipfs.tech\/#ipfs-cluster-ctl<br \/>\nhttps:\/\/dist.ipfs.tech\/#kubo<\/p>\n<h4>Install ipfs-cluster-service and ipfs-cluster-ctl<\/h4>\n<p>Repeat this step on all of your nodes (node0 and node1).<\/p>\n<p>ipfs-cluster-service:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ wget https:\/\/dist.ipfs.tech\/ipfs-cluster-service\/v1.0.4\/ipfs-cluster-service_v1.0.4_linux-amd64.tar.gz\n<\/pre>\n<p>ipfs-cluster-ctl:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ wget https:\/\/dist.ipfs.tech\/ipfs-cluster-ctl\/v1.0.4\/ipfs-cluster-ctl_v1.0.4_linux-amd64.tar.gz<\/pre>\n<p>Uncompress them:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ tar xvfz ipfs-cluster-service_v1.0.4_linux-amd64.tar.gz\nipfs-cluster-service\/LICENSE\nipfs-cluster-service\/LICENSE-APACHE\nipfs-cluster-service\/LICENSE-MIT\nipfs-cluster-service\/README.md\nipfs-cluster-service\/ipfs-cluster-service\n<\/pre>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ tar xvfz ipfs-cluster-ctl_v1.0.4_linux-amd64.tar.gz\nipfs-cluster-ctl\/LICENSE\nipfs-cluster-ctl\/LICENSE-APACHE\nipfs-cluster-ctl\/LICENSE-MIT\nipfs-cluster-ctl\/README.md\nipfs-cluster-ctl\/ipfs-cluster-ctl\n<\/pre>\n<p>Install:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo cp ipfs-cluster-service\/ipfs-cluster-service \/usr\/local\/bin\n$ sudo cp ipfs-cluster-ctl\/ipfs-cluster-ctl \/usr\/local\/bin\n<\/pre>\n<p>Confirm things are installed correctly:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service 
help\n<\/pre>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-ctl help\n<\/pre>\n<h4>Generate and set up CLUSTER_SECRET variable<\/h4>\n<p>Now we need to generate CLUSTER_SECRET and set it as an environment variable on all peers participating in the cluster. Sharing the same CLUSTER_SECRET lets peers understand that they are part of one IPFS-Cluster. We will generate this key on the bootstrap node (node0) and then copy it to all other nodes. The secret is a 32-byte, hex-encoded random string; only peers that have this key can communicate with the cluster.<\/p>\n<p>On your first node (the bootstrap node, node0), run the following commands to generate and display it:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ export CLUSTER_SECRET=$(od -vN 32 -An -tx1 \/dev\/urandom | tr -d ' \\n')\n$ echo $CLUSTER_SECRET\n7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774\n<\/pre>\n<p>You should see something like this:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774\n<\/pre>\n<p>So that CLUSTER_SECRET does not disappear when you exit the console session, you must add it as a constant environment variable to the .bashrc file. 
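Before copying the secret to the other nodes, it is worth a quick sanity check that it really is a 64-character lowercase hex string, since a malformed secret can keep peers from joining the cluster. A minimal sketch using only standard tools (the generation command matches the one above):

```shell
# Generate the secret as above, then verify it is exactly 64 lowercase hex characters.
CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
if printf '%s' "$CLUSTER_SECRET" | grep -Eq '^[0-9a-f]{64}$'; then
    echo "CLUSTER_SECRET format OK"
else
    echo "CLUSTER_SECRET is malformed" >&2
fi
```

The same check can be repeated on each node after updating .bashrc, to confirm the value survived the copy.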
Copy the key printed by the echo command and add it to the end of the .bashrc file on all of your nodes. Do this on both node0 and node1.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">export CLUSTER_SECRET=7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774\n<\/pre>\n<p>Then reload your .bashrc:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ source ~\/.bashrc\n<\/pre>\n<h4>Initialize and Start cluster<\/h4>\n<p>After installing the IPFS-Cluster service and setting the CLUSTER_SECRET environment variable, we are ready to initialize and start the first cluster peer (node0).<\/p>\n<p>Note: make sure that your ipfs daemon is running before you start the ipfs-cluster-service daemon.<\/p>\n<p>On node0 run:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ systemctl status ipfs\n\u25cf ipfs.service - IPFS Daemon\n   Loaded: loaded (\/etc\/systemd\/system\/ipfs.service; enabled; vendor preset: ena\n   Active: active (running) since Thu 2021-06-10 09:23:46 CDT; 41min ago\n Main PID: 387 (ipfs)\n    Tasks: 9 (limit: 1149)\n   Memory: 78.3M\n   CGroup: \/system.slice\/ipfs.service\n           \u2514\u2500387 \/usr\/local\/bin\/ipfs daemon --enable-namesys-pubsub\n<\/pre>\n<p>To initialize the cluster peer, run the command below on node0 only:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service init\n2021-06-10T10:06:36.240-0500    INFO    config  config\/config.go:481    Saving configuration\nconfiguration written to \/home\/ipfs\/.ipfs-cluster\/service.json.\n2021-06-10T10:06:36.242-0500    INFO    config  config\/identity.go:73   Saving identity\nnew identity written to \/home\/ipfs\/.ipfs-cluster\/identity.json\nnew empty peerstore written to \/home\/ipfs\/.ipfs-cluster\/peerstore.\n<\/pre>\n<p>You should see the output above in the console. 
Please note the following line:<\/p>\n<p>&#8230;new identity written to \/home\/ipfs\/.ipfs-cluster\/identity.json<\/p>\n<p>Let&#8217;s display and note the identity, as we will need it later. This is the cluster peer id. On node0 run:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ grep id \/home\/ipfs\/.ipfs-cluster\/identity.json\n    &quot;id&quot;: &quot;12D3KooWMHkMEccR9XXaJDnoWZtXb2zEdmoUtmbGCsM21DjfxHud&quot;,\n<\/pre>\n<p>The &#8220;id&#8221; is the cluster peer id.<\/p>\n<p>To start the cluster peer, run the following on node0 only:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service daemon\n<\/pre>\n<p>You should see the following:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service daemon\n2021-06-10T10:13:40.672-0500    INFO    service ipfs-cluster-service\/daemon.go:46       Initializing. For verbose output run with &quot;-l debug&quot;. Please wait...\n2021-06-10T10:13:40.816-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:136     IPFS Cluster v0.13.3 listening on:\n        \/ip4\/192.168.0.95\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n        \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n\n2021-06-10T10:13:40.817-0500    INFO    restapi rest\/restapi.go:521     REST API (HTTP): \/ip4\/127.0.0.1\/tcp\/9094\n2021-06-10T10:13:40.818-0500    INFO    ipfsproxy       ipfsproxy\/ipfsproxy.go:320      IPFS Proxy: \/ip4\/127.0.0.1\/tcp\/9095 -&gt; \/ip4\/127.0.0.1\/tcp\/5001\n2021-06-10T10:13:40.819-0500    INFO    crdt    go-ds-crdt@v0.1.20\/crdt.go:278  crdt Datastore created. Number of heads: 0. 
Current max-height: 0\n2021-06-10T10:13:40.819-0500    INFO    crdt    crdt\/consensus.go:300   'trust all' mode enabled. Any peer in the cluster can modify the pinset.\n2021-06-10T10:13:40.862-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:651     Cluster Peers (without including ourselves):\n2021-06-10T10:13:40.862-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:653         - No other peers\n2021-06-10T10:13:40.863-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:666     ** IPFS Cluster is READY **\n<\/pre>\n<h4>Bootstrapping Additional Peers (adding them to the cluster)<\/h4>\n<p>Open a new console window and connect to the client node (node1). Note: make sure that your ipfs daemon is running before you start the ipfs-cluster-service daemon.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ systemctl status ipfs\n\u25cf ipfs.service - IPFS Daemon\n   Loaded: loaded (\/etc\/systemd\/system\/ipfs.service; enabled; vendor preset: ena\n   Active: active (running) since Thu 2021-06-10 09:23:53 CDT; 59min ago\n Main PID: 390 (ipfs)\n    Tasks: 8 (limit: 1149)\n   Memory: 78.3M\n   CGroup: \/system.slice\/ipfs.service\n           \u2514\u2500390 \/usr\/local\/bin\/ipfs daemon --enable-namesys-pubsub\n<\/pre>\n<p>Run the following commands to initialize IPFS-Cluster on node1:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service init\n2021-06-10T10:24:20.276-0500    INFO    config  config\/config.go:481    Saving configuration\nconfiguration written to \/home\/ipfs\/.ipfs-cluster\/service.json.\n2021-06-10T10:24:20.278-0500    INFO    config  config\/identity.go:73   Saving identity\nnew identity written to \/home\/ipfs\/.ipfs-cluster\/identity.json\nnew empty peerstore written to \/home\/ipfs\/.ipfs-cluster\/peerstore.\n<\/pre>\n<p>Now we add node1 to the cluster by bootstrapping it to node0 as follows:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ 
ipfs-cluster-service daemon --bootstrap \/ip4\/first_node_IP\/tcp\/9096\/ipfs\/peer_id\n<\/pre>\n<p>So log in to node0 <strong>on a new ssh console<\/strong>. The cluster peer id can be found as follows; run this on node0:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ cd .ipfs-cluster\/\n$ cat identity.json\n{\n    &quot;id&quot;: &quot;12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc&quot;,\n    &quot;private_key&quot;: &quot;CAESQBHGvM9TBWBRHcl8J4qiuQMk0ka4N8gcSyVCyDRkYgJ\/8+7znFeoKBw2Z+a6CQik\/\/4dKCX1REwF2Awrqh3B2uU=&quot;\n}\n<\/pre>\n<p>Bear in mind that this should be the IPFS-Cluster peer ID, not the IPFS peer ID.<\/p>\n<p>The IP address of node0 can be displayed with:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">hostname -I\n192.168.0.95 2603:8081:2301:3b54:5054:ff:fe4c:c469\n<\/pre>\n<p>Here is the full command in our case; run this on node1:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-service daemon --bootstrap \/ip4\/192.168.0.95\/tcp\/9096\/ipfs\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n2021-06-10T10:40:51.361-0500    INFO    service ipfs-cluster-service\/daemon.go:46       Initializing. For verbose output run with &quot;-l debug&quot;. Please wait...\n2021-06-10T10:40:51.485-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:136     IPFS Cluster v0.13.3 listening on:\n        \/ip4\/192.168.0.116\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n        \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n\n2021-06-10T10:40:51.486-0500    INFO    restapi rest\/restapi.go:521     REST API (HTTP): \/ip4\/127.0.0.1\/tcp\/9094\n2021-06-10T10:40:51.486-0500    INFO    ipfsproxy       ipfsproxy\/ipfsproxy.go:320      IPFS Proxy: \/ip4\/127.0.0.1\/tcp\/9095 -&gt; \/ip4\/127.0.0.1\/tcp\/5001\n2021-06-10T10:40:51.487-0500    INFO    crdt    go-ds-crdt@v0.1.20\/crdt.go:278  crdt Datastore created. 
Number of heads: 0. Current max-height: 0\n2021-06-10T10:40:51.487-0500    INFO    crdt    crdt\/consensus.go:300   'trust a\nll' mode enabled. Any peer in the cluster can modify the pinset.\n2021-06-10T10:40:51.545-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:\n651     Cluster Peers (without including ourselves):\n2021-06-10T10:40:51.545-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:\n653         - No other peers\n2021-06-10T10:40:51.546-0500    INFO    cluster ipfs-cluster@v0.13.3\/cluster.go:\n666     ** IPFS Cluster is READY **\n<\/pre>\n<p>To check that we have two peers in our cluster, run command on both nodes in a different terminal:<\/p>\n<p>On node0<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-ctl peers ls\n12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ | node1| Sees 1 other peers\n  &amp;gt; Addresses:\n    - \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n    - \/ip4\/192.168.0.116\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n  &amp;gt; IPFS: 12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip4\/127.0.0.1\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip4\/192.168.0.116\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip6\/2603:8081:2301:3b54:5054:ff:fe99:a8ad\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip6\/::1\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc | node0 | Sees 1 other peers\n  &amp;gt; Addresses:\n    - \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n    - \/ip4\/192.168.0.95\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n  &amp;gt; IPFS: 12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - 
\/ip4\/127.0.0.1\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip4\/192.168.0.95\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip6\/2603:8081:2301:3b54:5054:ff:fe4c:c469\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip6\/::1\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n<\/pre>\n<p>On node1<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-ctl peers ls\n12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ | node1 | Sees 1 other peers\n  &amp;gt; Addresses:\n    - \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n    - \/ip4\/192.168.0.116\/tcp\/9096\/p2p\/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ\n  &amp;gt; IPFS: 12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip4\/127.0.0.1\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip4\/192.168.0.116\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip6\/2603:8081:2301:3b54:5054:ff:fe99:a8ad\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n    - \/ip6\/::1\/tcp\/4001\/p2p\/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG\n12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc | node0 | Sees 1 other peers\n  &amp;gt; Addresses:\n    - \/ip4\/127.0.0.1\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n    - \/ip4\/192.168.0.95\/tcp\/9096\/p2p\/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc\n  &amp;gt; IPFS: 12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip4\/127.0.0.1\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip4\/192.168.0.95\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - \/ip6\/2603:8081:2301:3b54:5054:ff:fe4c:c469\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n    - 
\/ip6\/::1\/tcp\/4001\/p2p\/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3\n<\/pre>\n<p>And you should see the list of cluster peers.<\/p>\n<h4>Run IPFS-Cluster daemon as a service<\/h4>\n<p>In the two terminals (one per node) where ipfs-cluster-service is running in the foreground, hit &#8220;Ctrl-C&#8221; to stop the daemon.<\/p>\n<p>Let&#8217;s add the ipfs-cluster-service daemon as a service. On both nodes, run the following:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo nano \/etc\/systemd\/system\/ipfs-cluster-service.service\n<\/pre>\n<p>Add the following (the file-descriptor limit of 8192 is an example value; adjust it to suit your environment):<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">&#x5B;Unit]\nDescription=IPFS Cluster Service\nAfter=network.target\n\n&#x5B;Service]\nLimitNOFILE=8192\nEnvironment=&quot;IPFS_CLUSTER_FD_MAX=8192&quot;\nExecStart=\/usr\/local\/bin\/ipfs-cluster-service daemon\nRestart=on-failure\nUser=ipfs\n\n&#x5B;Install]\nWantedBy=multi-user.target\n<\/pre>\n<p>Reload systemd so it finds the new service. 
Do this on both nodes.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl daemon-reload\n<\/pre>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl enable ipfs-cluster-service.service\nCreated symlink \/etc\/systemd\/system\/multi-user.target.wants\/ipfs-cluster-service.service \u2192 \/etc\/systemd\/system\/ipfs-cluster-service.service.\n<\/pre>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl start ipfs-cluster-service\n<\/pre>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl status ipfs-cluster-service\n\u25cf ipfs-cluster-service.service - IPFS Cluster Service\n   Loaded: loaded (\/etc\/systemd\/system\/ipfs-cluster-service.service; enabled; ven\n   Active: active (running) since Thu 2021-06-10 11:04:23 CDT; 20s ago\n Main PID: 584 (ipfs-cluster-se)\n    Tasks: 6 (limit: 1149)\n   Memory: 39.7M\n   CGroup: \/system.slice\/ipfs-cluster-service.service\n           \u2514\u2500584 \/usr\/local\/bin\/ipfs-cluster-service daemon\n\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.613-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]:         \/ip4\/192.168.0.95\/tcp\/90\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]:         \/ip4\/127.0.0.1\/tcp\/9096\/\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.672-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.672-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.673-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.673-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.674-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.674-0500\nJun 10 11:04:23 ipfs3 ipfs-cluster-service&#x5B;584]: 2021-06-10T11:04:23.674-0500\n<\/pre>\n<p>Reboot both nodes.<\/p>\n<pre class=\"brush: plain; title: ; 
notranslate\" title=\"\">$ sudo shutdown -r now\n<\/pre>\n<p>Login after reboot and check that both IPFS and IPFS-Cluster services are running.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ sudo systemctl status ipfs\n$ sudo systemctl status ipfs-cluster-service\n<\/pre>\n<h4>Test IPFS-Cluster and data replication<\/h4>\n<p>To test data replication, create the file on node0 and add it to the cluster:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ echo Hello World! &gt; myfile.txt\n$ cd ipfs-cluster-ctl\/\n$ ipfs-cluster-ctl add \/home\/ipfs\/myfile.txt\nadded QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG myfile.txt\n<\/pre>\n<p>Take hash id of the recently added file and check its status:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-ctl status QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG\n<\/pre>\n<p>You should see that this file has been PINNED among all cluster nodes.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">$ ipfs-cluster-ctl status QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG\nQmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG:\n    &amp;gt; node1                : PINNED | 2021-06-10T16:18:20.744805693Z\n    &amp;gt; node0                : PINNED | 2021-06-10T11:18:20.740298488-05:00<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Setting Up a Private IPFS Network with IPFS and IPFS-Cluster Create 2 New Vm&#8217;s with Debian. In this case, these are 2 kvm VM&#8217;s but you can use any ones. node0 bootstrap node, 192.168.0.95 node1 &#8211; client node, 192.168.0.116 Create a new user &#8220;ipfs&#8221;. Add sudo rights to the user ipfs. 
Installing IPFS through the &#8230; <a title=\"Setting Up a Private IPFS Network with IPFS and IPFS-Cluster\" class=\"read-more\" href=\"https:\/\/www.qbytes.cloud\/index.php\/2021\/06\/10\/setting-up-a-private-ipfs-network-with-ipfs-and-ipfs-cluster\/\" aria-label=\"Read more about Setting Up a Private IPFS Network with IPFS and IPFS-Cluster\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[51],"tags":[139],"class_list":["post-5873","post","type-post","status-publish","format-standard","hentry","category-ipfs","tag-linux"],"_links":{"self":[{"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/posts\/5873","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/comments?post=5873"}],"version-history":[{"count":0,"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/posts\/5873\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/media?parent=5873"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/categories?post=5873"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.qbytes.cloud\/index.php\/wp-json\/wp\/v2\/tags?post=5873"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}