The 0G System is composed of multiple components, each with its own functionality. The steps below walk through deploying the complete system.
```shell
# Download the Go installer
wget https://go.dev/dl/go1.22.0.linux-amd64.tar.gz
# Extract the archive
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.22.0.linux-amd64.tar.gz
# Add /usr/local/go/bin to the PATH environment variable by adding the following line to your ~/.profile
export PATH=$PATH:/usr/local/go/bin
```
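The PATH line only needs to be added to `~/.profile` once; a small sketch of an idempotent way to do it (assuming your login shell actually reads `~/.profile`):

```shell
# Append the Go PATH line to ~/.profile only if it is not already there,
# so re-running the setup does not duplicate the entry.
PROFILE="$HOME/.profile"
LINE='export PATH=$PATH:/usr/local/go/bin'
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
# Print how many times the line now appears (should be 1)
grep -cxF "$LINE" "$PROFILE"
```

Run `source ~/.profile` (or open a new shell) afterwards so the current session picks up the change.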
```shell
cd 0g-storage-node
git submodule update --init
# Build in release mode
cargo build --release
```
Update the `run/config.toml`
```toml
# p2p port
network_libp2p_port
# rpc endpoint
rpc_listen_address
# peer nodes: we provide two boot nodes, and you can also modify this to your own IPs
network_boot_nodes = ["/ip4/54.219.26.22/udp/1234/p2p/16Uiu2HAmTVDGNhkHD98zDnJxQWu3i1FL1aFYeh9wiQTNu4pDCgps","/ip4/52.52.127.117/udp/1234/p2p/16Uiu2HAkzRjxK2gorngB1Xq84qDrT4hSVznYDHj6BkbaE4SGx9oS"]
```
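To substitute your own nodes it helps to see how a boot-node multiaddr breaks down into fields; a quick sketch using one of the default addresses above:

```shell
# Split a libp2p multiaddr into its ip / port / peer-id fields.
ADDR="/ip4/54.219.26.22/udp/1234/p2p/16Uiu2HAmTVDGNhkHD98zDnJxQWu3i1FL1aFYeh9wiQTNu4pDCgps"
IP=$(echo "$ADDR" | cut -d/ -f3)     # the node's public IP
PORT=$(echo "$ADDR" | cut -d/ -f5)   # its udp p2p port
PEER=$(echo "$ADDR" | cut -d/ -f7)   # its libp2p peer id
echo "ip=$IP port=$PORT peer=$PEER"
```

Replacing a boot node therefore means swapping in your node's public IP, its p2p port, and the peer id it prints on startup.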
```toml
# flow contract address
log_contract_address
# mine contract address
mine_contract_address
# layer one blockchain rpc endpoint
blockchain_rpc_endpoint
# block number to start the sync
log_sync_start_block_number
# locations for the db and network logs
db_dir
network_dir
# your private key, which must be 64 characters long
# do not include the leading 0x
# do not omit a leading 0
miner_key
```
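The `miner_key` formatting rules are easy to get wrong; here is a sketch of a format check you can run before starting the node (the key below is a dummy placeholder, not a real key):

```shell
# miner_key must be exactly 64 hex characters: no leading "0x",
# and a leading 0 must not be dropped.
KEY="00a1b2c3d4e5f60718293a4b5c6d7e8f00a1b2c3d4e5f60718293a4b5c6d7e8f"
if printf '%s' "$KEY" | grep -Eq '^[0-9a-fA-F]{64}$'; then
  echo "key format ok"
else
  echo "invalid key: need 64 hex chars without 0x prefix" >&2
fi
```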
Run the storage service
```shell
cd run
# consider using tmux in order to run in the background
../target/release/zgs_node --config config.toml
```
Storage KV
The second step is to launch the kv service.
Follow the same steps as in Stage 1 to install the dependencies and Rust.
```shell
cd 0g-storage-kv
git submodule update --init
# Build in release mode
cargo build --release
```
Copy the `config_example.toml` to `config.toml` and update the parameters
```toml
# rpc endpoint
rpc_listen_address
# IPs of storage services, separated by ","
zgs_node_urls = "http://ip1:port1,http://ip2:port2,..."
# layer one blockchain rpc endpoint
blockchain_rpc_endpoint
# flow contract address
log_contract_address
# block number to start the sync; better to align with the config in the storage service
log_sync_start_block_number
```
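`zgs_node_urls` is a single comma-separated string; a quick sketch to preview how it splits into individual endpoints (the URLs below are placeholders):

```shell
# Split the comma-separated endpoint list, one endpoint per line,
# to eyeball what the kv service will connect to.
ZGS_NODE_URLS="http://10.0.0.1:5678,http://10.0.0.2:5678"
printf '%s\n' "$ZGS_NODE_URLS" | tr ',' '\n'
```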
Run the kv service
```shell
cd run
# consider using tmux in order to run in the background
../target/release/zgs_kv --config config.toml
```
Note: The recommended system configuration is the same as the storage node.
Data Availability Service
The next step is to start the 0GDA service, which is the primary service that requests are sent to.
Follow the same steps as in Stage 1 to install the dependencies, Go, and Rust.
Update the Makefile under the `0g-data-avail/disperser` folder
For encoder
```shell
# grpc port
--disperser-encoder.grpc-port 34000
# metric port
--disperser-encoder.metrics-http-port 9109
# number of workers, can be determined by the number of cores
--kzg.num-workers
# max concurrent requests
--disperser-encoder.max-concurrent-requests
# size of the request pool, can be larger than the number of cores
--disperser-encoder.request-pool-size
```
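Since `--kzg.num-workers` can follow the core count and the request pool can be larger than that, a sketch that derives starting values from the machine (the 2x pool factor is our illustrative assumption, not a documented recommendation):

```shell
# Derive encoder sizing hints from the machine's core count.
CORES=$(nproc)
echo "--kzg.num-workers $CORES"
# the pool should exceed the core count; the factor 2 is illustrative
echo "--disperser-encoder.request-pool-size $((CORES * 2))"
```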
For batcher
```shell
# layer one blockchain rpc endpoint
--chain.rpc
# private key of the wallet account, can also be set as an environment variable
--chain.private-key
# modify the gas limit for different chains
--chain.gas-limit
# batch size limit, can be a relatively large number like 1000
--batcher.batch-size-limit
# number of segments to upload in a single rpc request
--batcher.storage.upload-task-size
# interval for disperse finality
--batcher.finalizer-interval
# aws configs, can be set as environment variables as well
--batcher.aws.region
--batcher.aws.access-key-id
--batcher.aws.secret-access-key
--batcher.s3-bucket-name
--batcher.dynamodb-table-name
# endpoints of storage services; repeat the flag once per endpoint
--batcher.storage.node-url
--batcher.storage.node-url
# endpoint of the kv service
--batcher.storage.kv-url
# flow contract address
--batcher.storage.flow-contract
# timeout for encoding, set based on the instance capacity
--encoding-timeout 10s
```
For disperser
```shell
# port to listen on for requests
--disperser-server.grpc-port
# aws configs, can be set as environment variables as well
# note the keys are different from those in the batcher
--disperser-server.aws.region
--disperser-server.aws.access-key-id
--disperser-server.aws.secret-access-key
--disperser-server.s3-bucket-name
--disperser-server.dynamodb-table-name
```
Updated: you can now build and run the server as one combined service.
```shell
make run_combined
```
Note that the configuration for the combined server is the same as for the separate ones, except that the prefix of certain parameters becomes `combined-server`. Please refer to the Makefile for the detailed configuration.
We now also provide an option to use memory as the metadata db instead of AWS DynamoDB. Set `--combined-server.use-memory-db` to choose which db to use.
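For local testing without AWS, the in-memory option skips the DynamoDB setup entirely; a hypothetical fragment of the combined-server flags (the boolean value syntax is our assumption, so check the Makefile for the exact form):

```
# use memory instead of dynamodb for metadata (local testing only;
# contents are lost when the process exits)
--combined-server.use-memory-db true
```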
Retrieve Service
Update the Makefile under the `0g-data-avail/retriever` folder
```shell
# grpc port to listen on for requests
--retriever.grpc-port
# endpoints of storage services
--retriever.storage.node-url
--retriever.storage.node-url
# endpoint of the kv service
--retriever.storage.kv-url
# flow contract address
--retriever.storage.flow-contract
```
Build the source code
```shell
cd 0g-data-avail/retriever
make build
```
Run retriever
```shell
make run
```
Note: You can deploy all of these services on one instance. The bottleneck is the encoder, which requires heavy CPU computation; as a result, the number of CPU cores is linearly related to the performance (Mbps). It is recommended to have at least 32 CPU cores for your DA services (the c6i.8xlarge instance type if you deploy on AWS).
Deploying the storage node, kv, and DA services in the same region can also increase the throughput. Our experiments on AWS show that with an m7i.xlarge storage instance and a c6i.12xlarge DA instance, the throughput can reach 15 Mbps.
Storage Node CLI
We provide a client tool if you want to interact with the storage node directly.
For the storage node RPC endpoint, you can use the team-deployed https://rpc-storage-testnet.0g.ai, or deploy your own node by following the instructions above.
Integration Test
If you want to run integration tests on the entire DA service, you can use the benchmark tool that we provide.
- `rps` and `max-out-standing` control the speed of the requests
- `url` is the endpoint of the disperser service in Stage 3
- `block-size` is the size of the total data in bytes
- `chunk-size` is the blob size in bytes of each request sent to the disperser service
- `target-chunk-num` is the number of chunks defined in the 0GDA service; it is used to divide the blob into the corresponding number of pieces, and is hard-bounded by the blob size
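A worked example of how the size parameters relate (the numbers are illustrative, not recommended settings): a 4 MiB block-size sent as 1 MiB chunk-size blobs produces four requests, and the service then divides each blob into target-chunk-num pieces.

```shell
# Illustrative benchmark sizing arithmetic.
BLOCK_SIZE=$((4 * 1024 * 1024))   # total data, bytes
CHUNK_SIZE=$((1024 * 1024))       # blob size per request, bytes
echo "requests per block: $((BLOCK_SIZE / CHUNK_SIZE))"
```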