cheqd is a public self-sovereign identity (SSI) network for building secure 🔐 and private 🤫 self-sovereign identity systems 💫. Our core vision is to add viable commercial models to decentralised digital 🆔
cheqd-node is the ledger/node component of the cheqd network tech stack, built using Go and the Cosmos SDK.
▶️ Quick start for joining cheqd networks
Join our community channels for help, questions, and support if you are looking to join our mainnet or testnet.
Either the cheqd team, or one of your fellow node operators will be happy to offer some guidance.
✅ Mainnet
Getting started as a node operator on the cheqd network is as simple as...
Install the cheqd-node software (currently v3.x.x) on a hosting platform of your choice.
Once you have acquired CHEQ tokens, configure your node as a validator.
If successfully configured, your node will become the latest validator on the cheqd mainnet. Welcome to the new digital ID revolution!
🚧 Testnet
Our testnet is the easiest place for developers and node operators to get started if you're not quite ready yet to dive into building apps on our mainnet. To get started...
Install the cheqd-node software (currently v3.x.x) on a hosting platform of your choice.
Acquire testnet CHEQ tokens through our testnet faucet.
🧑💻 Using cheqd
Once installed, cheqd-node can be controlled using the cheqd Cosmos CLI.
📌 Currently supported functionality
Basic token functionality for holding and transferring tokens to other accounts on the same network
Creating, managing, and configuring accounts and keys on a cheqd node
Staking and participating in public-permissionless governance
🛠 Developing & contributing to cheqd
cheqd-node is written in Go and built using the Cosmos SDK. Our documentation explains a lot of the fundamentals of how the cheqd network functions.
If you want to build a node from source or contribute to the code, please read our contributor guide.
Creating a local network
If you are building from source, or otherwise interested in running a local network, we have instructions for setting up a localnet for development purposes.
🐞 Bug reports & 🤔 feature requests
If you notice anything not behaving how you expected, or would like to make a suggestion / request for a new feature, please create an issue and let us know.
💬 Community
Our community chat is our primary channel for the open-source community, software developers, and node operators.
Please reach out to us there for discussions, help, and feedback on the project.
Indexers, in a broad context, play a fundamental role in organising and optimising data retrieval within various systems. These tools act as navigational aids, allowing efficient access to specific information by creating structured indexes. In the realm of databases and information management, indexers enhance query performance by creating a roadmap to swiftly locate data entries.
In the context of blockchain and dApps, indexers go beyond traditional databases, facilitating streamlined access to on-chain data. This includes transaction histories, smart contract states, and event logs. In the dynamic and decentralised world of blockchain, indexers contribute to the efficiency of data queries, supporting real-time updates and ensuring the seamless functionality of diverse applications and platforms.
There are several indexer solutions available, each offering different levels of decentralisation, ease of development, and performance for you to consider. These solutions serve as intermediaries to assist in indexing cheqd.
Command Line usage
Overview
There are two command line interface (CLI) tools for interacting with a running cheqd-node instance:
cheqd Cosmos CLI: This is intended for node operators. Typically for node configuration, setup, and Cosmos keys.
Identity SDKs: for identity transactions with DIDs, Verifiable Credentials, and DID-Linked Resources.
This document is focussed on providing guidance on how to use the cheqd Cosmos CLI.
cheqd Cosmos CLI commands by functionality
Upgrade Guides
Context
This section lists previous software upgrade proposals and specific instructions on how to execute them.
If you think you have discovered a security issue in any of cheqd's projects, we'd love to hear from you.
We take all security bugs seriously. If confirmed upon investigation, we will patch it within a reasonable amount of time, release a public security bulletin discussing the impact, and credit the discoverer.
There are two ways to report a security bug:
Email us at
Join and post a message on the #security channel
List of ADRs
This page lists the ADRs for cheqd-node that have been Accepted, Proposed, or in Draft stage.
Accepted
Network-wide Software Upgrades
Context
Updates to the ledger code running on cheqd mainnet/testnet are voted in via on-chain governance proposals for "breaking" software changes.
We use semantic versioning to define our software release versions. The latest software version running on chain is in the v2.x.x family. Any new release that bumps only the minor/patch version digits (the second and third parts of the release version number) is intended to be compatible within the family and does not require an on-chain upgrade proposal to be made.
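As a quick illustration of the compatibility rule above (the version strings here are hypothetical examples, not real releases), only a change in the major version digit requires an on-chain upgrade proposal:

```shell
# Illustrative sketch only: versions below are made-up examples.
current="v2.1.4"     # assumed version currently running on chain
candidate="v2.2.0"   # assumed new release being considered

# "${var%%.*}" strips everything from the first dot, leaving the major part (e.g. "v2").
if [ "${current%%.*}" = "${candidate%%.*}" ]; then
  echo "same major version family: no on-chain upgrade proposal needed"
else
  echo "major version bump: software upgrade proposal required"
fi
```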
Building from source
Building from source
Prerequisites:
Install Go (currently, our builds are done for Golang v1.18)
Upgrade to v3.1.x
100x increase in gas prices, with the minimum gas price going up from 50ncheq to 5000ncheq
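To illustrate the scale of this change, here is a rough fee calculation with an assumed gas limit (the 200,000 figure is a hypothetical example, not an official value):

```shell
# Hypothetical fee comparison before/after the minimum gas price change.
GAS_LIMIT=200000   # assumed gas limit for an ordinary transaction (example only)
OLD_PRICE=50       # ncheq per unit of gas, before the change
NEW_PRICE=5000     # ncheq per unit of gas, after the change

echo "old minimum fee: $((GAS_LIMIT * OLD_PRICE)) ncheq"
echo "new minimum fee: $((GAS_LIMIT * NEW_PRICE)) ncheq"
echo "increase factor: $((NEW_PRICE / OLD_PRICE))x"
```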
This change incorporates looser restrictions on DID Document properties such as assertionMethod, allowing developers to specify additional details for keys which might not necessarily be used for authentication/controller purposes (e.g., BBS+ keys for credential issuance).
Network-wide software upgrades are typically initiated by the core development team behind the cheqd project. The process followed for the network upgrade is defined in our guide on creating a Software Upgrade proposal via network governance.
A cheqd-node instance can be controlled and configured using the cheqd Cosmos CLI.
This document contains the commands for account management.
Account-related commands in cheqd CLI
Querying account balances
Command
Example
Transferring tokens
Command
Arguments
from can be either a key alias or an address. If it's an address, the corresponding key must be present in the keyring.
Example
Configure State Sync
State Sync allows node operators to initialise their nodes quickly with much less storage consumed. This is useful if you don't need full (or significant) chain history, and it significantly reduces costs. For comparison, our state DB snapshots published on snapshots.cheqd.net typically contain around 600 GB of data, while after State Sync initialisation a node consumes only around 540 MB.
Modifying config.toml file
In order to enable State Sync, you need to update the CometBFT configuration file like this:
#######################################################
### State Sync Configuration Options ###
#######################################################
[statesync]
# State sync rapidly bootstraps a new node by discovering, fetching, and restoring a state machine
# snapshot from peers instead of fetching and replaying historical blocks. Requires some peers in
# the network to take and serve state machine snapshots. State sync is not attempted if the node
# has any local state (LastBlockHeight > 0). The node will have a truncated block history,
# starting from the height of the snapshot.
enable = true
# RPC servers (comma-separated) for light client verification of the synced state machine and
# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
# header hash obtained from a trusted source, and a period during which validators can be trusted.
#
# For Cosmos SDK-based chains, trust_period should usually be about 2/3 of the unbonding time (~2
# weeks) during which they can be financially punished (slashed) for misbehavior.
rpc_servers = "https://eu-rpc.cheqd.net:443,https://ap-rpc.cheqd.net:443"
trust_height = 20748000
trust_hash = "BD43534FB20E7C163917BC1A501349C68656A9C24EE48C92BB82FFEE687EEE14"
trust_period = "168h0m0s"
As you can see, first you need to enable the State Sync module. After that, you need to provide RPC endpoints that expose and serve state sync snapshots. You can use our public RPC endpoints, as we generate and serve state sync snapshots from them. Next, you need to set trust_height: this is usually $CURRENT_BLOCK_HEIGHT - 2000 blocks (2000 blocks being the interval at which we generate new snapshots on our RPC nodes). Finally, you need to add the block ID hash as the trust_hash parameter. This can be fetched via RPC, like this: https://rpc.cheqd.net/block?height=20748000. You will find the block hash under .result.block_id.hash.
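The trust_height arithmetic can be sketched as follows (the current height used here is an assumed example, not live chain data):

```shell
# Illustrative: derive trust_height from an assumed current block height.
CURRENT_HEIGHT=20750000   # assumed value; in practice, query your RPC endpoint
SNAPSHOT_INTERVAL=2000    # snapshots are generated every 2000 blocks (see above)

TRUST_HEIGHT=$((CURRENT_HEIGHT - SNAPSHOT_INTERVAL))
echo "trust_height = ${TRUST_HEIGHT}"
echo "trust_hash URL: https://rpc.cheqd.net/block?height=${TRUST_HEIGHT}"
```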
Reference:
You can also use this simple bash script to update your node configuration:
ADR 004: Token fractions
Status
Category
Status
Authors
Alexandr Kolesov
ADR Stage
Summary
The aim of this ADR is to define the smallest fraction for CHEQ tokens.
Context
The Cosmos SDK doesn't provide native support for token fractions. The lowest denomination that can be used in transactions out-of-the-box is 1 token.
To address this issue, similar Cosmos networks assume that they use N digits after the decimal point and multiply all values by 10^(-N) in the UI.
Examples of lowest token denominations in Cosmos
Popular Cosmos networks were compared to check how many digits after the decimal point are used by them:
Cosmos: 6
IRIS: 6
Fetch.ai: 18
Decision
Fractions of CHEQ tokens will be referred to by their SI prefix names, based on the power of 10 of CHEQ tokens being referred to in context. This notation system is common across other Cosmos networks as well.
It was decided to go with 10^-9 as the smallest fraction, with the whole number token being 1 CHEQ. Based on the SI prefix system, the lowest denomination would therefore be called "nanocheq".
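As a worked example of this denomination (the amounts below are arbitrary examples):

```shell
# 1 CHEQ = 10^9 ncheq ("nanocheq"). Amounts below are arbitrary examples.
CHEQ=25
NCHEQ=$((CHEQ * 1000000000))   # whole CHEQ -> base units
echo "${CHEQ} CHEQ = ${NCHEQ} ncheq"

# Converting base units back to a display value needs fractional arithmetic:
awk -v n=2500000000 'BEGIN { printf "%s ncheq = %.9f CHEQ\n", n, n / 1e9 }'
```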
Consequences
Backward Compatibility
There is no backward compatibility. To adjust the number of digits after the decimal point (the lowest token denomination), the network would need to be restarted.
Positive
The power of 10 chosen for the lowest denomination of CHEQ tokens is more precise than for Cosmos ATOMs, which allows transactions to be defined in smaller units.
Negative
This decision is hard to change in the future, as changes to denominations require significant disruption when a network is already up and running.
Neutral
N/A
References
ADR Template
This is the suggested template to be used for ADRs on the cheqd-node project.
Status
Category
Status
Authors
{Author or list of authors}
What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?
Summary
A short (~100 word) description of the issue being addressed. "If you can't explain it simply, you don't understand it well enough." Provide a simplified and layman-accessible explanation of the ADR.
Context
This section describes the forces at play, such as business, technological, social, and project local. These forces are probably in tension, and should be called out as such. The language in this section is value-neutral. It is simply describing facts. It should clearly explain the problem and motivation that the proposal aims to resolve.
ADR-specific Details
This section describes the implementation and/or architecture approach for the proposed changes in detail.
Consequences
This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
Backwards Compatibility
All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
Positive
{positive consequences}
Negative
{negative consequences}
Neutral
{neutral consequences}
References
{reference link}
Unresolved questions
{list of questions or action items}
Architecture Decision Record (ADR) Process
This is a location to record all high-level architecture decisions for cheqd-node, the server/node portion of a purpose-built network for decentralised identity.
An Architectural Decision (AD) is a software design choice that addresses a functional or non-functional requirement that is architecturally significant.
An Architectural Decision Record (ADR) captures a single AD, similar to what is often done when writing personal notes or meeting minutes; the collection of ADRs created and maintained in a project constitutes its decision log.
Rationale
ADRs are intended to be the primary mechanism for proposing new feature designs and new processes, for collecting community input on an issue, and for documenting the design decisions. An ADR should provide:
Context on the relevant goals and the current state
Proposed changes to achieve the goals
Summary of pros and cons
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and justification for a change in architecture, or for the architecture of something new. The spec is a much more compressed and streamlined summary of everything as it stands today.
If recorded decisions turn out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Creating new ADR
Use the ADR template when creating a new ADR.
Manage a node
Overview
A cheqd-node instance can be controlled and configured using the cheqd Cosmos CLI.
This document contains the commands for node operators that relate to node management, configuration, and status.
Re-enable pruning and recovering node db
This guide is specifically made for validators/node operators affected by the pruning issue encountered following our v4.x upgrade. This issue required some validators/node operators to disable pruning entirely.
Re-enabling pruning (from nothing back to default/custom) on the affected nodes will cause the nodes to halt again as database pruning resumes its operation. To avoid this, it is recommended to reset your node's database and recover it using state sync or a database snapshot. Nodes that weren't affected by the pruning issue, or that had recovered by the time the pruning fix was rolled out, do not have to undergo this procedure.
Following this procedure will significantly reduce the disk space required for your node’s regular operations, thereby lowering operational costs (on the nodes we manage, we observed storage usage drop from 700+ GB to under 10 GB). Additionally, running a node with less disk usage will likely improve performance.
Upgrade Process
After the proposal status is marked as passed, the upgrade plan will become active. You can query the upgrade plan details using the following command:
This will return output similar to:
At block height 1000, the BeginBlocker will be triggered. At this point, the node will be out of consensus, awaiting the upgrade to the new version.
Log messages like these should be expected:
Once the new application version is installed and running, and more than 2/3 of the voting power on the network has been restored, the node will resume normal operations.
The node ID (or node address) is part of the peer info. It is calculated from the node's public key as hex(address(nodePubKey)). To get the node ID, run the following command on the node's machine:
Get validator address
The validator address is a function of the validator's public key. To get the bech32-encoded validator address, run this command on the node's machine:
There are several ways to get the hex-encoded validator address:
Convert from bech32
Query node using CLI:
Look for "ValidatorInfo":{"Address":"..."}
Getting validator public key
The validator public key is used in create-validator transactions. To get the bech32-encoded validator public key, run the following command on the node's machine:
Sharing peer information
Peer info is used to connect to peers when setting up a new node. It has the following format:
Example:
Using this information, other participants will be able to connect to your node.
Stop the systemd service of the node: sudo systemctl stop cheqd-cosmovisor.service
Take a backup of priv_validator_state.json. This step is very important for validator nodes:
Set the pruning strategy to default or custom, based on your preference, in the ~/.cheqdnode/config/app.toml file.
Reset the database:
Restore priv_validator_state.json. This step is very important for validator nodes:
Enable statesync on the node and provide the required variables:
Start the node:
The node should start looking for statesync chunks from its peers and begin the restoration process within a few minutes. After some time, it should catch up with the network and continue signing blocks.
Snapshot
Stop the systemd service of the node:
Take a backup of priv_validator_state.json. This step is very important for validator nodes:
Set the pruning strategy to default or custom, based on your preference, in the ~/.cheqdnode/config/app.toml file.
Reset the database:
Restore priv_validator_state.json. This step is very important for validator nodes:
If your node is configured to use Cosmovisor, the upgrade action will be performed automatically at the specified block height.
To check if your node is configured to run with Cosmovisor, run the following command:
If there's a running systemd service executing the sub-process /usr/bin/cosmovisor run start, then your node is using Cosmovisor.
Additionally, you should make sure that your cheqd-cosmovisor systemd service configuration file has these environment variables set to true:
By default, the configuration file can be found at /usr/lib/systemd/system/cheqd-cosmovisor.service.
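For illustration, the relevant lines in such a unit file might look like the sketch below. DAEMON_RESTART_AFTER_UPGRADE and DAEMON_ALLOW_DOWNLOAD_BINARIES are standard Cosmovisor environment variables; treating them as the ones referenced here is an assumption, so check your actual unit file.

```ini
# Hypothetical sketch of a cheqd-cosmovisor.service [Service] section;
# variable names are standard Cosmovisor settings, assumed for illustration.
[Service]
Environment="DAEMON_RESTART_AFTER_UPGRADE=true"
Environment="DAEMON_ALLOW_DOWNLOAD_BINARIES=true"
```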
Manual Upgrades for Standalone Nodes
For standalone nodes, follow the instructions in our installation guide. Make sure to choose the release suggested in the software upgrade proposal.
Node Built from Source Code
If you are running a node built from source, you will need to:
Refer to the upgrade proposal details.
Check out the git tag corresponding to the latest release. This is important in cases where the code in our main branch doesn't match the latest release.
Build the updated binary.
Docker Users
Additionally, we also publish Docker images in our GitHub Container Registry, so if you're running a cheqd node in Docker, you can always find the latest image there.
cheqd-noded query bank balances <address>
cheqd-noded query bank balances cheqd1lxej42urme32ffqc3fjvz4ay8q5q9449f06t4v
Docker-based installations are useful when running non-validator (observer) nodes that can be auto-scaled according to demand, or if you're a developer who wants to set up a localnet or access the node CLI without running a node.
⚠️ It is NOT recommended to run a validator node using Docker, since you need to be absolutely certain that you never run two Docker containers as validator nodes simultaneously. If two Docker containers with the same validator keys are active at the same time, this could be perceived by the network as a validator compromise / double-signing infraction and result in jailing / slashing.
Pre-requisites
Install Docker pre-requisites below, either as individual installs or using Docker Desktop (if running on a developer machine):
Docker Engine v20.10.x and above (use docker -v to check)
Docker Compose v2.3.x and above (use docker compose version to check)
Our Docker Compose files use Compose v2 syntax. The primary difference in usage is that Docker Compose's new implementation uses docker compose commands (with a space), rather than the legacy docker-compose, although they are supposed to be drop-in replacements for each other.
Special guidance for Mac OS running on Apple silicon (M-series chips)
Most issues with Docker that get raised with us typically relate to building and running on Apple silicon.
Other issues are due to developers using the legacy docker-compose command. If your issues are specifically with Docker Compose, make sure the command used is docker compose (with a space).
Usage
Fetch the latest stable Docker image
Pull a stable Docker image (replace latest with a different version tag if you want to pull something other than the latest version):
Modify environment variables used by Docker Compose
We provide template configuration files for use with Docker Compose. These are broken down into three files that need to be modified with the configuration parameters:
: Docker Compose file
: Environment variables used in docker-compose.yml
: Environment variables used inside the container
Both of the .env files are signposted with the REQUIRED and OPTIONAL parameters that can be defined. You must fill out the required configuration parameters.
Start Docker container
Using Docker Compose
Once the environment variable files are edited, bringing up a Docker container is as simple as:
Note: The file paths above for the -f and --env-file parameters are relative to the repository root. Please modify the file paths to the correct relative/absolute paths on the system where you are executing the commands.
Using Docker
If you decide not to use the Docker Compose method, you'll need to configure node settings and volumes for the container manually.
Once you've configured these manually, start the container using:
Alternatively, if you want to start with a bash terminal without actually starting a node, you could use:
Stop running container
To stop a detached container that was started using Docker Compose, use:
If you also want to remove the container volumes when stopping, add the --volumes flag to the command:
Be careful with removing volumes, since critical data such as node/validator keys will also be removed when volumes are removed. There's no way to get these back, unless you've backed them up independently.
Advanced usage
We have additional guides for the following advanced usage scenarios:
to create custom images
with multiple nodes to simulate a network
Unjailing a jailed validator
Validator nodes can get "jailed", with a penalty imposed through their stake getting slashed. Unlike proof-of-work (PoW) networks (such as Bitcoin), proof-of-stake (PoS) networks (such as the cheqd network, built using the Cosmos SDK) use stake slashing as a mechanism for enforcing good on-chain behaviour from validators.
Conditions that cause a validator to be jailed
There are two scenarios in which a validator could be jailed, one of which has more serious consequences than the other.
Temporary: Jailed due to downtime
When a validator "misses" blocks or doesn't participate in consensus, it can get temporarily jailed. By enforcing this check, PoS networks like ours ensure that validators are actively participating in the operation of the network, ensuring that their nodes remain secure and up-to-date with the latest software releases, etc.
How this duration is calculated is defined in the genesis parameters. Jailing occurs based on a sliding time window (called the signed blocks window), calculated as follows.
The signed_blocks_window (set to 25,920 blocks on mainnet) defines the time window that is used to calculate downtime.
Within this window of 25,920 blocks, at least 50% of the blocks must be signed by a validator. This is defined in the genesis parameter min_signed_per_window (set to 0.5 for mainnet).
What happens when a validator is temporarily jailed for downtime
1% of all of the stake delegated to the node is slashed, i.e., burned and disappears forever. This includes any stake delegated to the node by external parties. (If a validator gets jailed, delegators may decide to switch whom they delegate to.) The percentage of stake to be slashed is defined in the slash_fraction_downtime genesis parameter.
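A worked example of this arithmetic (the delegation amount below is an assumed figure, purely for illustration):

```shell
# Hypothetical slashing calculation: slash_fraction_downtime = 1%.
TOTAL_DELEGATED_NCHEQ=$((50000 * 1000000000))   # assume 50,000 CHEQ delegated, in ncheq
SLASHED_NCHEQ=$((TOTAL_DELEGATED_NCHEQ / 100))  # 1% of total delegated stake is burned
echo "slashed: ${SLASHED_NCHEQ} ncheq ($((SLASHED_NCHEQ / 1000000000)) CHEQ)"
```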
Step 1: Check your Node is up to date
During the downtime of a validator node, it is common for the node to miss important software upgrades, since it is no longer in the active set of nodes on the main ledger.
Therefore, the first step is checking that your node is up to date. You can execute the command:
The expected response will be the latest cheqd-noded software release. At the time of writing, the expected response would be
Step 2: Upgrading to latest software
If your node is not up to date, please upgrade it to the latest release.
Step 3: Confirming the Node is up to date
Once again, check if your node is up to date, following Step 1.
Expected response: In the output, look for the text latest_block_height and note the value. Execute the status command above a few times and make sure the value of latest_block_height has increased each time.
The node is fully caught up when the parameter catching_up returns the output false.
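If you have saved the status output, the catching_up flag can be pulled out with standard tools (the JSON below is a hand-made sample, not real node output):

```shell
# Illustrative: extract catching_up from a sample of the status response.
STATUS='{"result":{"sync_info":{"latest_block_height":"20748000","catching_up":false}}}'
echo "$STATUS" | grep -o '"catching_up":[a-z]*' | cut -d: -f2   # prints: false
```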
Additionally, you can check this has worked:
It should show a page with the field "version": "0.6.0".
Step 4: Unjailing command
If everything is up to date and the node has fully caught up, you can now unjail your node using this command in the cheqd CLI:
Draft governance proposals
From Cosmos SDK v0.50 onwards, the gov module provides a draft JSON file for all types of proposals. These JSON files can be generated using the draft-proposal subcommand within the gov module.
It provides an interactive interface and produces a JSON output from user-submitted information. The JSON file can then be submitted on-chain for voting using the following command:
Examples of proposal.json for a few commonly submitted proposal types
1) Text proposals
The main parameters here are:
proposal_title - name of the proposal.
proposal_description - proposal description; limited to 255 characters; you can use json markdown to provide links.
2) Community Pool Spend
The main parameters here are:
proposal_title - name of the proposal.
proposal_description - proposal description; limited to 255 characters; you can use json markdown to provide links.
recipient_address- cheqd address to which the community pool tokens should be sent.
3) Software upgrade
The main parameters here are:
proposal_title - name of the proposal.
proposal_name - name of the proposal, which will be used in the UpgradeHandler in the new application.
proposal_description
4) IBC Recover Client
The main parameters here are:
proposal_title - name of the proposal.
proposal_description - proposal description; limited to 255 characters; you can use json markdown to provide links.
expired_client_id - IBC client id of the expired connection.
Expedited Proposals
Cosmos SDK v0.50+ also added support for expedited proposals. Expedited proposals have a shorter voting period and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting period, it is automatically converted to a regular proposal and restarts voting under regular voting conditions.
Submitting expedited proposals
Any proposal can be submitted as an expedited proposal by switching the expedited field to true in the proposal.json file, e.g.:
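For illustration only, a minimal sketch of how the field might appear in a proposal.json generated by draft-proposal (the surrounding fields are placeholders, not the full schema for your SDK version):

```json
{
  "title": "...",
  "summary": "...",
  "expedited": true
}
```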
Build with Docker
Context
ℹ️ We provide pre-built images and instructions if you just want to set up and use a Docker-based node.
These advanced instructions are intended for developers who want to build their own custom Docker image.
Creating a software upgrade proposal
The upgrade process includes two main parts:
Sending a SoftwareUpgradeProposal to the network
Moving to the new binary manually
Run a localnet with Docker
Context
This document provides instructions on how to run a localnet with multiple validator/non-validator nodes. This can be useful if you are developing applications to work on the cheqd network, or in automated testing pipelines.
The techniques described here are used in CI/CD contexts, for example in this repository itself.
Upgrade to v4.x
This upgrade moved cheqd to support Cosmos SDK v0.50 "Eden".
Cosmos SDK v0.50 is a long-term support version bringing major improvements in performance, developer experience, and modularity. Key features include ABCI++ for more flexible and efficient consensus and IAVL 1.0 for faster and more efficient data storage. It also adds Optimistic Execution to reduce block times and Sign Mode Textual for clearer, more secure transaction signing. This release sets a solid foundation for building faster, more customizable applications on cheqd.
We also enhanced our DIDs to support a more flexible service section, enabling direct connections to did:cheqd DIDs using DIDComm. This enhancement brings full cheqd support for DIDComm endpoint discovery, making it easier for apps and SDKs like ACA-Py and Credo to integrate seamless messaging and communication.
Make transactions
Overview
A cheqd-node instance can be controlled and configured using the .
This document contains the commands for reading and writing token transactions.
Contributor Guide
We would love for you to contribute to cheqd and help make it even better than it is today! As a contributor, here are the guidelines we would like you to follow.
🧑⚖️ Code of Conduct
Help us keep cheqd open and inclusive. Please read and follow our
cheqd-noded tx gov draft-proposal
Use the arrow keys to navigate: ↓ ↑ → ←
? Select proposal type:
▸ text
community-pool-spend
software-upgrade
cancel-software-upgrade
other
Therefore, if a validator misses 12,960 blocks within the last 25,920 blocks it meets the criteria for getting jailed.
To convert this block window to a time period, consider the block time of the network, i.e., the frequency at which new blocks are created. The latest block time can be found on our mainnet explorer or any other explorer configured for the cheqd network (such as Ping Wallet).
Let's assume the block time was 6 seconds. This equates to 12,960 * 6 = 77,760 seconds = ~21.6 hours. This means if the validator is not participating in consensus for more than ~21.6 hours (in this example), it will get temporarily jailed.
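The arithmetic above, as a quick shell check (the 6-second block time is an assumption, as stated):

```shell
# Worked example: how long a validator can be down before jailing, assuming
# a 6-second block time (check the latest figure on an explorer).
MISSED_BLOCKS=12960   # 50% of the 25,920-block signed_blocks_window
BLOCK_TIME=6          # seconds per block (assumed)

TOTAL_SECONDS=$((MISSED_BLOCKS * BLOCK_TIME))
echo "${TOTAL_SECONDS} seconds"                                  # prints: 77760 seconds
awk -v s=77760 'BEGIN { printf "~%.1f hours\n", s / 3600 }'      # prints: ~21.6 hours
```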
Since the block time of the network varies with the number of nodes participating, network congestion, etc., it's always important to calculate the time period using the latest block time figures.
amount - amount of tokens to be sent to the recipient address.
proposal_description - proposal description; limited to 255 characters; you can use json markdown to provide links.
upgrade_height - height at which the upgrade process will be triggered. Keep in mind that this needs to be after the voting period has ended.
upgrade_info - link to the upgrade info file, containing new binaries. Needs to contain a sha256 checksum. See example - https://raw.githubusercontent.com/cheqd/cheqd-node/refs/heads/main/networks/mainnet/upgrades/upgrade-v3.json?checksum=sha256:5989f7d5bca686598c315eb74e8eb507d7f9f417d71008a31a6b828c48ce45eb
operator_alias - alias of the key which will be used for signing the proposal.
<chain_id> - identifier of the chain, which will be used when creating the blockchain.
new_client_id - IBC client id of the replacement connection.
prop_submitter_address - cheqd address of the user who will submit the proposal.
Pre-requisites
Install Docker pre-requisites below, either as individual installs or using Docker Desktop (if running on a developer machine):
Docker Engine v20.10.x and above (use docker -v to check)
Docker Compose v2.3.x and above (use docker compose version to check)
Our Docker Compose files use Compose v2 syntax. The primary difference in usage is that Docker Compose's new implementation uses docker compose commands (with a space), rather than the legacy docker-compose although they are supposed to be drop-in replacements for each other.
Special guidance for Mac OS running on Apple silicon (M-series chips)
Note: If you're building on a Mac OS system with Apple M-series chips, you should modify the FROM statement in the Dockerfile to FROM --platform=linux/amd64 golang:1.18-alpine AS builder. Otherwise, Docker will try to download the Mac OS darwin image for the base Golang image and fail during the build process.
Build the image
Using Docker Compose
If you're planning on passing/modifying a lot of build arguments from their defaults, you can modify the Docker Compose file and the associated environment files to define the build/run-time variables in one place. This is the recommended method.
If you don't want to use docker compose build, or want to build using docker build and then run it using Docker Compose, a sample command you could use is (modify as necessary):
In general, the proposal is a document that gives node operators additional information about the improvements in the new application version, along with any other relevant remarks. There are no strict requirements for the proposal text, only recommendations and information for operators. Also, please note that if the voting process succeeds, the proposal will be stored on the ledger.
Steps for making proposal
The following steps describe the general flow for making a proposal:
Send the proposal command to the pool;
After receiving it, the ledger will move the proposal into PROPOSAL_STATUS_DEPOSIT_PERIOD;
After the first deposit is sent by one of the other operators, the proposal status will move to PROPOSAL_STATUS_VOTING_PERIOD and the voting period (5 days for now) will start;
During the voting period, operators should send their votes to the pool, and download and install the new binary;
After the voting period has passed (for now it's 2 weeks), if the vote succeeds, the proposal moves to PROPOSAL_STATUS_PASSED;
The next step is waiting for the height which was suggested for the upgrade.
At the proposed height, the current node will be blocked until the new binary is installed and set up.
Command for sending proposal
Where the contents of proposal.json are in the following format:
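The exact schema depends on your cheqd-node / Cosmos SDK version; as a sketch using the parameters described below (all values are placeholders except the example upgrade_info URL):

```json
{
  "title": "<proposal-title>",
  "name": "<proposal_name>",
  "description": "<proposal_description>",
  "upgrade_height": "<upgrade_height>",
  "upgrade_info": "https://raw.githubusercontent.com/cheqd/cheqd-node/refs/heads/main/networks/mainnet/upgrades/upgrade-v3.json?checksum=sha256:5989f7d5bca686598c315eb74e8eb507d7f9f417d71008a31a6b828c48ce45eb"
}
```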
The main parameters here are:
proposal-title - name of the proposal.
proposal_name - name of the proposal, which will be used in the UpgradeHandler in the new application.
proposal_description - proposal description; limited to 255 characters; you can use markdown to provide links.
upgrade_height - height at which the upgrade will occur. Keep in mind that this needs to be after the voting period has ended.
upgrade_info - link to the upgrade info file, containing the new binaries. Needs to contain a sha256 checksum. See example - https://raw.githubusercontent.com/cheqd/cheqd-node/refs/heads/main/networks/mainnet/upgrades/upgrade-v3.json?checksum=sha256:5989f7d5bca686598c315eb74e8eb507d7f9f417d71008a31a6b828c48ce45eb
operator_alias - alias of the key which will be used for signing the proposal.
<chain_id> - identifier of the chain on which the proposal will be submitted.
If the submission is successful, the following command can be used to get the proposal_id:
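For example, using the standard Cosmos SDK gov query (flags are illustrative):

```shell
cheqd-noded query gov proposals --node https://rpc.cheqd.net --output json
```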
This command will return a list of all proposals. You need to find the latest one with the corresponding name and title.
Voting process
Send votes
Once the deposit from the previous step has been made, the VOTING_PERIOD will start. Currently, this allows 2 weeks for discussion and for collecting the required number of votes. To cast a vote, the following command can be used:
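A sketch of the vote transaction, using the standard Cosmos SDK gov module (fee flags are illustrative and follow the recommendations elsewhere in these docs):

```shell
cheqd-noded tx gov vote <proposal_id> <vote_option> \
  --from <operator_alias> \
  --chain-id cheqd-mainnet-1 \
  --gas auto --gas-adjustment 1.7 --gas-prices 5000ncheq \
  --node https://rpc.cheqd.net
```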
The main parameters here:
<proposal_id> - proposal identifier from the [Command for sending proposal](#command-for-sending-proposal) step
<vote_option> - the actual vote (it can be yes, no, abstain, no_with_veto)
Votes can be queried by sending request:
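For example (standard Cosmos SDK gov query; flags are illustrative):

```shell
cheqd-noded query gov votes <proposal_id> --node https://rpc.cheqd.net
```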
Finish voting period
At the end of the voting period, after voting_end_time, the state of the proposal with <proposal_id> should change to PROPOSAL_STATUS_PASSED if there were enough votes. Deposits should also be refunded back to the operators.
You can find more technical details about this upgrade on our product docs page.
Actions that should be taken
If you're running your node with Cosmovisor and have the binary auto-download feature enabled, no action is required.
If you run a standalone node, you can download the v4.1.1 binary from our GitHub Releases page and replace the existing binary manually on your node.
⚠️ Potential errors with binary auto download on Cosmovisor v1.7.1
If you are running Cosmovisor v1.7.1, you might see one potential error when the upgrade height is reached. Even though the binary is correctly downloaded to /home/cheqd/.cheqdnode/cosmovisor/upgrades/v4, the wrong file (the LICENSE file) gets copied to bin/cheqd-noded. As a result, your node won't start after the upgrade.
We suspect this issue is related to the upstream Cosmos SDK x/upgrade module, and we will investigate it separately from this release.
ℹ️ This does not present a danger to your node state, but the node won't start until you replace the binaries.
🛠️ How to Prevent This Issue
You have two options:
Re-run the interactive installer to make sure you're running Cosmovisor v1.3.0 before the upgrade height is reached. You will be asked if you want to roll back to Cosmovisor v1.3.0. If you have already used the interactive installer, just make sure to download the latest one from the link above before executing it.
Alternatively, you can replace binaries when upgrade height is reached. You can achieve this with one simple command:
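As a sketch (verify the actual location of the downloaded binary under the cosmovisor/upgrades/v4 directory first; paths assume the default home directory used in this guide):

```shell
# Copy the correctly downloaded upgrade binary over the wrongly copied file,
# then restart the service. Adjust paths to your setup before running.
sudo cp /home/cheqd/.cheqdnode/cosmovisor/upgrades/v4/cheqd-noded \
        /home/cheqd/.cheqdnode/cosmovisor/upgrades/v4/bin/cheqd-noded
sudo systemctl restart cheqd-cosmovisor
```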
To confirm this resolved your issue, just run this:
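For example, checking that the service is running and the binary reports the expected version:

```shell
sudo systemctl status cheqd-cosmovisor
cheqd-noded version
```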
Check Cosmovisor version
To determine whether you need to take any action, check your current Cosmovisor version:
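For example:

```shell
cosmovisor version
```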
If you're running a version later than v1.3.0, you will see output like this:
In v1.3.0, these upgrade-related commands (add-batch-upgrade, add-upgrade, prepare-upgrade) are not available:
To summarize, if you see the second (shorter) output above, you're good to go! ✅
If not, refer back to the previous section and either:
Roll back to v1.3.0, or
Manually replace the binaries at the upgrade height.
If anything's unclear or if you face any additional issues, you can always ask for support on our #mainnet-operators Discord channel! Also, follow the updates on our status page and 🔊mainnet-upgrades Discord channel.
Querying and declaring fees in transactions
Our v3.x upgrade introduced EIP-1559 style burns, and our v3.1.x upgrade bumped the minimum gas price to 5000ncheq. Therefore, all wallets and applications are recommended to query real-time gas prices from the chain to ensure that their transactions succeed.
Querying real-time gas prices via CLI
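The query takes the fee denomination as an argument, for example:

```shell
cheqd-noded query feemarket gas-price ncheq --node https://rpc.cheqd.net
```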
Arguments
denom: The denomination of the fee. For example, ncheq
--node: IP address or URL of node to send the request to
Example
Note: Use --output json to get the output in JSON format.
Querying real-time gas prices via REST API
You can also query real-time gas prices on chain using the REST API, which can be useful for applications that do not use the node CLI. You can fetch this by initiating a GET request to:
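For example, using curl (jq for pretty-printing is an optional assumption):

```shell
curl -s https://api.cheqd.net/feemarket/v1/gas_price/ncheq | jq
```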
Interpreting the real-time gas prices API response
There are two ways of interpreting the CLI/API response:
If specifying fees as gas x gas prices with auto gas calculation (recommended): Multiply the price.amount by 10^4 to derive the gas price in ncheq. In the example above, this results in gas price 0.5ncheq * (10^4) = 5000ncheq
Note: Consider 10^4 the fee offset multiplier for gas prices. This is because cheqd network uses ncheq as the base denomination for gas prices, whereas Cosmos SDK uses uatom for this feemarket implementation. The offset factor is not related to the conversion from ncheq to uatom, but rather to the way the gas prices are calculated in Cosmos SDK.
If specifying fixed fees as --fees: The price.amount value represents the exact fee to pay in CHEQ, whereas the price.denom gives the units it should be expressed in. If the network is congested, the fee may be higher than the example value shown. Therefore, it is recommended to multiply by a factor of 2 or more to ensure the transaction is processed when using --fees, as this method does not have the benefit of auto gas calculation. In the example above, this becomes 0.5 CHEQ * 2 * 10^9 = 1,000,000,000ncheq (1 CHEQ).
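The two derivations above can be sketched as plain arithmetic (values mirror the example response with price.amount = 0.5):

```shell
AMOUNT="0.5"   # price.amount from the example API/CLI response

# Interpretation 1: gas price in ncheq = price.amount * 10^4
GAS_PRICE=$(awk -v a="$AMOUNT" 'BEGIN { printf "%.0f", a * 10^4 }')
echo "gas price: ${GAS_PRICE}ncheq"

# Interpretation 2: fixed fee in ncheq = price.amount (CHEQ) * safety factor 2 * 10^9
FIXED_FEE=$(awk -v a="$AMOUNT" 'BEGIN { printf "%.0f", a * 2 * 10^9 }')
echo "fixed fee: ${FIXED_FEE}ncheq"
```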
The Submitting transactions section below explains further how fees can be specified with transactions.
Suggested static gas price values
The most fool-proof method of ensuring your transaction succeeds is by querying the on-chain gas prices as described above. This fetches the real-time gas prices on the network, based on current congestion.
If, however, your application is unable to query on-chain prices for some reason, you may be able to use the following static gas price values:
Low (minimum gas price): 5000ncheq
Medium: 7500ncheq
High: 10000ncheq
Please note that without querying the real-time prices, these ranges might not satisfy requirements during periods of peak congestion, but they will likely meet the required gas prices in 90% of scenarios.
Submitting transactions
In our documentation, you will come across terms like gas and gas prices. Fees are calculated as follows:
fees = gas x gas-adjustment x gas-prices
There are important changes to particular values for gas adjustment and gas prices since the v3.x and v3.1.x upgrade, which are reflected below.
Generic transaction command
Arguments
--chain-id: E.g., cheqd-mainnet-1 (for mainnet), cheqd-testnet-6 (for testnet). This parameter is typically mandatory
--gas: Either a specific value, or auto (recommended)
--gas-adjustment: Usually, the auto-calculated gas value fluctuates, so you're recommended to boost the gas value by this multiplication factor. Since our v3.x upgrade, the recommended value is 1.7 or more.
--gas-prices: From v3.1.x upgrade onwards, the minimum gas price is 5000ncheq. Recommendation is to either query real-time gas prices, or use the static values.
--node (optional): IP address or URL of the node's RPC endpoint to send the request to, e.g., https://rpc.cheqd.net, http://localhost:26657. This is not necessary when executing on the node itself.
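Putting the flags together, a hedged example of a token transfer (addresses and amount are placeholders):

```shell
cheqd-noded tx bank send <from_address> <to_address> 1000000000ncheq \
  --chain-id cheqd-mainnet-1 \
  --gas auto \
  --gas-adjustment 1.7 \
  --gas-prices 5000ncheq \
  --node https://rpc.cheqd.net
```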
Alternative method for setting fees
Instead of using --gas, --gas-adjustment, and --gas-prices, you can also specify fixed fees for a transaction using the --fees flag, which is the maximum fee limit that is allowed for the transaction.
--fees needs to be specified in ncheq units, i.e., 10^-9 CHEQ. For example, 5000ncheq is 0.000005 CHEQ. Refer to the section above for obtaining gas prices that help determine the fee amount.
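A quick sanity check of the unit conversion (1 CHEQ = 10^9 ncheq):

```shell
awk 'BEGIN { printf "5000ncheq = %.6f CHEQ\n", 5000 / 10^9 }'
```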
Status codes
Pay attention to the returned status code. It should be 0 if a transaction is submitted successfully; otherwise, an error message may be returned.
Example
General ledger queries
For most general ledger queries, use the --help flag for any command/sub-command to understand the possible parameters and values.
Command
Arguments
--node: IP address or URL of node to send the request to
Available Commands:
add-batch-upgrade Add multiple upgrade binaries at specified heights to cosmovisor
add-upgrade Add APP upgrade binary to cosmovisor
completion Generate the autocompletion script for the specified shell
config Display cosmovisor config.
help Help about any command
init Initialize a cosmovisor daemon home directory.
prepare-upgrade Prepare for the next upgrade
run Run an APP command.
show-upgrade-info Display current upgrade-info.json from <app> data directory
version Display cosmovisor and APP version.
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
init Initializes a cosmovisor daemon home directory.
run Run an APP command.
version Prints the version of Cosmovisor.
A cheqd-node binary to run the network config generation script below.
Instructions
Our localnet setup instructions are designed to set up a local network with the following node types:
3x validator nodes
1x non-validator/observer node
1x seed node
The definition for this network is described in a localnet Docker Compose file, which can be modified as required for your specific use case. Since it's not possible to cover all possible localnet setups, the following instructions describe the steps necessary to execute a setup similar to the one used in our GitHub test workflow.
Modify the docker-compose.yml file if necessary, along with the per-container environment variables under the container-env folder.
Interacting with localnet
The default Docker localnet will configure a running network with either a pre-built or a custom image.
Nodes
The five nodes and corresponding ports set up by the default Docker Compose setup will be:
Validator nodes
validator-0
P2P: 26656
RPC: 26657
validator-1
P2P: 26756
RPC: 26757
validator-2
P2P: 26856
RPC: 26857
validator-3
P2P: 26956
RPC: 26957
Seed node
seed-0
P2P: 27056
Observer node
observer-0
P2P: 27156
You can test the connection to a node using a browser: http://localhost:<rpc_port>. Example for the first node: http://localhost:26657.
Accounts
Keys and corresponding accounts will be placed in the network-config folder by the import-keys.sh script; these are used within the nodes configured above.
When connecting using CLI, provide the --home parameter to any CLI command to point to the specific home directory of the corresponding node: --home network-config/validator-x.
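For example, to check the status of the first validator node (home directory and port as listed above):

```shell
cheqd-noded status --home network-config/validator-0 --node http://localhost:26657
```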
You can request a new feature by submitting an issue to our GitHub Repository. If you would like to implement a new feature, please consider the size of the change in order to determine the right steps to proceed:
For a Major Feature, first open an issue and outline your proposal so that it can be discussed. This process allows us to better coordinate our efforts, prevent duplication of work, and help you to craft the change so that it is successfully accepted into the project.
Before you submit an issue, please search the issue tracker. An issue for your problem might already exist and the discussion might inform you of workarounds readily available.
We want to fix all the issues as soon as possible, but before fixing a bug, we need to reproduce and confirm it. In order to reproduce bugs, we require that you provide a minimal reproduction. Having a minimal reproducible scenario gives us a wealth of important information without going back and forth to you with additional questions.
A minimal reproduction allows us to quickly confirm a bug (or point out a coding problem) as well as confirm that we are fixing the right problem.
We require a minimal reproduction to save maintainers' time and ultimately be able to fix more bugs. Often, developers find coding problems themselves while preparing a minimal reproduction. We understand that sometimes it might be hard to extract essential bits of code from a larger codebase but we really need to isolate the problem before we can fix it.
Unfortunately, we are not able to investigate / fix bugs without a minimal reproduction, so if we don't hear back from you, we are going to close an issue that doesn't have enough info to be reproduced.
You can file new issues by selecting from our new issue templates and filling out the issue template.
Submitting a Pull Request (PR)
Before you submit your Pull Request (PR) consider the following guidelines:
Search GitHub for an open or closed PR that relates to your submission. You don't want to duplicate existing efforts.
Be sure that an issue describes the problem you're fixing, or documents the design for the feature you'd like to add. Discussing the design upfront helps to ensure that we're ready to accept your work.
In your forked repository, make your changes in a new git branch:
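For example (the branch name is a placeholder):

```shell
git checkout -b my-fix-branch develop
```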
Create your patch, including appropriate test cases.
Check that all workflow actions for linting / build / test pass.
Commit your changes using a descriptive commit message that follows
Push your branch to GitHub:
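For example (the branch name is a placeholder):

```shell
git push origin my-fix-branch
```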
In GitHub, send a pull request to cheqd-node:develop. The develop branch is where all pending PRs should be targeted for inclusion in the next release.
This document offers guidance for validators looking to move their node to another instance, for example when changing VPS provider.
Before completing the move, ensure the following checks are completed:
1. Stop the service on your current node
If you are using Cosmovisor, use systemctl stop cheqd-cosmovisor
For all other cases, use systemctl stop cheqd-noded.
2. Confirm that your previous node / service was stopped
This step is of the utmost importance.
If your node is not stopped correctly and two nodes are running with the same private keys, this will lead to a double-signing infraction, which results in your node being permanently jailed (tombstoned) and a 5% slash of staked tokens.
You will also be required to complete a fresh setup of your node.
3. Copy config directory and data/priv_validator_state.json to safe place
Check that your config directory and data/priv_validator_state.json are copied to a safe place where they cannot be affected by the migration.
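A sketch, assuming the default home directory (/home/cheqd/.cheqdnode) and a backup folder of your choice:

```shell
# Back up validator config and state before migration; adjust paths to your setup.
CHEQD_HOME="/home/cheqd/.cheqdnode"
BACKUP_DIR="$HOME/cheqd-backup"

mkdir -p "$BACKUP_DIR"
cp -r "$CHEQD_HOME/config" "$BACKUP_DIR/"
cp "$CHEQD_HOME/data/priv_validator_state.json" "$BACKUP_DIR/"
```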
Installation
The installation should begin only after you have completed the preparation steps to shut down the previous node.
In general, the installer allows you to install the binary and download/extract the latest snapshot from .
Once this has been completed, you will be able to move your existing keys and settings back.
Installation with the latest snapshot
The answers to the installer questions could be:
1. Select Version
Here you can pick the version that you want.
2. Select Home directory
Set path for cheqd user's home directory [default: /home/cheqd]:.
This is essentially a question about where the home directory, .cheqdnode, is located or will be.
It is up to the operator where they want to store the data, config and log directories.
3. Setup node
Do you want to setup a new cheqd-node? (yes/no) [default: yes]:
Here the expected answer is No.
The main idea is that our old config directory will be used and data will be restored from the snapshot.
We don't need to setup the new one.
4. Select Network
Select cheqd network to join (testnet/mainnet) [default: mainnet]:
For now, we have two networks: testnet and mainnet.
Type whichever chain you want to use, or just keep the default by pressing Enter.
5. Specify Cosmovisor option
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]:.
This is also up to the operator.
6. Specify if you are using a snapshot
CAUTION: Downloading a snapshot replaces your existing copy of chain data. Usually safe to use this option when doing a fresh installation. Do you want to download a snapshot of the existing chain to speed up node synchronisation? (yes/no) [default: yes].
For this question we recommend answering Yes, because it will help you catch up with the other nodes in the network. That is the main feature of this installer.
Example
Post-install steps
1. Copy your settings
If the installation process was successful, the next step is to restore the configurations from :
Copy the config directory to CHEQD_HOME_DIRECTORY/.cheqdnode/
Copy data/priv_validator_state.json to CHEQD_HOME_DIRECTORY/.cheqdnode/data
Where CHEQD_HOME_DIRECTORY is the home directory for the cheqd user. By default it's /home/cheqd, or whatever you answered during installation for the second question.
2. Setup external address
You need to specify the new external address here by running the following command as the cheqd user:
3. Check that service works
The last thing in this doc is to run the service and check that everything works fine.
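Start the service with (service name options are listed below):

```shell
sudo systemctl start <service-name>
```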
where <service-name> is the name of the service, depending on whether Install Cosmovisor was selected or not:
cheqd-cosmovisor if Cosmovisor was installed.
cheqd-noded if you kept cheqd-noded standalone, as with the Debian package approach.
To check that the service works, please run the following command:
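For example:

```shell
sudo systemctl status <service-name>
```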
where <service-name> has the same meaning as above.
After this upgrade, Keplr Wallet unfortunately cannot support dynamic fee lookups for Cosmos SDK chains that are not natively integrated into their wallet. Therefore, we recommend users to .
Ensure your transaction fee configuration works after the upgrade
For non-identity transactions (e.g., standard transfers, staking, rewards, etc), this upgrade introduces EIP-1559 style burns, with the introduction of a feemarket module which introduces dynamic gas pricing. This might affect how you attach/calculate fees when submitting transactions.
Recommended settings for CLI
Use --gas auto and --gas-adjustment (with value of 1.7 to 1.8) for optimal results. Using fixed --fees is unlikely to work if you don't look up real-time gas prices on the network.
Monitoring real-time gas prices using CLI
cheqd-noded query feemarket gas-price ncheq
Monitoring real-time gas prices using REST API
Initiate a GET request to https://api.cheqd.net/feemarket/v1/gas_price/ncheq. The result will be an object such as:
Interpreting the real-time gas prices API response
There are two options: interpret the value of price.amount as the fixed fee in price.denom units, in which case 5000000000000000ncheq using --fees; OR calculate the necessary gas prices by multiplying the value of price.amount by (10^4) * (10^9) = 10^13, in this case resulting in 50ncheq. Any increase in the indicative gas price or gas limit will result in greater quantities than the suggested minimum base fee.
Features
We're excited to announce the latest protocol upgrade to our community. This release brings cutting-edge features designed to enhance transaction efficiency, fee flexibility, and cross-chain interoperability. Here's what's coming your way:
🔥 EIP-1559 Feemarket Implementation
Revolutionising the fee structure for a smoother, more efficient user experience. The advanced Additive Increase Multiplicative Decrease (AIMD) model ensures fair and efficient fee adjustments.
Base Fee: Dynamically adjusts based on network congestion. This fee is burnt, reducing token supply and enhancing the economic value of $CHEQ.
Priority Fee: Optional tip for validators to prioritise your transactions.
🌐 Fee Abstraction Module
Cross-Chain Token Support: Pay transaction fees in tokens from any IBC-enabled Cosmos chain.
IBC Hooks: Facilitate cross-chain smart contract calls, opening new use cases like multi-hop swaps.
Fee Conversions: Automatically convert IBC-denominated tokens to native fees before final settlement.
🔄 IBC Packet Forwarding Middleware
Streamlining cross-chain communication with robust packet forwarding:
Atomic Multi-Hop Flows: Secure and synchronized token transfers across multiple chains.
Asynchronous Acknowledgements: Track the outcome of multi-hop transfers from the initiating chain.
🔧 New Message Definitions for Governance and Token Management
Simplifying token minting and burning through governance-approved mechanisms, mainly aimed at reducing the complexity of the $DOCK to $CHEQ token migration:
MsgBurn: Allows manual burning of native tokens with full validation.
MsgMint: Enables governance-controlled token minting and distribution to specific addresses.
🌟 What This Means for You
Improved User Experience: Transaction fees are simpler, predictable, and wallet-friendly.
Enhanced Token Utility: CHEQ’s role as the network’s core currency is strengthened.
Cross-Chain Opportunities: Broader support for IBC tokens ensures a seamless multi-chain experience.
🎯 Next Steps
If the software upgrade proposal is approved, this upgrade will go live at block height 16502390, which is expected to be around Thursday, 12th December 2024, 9am UTC.
We’re excited to bring these advancements to the ecosystem and look forward to your feedback. Let’s build the future of Decentralised Identity together!
Release Notes for cheqd-node v3.0.1
What's Changed
feat: Integrate feemarket + reworked ante, post handler decorator distinction by @vishal-kanna in
feat!: Integrate feemarket + reworked ante, post handler decorator distinction by @Eengineer1 in
feat: add fee abs module by @atheeshp in
Full Changelog:
Backup and restore node keys with Hashicorp Vault
If you're running a validator node, it's important to backup your validator's keys and state - especially before attempting any updates or shifting nodes.
What to backup from a validator node
Each validator node has three files/secrets that must be backed up in case you want to restore or move a node. Anything not listed below can be easily restored from a snapshot or otherwise replaced with fresh copies; as such, this list is the bare minimum that needs to be backed up.
$CHEQD_HOME is the data directory for your node, which defaults to /home/cheqd/.cheqdnode
Validator private key
The validator private key is one of the most important secrets that uniquely identifies your validator, and what this node uses to sign blocks, participate in consensus etc. This file is stored under $CHEQD_HOME/config/priv_validator_key.json.
Node key
In the same folder as your validator private key, there's another key called $CHEQD_HOME/config/node_key.json. This key is used to derive the node ID for your validator.
Backing up this key means if you move or restore your node, you don't have to change the node ID in the configuration files any peers have. This is only relevant (usually) if you're running multiple nodes, e.g., a sentry or seed node.
For most node operators who run a singular validator node, this node key is NOT important and can be refreshed/created as new. It is only used for Tendermint peer-to-peer communication. Hypothetically, if you created a new node key (say when moving a node from one machine to another), and then restored the priv_validator_key.json, this is absolutely fine.
Validator private state
The validator private state is stored in the data folder, not the config folder where most other configuration files are kept - and therefore often gets missed by validator operators during backup. This file is stored at $CHEQD_HOME/data/priv_validator_state.json.
This file stores the last block height signed by the validator and is updated every time a new block is created. Therefore, this should only be backed up after stopping the node service, otherwise, the data stored within this file will be in an inconsistent state. An example validator state file is shown below:
If you forget to restore the validator state file when restoring a node, or when restoring a node from a snapshot, your validator will double-sign blocks it has already signed and get jailed permanently ("tombstoned") with no way to re-establish the validator.
Upgrade height information
The software upgrades and the block heights at which they were applied are stored in $CHEQD_HOME/data/upgrade-info.json. This file is used by Cosmovisor to track automated updates, and informs it whether it should attempt an upgrade/migration or not.
Manual backup and restore
The simplest way to backup your validator secrets listed above is to display them in your terminal:
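For example, assuming $CHEQD_HOME is set to your node's data directory (default /home/cheqd/.cheqdnode):

```shell
cat $CHEQD_HOME/config/priv_validator_key.json
cat $CHEQD_HOME/config/node_key.json
cat $CHEQD_HOME/data/priv_validator_state.json
```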
You can copy the contents of the file displayed in terminal off the server and store it in a secure location.
To restore the files, open the equivalent file on the machine where you want to restore the files to using a text editor (e.g., nano) and paste in the contents:
Backing up and restoring via HashiCorp Vault
is an open-source project that allows server admins to run a secure, access-controlled off-site backup for secrets. You can either or .
Pre-requisites
Before you get started with this guide, make sure you've on the validator you want to run backups from.
You also need a running HashiCorp Vault server cluster you can use to proceed with this guide.
Setting up a HashiCorp Vault cluster is outside the scope of this documentation, since it can vary a lot depending on your setup. If you don't already have this set up, and is the best place to get started.
Setting up Vault CLI environment
Once you have Vault CLI set up on the validator, you need to set up environment variables in your terminal to configure which Vault server the secrets need to be backed up to.
Add the following variables to your terminal environment. Depending on which terminal you use (e.g., bash, shell, zsh, fish etc), you may need to modify the statements accordingly. You'll also need to modify the values according to your validator and Vault server configuration.
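For a bash-style shell, this could look like the following (the values are placeholders for your environment; VAULT_ADDR and VAULT_TOKEN are the standard HashiCorp Vault CLI environment variables):

```shell
export VAULT_ADDR="https://vault.example.com:8200"   # URL of your Vault server
export VAULT_TOKEN="<your-vault-token>"              # token with access to the cheqd KV path
```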
Backup procedure for HashiCorp Vault
Configure Vault backup script
Download the from Github:
Make the script executable:
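Assuming the script was saved as vault-backup.sh (a hypothetical filename; use the actual one from the repository):

```shell
chmod +x vault-backup.sh
```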
We recommend that you open the script using an editor such as nano and confirm that you're happy with the environment variables and settings in it.
Stop the cheqd service
Before backing up your secrets, it's important to stop the cheqd node service or Cosmovisor service; otherwise, the validator private state will be left in an inconsistent state and result in an incorrect backup.
If you're running via Cosmovisor (the default option), this can be stopped using:
Or, if running as a standalone service:
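The stop commands, matching the service names used elsewhere in this guide:

```shell
# If running via Cosmovisor (the default):
sudo systemctl stop cheqd-cosmovisor
# Or, for a standalone service:
sudo systemctl stop cheqd-noded
```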
Run the Vault backup script
Once you've confirmed the cheqd service is stopped, execute the Vault backup script:
We use the HashiCorp Vault KV v2 secrets engine. Please make sure that it's enabled and mounted under the cheqd path.
Restoring from HashiCorp Vault
To restore backed-up secrets from a Vault server, you can use the same script using the -r ("restore") flag:
Restoring secrets to a different machine
If you're restoring to a different machine than the original machine the backup was done from, you'll need to go through the pre-requisites, CLI setup step, and download the Vault backup script to the new machine as well.
In this scenario, you're also recommended to disable the service (e.g., cheqd-cosmovisor) on the original machine. This ensures that if the (original) machine gets restarted, systemd does not try to start the node service, as this can potentially result in two validators running with the same validator keys (which will result in tombstoning).
Once you've successfully restored, you can enable the service (e.g., cheqd-cosmovisor) on the new machine:
Upgrade to v0.6.x
This document offers the information and instructions required for node operators to complete an upgrade with a fresh installation.
We decided to remove the Debian package as an installation artifact and use our own installation tool. The main reason for this is to help our current and future node operators join the cheqd network or complete an upgrade in a more intuitive, simpler and less time-intensive way.
For this upgrade from 0.5.0 to 0.6.0 there are two possible scenarios:
upgrade by installing Cosmovisor
upgrade only the cheqd-noded binary.
is a tool from the cosmos-sdk team which can perform an upgrade in fully automatic mode. It can download and swap the binary without any action from a node operator. Beginning with version 0.6.0, and with all subsequent versions, we will leverage Cosmovisor as the tool for handling our upgrade process.
For . You will find answers to common questions within this document, however of course feel free to reach out to the team on Slack or Discord.
Upgrade with installing Cosmovisor
As installing and setting up Cosmovisor can be difficult, and requires some additional steps for setting up the systemd service, we injected all these steps into our interactive installer.
The flow for installation is:
Stop the systemd service
Make sure that it was definitely stopped by using:
The output should be:
The main focus here is: Active: inactive (dead)
Remove current debian package
Download interactive installer and run it
The following answers are needed for upgrading the cheqd-noded binary with the Cosmovisor installation:
IMPORTANT: For running an upgrade scenario, you'll be required to set the current home directory of the cheqd user as the answer to the question Set path for cheqd user's home directory [default: /home/cheqd]:. This is because the upgrade scenario will only be used if this directory exists.
WARNING: Please make sure that you answered yes to the questions about overwriting the existing configuration. This is very important when making a new installation with Cosmovisor.
Start new systemd service
Upgrade only binary
If you are updating a current installation, the following steps can be used:
Stop the systemd service
and make sure that it was really stopped by running:
The output should look like:
The main focus here is: Active: inactive (dead)
Remove current debian package
Download interactive installer and run it
Answers for interactive installer
IMPORTANT: For running an upgrade scenario, you'll be required to set the current home directory of the cheqd user as the answer to the question Set path for cheqd user's home directory [default: /home/cheqd]:. This is because the upgrade scenario will only be used if this directory exists.
WARNING: If you are keeping just a standalone cheqd-noded, without Cosmovisor, it's crucial to keep your systemd service files without overwriting them. Please make sure that your answers were no.
Similar to what was previously in place, only the following command is needed to start the service:
Code of Conduct
Our Pledge
We as members, contributors, and leaders pledge to make participation in the cheqd community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, country of origin, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our Standards
Examples of behaviour that contributes to a positive environment for our community include:
Demonstrating empathy and kindness toward other people
Being respectful of differing opinions, viewpoints, and experiences
Giving and gracefully accepting constructive feedback
Examples of unacceptable behaviour include:
The use of sexualized language or imagery, and sexual attention or advances of any kind
Use of inappropriate or non-inclusive language or other behaviour deemed unprofessional or unwelcome in the community
Trolling, insulting or derogatory comments, and personal or political attacks
Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behaviour and will take appropriate and fair corrective action in response to any behaviour that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, messages, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behaviour may be reported to the community leaders responsible for enforcement at . All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Correction
Community Impact: Use of inappropriate language or other behaviour deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behaviour was inappropriate. A public apology may be requested.
2. Warning
Community Impact: A violation through a single incident or series of actions. Any Community Impact assessment should take into account:
The severity and/or number of incidents/actions
Non-compliance with previous private warnings from community leaders (if applicable)
Consequence: A warning with consequences for continued behaviour. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
3. Temporary Ban
Community Impact: A serious violation of community standards, including sustained inappropriate behaviour.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
4. Permanent Ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behaviour, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
Attribution
This Code of Conduct is adapted from the , version 2.0, available at .
Community Impact Guidelines were inspired by .
For answers to common questions about this code of conduct, see the FAQ at . Translations are available at .
ADR 006: Community tax
Status
Category
Status
Pre-Requisites & Requirements
Context
This document describes the hardware and software pre-requisites for setting up a new cheqd-node instance and joining the existing testnet/mainnet. The recommended installation method is to use the .
Alternative installation methods and a developer guide to building from scratch are covered at the end of this page.
Manage keys
Overview
The cheqd node CLI can be used to manage keys on a node. Keys are closely related to accounts and on-ledger authentication.
Account addresses on a cheqd node are an encoded version of a public key. Each account is linked to at least one public-private key pair. Multi-sig accounts can have more than one key pair associated with them.
To submit a transaction on behalf of an account, it must be signed with an account's private key.
SubQuery
SubQuery is a leading blockchain data indexer that provides developers with fast, flexible, universal, open source and decentralised APIs for web3 projects. SubQuery SDK allows developers to get rich indexed data and build intuitive and immersive decentralised applications in a faster and more efficient way. SubQuery supports 150+ ecosystems including Cheqd, Cosmos, Ethereum, Near, Polygon, Polkadot, Algorand, and Avalanche.
Another one of SubQuery's competitive advantages is the ability to aggregate data not only within a chain but across multiple blockchains all within a single project. This allows the creation of feature-rich dashboard analytics, multi-chain block scanners, or projects that index IBC transactions across zones.
Useful resources
./docker/localnet/gen-network-config.sh
./docker/localnet/import-keys.sh
docker compose --env-file build-latest.env up --detach --no-build
You can publish it to SubQuery's enterprise-level Managed Service, where we'll host your SubQuery project in production-ready services for mission-critical data with zero-downtime blue/green deployments. There is even a generous free tier. Find out how.
You can publish it to the decentralised SubQuery Network, the most open, performant, reliable, and scalable data service for dApp developers. The SubQuery Network indexes and services data to the global community in an incentivised and verifiable way and supports Cheqd from launch.
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
Focusing on what is best not just for us as individuals, but for the overall community
Public or private harassment
Publishing others' private information, such as a physical or email address, without their explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
For most nodes, the RAM/vCPU requirements are relatively static and do not change over time. However, the disk storage space needs to grow as the chain grows and will evolve over time.
It is recommended to mount disk storage for blockchain data as an expandable volume/partition separate from your root partition. This allows you to mount the node data/configuration path on /home (for example) and increase storage if necessary independently of the root / partition, since hosting providers typically force an increase in CPU/RAM specifications to grow the root partition.
Extended information on recommended hardware requirements is available in Tendermint documentation. The figures below have been updated from the default Tendermint recommendations to account for current cheqd network chain size, real-world usage accounting for requests nodes need to handle, etc.
Recommended hardware specifications
4-8 GB RAM (2 GB RAM minimum)
x64 2.0 GHz 2-4 vCPU or equivalent (x64 1.4 GHz 1 vCPU or equivalent minimum)
Storage requirements
Storage requirements may vary, depending on your specific needs:
If you need more historic data and plan to initialise your node using a state DB snapshot, you will need at least 625 GB of SSD storage.
If you don't need specific historic data, you can use state sync to initialise your node. In this case, your node will start with less than 1 GB of disk space consumed, but usage will keep growing over time. We recommend provisioning at least 10 GB of SSD storage.
If you need full chain history and want to run an archive node, you will need 2.7 TB of disk storage. To obtain a full chain history snapshot, reach out to our team on Discord.
⚠️ Storage requirements for the blockchain grow over time, so these minimum storage figures are expected to increase. Read our validator guide for "pruning" settings to optimise storage consumed.
Storage volumes
We recommend using a storage path that can be kept persistent and restored/remounted (if necessary) for the configuration, data, and log directories associated with a node. This allows a node to be restored along with configuration files such as node keys and for the node's copy of the ledger to be restored without triggering a full chain sync.
The default directory location for cheqd-node installations is $HOME/.cheqdnode, which computes to /home/cheqd/.cheqdnode when using the interactive installer. Custom paths can be defined if desired.
Operating system
Our packaged releases are currently compiled and tested for Ubuntu 20.04 LTS, which is the recommended operating system for installation using the interactive installer or binaries.
We plan on supporting other operating systems in the future, based on demand for specific platforms by the community.
Networking configuration
Ports
To function properly, cheqd-node requires two types of ports to be configured. Depending on the setup, you may also need to configure firewall rules to allow the correct ingress/egress traffic.
Node operators should ensure there are no existing services running on these ports before proceeding with installation.
P2P port
The P2P port is used for peer-to-peer communication between nodes. This port is used for your node to discover and connect to other nodes on the network. It should allow traffic to/from any IP address range.
By default, the P2P port is set to 26656.
Inbound TCP connections on port 26656 (or your custom port) should be allowed from any IP address range.
Outbound TCP connections must be allowed on all ports to any IP address range.
The default P2P port can be changed in $HOME/.cheqdnode/config/config.toml.
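As a sketch, the relevant section of config.toml looks like the following (the listen address shown is illustrative):

```toml
# $HOME/.cheqdnode/config/config.toml
[p2p]
# listen address for peer-to-peer traffic; change the port here if needed
laddr = "tcp://0.0.0.0:26656"
```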
The RPC port is intended to be used by client applications as well as the cheqd-node CLI. Your RPC port must be active and available on localhost to be able to use the CLI. It is up to a node operator whether they want to expose the RPC port to public internet.
The RPC endpoints for a node provide REST, JSONRPC over HTTP, and JSONRPC over WebSockets. These API endpoints can provide useful information for node operators, such as healthchecks, network information, validator information etc.
By default, the RPC port is set to 26657
Inbound and outbound TCP connections should be allowed from destinations desired by the node operator. The default is to allow this from any IPv4 address range.
TLS for the RPC port can also be set up separately. Currently, TLS setup is not automatically carried out in the installation process described below.
The default RPC port can be changed in $HOME/.cheqdnode/config/config.toml.
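As a sketch, the corresponding section of config.toml looks like the following (the listen address shown is illustrative):

```toml
# $HOME/.cheqdnode/config/config.toml
[rpc]
# bind to 127.0.0.1 to keep RPC local-only, or 0.0.0.0 to expose it publicly
laddr = "tcp://127.0.0.1:26657"
```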
Firewall configuration
In addition to the P2P/RPC ports above, you need to allow the following ports through your firewall for the node to function correctly:
Domain Name System (DNS): Allow outbound traffic on TCP & UDP port 53 which allows your node to make DNS queries. Without this, your node will fail to make DNS lookups necessary to reach the peer-to-peer traffic ports for other nodes.
Tendermint allows more complex setups in production, where the ingress/egress to a validator node is proxied behind a "sentry" node.
While this setup is not compulsory, node operators with higher stakes or a need to have more robust network security may consider setting up a sentry-validator node architecture.
Alternative installation methods
The interactive installer is designed to set up and configure the node installation as a service that runs on a virtual machine. This is the recommended setup for most scenarios when running as a validator. A validator node is expected to run 24/7 for network stability and security, and therefore cannot be autoscaled up/down across multiple instances.
If you're not running a validator node, or if you want more advanced control on your setup, installing with a Docker image is also supported. This method is also useful for running a localnet or when running a node on non-Linux systems for development purposes.
Make sure that the CHEQD_HOME_DIRECTORY/.cheqdnode directory has cheqd:cheqd ownership. The following command sets it: sudo chown -R cheqd:cheqd CHEQD_HOME_DIRECTORY/.cheqdnode
Block Elasticity: Dynamic block sizes to address congestion, improving transaction times while mitigating volatility in fees and maximising block utilisation.
Congestion-Responsive: Adjustments maintain a balance between supply and demand for network resources.
Governance and Transparency: New tools for community-driven token supply management.
The community pool receives community_tax * fees, plus any remaining dust left over after validator rewards, which are always rounded down to the nearest integer value.
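As a toy illustration of this split, consider the arithmetic below. The tax rate, fee amount, and validator count are made-up values for illustration only; the real values are on-chain parameters set by governance.

```python
# Toy illustration of the community-tax split described above.
# The tax rate, fee amount, and validator count are made up;
# the real values are on-chain parameters set by governance.
block_fees = 1_000_000          # fees collected in a block (hypothetical)
community_tax = 0.02            # hypothetical community_tax parameter

community_share = int(block_fees * community_tax)
validator_pool = block_fees - community_share

# validator rewards are rounded down, so integer division leaves "dust"
num_validators = 3
per_validator = validator_pool // num_validators
dust = validator_pool - per_validator * num_validators

# the community pool receives its share plus the rounding dust
community_total = community_share + dust
```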
Community tax distribution
To spend tokens from the community pool:
A community-pool-spend proposal can be submitted on the network.
The recipient address and amount of tokens should be specified.
The purpose for which the requested community pool tokens will be spent should be described.
If the proposal is approved through the voting process, the specified recipient address will receive the requested tokens.
The recipient is expected to spend the tokens for the purpose specified in their proposal.
Cosmos supports multiple keyring backends for the storage and management of keys. Each node operator is free to use the key management method they prefer.
By default, the cheqd-noded binary is configured to use the os keyring backend, as it is a safer default than file-based key management methods.
For test networks or local networks, this can be overridden with the test keyring backend, which is less secure: it uses a file-based key storage mechanism where keys are stored unencrypted. To use the test keyring backend, append --keyring-backend test to each command related to key management or usage.
Types of keys on a cheqd node
Each cheqd validator node has at least two keys.
Node key
Default location is $HOME/.cheqdnode/config/node_key.json
Used for peer-to-peer communication
Validator key
Default location is $HOME/.cheqdnode/config/priv_validator_key.json
Used to sign consensus messages
Node-related commands in cheqd CLI
Creating a key
When a new key is created, an account address and a mnemonic backup phrase are printed. Keep the mnemonic safe: it is the only way to restore access to the account if the keyring cannot be recovered.
Command
Restoring a key from backup mnemonic phrase
Allows restoring a key from a previously-created BIP39 mnemonic phrase.
Command
Listing available keys on a node
Command
Using a key for transaction signing
Most transactions require the --from <key-alias> parameter, which is the name or address of the private key used to sign the transaction.
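A hedged sketch of these commands follows. The key alias alice, the amount, and the chain-id are illustrative; the subcommands follow the standard Cosmos SDK CLI.

```shell
# Create a new key: prints the account address and a mnemonic phrase
cheqd-noded keys add alice

# Restore a key from a previously-created BIP39 mnemonic phrase
cheqd-noded keys add alice --recover

# List the keys available on the node
cheqd-noded keys list

# Sign a transaction with a key via the --from parameter
# (recipient, amount, and chain-id below are illustrative)
cheqd-noded tx bank send alice <recipient-address> 1000000ncheq --chain-id cheqd-mainnet-1
```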
********* Latest stable cheqd-noded release version is Name: v0.6.0
********* List of cheqd-noded releases:
1) v0.6.0
2) v0.6.0-rc3
3) v0.6.0-rc2
4) v0.6.0-rc1
5) v0.5.0
Choose list option number above to select version of cheqd-node to install [default: 1]:
1
Set path for cheqd user's home directory [default: /home/cheqd]:
Do you want to setup a new cheqd-node? (yes/no) [default: yes]:
no
Select cheqd network to join (testnet/mainnet) [default: mainnet]:
mainnet
********* INFO: Installing cheqd-node with Cosmovisor allows for automatic unattended upgrades for valid software upgrade proposals.
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]:
yes
CAUTION: Downloading a snapshot replaces your existing copy of chain data. Usually safe to use this option when doing a fresh installation. Do you want to download a snapshot of the existing chain to speed up node synchronisation? (yes/no) [default: yes]:
yes
sudo su cheqd
cheqd-noded configure p2p external-address <your-new-external-address>
1) v0.5.0
2) v0.6.0
3) v0.6.0-rc2
4) v0.6.0-rc1
5) v0.5.0-rc2
Choose list option number above to select version of cheqd-node to install [default: 1]:
2
Set the path for cheqd user's home directory [default: /home/cheqd]:
Existing cheqd-node configuration folder detected. Do you want to upgrade an existing cheqd-node installation? (yes/no) [default: no]:
y
********* INFO: Installing cheqd-node with Cosmovisor allows for automatic unattended upgrades for valid software upgrade proposals.
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]:
y
Overwrite existing configuration for cheqd-node logging? (yes/no) [default: yes]:
y
Overwrite existing configuration for logrotate? (yes/no) [default: yes]:
y
sudo systemctl start cheqd-cosmovisor
sudo systemctl stop cheqd-noded
systemctl status cheqd-noded
● cheqd-noded.service - Service for running cheqd-node daemon
Loaded: loaded (/lib/systemd/system/cheqd-noded.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2022-07-07 12:31:12 UTC; 27s ago
Docs: https://docs.cheqd.io/node
Process: 13427 ExecStart=/usr/bin/cheqd-noded start (code=exited, status=0/SUCCESS)
Main PID: 13427 (code=exited, status=0/SUCCESS)
1) v0.5.0
2) v0.6.0
3) v0.6.0-rc2
4) v0.6.0-rc1
5) v0.5.0-rc2
Choose list option number above to select version of cheqd-node to install [default: 1]:
2
Set path for cheqd user's home directory [default: /home/cheqd]:
Existing cheqd-node configuration folder detected. Do you want to upgrade an existing cheqd-node installation? (yes/no) [default: no]:
y
********* INFO: Installing cheqd-node with Cosmovisor allows for automatic unattended upgrades for valid software upgrade proposals.
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]:
n
Overwrite existing systemd configuration for cheqd-node? (yes/no) [default: yes]:
n
Overwrite existing configuration for cheqd-node logging? (yes/no) [default: yes]:
n
Overwrite existing configuration for logrotate? (yes/no) [default: yes]:
n
Ubuntu's standard support for Ubuntu 20.04 LTS ("Long Term Support") ends in April 2025. Ubuntu 20.04 LTS is currently the default Linux operating system used to build and run the cheqd-node binaries. To ensure continued stability and security, we need to upgrade to the latest Long-Term Support (LTS) version, Ubuntu 24.04.
Additionally, to ensure compatibility with Ubuntu 24.04, you'll need to upgrade to cheqd-noded v2.0.2. This upgrade will address any existing security vulnerabilities and ensure that we are running on the most up-to-date software and distribution, providing a secure and reliable environment.
Please follow the guide to transition both the operating system and our node software smoothly.
Prerequisites
Backup critical data (e.g., keys, config files).
Be prepared for the expected downtime of your node (approximately 1 hour).
Ensure all dependencies and software running on your node (except cheqd-node) support the latest Ubuntu LTS.
Step 1: Prepare the Current System
Stop and disable the cheqd-cosmovisor.service:
Update sources list and upgrade existing packages:
You'll probably need to restart your server after this step (you'll see a message to that effect when you run the do-release-upgrade command):
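The preparation steps above can be sketched as follows. The service name comes from this guide; the apt commands are standard Ubuntu.

```shell
# Stop and disable the cheqd-cosmovisor service before upgrading the OS
sudo systemctl stop cheqd-cosmovisor.service
sudo systemctl disable cheqd-cosmovisor.service

# Update the sources list and upgrade existing packages
sudo apt update && sudo apt upgrade -y

# Reboot if the upgrade asks for it
sudo reboot
```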
Step 2: Upgrade to Ubuntu 22.04 LTS
Trigger the distribution upgrade:
During the process you'll be prompted with several questions; here are the most important ones:
Feel free to continue over SSH, since this shouldn't cause any issues with most cloud providers.
This is required, so continue.
After you agree to this, the actual upgrade process will begin.
After you SSH back in, check your distribution version; it should report Ubuntu 22.04 LTS if the upgrade completed successfully.
Step 3: Upgrade to Ubuntu 24.04 LTS
Trigger another distribution upgrade:
The remaining steps should be the same as in Step 2, although you won't be asked the same questions as in the first upgrade.
Note that you may be unable to proceed with this due to a package incompatibility:
On one of our nodes, the culprit was the postgres-client-15 package.
Step 4: Upgrade cheqd-node to v2.0.2
Download the latest version of the interactive installer:
Run the installer:
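A sketch of the download-and-run steps; the installer URL is an assumption, so check the cheqd-node documentation for the current location.

```shell
# Download the latest interactive installer (URL is an assumption --
# check the cheqd-node documentation for the current location)
wget https://raw.githubusercontent.com/cheqd/cheqd-node/main/installer/installer.py

# Run the installer and follow the prompts
sudo python3 installer.py
```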
Select the first option UPGRADE existing installation and proceed by choosing the v2.0.2 version from the list.
Step 5: Confirm your node caught up with the network
The final step is to make sure that your node is up and running and has synced with the network.
Run cheqd-noded status and check that "catching_up": false, and that latest_block_height and latest_block_time look correct.
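As a sketch, these fields can also be checked programmatically. The JSON shape below follows Tendermint's status response, and all values are made up for illustration.

```python
import json

# Hypothetical output of `cheqd-noded status`; field names follow
# Tendermint's status response, and the values are made up.
status_json = """
{
  "NodeInfo": {"network": "cheqd-mainnet-1"},
  "SyncInfo": {
    "latest_block_height": "9000000",
    "latest_block_time": "2024-01-01T00:00:00Z",
    "catching_up": false
  }
}
"""

status = json.loads(status_json)
sync_info = status["SyncInfo"]
caught_up = not sync_info["catching_up"]
print("caught up:", caught_up)
print("height:", sync_info["latest_block_height"])
```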
ADR 007: Revocation registry
Status
Category
Status
Authors
Renata Toktar
ADR Stage
DRAFT
Summary
Issued credentials need to be revocable by their issuers. Revocation needs to be straightforward and fast. Testing of revocation needs to preserve privacy (be non-correlating), and it should be possible to do without contacting the issuer.
Context
This has obvious use cases for professional credentials being revoked for fraud or misconduct, e.g., a driver’s license could be revoked for criminal activity. However, it’s also important if a credential gets issued in error (e.g., has a typo in it that misidentifies the subject). The latter case is important even for immutable and permanent credentials such as a birth certificate.
In addition, it seems likely that the data inside credentials will change over time (e.g., a person’s mailing address or phone number updates). This is likely to be quite common; revocation can be used to guarantee the currency of credential data when it happens. In other words, revocation may be used to force updated data, not just to revoke authorization.
Decision
REVOC_REG_DEF
Adds a Revocation Registry Definition, which an issuer creates and publishes for a particular Credential Definition. It contains public keys, the maximum number of credentials the registry may contain, a reference to the Credential Definition, and some revocation-registry-specific data.
value (dict):
Dictionary with Revocation Registry Definition's data:
max_cred_num (integer): The maximum number of credentials the Revocation Registry can handle
The Revocation Registry Entry contains the new accumulator value and issued/revoked indices. This is just a delta of indices, not the whole list. It can be sent each time a new credential is issued/revoked.
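A toy sketch of the delta idea: each entry carries only the indices that changed since the previous entry. The names and structures below are illustrative, not the on-ledger schema.

```python
# Toy sketch: a registry entry carries only the indices that changed
# since the previous entry, not the whole issued/revoked lists.
# Names and structures are illustrative, not the on-ledger schema.
issued = {1, 2, 3, 4}       # credential indices issued so far
revoked = {2}               # credential indices revoked so far

# a new entry's delta: two new credentials issued, one revoked
entry_delta = {"issued": {5, 6}, "revoked": {4}}

# applying the delta updates the accumulated state
issued |= entry_delta["issued"]
revoked |= entry_delta["revoked"]

currently_valid = issued - revoked
```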
💬 Cosmovisor is a process manager for Cosmos SDK application binaries that automates binary switching at chain upgrades. It polls the upgrade-info.json file created by the x/upgrade module at the upgrade height, and can then automatically download the new binary, stop the current binary, switch to the new one, and restart the node.
This guide explains the key configuration options for Cosmovisor when running a cheqd node. You can configure these settings via:
Environment variables
A config.toml file under $DAEMON_HOME/cosmovisor/ (by default), or any other location passed to cosmovisor with the --cosmovisor-config flag
Note: Environment variables always take precedence over values set in the config file if the --cosmovisor-config flag is not passed.
The cheqd node's interactive installer sets most of these parameters for you, both in the daemon service configuration file (cheqd-cosmovisor.service) and as system-wide environment variables. It also creates a config.toml file for consistency. Understanding these settings helps with troubleshooting and advanced setups.
Configuration Parameters
Parameter
Default Value
Required
Description
Set by Installer
📝 Additional Notes
Backups: Enabling backups can use significant disk space and time. Use with caution, especially on non-pruned nodes.
Custom Pre-upgrade Scripts: Use COSMOVISOR_CUSTOM_PREUPGRADE for advanced automation (e.g., state export).
Log Time Format: kitchen is human-readable. See for more options.
🔧 Example: systemd Service File
To set Cosmovisor parameters at the service level, you can edit the systemd service file, typically located at /usr/lib/systemd/system/cheqd-cosmovisor.service. Here is an example with custom values:
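For example, a service file with custom Cosmovisor values might look like the sketch below. The paths and values are illustrative; the environment variable names are standard Cosmovisor settings.

```ini
[Unit]
Description=Service for running cheqd-node daemon
After=network.target

[Service]
User=cheqd
# Standard Cosmovisor settings; the values here are illustrative
Environment="DAEMON_HOME=/home/cheqd/.cheqdnode"
Environment="DAEMON_NAME=cheqd-noded"
Environment="DAEMON_ALLOW_DOWNLOAD_BINARIES=true"
Environment="DAEMON_RESTART_AFTER_UPGRADE=true"
Environment="DAEMON_POLL_INTERVAL=300s"
Environment="UNSAFE_SKIP_BACKUP=true"
ExecStart=/usr/bin/cosmovisor run start
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```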
If you decide to use the config.toml file instead, feel free to remove the environment variables from the daemon service file and pass --cosmovisor-config with the path to your config file, e.g. ExecStart=/usr/bin/cosmovisor run start --cosmovisor-config /home/cheqd/.cheqdnode/cosmovisor/config.toml.
⚠️ Important: If you manually modify this file, the cheqd installer may overwrite your changes. When prompted during future installs, decline the update to preserve your custom settings.
🛠 Example: config.toml File
⚠️ Reminder: Like the service file, custom config.toml changes can be overwritten by the installer. Decline updates if you’ve made manual modifications.
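As a sketch, a config.toml equivalent of the same settings could look like the following. The key names are assumed to mirror the environment variables in lowercase; verify them against the Cosmovisor documentation for your version.

```toml
# Assumed key names mirroring the Cosmovisor environment variables;
# verify against your Cosmovisor version's documentation.
daemon_home = "/home/cheqd/.cheqdnode"
daemon_name = "cheqd-noded"
daemon_allow_download_binaries = true
daemon_restart_after_upgrade = true
daemon_poll_interval = "300s"
unsafe_skip_backup = true
```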
For further details, refer to the official .
Guide for validators
This document provides guidance on how to configure and promote a cheqd node to validator status. A validator node is necessary to participate in staking rewards, block creation, and governance.
Preparation steps
ADR 003: Command Line Interface (CLI) tools
Status
Category
Status
License
Creative Commons Attribution-ShareAlike 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
Section 1 – Definitions
a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.
d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike.
h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
j. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
Section 2 – Scope
a. License grant.
Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
Term. The term of this Public License is specified in Section 6(a).
Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
Downstream recipients.
A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. Other rights.
Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
Patent and trademark rights are not licensed under this Public License.
To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.
Section 3 – License Conditions
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. Attribution.
If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
b. ShareAlike.
In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.
You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
Section 4 – Sui Generis Database Rights
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
Section 5 – Disclaimer of Warranties and Limitation of Liability
a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
Section 6 – Term and Termination
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 – Other Terms and Conditions
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
Section 8 – Interpretation
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
Creative Commons may be contacted at creativecommons.org
issuance_type (string enum): Defines credential revocation strategy. Can have the following values:
ISSUANCE_BY_DEFAULT: All credentials are assumed to be issued and active initially, so that Revocation Registry needs to be updated (REVOC_REG_ENTRY transaction sent) only when revoking. Revocation Registry stores only revoked credentials indices in this case. Recommended to use if expected number of revocation actions is less than expected number of issuance actions.
ISSUANCE_ON_DEMAND: No credentials are issued initially, so that Revocation Registry needs to be updated (REVOC_REG_ENTRY transaction sent) on every issuance and revocation. Revocation Registry stores only issued credentials indices in this case. Recommended to use if expected number of issuance actions is less than expected number of revocation actions.
public_keys (dict): Revocation Registry's public key
id (string): Revocation Registry Definition's unique identifier (a key from state trie is currently used) owner:cred_def_id:revoc_def_type:tag
cred_def_id (string): The corresponding Credential Definition's unique identifier (a key from state trie is currently used)
revoc_def_type (string enum): Revocation Type. CL_ACCUM (Camenisch-Lysyanskaya Accumulator) is the only supported type now.
tag (string): A unique tag to have multiple Revocation Registry Definitions for the same Credential Definition and type issued by the same DID.
prev_accum (string): The previous accumulator value. It is compared with the current value, and transaction is rejected if they don't match. This is necessary to avoid dirty writes and updates of accumulator.
issued (list of integers): An array of issued indices (may be absent/empty if the type is ISSUANCE_BY_DEFAULT). This is a delta and will be accumulated in state.
revoked (list of integers): An array of revoked indices. This is a delta and will be accumulated in state.
revoc_reg_def_id (string): The corresponding Revocation Registry Definition's unique identifier (a key from state trie is currently used)
revoc_def_type (string enum): Revocation Type. CL_ACCUM (Camenisch-Lysyanskaya Accumulator) is the only supported type now.
There will be some configuration changes and you'll probably be asked if you want to install a new version of certain software (see the screenshots and examples below). In most cases, it is completely fine to overwrite configuration changes, but be cautious if you have some custom configurations, since these actions will overwrite them:
After that, you'll be asked if you want to remove obsolete packages. It's always a good idea to check the details (by pressing d), especially if you have other software running on your node. In our case, we agreed to remove unused packages:
Finally, you'll be asked to restart your system to apply the changes. Your server should automatically reboot, but it's good to check with your cloud provider documentation and make sure this action won't cause any data loss.
It's probable that the SSH host key fingerprint will change after the upgrade, so you won't be able to SSH into your server immediately. To resolve this, you'll need to run the following command on your localhost:
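For example, with ssh-keygen. The address 198.51.100.10 below is a placeholder documentation address; substitute your server's hostname or IP. The demo operates on a temporary file so it is safe to run anywhere; in practice, omit the -f flag so ssh-keygen edits the default ~/.ssh/known_hosts.

```shell
# Remove the stale host key entry for the server from a known_hosts file.
# 198.51.100.10 is a placeholder address; in real use, drop the -f flag
# so ssh-keygen edits ~/.ssh/known_hosts directly.
KNOWN_HOSTS=$(mktemp)
echo "198.51.100.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB" > "$KNOWN_HOSTS"
ssh-keygen -R 198.51.100.10 -f "$KNOWN_HOSTS"
```

After clearing the entry, the next SSH connection will prompt you to accept the server's new fingerprint.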
Check the distribution version:
We resolved this issue by removing this package and installing it again after a successful upgrade:
After you complete the upgrade and restart your node, SSH back into your server and check the distribution version. If everything was successful, you should get output like this:
For the remaining questions, answer based on your previous setup and complete the binary upgrade.
Check the version of your cheqd-node:
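For example:

```
cheqd-noded version
```

The output should match the release you expect to be running (currently v3.x.x).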
Re-enable cheqd-cosmovisor.service and start your node:
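Assuming the standard service name from the installation guide, this can be done with systemctl:

```
sudo systemctl enable cheqd-cosmovisor.service
sudo systemctl start cheqd-cosmovisor.service
```

You can then confirm the service is active with sudo systemctl status cheqd-cosmovisor.service.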
Step 1: Ensure you have a cheqd node installed as a service
You must already have a running cheqd-node instance installed using one of the supported methods.
Please also ensure the node is fully caught up with the latest ledger updates.
When you create a new key, a new account address and mnemonic backup phrase will be printed. Keep the mnemonic phrase safe, as this is the only way to restore access to the account if the keyring cannot be recovered.
Note: if you are using a Ledger Nano device, it may be helpful to follow these instructions.
The validator account address is generated in Step 1 above when a new key is added. To show the validator account address, follow the cheqd CLI guide on managing accounts.
(The assumption above is that there is only one account / key that has been added on the node. In case you have multiple addresses, please jot down the preferred account address.)
Promote a node to validator after acquiring CHEQ tokens for staking
Ensure your account has a positive balance
Follow the guidance on using cheqd CLI to manage accounts to check that your account is correctly showing the CHEQ testnet tokens provided to you.
Get your node's validator public key
The node validator public key is required as a parameter for the next step. More details on the validator public key are provided in the cheqd CLI guide on managing nodes.
Promote your node to validator status by staking your token balance
You can decide how many tokens you would like to stake from your account balance. For instance, you may want to leave a portion of the balance for paying transaction fees (now and in the future).
To promote your node to validator status, first prepare a JSON file named validator.json with all the validator information:
Parameters required in the json file above are:
amount: Amount of tokens to stake. You should stake at least 1 CHEQ (= 1,000,000,000ncheq) to successfully complete a staking transaction.
from: Key alias of the node operator account that makes the initial stake
min-self-delegation: Minimum amount of tokens that the node operator promises to keep bonded
pubkey: Node's bech32-encoded validator public key from the previous step
commission-rate: Validator's commission rate. The minimum is set to 0.05.
commission-max-rate: Validator's maximum commission rate, expressed as a number with up to two decimal points. The value for this cannot be changed later.
commission-max-change-rate: Maximum rate of change of a validator's commission rate per day, expressed as a number with up to two decimal points. The value for this cannot be changed later.
Please note that the parameter values above are just examples.
Looking at existing validators, you can see the commission rate, maximum rate, and maximum change rate they set. Use these as a guide when deciding your own commission configuration. This is important to get right, because the commission-max-rate and commission-max-change-rate cannot be changed after they are initially set.
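As an illustrative sketch, validator.json might look like the following (every value is an example, not a recommendation; the pubkey placeholder comes from the previous step, and the from key alias is supplied when submitting the transaction). The heredoc below simply writes the file:

```shell
# Write an example validator.json; all values are illustrative placeholders.
cat > validator.json <<'EOF'
{
  "pubkey": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "<base64-validator-pubkey>"},
  "amount": "1000000000ncheq",
  "moniker": "my-validator",
  "commission-rate": "0.05",
  "commission-max-rate": "0.20",
  "commission-max-change-rate": "0.01",
  "min-self-delegation": "1"
}
EOF
```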
Submit the create-validator transaction to the chain:
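A sketch of the submission (the key alias is a placeholder; the chain ID and RPC endpoint shown are for cheqd mainnet, so adjust for testnet):

```
cheqd-noded tx staking create-validator validator.json --from <key-alias> --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --node https://rpc.cheqd.net:443
```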
Check that your validator node is bonded
You can check that the validator is correctly bonded via any node:
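For example, querying the full validator set (the RPC endpoint below is cheqd's public one; any synced node works):

```
cheqd-noded query staking validators --node https://rpc.cheqd.net:443 --output json
```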
Find your node by moniker and make sure that status is BOND_STATUS_BONDED.
Check that your validator node is signing blocks and taking part in consensus
Find out your node's validator address and look for "ValidatorInfo":{"Address":"..."}:
Query the latest block. Open <node-address>:<rpc-port>/block in a web browser. Make sure that there is a signature with your validator address in the signature list.
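Equivalently, from the command line, assuming the default RPC port 26657 and jq installed (both commands are sketches against a locally running node):

```
# Your node's validator address
curl -s http://localhost:26657/status | jq '.result.validator_info'
# Signatures on the latest block; your validator address should appear here
curl -s http://localhost:26657/block | jq '.result.block.last_commit.signatures'
```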
Using Ledger Nano device
To use your Ledger Nano you will need to complete the following steps:
Set-up your wallet by creating a PIN and passphrase, which must be stored securely to enable recovery if the device is lost or damaged.
Connect your device to your PC and update the firmware to the latest version using the Ledger Live application.
Install the Cosmos application using the software manager (Manager > Cosmos > Install).
Adding a new key
In order to use the hardware wallet address with the CLI, the user must first add it via cheqd-noded. This process only records the public information about the key.
To import the key, first plug in the device and enter the device PIN. Once you have unlocked the device, navigate to the Cosmos app on the device and open it.
To add the key use the following command:
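A sketch of the command (the key name is a placeholder; the flags are explained in the note below):

```
cheqd-noded keys add <key-name> --ledger --index 0
```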
Note
The --ledger flag tells the command line tool to talk to the ledger device and the --index flag selects which HD index should be used.
When running this command, the Ledger device will prompt you to verify the generated address. Once you have done this, you will get output in the following form:
Next steps
On completion of the steps above, you will have successfully bonded your node as a validator on the cheqd testnet and will be participating in staking/consensus.
Learn more about what you can do with your new validator node in the cheqd CLI guide.
Implementation Status
Implemented
Start Date
2021-09-10
Summary
Due to the nature of the cheqd project merging concepts from the Cosmos blockchain framework and self-sovereign identity (SSI), there are two potential options for creating Command Line Interface (CLI) tools for developers to use:
Cosmos-based CLI: Most likely route for Cosmos projects for their node application. Most existing Cosmos node validators will be familiar with this method of managing their node.
VDR CLI: Traditionally, a lot of SSI networks have used Hyperledger Indy and therefore the Indy CLI tool for managing and interacting with the ledger. This has now been renamed to Verifiable Data Registry (VDR) Tools CLI and is the tool that most existing SSI node operators ("stewards") would be familiar with.
Ideally, the cheqd-node project would provide a consistent set of CLI tools rather than two separate tools with varying feature sets between them.
This ADR will focus on the CLI tool architecture choice for cheqd-node.
Context
Assumptions / Considerations
Likelihood of introducing bugs or security vulnerabilities
Any CLI tool architecture chosen should not increase the likelihood of introducing bugs, security vulnerabilities, or design pattern deviations from upstream Cosmos SDK.
Actions that are carried out on ledger through a CLI tool in cheqd-node now include token functionality as well as identity functionality. E.g., if a DID gets compromised, there could be mechanisms to recover or signal that fact to issuers/verifiers. If tokens or the staking balance of node operators get compromised, this may potentially have more severe consequences for them.
User / Node Operator preferences
Would node operators want a single CLI to manage everything?
This might be the case with node operators from an SSI / digital identity background, or node operators familiar with Hyperledger Indy CLI / VDR Tools CLI.
A “single CLI” could be a single tool as far as the user sees, but actually consist of multiple modules beneath it in how it’s implemented.
Would node operators be okay with having two separate CLIs?
One for Cosmos-ledger functions, and one for identity-specific functions.
Unlike existing Hyperledger Indy networks, it is anticipated that some of the node operators on the cheqd network will have experience running Cosmos validator nodes. For this group, having to learn a new “single” CLI tool could cause a steeper learning curve and a worse user experience than what they have now.
Options considered
1. Keep both Cosmos CLI and VDR Tools CLI, but use them for different purposes
Pros:
Simple to do, no changes needed in code developed.
Differences in functionality between the two CLIs can be explained in documentation.
Node operators with good technical skills will understand the difference.
Cosmos CLI design patterns would be consistent with the wider Cosmos open-source ecosystem.
No steep learning curve for potential node operators who only want to run a node, without implementing SSI functionality in apps.
Cons:
Key storage for Cosmos accounts may need to be done in two different keystores.
Potentially confusing for node operators who use both CLIs to know which one to use for what purpose.
Potentially a steeper learning curve for existing SSI node operators.
2. Implement overlapping functionality in both CLI tools
Pros:
Both Cosmos CLI and VDR Tools CLI would have native support for identity as well as token transactions.
Node operators/developers could pick their preferred CLI tool.
Cons:
Significant development effort required to implement very similar functionality two separate times, with little difference to the end user in actions that can be executed.
VDR Tools CLI has DID / VC modules that would take significant effort to recreate in Cosmos CLI
Cosmos CLI has token related functionality that would take significant development effort to replicate in VDR Tools CLI, and opens up the possibility that errors in implementation could introduce security vulnerabilities.
3. Create aliases for commands in one of the CLI tools in the other CLI tool
Commands in the Cosmos CLI could be made available as aliases in the VDR Tools CLI, or vice versa.
Pros:
Single CLI tool to learn and understand for node operators.
Development effort is simplified, as overlapping functionality is not implemented in two separate tools.
Cons:
Less development effort required than Option 2, but greater than Option 1.
Opens up the possibility that there's deviation in feature coverage between the two CLIs if aliases are not created to make 1:1 feature parity in both tools.
Decision
Based on the options considered above and an analysis of the development effort required, the decision was taken to maintain two separate CLI tools:
cheqd-node Cosmos CLI: Any Cosmos-specific features, such as network & node management, token functionality required by node operators, etc.
VDR Tools CLI: Any identity-specific features required by issuers, verifiers, holders on SSI networks.
Positive
Faster time-to-market on the CLI tools, while freeing up time to build out user-facing functionality.
Negative
Cosmos account keys may need to be replicated in two separate key storages. A potential resolution for this in the future is to integrate the ability to use a single keyring for both CLI tools.
Neutral
Seek feedback from cheqd's open source community and node operators during testnet phase on whether the documentation and user experience is easy to understand and appropriate tools are available.
References
Authors
Alexandr Kolesov
ADR Stage
ACCEPTED
FAQs for validators
How do I stake more tokens after setting up a validator node?
When you set up your Validator node, it is recommended that you only stake a very small amount from the actual Validator node. This is to minimise the tokens that could be locked in an unbonding period, were your node to experience significant downtime.
You should delegate the rest of your tokens to your Validator node from a different key alias.
How do I do this?
You can add as many additional keys as you want using the function:
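For example (the alias is a placeholder of your choosing):

```
cheqd-noded keys add <alias>
```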
When you create a new key, a mnemonic phrase and account address will be printed. Keep the mnemonic phrase safe, as this is the only way to restore access to the account if the keyring cannot be recovered.
You can view all created keys using the function:
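For example:

```
cheqd-noded keys list
```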
You are able to transfer tokens between key accounts using the function.
You can then delegate to your Validator Node using the function.
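Sketches of both commands (addresses, amounts, chain ID, and flags are illustrative; adjust them to your network and balances):

```
# Transfer tokens from one of your key accounts to another address
cheqd-noded tx bank send <from-alias> <to-address> 1000000000ncheq --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4

# Delegate tokens to your Validator Node
cheqd-noded tx staking delegate <validator-operator-address> 1000000000ncheq --from <delegator-alias> --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4
```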
We use a second/different Virtual Machine to create these new accounts/wallets. In this instance, you only need to install cheqd-noded as a binary; you don't need to run it as a full node.
And then since this VM is not running a node, you can then append the --node parameter to any request and target the RPC port of the VM running the actual node.
That way:
The second node doesn't need to sync the full blockchain; and
You can separate out the keys/wallets, since the IP address of your actual node will be public by definition and people can attack it or try to break in
How much storage should I provision?
I’d recommend at least 250 GB at the current chain size. You can choose to go higher, so that you don’t need to revisit this. Within our team, we set alerts on our cloud providers/Datadog to raise alerts when nodes reach 85-90% storage used which allows us to grow the disk storage as and when needed, as opposed to over-provisioning.
Is there any way to use less storage?
Yes, you can, by setting more aggressive pruning parameters in the app.toml file.
Here’s the relevant section in the file:
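A sketch of that section (the values shown are illustrative, not recommendations; pruning-keep-recent and pruning-interval only take effect with pruning = "custom"):

```
# app.toml (pruning section)
pruning = "default"            # "default" | "nothing" | "everything" | "custom"
pruning-keep-recent = "100"    # recent application states to keep (custom mode)
pruning-interval = "10"        # block interval at which pruning runs (custom mode)
```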
Please also see this thread on the trade-offs involved. This will help to some extent, but please note that chain size growth is a general property of all blockchains. We recommend using alerting policies to grow the disk storage as needed, which is less likely to require higher spend due to over-provisioning.
How do I withdraw Validator Rewards including Commission?
Validators can withdraw their rewards, including commission, directly via the command-line interface (CLI). This feature is essential for managing earned rewards efficiently.
Command for Withdrawing Rewards with Commission
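Based on the parameters explained below, the command takes roughly this form (the validator address, wallet name, chain ID, and gas flags are placeholders/examples):

```
cheqd-noded tx distribution withdraw-rewards cheqdvaloper... --commission --from <wallet-name> --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4
```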
Explanation of Command Parameters
cheqdvaloper...: Insert your validator operator address.
--commission: Ensures that commission rewards are included in the withdrawal.
--from <wallet-name>: Specifies the wallet from which the transaction will be initiated.
How do I monitor the status of my node?
One of the simplest ways to do this is to , and with a more detailed view on the per-validator page (, for example). The condition is scored based on :
Green: 90-100% blocks signed
Amber: 70-90% blocks signed
Red: 1-70% blocks signed
We have also built a tool internally that takes the condition score output from the block explorer GraphQL API and makes it available as a simple REST API, which can be used to send alerts on Slack, Discord, etc. We have set this up on our own Slack/Discord.
Please join the 'mainnet-alerts' channel on the cheqd Community Slack.
In addition to that, (for those who already use it for monitoring/want to set one up) that has metrics for monitoring node status (and a lot more).
Are there any other ways to optimise?
Yes! Here are a few other suggestions:
You can check the current status of disk storage used on all mount points manually through the output of df -hT
The default storage path for cheqd-node is on /home/cheqd. By default, most hosting/cloud providers will set this up on a single disk volume under the / (root) path. If you move and mount /home on a separate disk volume, this will allow you to expand the storage independently of the main volume. This can sometimes make a difference, because if you leave /home on the root volume, you cannot grow its storage independently of the operating system disk.
What is Commission rate and is it important?
As a Validator Node, you should be familiar with the concept of commission. This is the percentage of tokens that you take as a fee for running the infrastructure on the network. Token holders are able to delegate tokens to you, with an understanding that they can earn staking rewards, but as consideration, you are also able to earn a flat percentage fee of the rewards on the delegated stake they supply.
There are three commission values you should be familiar with:
The first is the maximum rate of commission that you will be able to move upwards to.
Please note that this value cannot be changed once your Validator Node is set up, so be careful and do your research.
The second parameter is the maximum amount of commission you will be able to increase by within a 24 hour period. For example if you set this as 0.01, you will be able to increase your commission by 1% a day.
The third value is your current commission rate.
Points to note:
A lower commission rate means a higher likelihood of token holders delegating tokens to you, because they will earn more rewards. However, with a very low commission rate, you might find in the future that the gas fees on the network outweigh the rewards earned through commission.
A higher commission rate means you earn more tokens from the existing stake plus delegated tokens, with the trade-off that it may appear less desirable to new delegators compared to other Validators.
You can have a look at other projects on Cosmos to get an idea of the percentages that nodes set as commission.
What is Gas and Gas Prices?
When setting up the Validator, the gas parameter is the amount of gas you are willing to spend on a transaction.
For simplicity, we suggest setting:
AND setting:
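Concretely, these are transaction flags; the settings suggested here (the 1.4 adjustment value matches the example transactions in this guide) are:

```
--gas auto
--gas-adjustment 1.4
```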
These parameters, together, make it highly likely that the transaction will go through and not fail. Setting gas to auto without the gas adjustment risks the transaction failing if gas prices increase.
Gas prices also come into play here: the lower your gas price, the more likely that your node will be considered in the active set for rewards.
We suggest that the gas price you set should fall within this recommended range:
Low: 25ncheq
Medium: 50ncheq
High: 100ncheq
How do I change my public name and description?
Your public name is also known as your moniker.
You are able to change this, as well as the description of your node, using the function:
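A sketch of the command (all values are placeholders; depending on your cheqd-noded/Cosmos SDK version, the moniker flag is either --new-moniker or --moniker):

```
cheqd-noded tx staking edit-validator --new-moniker "<new-public-name>" --details "<new-description>" --from <key-alias> --chain-id cheqd-mainnet-1
```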
Should I set my firewall port 26656 open to the world?
Yes, this is how you should do it. Since it's a public permissionless network, there's no way of pre-determining what the set of IP addresses will be, as entities may leave and join the network. For security reasons, we suggest using a TCP/network load balancer and keeping your VM/node in a private subnet. The load balancer then becomes your network edge, which your cloud provider manages, patches, and runs if you're hosting with one.
Use fee abstraction
Overview
A cheqd-node instance can be controlled and configured using the .
This document contains the Fee Abstraction commands for using governance-approved alternative tokens, e.g., a stablecoin such as USDC. The Fee Abstraction module routes requests for supported IBC denominations to , and uses existing liquidity pools (e.g., the ) to convert the tokens on the fly. The underlying transaction on cheqd network is always funded in CHEQ tokens.
Optimising disk storage with pruning
Context
Cosmos SDK and Tendermint have a concept of pruning, which allows reducing the disk utilisation of a node.
There are two kinds of pruning controls available on a node:
sudo lsb_release -a
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
Reading cache
Checking package manager
Continue running under SSH?
This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.
If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?
Continue [yN]
No valid mirror found
While scanning your repository information no mirror entry for the
upgrade was found. This can happen if you run an internal mirror or
if the mirror information is out of date.
Do you want to rewrite your 'sources.list' file anyway? If you choose
'Yes' here it will update all 'jammy' to 'noble' entries.
If you select 'No' the upgrade will cancel.
Continue [yN]
Do you want to start the upgrade?
1 installed package is no longer supported by Canonical. You can
still get support from the community.
50 packages are going to be removed. 135 new packages are going to be
installed. 587 packages are going to be upgraded.
You have to download a total of 1,030 M. This download will take
about 3 minutes with a 40Mbit connection and about 27 minutes with a
5Mbit connection.
Fetching and installing the upgrade can take several hours. Once the
download has finished, the process cannot be canceled.
Continue [yN] Details [d]
sudo do-release-upgrade
Calculating the changes
Calculating the changes
Could not calculate the upgrade
An unresolvable problem occurred while calculating the upgrade.
The package 'postgresql-client-15' is marked for removal but it is in
the removal deny list.
If none of this applies, then please report this bug using the
command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. If
you want to investigate this yourself the log files in
'/var/log/dist-upgrade' will contain details about the upgrade.
Specifically, look at 'main.log' and 'apt.log'.
Restoring original system state
Configuration file '/etc/systemd/timesyncd.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** timesyncd.conf (Y/I/N/O/D/Z) [default=N] ?
Configuration file '/etc/sudoers'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** sudoers (Y/I/N/O/D/Z) [default=N] ?
Remove obsolete packages?
84 packages are going to be removed.
Continue [yN] Details [d]
System upgrade is complete.
Restart required
To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.
Continue [yN]
cheqd-noded tx bank send <from> <to-address> <amount> --node <url> --chain-id <chain> --gas auto --gas-adjustment 1.4
cheqd-noded tx staking delegate <validator address> <amount to stake> --from <key alias> --gas auto --gas-adjustment 1.4 --gas-prices 5000ncheq
default: the last 100 states are kept in addition to every 500th state; pruning at 10 block intervals
nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node)
everything: all saved states will be deleted, storing only the current state; pruning at 10 block intervals
custom: allow pruning options to be manually specified through 'pruning-keep-recent', 'pruning-keep-every', and 'pruning-interval'
pruning = "default"
These are applied if and only if the pruning strategy is custom.
pruning-keep-recent = "0"
pruning-keep-every = "0"
pruning-interval = "0"
cheqd-noded tx distribution withdraw-rewards cheqdvaloper... --commission --from <wallet-name> --gas auto --gas-adjustment 1.7 --gas-prices 5000ncheq --chain-id cheqd-mainnet-1
cheqd-noded tx staking edit-validator --from validator1-eu --moniker "cheqd" --details "cheqd is building a private and secure decentralised digital identity network on the Cosmos ecosystem" --website "https://www.cheqd.io" --identity "F0669B9ACEE06ADC" --security-contact [email protected] --gas auto --gas-adjustment 1.4 --gas-prices 5000ncheq --chain-id cheqd-mainnet-1
Various commands are available for declaring fees in transactions, along with querying and configuring allowed denominations through governance proposals.
Supported IBC denominations
Transaction fees cannot be denominated in arbitrary alternative tokens: supported IBC denominations must be approved through decentralised governance before they can be used to pay for transactions.
Prerequisites for using Fee Abstraction
To pay for a transaction in an alternative token, the equivalent amount in a supported IBC denomination must be available in the cheqd account/key used for the transaction, bridged over from an Osmosis account.
Bridging from Osmosis
You can find out if you've got sufficient balance in supported IBC denominations using the following methods:
Once you've added the cheqd wallet account to Leap Wallet, use the network switcher to switch to Osmosis. This will allow you to see the balances you have on Osmosis chain, including native OSMO as well as any IBC denominations such as USDC.
Sending tokens over IBC
If you have an existing Osmosis account, you can send tokens over IBC to your cheqd account by transferring them from your Osmosis address to your cheqd address. In Leap Wallet, use the Swap flow to send tokens over IBC; the tokens will arrive in your cheqd account and be available for use in transactions. Alternatively, you can use the osmosisd tx ibc-transfer transfer command to send tokens over IBC from your Osmosis account to your cheqd account: osmosisd tx ibc-transfer transfer <src-port> <src-channel> <to_address> <amount> --from <key_or_address> --node <url>
Getting sufficient balance of supported IBC denominations on equivalent Osmosis account
If you do not have sufficient balances in supported IBC denominations on Osmosis, you need to top-up the specific token you want in your Osmosis account
For USDC
If you already have USDC (regardless of which chain it is on), use Noble Express Transfer to transfer it first as Cosmos-native Noble USDC into Osmosis, before bridging it to cheqd. This is the fastest way to get USDC on Osmosis.
Acquire USDC on Osmosis by swapping existing tokens to USDC. You can use any token you have on Osmosis to swap for USDC. The most common tokens to swap for USDC are OSMO and ATOM. You can use the Osmosis DEX to swap tokens.
Once you have sufficient USDC on Osmosis, you can use the CLI or Leap Wallet to send the USDC over IBC to your cheqd account. The command will look like this:osmosisd tx ibc-transfer transfer <src-port> <src-channel> <to_address> <amount> --from <key_or_address> --node <url>
Once the USDC is sent over IBC, you can use the cheqd-noded query bank balances <address> command to check the balance of your cheqd account. This will show you the balance of USDC that was sent over IBC. Refer to the Appendix for the IBC denomination of USDC on Osmosis.
You can also use the cheqd-noded query feeabs osmo-arithmetic-twap <ibc_denom> command to check the real-time rate for USDC on cheqd. This will show you the exchange rate in USDC that is required to pay for transactions on cheqd.
Interacting with Fee Abstraction
Querying allowed IBC denominations via CLI
As tokens supported for fee abstraction can only be allowlisted by governance, use the following command to find out which IBC denominations are supported:
Arguments
--node: IP address or URL of node to send the request to
--output json (optional): Provides the output in JSON format
Example
Response
The ibc_denom is the IBC denomination of the supported asset on cheqd (not the IBC denomination of the asset on Osmosis).
The osmosis_pool_token_denom_in is the IBC denomination of the asset on Osmosis.
The pool_id is the ID of the liquidity pool on Osmosis that is used for the conversion.
If the status is 0, the IBC denomination is allowed for transactions. If the status is 1, it is not. Likewise, if an IBC denomination is not included in the response, it is not allowed for transactions. The status can vary based on the relaying channel and on whether the channel is outdated and/or frozen.
Querying allowed IBC denominations via REST API
You can also query allowed IBC denominations using the REST API, which can be useful for applications that do not use the node CLI. You can fetch this by initiating a GET request to:
Note: The real-time gas prices are used to calculate the fees for transactions. Denominations for fees are specified in the chain-native denomination, which is ncheq. You'll need to convert the gas price amount to the desired IBC denomination, as described within the next section.
Converting gas prices from ncheq to IBC denomination and vice versa
Arguments
ibc_denom: The IBC denomination to convert the gas price to
--node: IP address or URL of node to send the request to
--output json (optional): Provides the output in JSON format
Example
If the desired IBC denomination is not configured, the response will error out with a message indicating that the IBC denomination is not allowed for transactions.
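The conversion itself is simple arithmetic once the TWAP rate is known. A minimal sketch, where the TWAP value is hypothetical (not a live rate) and is assumed to be expressed in ncheq per base unit of the IBC denomination:

```shell
# All values are illustrative, not live rates.
twap="31250000000"        # hypothetical TWAP: ncheq per base unit of the IBC denom
gas_price_ncheq="5000"    # chain-native gas price, as used elsewhere in this document
# Equivalent gas price expressed in the IBC denomination:
ibc_gas_price=$(awk -v t="$twap" -v g="$gas_price_ncheq" 'BEGIN { printf "%.8f", g / t }')
echo "$ibc_gas_price"     # prints 0.00000016 for the illustrative rate above
```

For these illustrative numbers, the result matches the 0.00000016ibc/... style gas price shown in the fee declaration arguments below.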
Declaring fees in transactions
Arguments
from_key_or_address: The key or address of the sender. If the key is used, the key must be unlocked in the keyring.
to_address: The address of the recipient
amount: The amount to send, in the format 1000ncheq. The amount must be in the chain-native denomination, which is ncheq
gas: The amount of gas to use or auto to calculate the gas automatically
gas_prices: The gas prices to use, in the format 5000ncheq or 0.00000016ibc/F5FABF52B54E65064B57BF6DBD8E5FAD22CEE9F4B8A57ADBB20CCD0173AA72A4
fees: The fees to pay, in the format 1000000000ncheq or 0.032ibc/F5FABF52B54E65064B57BF6DBD8E5FAD22CEE9F4B8A57ADBB20CCD0173AA72A4
--node: IP address or URL of node to send the request to
Note: Use either --gas and --gas-prices or --fees to declare fees in transactions. If --fees is used, the --gas and --gas-prices flags are ignored.
Example
Declaring fees in transactions using --gas and --gas-prices
Declaring fees in transactions using --fees
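The commands for the two variants above were presumably of the following shape; this is a sketch using the USDC IBC denomination from the Appendix and the gas price/fee values shown in the arguments above, with placeholder addresses and amounts. The commands are printed as a dry run rather than executed:

```shell
usdc_denom="ibc/F5FABF52B54E65064B57BF6DBD8E5FAD22CEE9F4B8A57ADBB20CCD0173AA72A4"
# Variant 1: estimate gas automatically and price it in the IBC denomination
cmd_gas="cheqd-noded tx bank send <from> <to-address> 10000000000ncheq --gas auto --gas-adjustment 1.4 --gas-prices 0.00000016${usdc_denom} --node <url>"
# Variant 2: declare a flat fee instead; --gas and --gas-prices are then ignored
cmd_fees="cheqd-noded tx bank send <from> <to-address> 10000000000ncheq --fees 0.032${usdc_denom} --node <url>"
echo "$cmd_gas"
echo "$cmd_fees"
```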
Note: Use with the -y flag to skip the confirmation prompt. Any fees declared in transactions are deducted from the sender's account by first converting the fees to the chain-native denomination, which is ncheq. Use with any compatible transaction type declared in the tx command.
Declaring fees for identity transactions
Identity transactions require fees to be declared using the --fees flag. The fees are deducted from the sender's account by first converting the fees to the chain-native denomination, which is ncheq.
Similarly, declare fees for DID-Linked Resource transactions using the --fees flag.
Configuring allowed IBC denominations through governance proposals
Submitting a governance proposal to allow an IBC denomination
Arguments
proposal-file: The JSON file containing the proposal details
key: The key of the proposer to submit the proposal on behalf of
--node: IP address or URL of node to send the request to
Example
Note: Use with the add-hostzone-config proposal type to allow an IBC denomination for transactions. The proposal must be submitted by a proposer with the required deposit amount in the chain-native denomination, which is ncheq. The proposal details are specified in the JSON file. Fees can be declared using either the --gas and --gas-prices flags or the --fees flag.
Appendix
ibc/498A0751C798A0D9A389AA3691123DADA57DAA4FE165D5C75894505B876BA6E4: The $USDC IBC denomination for the Osmosis chain
ibc/F5FABF52B54E65064B57BF6DBD8E5FAD22CEE9F4B8A57ADBB20CCD0173AA72A4: The $USDC IBC denomination for the cheqd chain (bridged from Osmosis)
ibc/92AE2F53284505223A1BB80D132F859A00E190C6A738772F0B3EF65E20BA484F: The $EURe IBC denomination for the Osmosis chain
ibc/7A08C6F11EF0F59EB841B9F788A87EC9F2361C7D9703157EC13D940DC53031FA: The $CHEQ IBC denomination for the Osmosis chain
Tendermint pruning: This impacts the ~/.cheqdnode/data/blockstore.db/ folder by only retaining the last n specified blocks. Controlled by the min-retain-blocks parameter in ~/.cheqdnode/config/app.toml.
Cosmos SDK pruning: This impacts the ~/.cheqdnode/data/application.db/ folder and prunes Cosmos SDK app-level state (a logical layer higher than Tendermint, which is just peer-to-peer). These are set by the pruning parameters in the ~/.cheqdnode/config/app.toml file.
This can be done by modifying the pruning parameters inside /home/cheqd/.cheqdnode/config/app.toml file.
You can check which version of cheqd-noded you're running using the cheqd-noded version command.
The output should be a version higher than v1.3.0. If you're on a lower version, you can either manually upgrade the node binary or use the interactive installer to execute an upgrade while retaining settings.
The instructions below assume that the home directory for the cheqd user is set to the default value of /home/cheqd. If this is not the case for your node, please modify the commands below to the correct path.
(Substitute with cheqd-noded.service if you're running a standalone node rather than with Cosmovisor.)
Switch to the cheqd user and configuration directory
Switch to the cheqd user and then the .cheqdnode/config/ directory.
Display current directory usage/size for the node data folder
Before you make changes to pruning configuration, you might want to capture the existing usage first (only copy the command bit, not the full line):
The du -h -d 1 ... command above prints the disk usage for the specified folder down to one folder level depth (-d 1 parameter) and prints the output in GB/MB (-h parameter, which prints in human-readable values).
Open the app.toml file for editing
Open the app.toml file once you've switched to the ~/.cheqdnode/config/ folder using your preferred text editor, such as nano:
Instructions on how to use text editors such as nano are out of scope for this document. If you're unsure how to use it, consider following guides on how to use nano.
Choose a Cosmos SDK pruning strategy
⚠️ If your node was configured to work with release version v1.2.2 or earlier, you may have been advised to run in pruning="nothing" mode due to a bug in Cosmos SDK.
Ensure you've upgraded to the latest stable release, using the installer or otherwise. If you're running a validator node, you're recommended to change this value to pruning="default".
The file should already be populated with values. Edit the pruning parameter value to one of following:
pruning="nothing" (highest disk usage): This will disable Cosmos SDK pruning and set your node to behave like an "archive" node. This mode consumes the highest disk usage.
pruning="default" (recommended, moderate disk usage): This keeps the last 100 states in addition to every 500th state, and prunes on 10-block intervals. This configuration is safe to use on all types of nodes, especially validator nodes.
pruning="everything" (lowest disk usage): This mode is not recommended when running validator nodes. It keeps only the current state and prunes at 10-block intervals. This setting is useful for nodes such as seed/sentry nodes, as long as they are not used to serve RPC/REST API requests.
pruning="custom" (custom disk usage): If you set the pruning parameter to custom, you will have to modify two additional parameters:
pruning-keep-recent: This defines how many recent states are kept, e.g., 250 (contrast this against default, which keeps the last 100 states)
pruning-interval: This defines how often pruning happens, specified in blocks (the default strategy prunes at 10-block intervals)
Although the parameters named pruning-* are only supposed to take effect if the pruning strategy is custom, in practice it seems that in Cosmos SDK v0.46.10 these settings still impact pruning. Therefore, you're advised to comment out these lines when using default pruning.
Example configuration file with recommended settings:
Choose a Tendermint pruning strategy
Configuring the min-retain-blocks parameter to a non-zero value activates Tendermint pruning, which specifies the minimum number of blocks to retain. By default, this parameter is set to 0, which disables this feature.
Enabling this feature can reduce disk usage significantly. Be careful when setting a value, as it must be at least 250,000, as calculated below:
Unbonding time (14 days) converted to seconds = approx. 1,210,000 seconds
...divided by the average block time (approx. 6 seconds/block)
= approx. 202,000 blocks
Adding a safety margin (in case average block time goes down) = approx. 250,000 blocks
Therefore, take care to recalculate a valid value for this setting if the unbonding time on the network you're running on is different. (E.g., this value differs between mainnet and testnet due to different unbonding periods.)
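The back-of-the-envelope calculation above can be scripted so it is easy to redo for a different unbonding period or block time. The variable names below are my own, and the rounding step is one possible way of applying the safety margin:

```shell
unbonding_days=14      # unbonding period (14 days on cheqd mainnet)
block_time_secs=6      # approximate average block time
# Number of blocks covering the unbonding period (~201,600 for the values above)
blocks=$(( unbonding_days * 24 * 60 * 60 / block_time_secs ))
# Round up to the next 50,000 blocks as a safety margin
min_retain_blocks=$(( (blocks / 50000 + 1) * 50000 ))
echo "$min_retain_blocks"    # prints 250000 for the mainnet values above
```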
Using the recommended values, on the current cheqd mainnet this section would look like the following:
Save changes in the file
Save and exit from the app.toml file. Working with text editors is outside the scope of this document, but in general under nano this would be Ctrl+X, "yes" to Save modified buffer, then Enter.
Switch user and restart service
ℹ️ NOTE: You need root privileges, or at least a user with super-user privileges using the sudo prefix, for the commands below when interacting with systemd.
If you switched to the cheqd user, exit out to a root/super-user:
Usually, this will switch you back to root or other super-user (e.g., ubuntu).
Restart systemd service:
(Substitute with cheqd-noded.service above if you're running without Cosmovisor)
Check the systemd service status and confirm that it's running:
Our installer guide has a section on how to check service status.
Next steps
If you activate or modify any pruning configuration above, the changes to disk usage are NOT immediate. Typically, it may take 1-2 days for the disk usage reduction to be progressively applied.
If you've gone from a higher disk usage setting to a lower one, re-run the disk usage command to compare the breakdown of disk usage in the node data directory:
The output should show a difference in disk usage from the previous run for the application.db folder (if the pruning parameters were changed) and/or the blockstore.db folder (if min-retain-blocks was changed).
Third-Party Snapshots for Reference
Instead of syncing a full node from scratch, validators can leverage pruned snapshots provided by trusted third-party services. These snapshots are substantially smaller (usually 3-10 GB compared to 100s of GBs for a full node), allowing faster setup and reduced storage requirements.
ADR 001: Payment mechanism for issuing credentials
Status
Category
Status
Authors: Ankur Banerjee
ADR Stage
Summary
The protocol describes a payment mechanism that can be used to pay for the issuance of credentials.
It is necessary to establish which public APIs from Hyperledger Aries can be implemented in cheqd-node to provide an implementation of payments using CHEQ tokens using a well-understood SSI protocol.
Decision
The Hyperledger Aries protocol has the concept of payment "decorators", ~payment_request and ~payment_receipt, in requests, which can be used to pay for credential issuance using tokens.
Step 1: Credential Offer
A message is sent by the Issuer to the potential Holder, describing the credential they intend to offer and, optionally, the price the Issuer expects to be paid for said credential. This is based on the .
A payment request can then be defined using the to add information about an issuing price and address where payment should be sent.
The details.id field contains an invoice number that unambiguously identifies the credential for which payment is requested. When paying, this value should be placed in the memo field of the cheqd payment transaction.
The payeeId field contains a cheqd account address in the correct format for the cheqd network.
Step 2: Payment transaction flow
The payment flow can be broken down into five steps:
Build a request for transferring tokens. Example: cheqd_ledger::bank::build_msg_send(from_account, to_account, amount_for_transfer, denom)
from_account: The prospective credential holder's cheqd account address
Response format
Key fields in the response above are:
hash: Transaction hash
height: Ledger height
Step 3: Credential Request
This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential after tokens are transferred to the nominated account using a Payment Transaction.
request_id should be the same as details.id from Payment Request and memo from Payment Transaction.
Step 4: Check payment_receipt
Issuer receives Credential Request + payment_receipt with payment transaction_id. It allows the Issuer to:
Get the payment transaction by hash from the cheqd network ledger using the get_tx_by_hash(hash) method, where hash is the transaction_id from the previous steps.
Check that the memo field of the received transaction contains the correct request_id.
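The second issuer-side check boils down to a string comparison between the fetched transaction's memo and the expected invoice id. A trivial sketch, with hypothetical values standing in for the results of the ledger query:

```shell
request_id="inv-0042"   # details.id from the Payment Request (hypothetical value)
tx_memo="inv-0042"      # memo field returned by get_tx_by_hash(...) (hypothetical value)
if [ "$tx_memo" = "$request_id" ]; then
  result="payment verified"
else
  result="memo mismatch: refuse issuance"
fi
echo "$result"          # prints "payment verified" when the memo matches
```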
Step 5: Credential issuing
If steps 1-4 are successful, the Issuer is able to confirm that the requested payment has been made using CHEQ tokens. The credential issuing process can then proceed using standard Hyperledger Aries protocol procedures.
Overview of steps 1-5
UML version
Editable version available on or as text for compatible UML diagram generators below:
Consequences
Backward Compatibility
Credential issuance outside of the payment flow is compatible with and carried out using existing Hyperledger Aries protocol procedures. This should provide a level of compatibility with existing apps/SDKs that implement Aries protocol.
Defining the transaction in CHEQ tokens is specific to the cheqd network.
Positive
By defining the payment mechanism using Hyperledger Aries protocols, this allows the possibility in the future to support payments on multiple networks.
Existing SSI app developers should already be familiar with Hyperledger Aries (if building on Hyperledger Indy), and this provides a transition path to add new functionality.
Negative
Hyperledger Aries may not be a familiar protocol for other Cosmos projects.
Using the Payment Decorator in practice means there could be interoperability challenges in implementations that impact credential issuance and exchange.
Neutral
N/A
References
Setup a new cheqd node
Context
This document describes how to install and configure a new instance of cheqd-node using an interactive installer, which supports the following functionality:
ADR 002: Importing/exporting mnemonic keys from Cosmos
pruning = "default"
# These are applied if and only if the pruning strategy is custom.
#pruning-keep-recent = "0"
#pruning-interval = "0"
# Note: Tendermint block pruning is dependent on this parameter in conjunction
# with the unbonding (safety threshold) period, state pruning and state sync
# snapshot parameters to determine the correct minimum value of
# ResponseCommit.RetainHeight.
min-retain-blocks = 250000
exit
sudo systemctl restart cheqd-cosmovisor.service
systemctl status cheqd-cosmovisor.service
du -h -d 1 /home/cheqd/.cheqdnode/data/ 2>/dev/null
In the osmosisd tx ibc-transfer transfer command, src-port and src-channel are the port and channel used for IBC transfers; use channel-108 for the Osmosis to cheqd route. The to_address is the address of your cheqd account, and amount is the amount of tokens you want to send. The key_or_address is the key or address of the Osmosis account you are sending the tokens from, and url is the URL of the node you are sending the request to.
Once the tokens are sent over IBC, you can use the cheqd-noded query bank balances <address> command to check the balance of your cheqd account. This will show you the balance of the tokens that were sent over IBC. Refer to the Appendix for the IBC denomination of the tokens.
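Putting the pieces together, a filled-in version of the osmosisd tx ibc-transfer transfer command might look like the following. The address and amount are placeholders, and the command is printed as a dry run rather than executed:

```shell
src_port="transfer"                 # the conventional IBC transfer port
src_channel="channel-108"           # Osmosis -> cheqd route
to_address="cheqd1exampleaddress"   # placeholder cheqd account address
amount="1000000uosmo"               # placeholder amount in Osmosis base units
cmd="osmosisd tx ibc-transfer transfer $src_port $src_channel $to_address $amount --from <key_or_address> --node <url>"
echo "$cmd"
```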
to_account: Same as payeeId from the Payment Request
amount_for_transfer: Price of credential issuance defined as details.total.amount.value from the Payment Request
denom: Defined in details.total.amount.currency from the Payment Request
Build a transaction with the request from the previous step. Example: cheqd_ledger::auth::build_tx(pool_alias, pub_key, builded_request, account_number, account_sequence, max_gas, max_coin_amount, denom, timeout_height, memo)
memo: This should be the same as details.id from the Payment Request
Sign the transaction. Example: cheqd_keys::sign(wallet_handle, key_alias, tx)
Broadcast the signed transaction. Example: cheqd_pool::broadcast_tx_commit(pool_alias, signed)
This ADR describes how cheqd/Cosmos account keys can be imported/exported into identity wallet applications built on Evernym VDR Tools SDK.
Context
Client SDK applications such as Evernym VDR Tools need to work with cheqd accounts in identity wallets to be able to interact with the cheqd network ledger.
Using these pre-existing libraries, cheqd accounts can be recovered using the standard BIP44 HDPath for Cosmos SDK chains described below:
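The path block itself does not appear here; for reference, the standard BIP44 derivation path for Cosmos SDK chains (SLIP-0044 coin type 118, first account and address index) is:

```
m/44'/118'/0'/0/0
```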
Consequences
Functionality will be added to VDR Tools SDK to import/export cheqd accounts using mnemonics paired with the --recover flag as done with Cosmos wallets.
Backwards Compatibility
Not applicable, since this is an entirely new feature in VDR Tools SDK for integration with the new blockchain framework.
Positive
Adding/recovering cheqd accounts in VDR Tools SDK will follow a similar, familiar process that users have for Cosmos wallets.
This document specifies the CPU/RAM requirements, firewall ports, and operating system requirements for running cheqd-node.
The interactive installer is written in Python 3 and is designed to work on Ubuntu Linux 24.04 LTS systems. The script has been written to work with the pre-installed Python 3.x libraries generally available on Ubuntu 24.04.
Software installed by installer
Cosmovisor (default, but can be skipped): The installer configures Cosmovisor by default, which is a standard Cosmos SDK tool that makes network upgrades happen in an automated fashion. This makes the process of upgrading to new releases for network-wide upgrades easier.
cheqd-noded binary (mandatory): This is the main piece of ledger-side code each node runs.
Dependencies: In case you request the installer to restore from a snapshot, dependencies such as pv will be installed so that a progress bar can be shown for snapshot extraction. Otherwise, no additional software is installed by the installer.
External URLs accessed by the installer
Github.com: Fetch latest releases, configuration files, and network settings.
Cloudflare DNS (optional): Used to fetch an externally-resolvable IP address, if this option is selected during install.
Network snapshot server (optional): If requested by the user, the script will fetch latest network snapshots published on snapshots.cheqd.net and then download snapshot files from the snapshot CDN endpoint (snapshots-cdn.cheqd.net)
Usage
⚠️ The guidance below is intended for straightforward new installations or upgrades.
By default, the installer will attempt to create a backup of the ~/.cheqdnode/config/ directory and important files under ~/.cheqdnode/data/ before making any destructive changes. These backups are created under the cheqd user's home directory in a folder called backup (default location: /home/cheqd/backup). However, for safety, you're recommended to also make manual backups when upgrading a node.
If you're setting up a new node from scratch, you can safely ignore the advice above.
Stop any running node services
Stop the running services related to your node. If running via Cosmovisor: sudo systemctl stop cheqd-cosmovisor.service
Or, if running standalone: sudo systemctl stop cheqd-noded.service
Download and start interactive installer
To get started, download the interactive installer script:
Then, start the interactive installer:
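The two steps above might look like the following; the download URL and script filename are assumptions based on the cheqd-node repository layout, so check the official docs for the canonical location before running. The commands are shown as a dry run:

```shell
# URL and filename below are assumptions, not confirmed by this document.
fetch_cmd="wget -c https://raw.githubusercontent.com/cheqd/cheqd-node/main/installer/installer.py"
run_cmd="sudo python3 installer.py"   # installer must run with super-user privileges
echo "$fetch_cmd"
echo "$run_cmd"
```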
ℹ️ NOTE: You need to execute this as root or at least a user with super-user privileges using the sudo prefix to the command.
(Fresh install) Answer prompts for installing from scratch (or overwriting existing)
The interactive installer guides you through setting up and configuring a node installation by asking a series of questions.
All the questions specify the default answer/value for that question in square ([]) brackets, for example, [default: 1]. If a default value exists, you can just press Enter without needing to type the whole answer.
1. Choose release version
Binary release version to install, automatically fetched from Github. The first release displayed in the list will always be the latest stable version. Other versions displayed below it are pre-release/beta versions.
2. Set home directory for cheqd user
By default, a new user/group called cheqd will be created with a home directory for it. The default location is /home/cheqd, with configuration/data directories created under this path at /home/cheqd/.cheqdnode.
3. Select network to join
Join either the existing mainnet (chain ID: cheqd-mainnet-1) or testnet (chain ID: cheqd-testnet-6) network.
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]: Use Cosmovisor to run node
Do you want Cosmovisor to automatically download binaries for scheduled upgrades? (yes/no) [default: yes]: By default, Cosmovisor will attempt to automatically download new binaries that have passed software upgrade proposals voted on the network. You can choose to do this manually if you want more control.
Do you want Cosmovisor to automatically restart after an upgrade? (yes/no) [default: yes]: By default, Cosmovisor will automatically restart the node after an upgrade height is reached and the upgrade is carried out.
Do you want to overwrite existing configuration (or create a new one) for cosmovisor, with the values you provided? (yes/no) [default: yes]: By default, Cosmovisor will create a config.toml file under /home/cheqd/.cheqdnode/cosmovisor with the values you provided previously. However, the environment variables set in the systemd file will take precedence over this file, unless you modify the file and pass the --config flag to cosmovisor.
You can also choose no to installing with Cosmovisor on the first question, in which case a standalone binary installation is carried out.
5. Define node configuration
The next set of questions sets common node configuration parameters. These are the minimal configuration parameters necessary for a node to function, but advanced users can later customise other settings.
Answers to these prompts are saved in the app.toml and config.toml files, which are written under /home/cheqd/.cheqdnode/config/ by default (but can be different if a different home directory was set above). An explanation of some of these settings is available in the requirements for running a node and the validator guide. See more details about app.toml and config.toml configuration parameters on this page.
Provide a moniker for your cheqd-node [default: <hostname>]:: Moniker is a human-readable name for your cheqd-node. This is NOT the same as your validator name, and is only used to uniquely identify your node for Tendermint P2P address book.
What is the externally-reachable IP address or DNS name for your cheqd-node? [default: Fetch automatically via DNS resolver lookup]:: External address is the publicly accessible IP address or DNS name of your cheqd-node. This is used to advertise your node's P2P address to other nodes in the network. If you are running your node behind a NAT, you should set this to your public IP address or DNS name. If you are running your node on a public IP address, you can leave this blank to automatically fetch your IP address via DNS resolver lookup. (Automatic fetching sends a dig request to whoami.cloudflare.com)
Specify your node's P2P port [default: 26656]:
Specify your node's RPC port [default: 26657]:
Specify persistent peers [default: none]: Persistent peers are nodes that you want to always keep connected to. Values for persistent peers should be specified in format: <nodeID>@<IP>:<port>,<nodeID>@<IP>:<port>.
Specify minimum gas price [default: 5000ncheq]: The minimum gas price is the price you are willing to accept as a validator to process a transaction. Values should be entered in the format <number>ncheq (e.g., 5000ncheq)
Specify log level (trace|debug|info|warn|error|fatal|panic) [default: error]:: The default log level of error is generally recommended for normal operation. You may temporarily need to change to more verbose logging levels if trying to diagnose issues if the node isn't behaving correctly.
Specify log format (json|plain) [default: json]:: JSON log format allows parsing log files more easily if there's an issue with your node, hence it's set as the default.
6. Choose whether to init from State Sync
In case you don't need full (or significant) blockchain history, you can initialize your node via State Sync:
[INFO]: State sync rapidly bootstraps a node without downloading full snapshot and uses less storage. You can still choose snapshot (slower, much larger storage) if you decline state sync.
Initialize chain via State Sync? (yes/no) [default: yes]: This massively reduces your node's operational costs, as the required disk space in this case is less than 10 GB. The installer will update the config.toml file with RPC endpoints to fetch state sync snapshots from, plus a trusted height and trusted block hash. It will also enable state sync, so after the cheqd-cosmovisor service starts, the node will begin discovering snapshots, restoring them, and catching up with the network.
If you skip this step, you will be asked if you want to proceed with state DB snapshot restoration.
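State sync is controlled by the standard [statesync] section of config.toml. A hedged sketch of what the installer fills in follows; the RPC endpoints, trust height, and trust hash below are placeholders, not real network values:

```shell
# Demo file only: standard Tendermint [statesync] keys with placeholder values.
cat > /tmp/statesync-demo.toml <<'EOF'
[statesync]
enable = true
rpc_servers = "https://rpc-1.example.com:443,https://rpc-2.example.com:443"
trust_height = 7000000
trust_hash = "0000000000000000000000000000000000000000000000000000000000000000"
trust_period = "168h0m0s"
EOF
grep '^enable' /tmp/statesync-demo.toml
```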
When setting up a new node, you typically need to download all past blocks on the network, replaying any upgrades that happened along the way with the specific binary releases those upgrades went through.
Since this can be quite cumbersome and take a really long time, the installer offers the ability to download a recent blockchain snapshot for the selected network from snapshots.cheqd.net.
If you skip this step, you'll need to manually synchronise with the network.
⚠️ Chain snapshots can range from 10 GBs (for testnet) to 100 GBs (for mainnet). Therefore, this step can take a long time.
If you choose this option, you can step away and return to the installer while it works in the background to complete the rest of the installation. You may want to change settings in your SSH client/server to keep SSH connections alive, since some hosts terminate connections due to inactivity.
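One way to keep SSH sessions alive during the long download (assuming a stock OpenSSH client) is to enable client-side keepalives. The snippet writes the relevant ssh_config options to a demo file; on a real machine you would append them to ~/.ssh/config instead:

```shell
# Standard OpenSSH client options: probe the server every 60 seconds and
# tolerate up to 10 unanswered probes before dropping the connection.
cat > /tmp/ssh-keepalive-demo <<'EOF'
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 10
EOF
grep 'ServerAliveInterval' /tmp/ssh-keepalive-demo
```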
(Alternative path) Answer prompts for upgrading an existing installation
If you're running the installer on a machine where an existing installation is already present, you'll be prompted whether you want to update/upgrade the existing installation:
If you choose no, the installer will treat this as installing from scratch and prompt you with the questions in the section above.
If you choose yes, this will retain existing node configuration and prompt with a different set of questions as outlined below. Choosing "yes" is the default since in most cases, you would want to retain the existing configuration while updating the node binary to a newer version.
If Cosmovisor is detected as installed, you'll be offered the option to bump it to the latest default version. Otherwise, you will be given the option of installing it.
3. Configure Cosmovisor settings
The next section allows you to customise Cosmovisor settings. The explanations of the options are the same as those given above.
4. Update systemd configuration
By default, the installer will update the systemd system service settings for the following:
cheqd-cosmovisor.service (if installed with Cosmovisor) or cheqd-noded.service (if installed without Cosmovisor): This is the service that runs the node in the background 24/7.
rsyslog.service: Configures node-specific logging directories and settings.
logrotate.service and logrotate.timer: Configures log rotation for the node service to limit the duration/size of retained logs to sensible values. By default, this keeps 7 days' worth of logs, and compresses logs that grow larger than 100 MB.
Actions taken by the installer after prompts
Once all prompts have been answered, the installer attempts to carry out the changes requested. This includes:
Setting up a new cheqd user/group.
Downloading cheqd-noded and Cosmovisor binaries, as applicable.
Setting environment variables required for node binary / Cosmovisor to function.
Creating directories for node data and configuration.
If present, backing up existing node directories and configuration.
Downloading and extracting snapshots (if requested).
The installer is designed to terminate the installation process and stop making changes if it encounters an error. If this happens, please reach out to us on our community Slack or Discord for how to proceed and recover from errors (if any).
What to do after the installer finishes
⚠️ The guidance below is intended for straightforward new installations or upgrades.
If the installer finishes successfully, it will exit with a success message:
Otherwise, if the installation failed, it will exit with an error message which elaborates on the specific error encountered during setup.
The following steps are only recommended if installation has been successful.
Check systemd service is enabled
Check that the node-related systemd service is enabled. This ensures that the node service is automatically restarted, even if the service fails or the machine is rebooted.
If installed with Cosmovisor:
In the output of the systemctl status cheqd-cosmovisor.service command, the Loaded line should say enabled after the service path and after vendor preset.
If installed without Cosmovisor (standalone binary install):
In the output of the systemctl status cheqd-noded.service command, the Loaded line should say enabled after the service path and after vendor preset.
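The check can also be scripted with systemctl is-enabled, which prints the enablement state directly. The fallback message below is only for machines where systemd or the service isn't present:

```shell
# Prints "enabled" if the service will auto-start at boot; falls back to a
# note when systemd or the service is absent (e.g., on a workstation).
STATE=$(systemctl is-enabled cheqd-cosmovisor.service 2>/dev/null)
echo "${STATE:-cheqd-cosmovisor.service not found on this machine}"
```

Substitute cheqd-noded.service if you installed without Cosmovisor.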
Start systemd service
Once the node is installed/upgraded, restart the systemd service to get the node running. These steps require root or super-user privileges as a pre-requisite.
If installed with Cosmovisor:
If installed without Cosmovisor:
Check node service is running (and stays running)
The command above should start the node service. Ideally, the node service should start running and remain running. You can check this by running the command below a couple of times in succession and confirming that the output line remains Active: active (running) rather than any other status.
(Previous commands can be recalled in bash by pressing the up arrow key on your keyboard to repeat or cycle through previous commands.)
If installed with Cosmovisor:
The output of the systemctl status cheqd-cosmovisor.service command should include Active: active (running).
If installed without Cosmovisor (standalone binary install):
Confirm that node is gaining new blocks
Once the systemd service is confirmed as running, check that the node is catching up on new blocks by repeating this command 3-5 times:
(Previous commands can be recalled in bash by pressing the up arrow key on your keyboard to repeat or cycle through previous commands.)
Note: The cheqd-noded status may not return a successful response immediately after starting the systemd service. For instance, you might get the following output:
If you encounter the output above, as long as systemctl status ... returns Active, this "error" above is completely normal. This is because it takes a few minutes after systemctl start for the node services to properly start running. Please wait for a few minutes, and then re-run the cheqd-noded status command.
The output might say catching_up: true if the node is still catching up, or catching_up: false if it's fully caught up.
If the node is catching up, the time needed to fully catch up will depend on how far behind your node is. The latest_block_height value in the output shown above will indicate how far behind the node is. This number should display a larger value every time you re-run the command.
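If you want to script this progress check, the relevant fields can be pulled out of the status JSON with standard tools. The canned one-line sample below (a hypothetical, trimmed-down status response) stands in for a real cheqd-noded status call:

```shell
# Extract sync fields from a status-style JSON string using sed/grep.
STATUS='{"SyncInfo":{"latest_block_height":"7153781","catching_up":false}}'
HEIGHT=$(printf '%s' "$STATUS" | sed -n 's/.*"latest_block_height":"\([0-9]*\)".*/\1/p')
CATCHING=$(printf '%s' "$STATUS" | grep -o '"catching_up":[a-z]*' | cut -d: -f2)
echo "height=${HEIGHT} catching_up=${CATCHING}"
```

Running the extraction twice a few minutes apart and comparing the two heights tells you whether the node is gaining blocks.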
❓ The absolute newest block height across the entire network is displayed in the block explorer. Check the mainnet explorer or the testnet explorer (depending on which network you've joined) to understand the network-wide latest block height vs your node's delta.
Next steps
If you're configuring a validator, check out our validator guide for further configuration steps to carry out.
Additionally, you can read here about Cosmovisor-specific configuration parameters and how to change them.
Latest stable cheqd-noded release version is Name: v4.1.4
List of cheqd-noded releases:
1. v4.1.4
2. v4.1.4-develop.1
3. v4.1.3
4. v4.1.3-develop.1
5. v4.1.2
Choose list option number above to select version of cheqd-node to install [default: 1]:
Existing cheqd-node binary detected.
Do you want to upgrade an existing cheqd-node installation? (yes/no) [default: yes]:
[INFO]: Installation steps completed successfully
[INFO]: Installation of cheqd-noded v1.3.0 completed successfully!
[INFO]: Please review the configuration files manually and use systemctl to start the node.
[INFO]: Documentation: https://docs.cheqd.io/node
root@hostname ~# systemctl status cheqd-cosmovisor.service
● cheqd-cosmovisor.service - Service for running cheqd-node daemon
Loaded: loaded (/lib/systemd/system/cheqd-cosmovisor.service; enabled; vendor preset: enabled)
root@hostname ~# systemctl status cheqd-noded.service
● cheqd-noded.service - Service for running cheqd-node daemon
Loaded: loaded (/lib/systemd/system/cheqd-noded.service; enabled; vendor preset: enabled)
sudo systemctl start cheqd-cosmovisor.service
sudo systemctl start cheqd-cosmovisor.service
root@hostname ~# systemctl status cheqd-cosmovisor.service
● cheqd-cosmovisor.service - Service for running cheqd-node daemon
Loaded: loaded (/lib/systemd/system/cheqd-cosmovisor.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-03-13 14:48:57 GMT; 1 weeks 0 days ago
root@hostname ~# systemctl status cheqd-noded.service
● cheqd-noded.service - Service for running cheqd-node daemon
Loaded: loaded (/lib/systemd/system/cheqd-noded.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-03-13 14:48:57 GMT; 1 weeks 0 days ago
cheqd-noded status
root@hostname ~# cheqd-noded status
Error: post failed: Post "http://localhost:26657": dial tcp [::1]:26657: connect: connection refused
root@hostnames ~# cheqd-noded status
{"NodeInfo":{"protocol_version":{"p2p":"8","block":"11","app":"0"},"id":"<node-id>","listen_addr":"<external-address>:26656","network":"cheqd-mainnet-1","version":"0.34.26","channels":"40202122233038606100","moniker":"<moniker>","other":{"tx_index":"on","rpc_address":"tcp://0.0.0.0:26657"}},"SyncInfo":{"latest_block_hash":"082E9ACC41E72BCB566DDF9132B9011C470629F54E9F0F32B0A619265A42F1F6","latest_app_hash":"302F62AB35B8D463D398E83CF7C0597A2DCC27DF7C3C5165F7AEC4CA6B4C3C7F","latest_block_height":"7153781","latest_block_time":"2023-03-20T18:02:41.952748008Z","earliest_block_hash":"A20320D647F4EC8E6D7EC95A3F25B13B86807907BD8231A24BF3EA8171E645D3","earliest_app_hash":"2B7DA548AFF3FA8886E602993BB13CFCD0E97E7F943AF129B5DBA92284B9284A","earliest_block_height":"6903781","earliest_block_time":"2023-03-03T16:12:02.371081673Z","catching_up":false},"ValidatorInfo":{"Address":"9CE4242862F7CB95B22D2EC60CB625F6B09C9562","PubKey":{"type":"tendermint/PubKeyEd25519","value":"CGe/qgutfjKuPSFQK3ZBraNRjcyKzEvy6hd2JX73Vns="},"VotingPower":"35373490103"}}
Troubleshooting consistently high CPU/memory loads
Context
Blockchain applications (especially validator nodes) are atypical compared to "traditional" web server applications, because their performance characteristics tend to differ in the ways described below:
Tend to be more disk I/O heavy: Traditional web apps will typically offload data storage to persistent stores such as a database. In the case of a blockchain/validator node, the database is on the machine itself, rather than offloaded to a separate machine with a standalone engine. Many blockchains use embedded key-value databases for their local data copies. (In Cosmos SDK apps, such as cheqd, this is GoLevelDB by default, but can also be RocksDB, BadgerDB, etc.) The net result is the same as if you were trying to run a database engine on a machine: the system needs to have fast read/write performance characteristics.
Validator nodes cannot easily be auto-scaled: Many traditional applications can be horizontally (i.e., add more machines) or vertically (i.e., make the current machine beefier) scaled. While this is possible for validator nodes, it must be done with extreme caution to ensure there aren't two instances of the same validator active simultaneously. This can be perceived by network consensus as a sign of compromised validator keys and lead to the validator being slashed for double-signing. These concerns are less relevant for non-validating nodes, since they have a greater tolerance for missed blocks and can be scaled horizontally/vertically.
Docker/Kubernetes setups are not recommended for validators (unless you really know what you're doing): Primarily due to the double-signing risk, running a validator via our Docker setup (../setup-and-configure/docker.md) is not recommended unless you have a strong DevOps practice. The other reason is related to the first point, i.e., a Docker setup adds an abstraction layer between the actual underlying file storage and the Docker volume engine. Depending on the Docker (or similar abstraction) storage drivers used, you may need additional tuning for optimal performance.
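Given the disk-heavy profile described in the first point, a rough write-throughput sanity check can be done with dd (a blunt instrument; a tool like fio gives more realistic numbers, but dd runs anywhere):

```shell
# Write 64 MB of zeroes to a scratch file and report throughput.
# dd prints its stats on stderr, hence the 2>&1 redirection.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>&1 | tail -n 1
rm -f /tmp/ddtest   # clean up the scratch file
```

Note that without direct I/O flags the figure includes page-cache effects, so treat it as an upper bound rather than a realistic sustained rate.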
Diagnosing a CPU/memory leak
⚠️ Please ensure you are running the latest stable release of cheqd-node, since newer releases may contain fixes/patches that improve node performance.
What does a CPU/memory leak look like?
If you've got monitoring built in for your machine, a memory (RAM) leak would look like a graph where memory usage grows to 100%, falls off a cliff, grows to 100% again (the process repeats itself).
Normal memory usage may grow over time, but will not max out the available memory up to 100%. The graph below is taken from a server run by the cheqd team, over a 14-day period:
Figure 1: Graph showing normal memory usage on a cheqd-node server
What does a CPU leak look like?
A "CPU leak", i.e., where one or more process(es) consume increasing amounts of CPU is rarer, but could also happen if your machine has too few vCPUs and/or underpowered CPUs.
Figure 2: Graph showing normal CPU usage on a cheqd-node server
There's a catch here: depending on your monitoring tool, "100% CPU" could be measured differently! The graph above is from .
Other monitoring tools, such as Hetzner Cloud's console, count each CPU as "100%", thus making the overall figure displayed in the graph (shown below) add up to number of CPUs x 100%.
Figure 3: Graph showing CPU usage on Hetzner cloud, adding up to more than 100%
Check what accounting metric your monitoring tool uses to get a realistic idea of whether your CPU is overloaded or not.
, regardless of the CPU usage.
Determining CPU/memory usage with command-line tools
If you don't have a monitoring application installed, you could use the built-in top or htop command.
Figure 4: Output of htop showing CPU and memory usage
htop is visually easier to understand than top since it breaks down usage per-CPU, as well as memory usage.
Unfortunately, this only provides real-time usage, rather than historical usage over time. Historical usage typically requires an external application, which many cloud providers offer, or a 3rd-party monitoring tool.
The node can also emit Prometheus metrics, in case you already have a Prometheus instance you can use or are comfortable with using the software. This can allow alerting based on actual metrics emitted by the node, rather than just top-level system metrics, which are a blunt instrument and don't go into detail.
Troubleshooting system clock synchronisation issues
If your system clock is out of sync, this could cause Tendermint peer-to-peer connections to be rejected. This is similar to the certificate errors you might see in a normal browser when accessing secure (HTTPS) sites.
The net result of your system clock being out of sync is that your node:
Constantly tries to dial peers to try and fetch new blocks
Connection gets rejected by some/all of them
Keeps retrying the above until CPU/memory get exhausted, or the node process crashes
To check if your system clock is synchronised, use the following command (note: only copy the command, not the sample output):
The timezone your machine is based in doesn't matter. You should check whether it reports System clock synchronized: yes and NTP service: active.
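This check is easy to script. The canned sample below stands in for real timedatectl status output so that the sketch runs anywhere; on a node you would pipe the real command through the same grep:

```shell
# Flag clock-sync problems by grepping the relevant timedatectl lines.
SAMPLE='System clock synchronized: yes
NTP service: active'
if printf '%s\n' "$SAMPLE" | grep -q 'System clock synchronized: yes'; then
  echo "clock in sync"
else
  echo "clock NOT in sync - check your NTP configuration"
fi
```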
Resolving system clock issues
If either of these is not true, chances are that your system clock has fallen out of sync, which may be the root cause of CPU/memory leaks. Follow your operating system's guidance on enabling NTP synchronisation to resolve the issue, and then monitor whether it fixes high utilisation.
NTP firewall rules
You may also need to allow outbound UDP traffic on port 123 explicitly, depending on your firewall settings. This port is used by the Network Time Protocol (NTP) service.
Troubleshooting node connectivity issues
Properly-configured nodes should have bidirectional connectivity for network traffic. To check whether this is the case, open <node-ip-address-or-dns-name:rpc-port>/net_info in your browser.
Accessing this endpoint via your browser will only work if the RPC port is publicly exposed and/or you're accessing it from an allowed origin. If this is not the case, you can also view the results for this endpoint from the same machine where your node service is running through the command line:
The JSON output should be similar to below:
Look for the n_peers value at the beginning: this shows the number of peers your node is connected to. A healthy node would typically be connected to anywhere between 5 and 50 peers.
Next, search the results for the term is_outbound. The number of matches for this term should be exactly the same as the value of n_peers, since it is printed once per peer. The value of is_outbound may be either true or false.
A healthy node should have a mix of is_outbound: true as well as is_outbound: false. If your node reports only one of these values, it's a strong indication that your node is unidirectionally, rather than bidirectionally, reachable.
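Counting the two directions can be scripted with standard tools. The two-peer canned JSON below stands in for a real /net_info response (on a node, you could pipe in curl -s localhost:26657/net_info instead):

```shell
# Count inbound vs outbound peers in a /net_info-style response.
NET_INFO='{"result":{"n_peers":"2","peers":[{"is_outbound":true},{"is_outbound":false}]}}'
OUTBOUND=$(printf '%s' "$NET_INFO" | grep -o '"is_outbound":true' | wc -l | tr -d ' ')
INBOUND=$(printf '%s' "$NET_INFO" | grep -o '"is_outbound":false' | wc -l | tr -d ' ')
echo "outbound=${OUTBOUND} inbound=${INBOUND}"
```

A zero in either count points at unidirectional connectivity.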
Unidirectional connectivity may cause your node to work overtime to stay synchronised with the latest blocks on the network. You may fly by just fine, until there's a loss of connectivity to a critical mass of peers and your node goes offline.
Furthermore, your node might fetch the address book from seed nodes, and then try to resolve/contact them (and fail) due to connectivity issues.
Is your node's external address reachable?
Ideally, the IP address or DNS name set in external_address property in your config.toml file should be externally reachable.
To determine whether this is true, from a machine other than your node, install the tcptraceroute utility. Unlike ping, which uses ICMP packets, tcptraceroute uses TCP, i.e., the actual protocol used for Tendermint P2P, to see if the destination is reachable. Success or failure in connectivity using ping doesn't prove whether your node is reachable, since firewalls along the path may have different rules for ICMP vs TCP.
Once you have tcptraceroute installed, from this external machine you can execute the following command in tcptraceroute <hostname> <port> format (note: only copy the actual command, not sample output):
A successful run would result in tcptraceroute reaching the destination server on the required port (e.g., 26656) and then hanging up. If the connection times out consistently at any of the hops, this could indicate there's a firewall / router in the path dropping or blocking connections.
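When tcptraceroute isn't available on the testing machine, a bare TCP connect with a timeout gives a cruder reachable/not-reachable answer. The host and port below are placeholders (localhost is used so the sketch runs anywhere); substitute your node's external address and P2P port:

```shell
# Raw TCP reachability probe using bash's /dev/tcp pseudo-device.
HOST=localhost    # placeholder: use your node's external address
PORT=26656        # default P2P port
if timeout 3 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  REACH="yes"
else
  REACH="no"
fi
echo "port ${PORT} on ${HOST} reachable: ${REACH}"
```

Unlike tcptraceroute, this doesn't show where along the path the connection is dropped, only whether it succeeds end to end.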
Resolving connectivity issues due to blocked firewall ports
Your firewall rules on the machine and/or infrastructure (cloud) provider could cause connectivity issues. Ideally, your firewalls should allow:
Inbound TCP traffic on at least port 26656 (or custom P2P port)
Optionally, inbound TCP traffic on other ports (RPC, gRPC, gRPC Web)
Outbound TCP traffic on all ports
Router vs firewall issues
Besides firewalls, depending on your network infrastructure, your connectivity issue instead might lie in a router or Network Address Translation (NAT) gateway.
Outbound TCP traffic is the default mode on many systems, since the port through which traffic gets routed out is dynamically determined during TCP connection establishment. In some cases, e.g., when running behind a NAT gateway, you may require more complex configuration (outside the scope of this document).
Operating system firewalls
In addition to infrastructure-level firewalls, Ubuntu machines also come with a firewall on the machine itself. Typically, this is either disabled or set to allow all traffic by default.
Configuring OS-level firewalls is outside the scope of this document, but their status can generally be checked using ufw:
If ufw status reports active, follow to allow traffic on the required ports (customise the ports to the required ports).
Connectivity issues due to blocked DNS traffic
Another common reason for unidirectional node connectivity occurs when the correct P2P inbound/outbound traffic is allowed in firewalls, but DNS traffic is blocked by a firewall.
Your node needs the ability to lookup DNS queries to resolve nodes with DNS names as their external_address property to IP addresses, since other peers may advertise their addresses as a DNS name. Seed nodes set in config.toml are a common example of this, since these are advertised as DNS names.
Your node may still scrape by if DNS resolution is blocked, for example, by obtaining an address book from a peer that has already done DNS -> IP resolution. However, this approach is liable to break down if the resolution is incorrect or the entries are outdated.
Firewall rules to allow DNS traffic
To enable DNS lookups, your infrastructure/OS-level firewalls should allow:
Outbound UDP traffic on port 53: This is the most commonly-used port/protocol.
Outbound TCP traffic on port 853 (explicit rule not needed if you already allow TCP outbound on all ports): Modern DNS servers also allow DNS-over-TLS (DoT), which secures the connection to the DNS server using TLS. This can prevent malicious DNS servers from intercepting queries and giving spurious responses.
Outbound TCP traffic on port 443 (explicit rule not needed if you already allow TCP outbound on all ports): Similar to above, this enables DNS-over-HTTPS (DoH).
Checking whether DNS resolution works
To check whether DNS resolution works, try running a DNS query and see if it returns a response. The following command uses the dig utility to look up and report your node's externally resolvable IP address via Cloudflare's DNS resolver (note: only copy the command, not the sample output):
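A hedged reconstruction of such a lookup, using Cloudflare's "whoami" TXT record mentioned earlier for the installer's automatic IP detection (assumes dig is installed and outbound DNS is allowed; the fallback message covers machines where it isn't):

```shell
# Ask Cloudflare's resolver which public IP it sees your queries coming from.
IP_TXT=$(dig @1.1.1.1 ch txt whoami.cloudflare +short 2>/dev/null)
MSG=${IP_TXT:-"lookup failed (dig missing, or DNS blocked)"}
echo "$MSG"
```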
If the lookup fails, that could indicate DNS queries are blocked, or that there are no externally-resolvable IP addresses where the node can be reached.
Other troubleshooting steps
Is your machine underpowered?
If your machine is provisioned with only the minimum specifications, you might find that the node struggles during times of high load, or slowly degrades over time. The minimum figures are recommended for a developer setup, rather than a production-grade node.
Typically, this problem is seen if you (non-exhaustive list):
Have only one CPU (bump to at least two CPU)
Only 1-2 GB of RAM (bump to at least 4 GB)
Most cloud providers should allow dynamically scaling these two factors without downtime. Monitor, especially over a period of days/weeks, whether this improves the situation or not. If the CPU/memory load behaviour remains similar, the issue likely lies elsewhere.
Scaling CPU/memory without downtime may be different if you're running a physical machine, or if your cloud provider doesn't support it. Please follow the guidance of those hosting platforms.
ADR 011: AnonCreds
Status
Category
Status
Authors
Renata Toktar, Alexander Kolesov, Ankur Banerjee
ADR Stage
Summary
This ADR defines how Verifiable Credential schemas can be represented through the use of a DID URL which, when dereferenced, fetches the credential schema as a resource. The identity entities and transactions for the cheqd network are designed to support usage scenarios and functionality currently supported by Hyperledger Indy.
Context
Hyperledger Indy is a verifiable data registry (VDR) built for DIDs with a strong focus on privacy-preserving techniques. It is one of the most widely-adopted SSI blockchain ledgers. Most notably, Indy is used by the .
Identity-domain transaction types in Hyperledger Indy
Our aim is to support, in cheqd-node, the functionality enabled by Hyperledger Indy's identity-domain transactions. This will partly enable the goal of allowing use cases of existing SSI networks on Hyperledger Indy to be supported by the cheqd network.
The following identity-domain transactions from Indy were considered:
NYM: Equivalent to "DIDs" on other networks
ATTRIB: Payload for DID Document generation
SCHEMA
Revocation registries for credentials are not covered under the scope of this ADR. This topic is discussed separately, as there is ongoing research by the cheqd project on how to improve the privacy and scalability of credential revocations.
Decision
CL Schema
A CL-Schema resource can be created via a CreateResource transaction with the following list of parameters:
MsgCreateResource:
Collection ID: UUID ➝ (did:cheqd:...:) ➝ Parent DID identifier without a prefix
ID: UUID ➝ specific to resource, also effectively a version number (supplied client-side)
Name: String (e.g., CL-Schema1 ) ➝ Schema name
CLI Example:
Credential Definition
[TODO: explain that a Cred Def is simply an additional property inside of the Issuer's DID Doc]
Adds a Credential Definition (in particular, public key), which is created by an Issuer and published for a particular Credential Schema.
It is not possible to update Credential Definitions. If a Credential Definition needs to be evolved (for example, a key needs to be rotated), then a new Credential Definition needs to be created for a new Issuer DIDDoc. A Credential Definition is added to the ledger as a verification method in the Issuer's DIDDoc.
id: DID as base58-encoded string for 16 or 32 byte DID value with Cheqd DID Method prefix did:cheqd:<namespace>: and a resource type at the end.
value (dict): Dictionary with Credential Definition's data if signature_type is CL:
A Credential Definition is a set of Issuer keys, so storing them in the Issuer's DIDDoc is reasonable.
Negative
The name "Credential Definition" implies that it contains more than just a key; the value field provides this flexibility.
Adding all Cred Defs to the Issuer's DIDDoc makes it too large. For every DIDDoc or Cred Def request, a client will receive the whole list of the Issuer's Cred Defs.
It is impossible to set multiple controllers for a Cred Def.
References
official project background on Hyperledger Foundation wiki
GitHub repository: Server-side blockchain node for Indy ()
GitHub repository: Plenum Byzantine Fault Tolerant consensus protocol; used by indy-node
root@hostname ~# timedatectl status
Local time: Wed 2023-03-29 20:31:56 CEST
Universal time: Wed 2023-03-29 18:31:56 UTC
RTC time: Wed 2023-03-29 18:31:57
Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
user@hostname ~> sudo tcptraceroute seed1.eu.cheqd.net 26656
Selected device en0, address 192.168.4.42, port 53088 for outgoing packets
Tracing the path to seed1.eu.cheqd.net (116.202.176.48) on TCP port 26656, 30 hops max
1 192.168.4.1 3.049 ms 2.186 ms 5.693 ms
2 * * *
3 hari-core-2a-xe-806-0.network.virginmedia.net (94.173.50.205) 27.455 ms 16.619 ms 23.925 ms
4 * hari-core-2b-ae1-0.network.virginmedia.net (81.96.16.210) 33.225 ms 25.725 ms
5 * * *
6 * * *
7 tele-ic-7-ae2-0.network.virginmedia.net (62.253.175.34) 34.680 ms 19.670 ms 17.274 ms
8 ae15-0.lon10.core-backbone.com (80.255.14.105) 19.708 ms 26.629 ms 21.323 ms
9 ae6-2011.nbg40.core-backbone.com (80.255.14.246) 33.451 ms 30.159 ms 31.193 ms
10 core-backbone.hetzner.com (81.95.15.6) 33.430 ms 33.701 ms 31.949 ms
11 core11.nbg1.hetzner.com (213.239.229.161) 33.887 ms 34.907 ms 34.535 ms
12 spine11.cloud1.nbg1.hetzner.com (213.133.112.66) 66.511 ms 36.853 ms 32.539 ms
13 spine4.cloud1.nbg1.hetzner.com (213.133.108.150) 37.238 ms 43.259 ms 28.669 ms
14 * * *
15 15629.your-cloud.host (49.12.139.7) 27.337 ms 46.956 ms 33.213 ms
16 static.48.176.202.116.clients.your-server.de (116.202.176.48) [open] 39.811 ms 34.168 ms 1019.051 ms
Data: Byte[] ➝ JSON string with the following structure:
attrNames: Array of attribute name strings (125 attributes maximum)
primary (dict): Primary credential public key
revocation (dict, optional): Revocation credential public key
schemaId (string): id of a Schema the credential definition is created for.
signatureType (string): Type of the credential definition (that is credential signature). CL-Sig-Cred_def (Camenisch-Lysyanskaya) is the only supported type now. Other signature types are being explored for future releases.
tag (string, optional): A unique tag to allow multiple public keys for the same Schema and type issued by the same DID. A default value of tag will be used if not specified.
controller: A list of DID strings (or a single string) identifying the credential definition's controller(s). All DIDs must exist.
In theory, we need to make Credential Definitions mutable.
The aim of this document is to define the genesis parameters used on the cheqd network testnet and mainnet.
Cosmos v0.44.3 parameters are described.
Context
Genesis consists of Tendermint consensus engine parameters and Cosmos app-specific parameters.
Tendermint consensus parameters
Tendermint requires certain consensus parameters to be defined for basic consensus conditions on any Cosmos network.
Block parameters
Parameter
Description
Mainnet
Testnet
Evidence parameters
Parameter
Description
Mainnet
Testnet
Validator key parameters
Parameter
Description
Mainnet
Testnet
Cosmos SDK module parameters
A Cosmos application is divided into modules. Each module has parameters that help to adjust the module's behaviour.
auth module
Parameter
Description
Mainnet
Testnet
bank module
Parameter
Description
Mainnet
Testnet
cheqd module (DID module)
Parameter
Description
Mainnet
Testnet
crisis module
Parameter
Description
Mainnet
Testnet
distribution module
Parameter
Description
Mainnet
Testnet
gov module
Parameter
Description
Mainnet
Testnet
mint module
Parameter
Description
Mainnet
Testnet
resource module
Parameter
Description
Mainnet
Testnet
slashing module
Parameter
Description
Mainnet
Testnet
staking module
Parameter
Description
Mainnet
Testnet
ibc module
Parameter
Description
Mainnet
Testnet
ibc-transfer module
Parameter
Description
Mainnet
Testnet
Decision
The parameters above were agreed upon separately for the cheqd mainnet and testnet. We have bolded the testnet parameters that differ from mainnet.
Consequences
Backward Compatibility
The token denomination has been changed to make the smallest denomination 10^-9 CHEQ (= 1 ncheq) instead of 1 CHEQ. This is a breaking change from the earliest versions of the cheqd testnet, which required new tokens to be issued and transferred to testnet node operators.
Positive
Inflation allows fees to be collected from block rewards in addition to transaction fees.
In production/mainnet, parameters can only be changed via a majority vote that is not defeated by veto, in accordance with the cheqd network governance principles. This allows for more democratic governance frameworks to be created for a self-sovereign identity network.
Negative
Existing node operators will need to re-establish staking with new staking denomination and staking parameters.
Neutral
The unbonding period and deposit period have both been reduced to 2 weeks to balance the speed at which decisions can be reached against giving validators enough time to participate.
References
sig_verify_cost_ed25519
Cost of ed25519 signature verification
590
590
sig_verify_cost_secp256k1
Cost of secp256k1 signature verification
1,000
1,000
burn_factor
The percentage of the transaction fee for create_did, update_did or deactivate_did that is burnt, i.e., destroyed from the cheqd network
50%
50%
withdraw_addr_enabled
Whether withdrawal address can be changed or not. By default, it's the delegator's address.
True
True
voting_params
voting_period
The defined period for an on-ledger vote from start to finish.
259,200s (3 days)
86.40s (~1.4 minutes)
tally_params
quorum
Minimum percentage of total stake needed to vote for a result to be considered valid.
0.334 (33.4%)
0.334 (33.4%)
threshold
Minimum proportion of Yes votes for proposal to pass.
0.5 (50%)
0.5 (50%)
veto_threshold
The minimum value of veto votes to total votes ratio for proposal to be vetoed. Default value: 1/3.
0.334 (33.4%)
0.334 (33.4%)
inflation_min
Inflation trends towards this value if bonded_ratio > goal_bonded
0.01 (1%)
0.01 (1%)
goal_bonded
Percentage of bonded tokens at which inflation rate will neither increase nor decrease
0.60 (60%)
0.60 (60%)
blocks_per_year
Number of blocks generated per year
3,155,760 (1 block every ~10 seconds)
3,155,760 (1 block every ~10 seconds)
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| burn_factor | The percentage of the transaction fee for image, JSON, or default DID-Linked Resources that is burnt, i.e., destroyed from the cheqd network | 50% | 50% |
| slash_fraction_double_sign | Amount slashed, as a percentage, for a double-sign infraction | 0.05 (5%) | 0.05 (5%) |
| slash_fraction_downtime | Amount slashed, as a percentage, for downtime | 0.01 (1%) | 0.01 (1%) |
| historical_entries | Number of historical entries to persist in store | 10,000 | 10,000 |
| bond_denom | Denomination used in staking | ncheq | ncheq |
Implementation Status: Implemented
Start Date: 2021-09-15
Last Updated: 2022-12-08
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| max_bytes | The maximum total size, in bytes, that can be committed in a single block. This should be larger than the evidence max_bytes. | 200,000 (~200 KB) | 200,000 (~200 KB) |
| max_gas | The maximum gas that can be used in any single block. | 200,000 | 200,000 |
| time_iota_ms | Unused. This has been deprecated and will be removed in a future version of Cosmos SDK. | 1,000 (1 second) | |
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| max_age_num_blocks | Maximum age of evidence, in blocks. The basic formula for calculating this is: MaxAgeDuration / {average block time}. | 120,960 | 25,920 |
| max_age_duration | Maximum age of evidence, in time. It should correspond with the app's "unbonding period". | 1,209,600,000,000,000 (expressed in nanoseconds, ~2 weeks) | 259,200,000,000,000 (expressed in nanoseconds, ~72 hours) |
| max_bytes | The maximum total size, in bytes, of evidence that can be committed in a single block; it should fall comfortably under a block's max_bytes. | 50,000 (~50 KB) | |
| pub_key_types | Types of public keys supported for validators on the network. | Ed25519 | Ed25519 |
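Applying the formula from the max_age_num_blocks description (MaxAgeDuration / average block time) to the max_age_duration values, with the ~10-second average block time assumed elsewhere in these parameters and the testnet duration taken as 259,200s (72 hours):

```python
# max_age_num_blocks derived from max_age_duration using the formula in its
# description (MaxAgeDuration / average block time), assuming ~10 s blocks.
NS_PER_S = 1_000_000_000
AVG_BLOCK_TIME_S = 10

mainnet_duration_s = 1_209_600_000_000_000 // NS_PER_S  # ~2 weeks
testnet_duration_s = 259_200_000_000_000 // NS_PER_S    # ~72 hours

print(mainnet_duration_s // AVG_BLOCK_TIME_S)  # 120960
print(testnet_duration_s // AVG_BLOCK_TIME_S)  # 25920
```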
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| max_memo_characters | Maximum number of characters in the memo field | 512 | 512 |
| tx_sig_limit | Maximum number of signatures per transaction | 7 | 7 |
| tx_size_cost_per_byte | Gas cost per transaction byte | 10 | |
| default_send_enabled | Whether send transfers are enabled by default for all coin denominations | True | True |
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| create_did | The specified transaction fee for creating a Decentralized Identifier (DID) on the cheqd network | 50,000,000,000 ncheq (50 CHEQ) | 50,000,000,000 ncheq (50 CHEQ) |
| update_did | The specified transaction fee for updating an existing Decentralized Identifier (DID) on the cheqd network | 25,000,000,000 ncheq (25 CHEQ) | 25,000,000,000 ncheq (25 CHEQ) |
| deactivate_did | The specified transaction fee for deactivating an existing Decentralized Identifier (DID) on the cheqd network | | |
| | The maximum period for Atom holders to deposit on a proposal. Initial value: 2 months. | 604,800s (1 week) | |
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| mint_denom | Name of the smallest CHEQ denomination | ncheq | ncheq |
| inflation_rate_change | Maximum inflation rate change per year (Cosmos Hub uses 1.0). Formula: inflationRateChangePerYear = (1 - BondedRatio / GoalBonded) * MaxInflationRateChange | 0.045 (4.5%) | 0.045 (4.5%) |
| inflation_max | Inflation tends toward this value if bonded_ratio < goal_bonded | 0.04 (4%) | |
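The inflation_rate_change formula quoted above can be sketched as follows (illustrative only; the bonded ratios are hypothetical, not live network data):

```python
# The inflation-rate-change formula quoted in the table above, sketched with
# hypothetical bonded ratios (not live network data):
# inflationRateChangePerYear = (1 - BondedRatio / GoalBonded) * MaxInflationRateChange
GOAL_BONDED = 0.60
MAX_INFLATION_RATE_CHANGE = 0.045

def yearly_inflation_change(bonded_ratio):
    return (1 - bonded_ratio / GOAL_BONDED) * MAX_INFLATION_RATE_CHANGE

# Below the 60% bonding goal inflation drifts up (capped at inflation_max);
# above it, inflation drifts down (floored at inflation_min).
print(round(yearly_inflation_change(0.30), 6))  # 0.0225
print(round(yearly_inflation_change(0.90), 6))  # -0.0225
```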
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| image | The specified transaction fee for creating an image as a DID-Linked Resource on the cheqd network | 10,000,000,000 ncheq (10 CHEQ) | 10,000,000,000 ncheq (10 CHEQ) |
| json | The specified transaction fee for creating a JSON file as a DID-Linked Resource on the cheqd network | 2,500,000,000 ncheq (2.5 CHEQ) | 2,500,000,000 ncheq (2.5 CHEQ) |
| default | The specified transaction fee for creating any other type of DID-Linked Resource on the cheqd network (other than images or JSON files) | 5,000,000,000 ncheq (5 CHEQ) | |
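Combined with the 50% burn_factor listed earlier, each identity-transaction fee splits into a burnt portion and a remainder. A sketch of the arithmetic (illustrative only; how the non-burnt remainder is distributed is handled by the network's fee modules):

```python
# How the 50% burn_factor combines with the fee tables above. Illustrative
# arithmetic only; distribution of the non-burnt remainder is handled by
# the network's fee modules. 1 CHEQ = 1,000,000,000 ncheq, per the tables.
NCHEQ_PER_CHEQ = 1_000_000_000
BURN_FACTOR_PCT = 50

def split_fee(fee_ncheq):
    burnt = fee_ncheq * BURN_FACTOR_PCT // 100  # destroyed from supply
    return burnt, fee_ncheq - burnt

create_did_fee = 50 * NCHEQ_PER_CHEQ   # 50 CHEQ
burnt, remainder = split_fee(create_did_fee)
print(burnt // NCHEQ_PER_CHEQ, remainder // NCHEQ_PER_CHEQ)  # 25 25
```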
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| signed_blocks_window | Number of blocks a validator can miss signing before it is slashed. | 25,920 (expressed in blocks, equates to 259,200 seconds or ~3 days) | 17,280 (expressed in blocks, equates to 172,800 seconds or ~2 days) |
| min_signed_per_window | This percentage of blocks must be signed within the window. | 0.50 (50%) | 0.50 (50%) |
| downtime_jail_duration | The minimum time a validator must stay in jail. | 600s (~10 minutes) | |
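A back-of-the-envelope reading of these slashing parameters, assuming ~10-second blocks:

```python
# Back-of-the-envelope downtime math from the slashing parameters above,
# assuming ~10-second blocks: a validator is jailed once it misses more than
# (1 - min_signed_per_window) of the signed-blocks window.
MIN_SIGNED_PER_WINDOW = 0.50
AVG_BLOCK_TIME_S = 10

def max_missable(window_blocks):
    missable = int(window_blocks * (1 - MIN_SIGNED_PER_WINDOW))
    return missable, missable * AVG_BLOCK_TIME_S  # (blocks, seconds)

print(max_missable(25_920))  # mainnet: (12960, 129600) -> ~1.5 days of downtime
print(max_missable(17_280))  # testnet: (8640, 86400)   -> ~1 day of downtime
```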
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| unbonding_time | A delegator must wait this long after unbonding before tokens become available | 1,210,000s (~2 weeks) | 259,200s (~3 days) |
| max_validators | The maximum number of validators in the network | 125 | 125 |
| max_entries | Maximum number of unbonding/redelegation operations in progress per account | 7 | |
| Parameter | Description | Mainnet | Testnet |
| --- | --- | --- | --- |
| max_expected_time_per_block | Maximum expected time per block, used to enforce block delay. This should reflect the largest amount of time the chain might reasonably take to produce the next block under normal operating conditions; a safe choice is 3-5x the expected time per block. | 30,000,000,000 (expressed in nanoseconds, ~30 seconds) | 30,000,000,000 (expressed in nanoseconds, ~30 seconds) |
| allowed_clients | Defines the list of allowed client state types. Connections are allowed from other chains using the Tendermint client, and from light clients using the Solo Machine client. | [ "06-solomachine", "07-tendermint" ] | [ "06-solomachine", "07-tendermint" ] |
| send_enabled | Enables or disables all cross-chain token transfers from this chain | true | true |
| receive_enabled | Enables or disables all cross-chain token transfers to this chain | | |
The two most important files for cheqd node configuration are app.toml and config.toml, located in the config/ directory under your node's home directory (typically /home/cheqd/.cheqdnode).
These files are generated by the cheqd-noded init command, and the whole process is covered by our interactive installer.
For more details, see this section
For most operators the default settings are fine, but understanding these parameters helps with advanced setups and troubleshooting.
Also note that some node operators need to update their config files with new parameters, especially if they have been running their nodes for a long time.
app.toml Configuration File
This is the application-level configuration file, generated by Cosmos SDK.
Use it to configure your app: state pruning strategy, telemetry, gRPC and REST server configuration, state sync, and so on.
Here's an example file with most recent configuration keys and values:
Here are some of the most important parameters set in the config file above:
Set the minimum acceptable gas price (5000ncheq).
Keep the default pruning. See more pruning options .
Enable the REST and gRPC servers.
See more details about Cosmos SDK configuration .
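As a quick illustration of what the 5000ncheq minimum gas price implies for transaction fees (the 360,000 gas figure is just the Rosetta gas-to-suggest value from the sample config, used as an assumption here, not a fixed network value):

```python
# What the 5000ncheq minimum gas price implies for transaction fees.
# The 360,000 gas limit below is just the gas-to-suggest value from the
# sample config, used here as an illustrative figure.
MIN_GAS_PRICE_NCHEQ = 5_000
NCHEQ_PER_CHEQ = 1_000_000_000

def min_fee_cheq(gas_limit):
    return gas_limit * MIN_GAS_PRICE_NCHEQ / NCHEQ_PER_CHEQ

print(min_fee_cheq(360_000))  # 1.8 (CHEQ)
```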
config.toml Configuration File
The config.toml file is used to configure CometBFT.
CometBFT is a blockchain application platform; it provides the equivalent of a web-server, database, and supporting libraries for blockchain applications written in any programming language. Like a web-server serving web applications, CometBFT serves blockchain applications.
Let's take a look at some of the most important CometBFT configuration parameters:
moniker - The public, friendly name for your node. The interactive installer asks you to set this during installation; if skipped, your hostname is used instead.
log_level and log_format - Also set by the interactive installer. We strongly suggest keeping log_level at error to reduce noise and save on storage. For log_format, we suggest using json.
Enable the Prometheus server, which reports some basic app-layer metrics.
Enable the fee suggestion feature and apply the correct config parameters. This is a useful feature to enforce if you plan to serve API traffic from your nodes.
Keep state sync off, unless you want to serve state sync snapshots from your node.
Set the mempool limit to 5,000 txs. This should be sufficient for most use cases.
priv_validator_* and node_key - In most cases, you should keep these as default. They contain paths to the files most critical to your node's proper functioning. Renaming or removing these files, compromising their integrity, or setting incorrect values here could result in data loss and validator jailing.
[rpc].laddr - Listen address for the RPC server. Make sure it matches the ports exposed on your firewall (if applicable). Set the host to 0.0.0.0 instead of 127.0.0.1 to expose it publicly.
[p2p].laddr - P2P listen address. 26656 is the default port.
external_address - External address for your node. Set by the interactive installer to the address provided by the user (a public IP lookup supplies the default value).
seeds - Populated by the interactive installer. Here are lists of and seeds.
persistent_peers - Useful in multi-node setups. For example, if you have a sentry node exposed to the rest of the network (and the internet), keep a persistent connection to your validator here to make sure no blocks are missed. The format is {node-ID}@{node-IP-address}:{port}, separated by commas.
unconditional_peer_ids - Useful for the same use case as above. The format is comma-separated node IDs.
private_peer_ids - Useful for the same use case as above. The format is comma-separated node IDs.
[statesync].* - State sync configuration, useful when starting a node from scratch (to save on storage) or recovering one. We have state sync servers available at https://eu-rpc.cheqd.net:443 and https://ap-rpc.cheqd.net:443 for mainnet, and https://eu-rpc.cheqd.network:443 and https://ap-rpc.cheqd.network:443 for testnet.
consensus.double_sign_check_height - Some node operators have experienced issues during upgrades when this was set to any value other than 0.
See more details about CometBFT configuration .
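The peer-string format described above ({node-ID}@{node-IP-address}:{port}, comma-separated) can be checked with a small helper like this (a hypothetical helper, not part of cheqd-node; CometBFT node IDs are 40 hex characters):

```python
# Small format check for persistent_peers / seeds entries, which follow
# {node-ID}@{node-IP-address}:{port}, comma-separated. Hypothetical helper,
# not part of cheqd-node; CometBFT node IDs are 40 hex characters.
import re

PEER_RE = re.compile(r"^[0-9a-f]{40}@[\w.\-]+:\d{1,5}$")

def valid_peer_list(peers: str) -> bool:
    entries = [p.strip() for p in peers.split(",") if p.strip()]
    return bool(entries) and all(PEER_RE.match(p) for p in entries)

# Placeholder node ID and a TEST-NET documentation IP:
print(valid_peer_list("a" * 40 + "@203.0.113.10:26656"))  # True
print(valid_peer_list("not-a-peer"))                      # False
```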
###############################################################################
### Base Configuration ###
###############################################################################
# The minimum gas prices a validator is willing to accept for processing a
# transaction. On cheqd mainnet and testnet, this value should be set to 5000ncheq.
minimum-gas-prices = "5000ncheq"
# default: the last 362880 states are kept, pruning at 10 block intervals
# nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node)
# everything: 2 latest states will be kept; pruning at 10 block intervals.
# custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval'
pruning = "default"
# These are applied if and only if the pruning strategy is custom.
pruning-keep-recent = "0"
pruning-interval = "0"
# HaltHeight contains a non-zero block height at which a node will gracefully
# halt and shutdown that can be used to assist upgrades and testing.
#
# Note: Commitment of state will be attempted on the corresponding block.
halt-height = 0
# HaltTime contains a non-zero minimum block time (in Unix seconds) at which
# a node will gracefully halt and shutdown that can be used to assist upgrades
# and testing.
#
# Note: Commitment of state will be attempted on the corresponding block.
halt-time = 0
# MinRetainBlocks defines the minimum block height offset from the current
# block being committed, such that all blocks past this offset are pruned
# from Tendermint. It is used as part of the process of determining the
# ResponseCommit.RetainHeight value during ABCI Commit. A value of 0 indicates
# that no blocks should be pruned.
#
# This configuration value is only responsible for pruning Tendermint blocks.
# It has no bearing on application state pruning which is determined by the
# "pruning-*" configurations.
#
# Note: Tendermint block pruning is dependent on this parameter in conjunction
# with the unbonding (safety threshold) period, state pruning and state sync
# snapshot parameters to determine the correct minimum value of
# ResponseCommit.RetainHeight.
min-retain-blocks = 0
# InterBlockCache enables inter-block caching.
inter-block-cache = true
# IndexEvents defines the set of events in the form {eventType}.{attributeKey},
# which informs Tendermint what to index. If empty, all events will be indexed.
#
# Example:
# ["message.sender", "message.recipient"]
index-events = []
# IavlCacheSize set the size of the iavl tree cache (in number of nodes).
# In most cases, use default value.
iavl-cache-size = 781250
# IAVLDisableFastNode enables or disables the fast node feature of IAVL.
# Default is false.
iavl-disable-fastnode = false
# IAVLLazyLoading enable/disable the lazy loading of iavl store.
# Default is false.
iavl-lazy-loading = false
# AppDBBackend defines the database backend type to use for the application and snapshots DBs.
# An empty string indicates that a fallback will be used.
# The fallback is the db_backend value set in Tendermint's config.toml.
app-db-backend = ""
###############################################################################
### Telemetry Configuration ###
###############################################################################
[telemetry]
# Prefixed with keys to separate services.
service-name = "cheqd-mainnet-1"
# Enabled enables the application telemetry functionality. When enabled,
# an in-memory sink is also enabled by default. Operators may also enabled
# other sinks such as Prometheus.
enabled = true
# Enable prefixing gauge values with hostname.
enable-hostname = true
# Enable adding hostname to labels.
enable-hostname-label = true
# Enable adding service to labels.
enable-service-label = true
# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.
prometheus-retention-time = 60
# GlobalLabels defines a global set of name/value label tuples applied to all
# metrics emitted using the wrapper functions defined in telemetry package.
#
# Example:
# [["chain_id", "cosmoshub-1"]]
global-labels = [
]
###############################################################################
### API Configuration ###
###############################################################################
[api]
# Enable defines if the API server should be enabled.
enable = true
# Swagger defines if swagger documentation should automatically be registered.
swagger = true
# Address defines the API server to listen on.
# Use 0.0.0.0 make it publicly available, or localhost to allow only local connections
address = "tcp://0.0.0.0:1317"
# MaxOpenConnections defines the number of maximum open connections.
max-open-connections = 1000
# RPCReadTimeout defines the Tendermint RPC read timeout (in seconds).
rpc-read-timeout = 10
# RPCWriteTimeout defines the Tendermint RPC write timeout (in seconds).
rpc-write-timeout = 0
# RPCMaxBodyBytes defines the Tendermint maximum request body (in bytes).
rpc-max-body-bytes = 1000000
# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk).
enabled-unsafe-cors = false
###############################################################################
### Rosetta Configuration ###
###############################################################################
[rosetta]
# Enable defines if the Rosetta API server should be enabled.
enable = false
# Address defines the Rosetta API server to listen on.
address = ":8080"
# Network defines the name of the blockchain that will be returned by Rosetta.
blockchain = "app"
# Network defines the name of the network that will be returned by Rosetta.
network = "network"
# Retries defines the number of retries when connecting to the node before failing.
retries = 3
# Offline defines if Rosetta server should run in offline mode.
offline = false
# EnableDefaultSuggestedFee defines if the server should suggest fee by default.
# If 'construction/metadata' is called without gas limit and gas price,
# suggested fee based on gas-to-suggest and denom-to-suggest will be given.
enable-fee-suggestion = true
# GasToSuggest defines gas limit when calculating the fee
gas-to-suggest = 360000
# DenomToSuggest defines the default denom for fee suggestion.
minimum-gas-prices = "50ncheq"
denom-to-suggest = "ncheq"
###############################################################################
### gRPC Configuration ###
###############################################################################
[grpc]
# Enable defines if the gRPC server should be enabled.
enable = true
# Address defines the gRPC server address to bind to.
# Use 0.0.0.0 make it publicly available, or localhost to allow only local connections.
address = "0.0.0.0:9090"
# MaxRecvMsgSize defines the max message size in bytes the server can receive.
# The default value is 10MB.
max-recv-msg-size = "10485760"
# MaxSendMsgSize defines the max message size in bytes the server can send.
# The default value is math.MaxInt32.
max-send-msg-size = "2147483647"
###############################################################################
### gRPC Web Configuration ###
###############################################################################
[grpc-web]
# GRPCWebEnable defines if the gRPC-web should be enabled.
# NOTE: gRPC must also be enabled, otherwise, this configuration is a no-op.
enable = false
# Address defines the gRPC-web server address to bind to.
address = "localhost:9091"
# EnableUnsafeCORS defines if CORS should be enabled (unsafe - use it at your own risk).
enable-unsafe-cors = false
###############################################################################
### State Sync Configuration ###
###############################################################################
# State sync snapshots allow other nodes to rapidly join the network without replaying historical
# blocks, instead downloading and applying a snapshot of the application state at a given height.
[state-sync]
# snapshot-interval specifies the block interval at which local state sync snapshots are
# taken (0 to disable).
snapshot-interval = 0
# snapshot-keep-recent specifies the number of recent snapshots to keep and serve (0 to keep all).
snapshot-keep-recent = 2
###############################################################################
### Store / State Streaming ###
###############################################################################
[store]
streamers = []
[streamers]
[streamers.file]
keys = ["*", ]
write_dir = ""
prefix = ""
# output-metadata specifies whether to output the metadata file, which includes
# the abci request/responses during block processing.
output-metadata = "true"
# stop-node-on-error specifies whether to propagate file streamer errors to the consensus state machine.
stop-node-on-error = "true"
# fsync specifies whether to call fsync after writing the files.
fsync = "false"
###############################################################################
### Mempool ###
###############################################################################
[mempool]
# Setting max-txs to 0 will allow for a unbounded amount of transactions in the mempool.
# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.
#
# Note, this configuration only applies to SDK built-in app-side mempool
# implementations.
max-txs = 5000
# NOTE: Any path below can be absolute (e.g. "/var/myawesomeapp/data") or
# relative to the home directory (e.g. "data"). The home directory is
# "$HOME/.cometbft" by default, but could be changed via $CMTHOME env variable
# or --home cmd flag.
#######################################################################
### Main Base Config Options ###
#######################################################################
# TCP or UNIX socket address of the ABCI application,
# or the name of an ABCI application compiled in with the CometBFT binary
proxy_app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = ""
# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb
# * goleveldb (github.com/syndtr/goleveldb)
# - UNMAINTAINED
# - stable
# - pure go
# * cleveldb (uses levigo wrapper)
# - fast
# - requires gcc
# - use cleveldb build tag (go build -tags cleveldb)
# * boltdb (uses etcd's fork of bolt - github.com/etcd-io/bbolt)
# - EXPERIMENTAL
# - may be faster in some use-cases (random reads - indexer)
# - use boltdb build tag (go build -tags boltdb)
# * rocksdb (uses github.com/tecbot/gorocksdb)
# - EXPERIMENTAL
# - requires gcc
# - use rocksdb build tag (go build -tags rocksdb)
# * badgerdb (uses github.com/dgraph-io/badger)
# - EXPERIMENTAL
# - use badgerdb build tag (go build -tags badgerdb)
db_backend = "goleveldb"
# Database directory
db_dir = "data"
# Output level for logging, including package level options
log_level = "error"
# Output format: 'plain' (colored text) or 'json'
log_format = "json"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "config/genesis.json"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_key_file = "config/priv_validator_key.json"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "data/priv_validator_state.json"
# TCP or UNIX socket address for CometBFT to listen on for
# connections from an external PrivValidator process
priv_validator_laddr = ""
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol
node_key_file = "config/node_key.json"
# Mechanism to connect to the ABCI application: socket | grpc
abci = "socket"
# If true, query the ABCI app on connecting to a new peer
# so the app can decide if we should keep the connection or not
filter_peers = false
#######################################################################
### Advanced Configuration Options ###
#######################################################################
#######################################################
### RPC Server Configuration Options ###
#######################################################
[rpc]
# TCP or UNIX socket address for the RPC server to listen on
laddr = "tcp://0.0.0.0:26657"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = []
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = ["HEAD", "GET", "POST", ]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time", ]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = ""
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
grpc_max_open_connections = 900
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool
unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
max_open_connections = 900
# Maximum number of unique clientIDs that can /subscribe
# If you're using /broadcast_tx_commit, set to the estimated maximum number
# of broadcast_tx_commit calls per block.
max_subscription_clients = 100
# Maximum number of unique queries a given client can /subscribe to
# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
# the estimated # maximum number of broadcast_tx_commit calls per block.
max_subscriptions_per_client = 5
# Experimental parameter to specify the maximum number of events a node will
# buffer, per subscription, before returning an error and closing the
# subscription. Must be set to at least 100, but higher values will accommodate
# higher event throughput rates (and will use more memory).
experimental_subscription_buffer_size = 200
# Experimental parameter to specify the maximum number of RPC responses that
# can be buffered per WebSocket client. If clients cannot read from the
# WebSocket endpoint fast enough, they will be disconnected, so increasing this
# parameter may reduce the chances of them being disconnected (but will cause
# the node to use more memory).
#
# Must be at least the same as "experimental_subscription_buffer_size",
# otherwise connections could be dropped unnecessarily. This value should
# ideally be somewhat higher than "experimental_subscription_buffer_size" to
# accommodate non-subscription-related RPC responses.
experimental_websocket_write_buffer_size = 200
# If a WebSocket client cannot read fast enough, at present we may
# silently drop events instead of generating an error or disconnecting the
# client.
#
# Enabling this experimental parameter will cause the WebSocket connection to
# be closed instead if it cannot read fast enough, allowing for greater
# predictability in subscription behavior.
experimental_close_on_slow_client = false
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
# See https://github.com/tendermint/tendermint/issues/3435
timeout_broadcast_tx_commit = "10s"
# Maximum size of request body, in bytes
max_body_bytes = 10485760
# Maximum size of request header, in bytes
max_header_bytes = 1048576
# The path to a file containing certificate that is used to create the HTTPS server.
# Might be either absolute path or path related to CometBFT's config directory.
# If the certificate is signed by a certificate authority,
# the certFile should be the concatenation of the server's certificate, any intermediates,
# and the CA's certificate.
# NOTE: both tls_cert_file and tls_key_file must be present for CometBFT to create HTTPS server.
# Otherwise, HTTP server is run.
tls_cert_file = ""
# The path to a file containing matching private key that is used to create the HTTPS server.
# Might be either absolute path or path related to CometBFT's config directory.
# NOTE: both tls-cert-file and tls-key-file must be present for CometBFT to create HTTPS server.
# Otherwise, HTTP server is run.
tls_key_file = ""
# pprof listen address (https://golang.org/pkg/net/http/pprof)
pprof_laddr = "localhost:6060"
#######################################################
### P2P Configuration Options ###
#######################################################
[p2p]
# Address to listen for incoming connections
laddr = "tcp://0.0.0.0:26656"
# Address to advertise to peers for them to dial. If empty, will use the same
# port as the laddr, and will introspect on the listener to figure out the
# address. IP and port are required. Example: 159.89.10.97:26656
external_address = "159.65.62.91:26656"
# Comma separated list of seed nodes to connect to
seeds = ""
# Comma separated list of nodes to keep persistent connections to
persistent_peers = ""
# Path to address book
addr_book_file = "config/addrbook.json"
# Set true for strict address routability rules
# Set false for private or local networks
addr_book_strict = true
# Maximum number of inbound peers
max_num_inbound_peers = 50
# Maximum number of outbound peers to connect to, excluding persistent peers
max_num_outbound_peers = 25
# List of node IDs, to which a connection will be (re)established ignoring any existing limits
unconditional_peer_ids = ""
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
persistent_peers_max_dial_period = "0s"
# Time to wait before flushing messages out on the connection
flush_throttle_timeout = "100ms"
# Maximum size of a message packet payload, in bytes
max_packet_msg_payload_size = 10485760
# Rate at which packets can be sent, in bytes/second
send_rate = 1000000000
# Rate at which packets can be received, in bytes/second
recv_rate = 1000000000
# Set true to enable the peer-exchange reactor
pex = true
# Seed mode, in which node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
#
# Does not work if the peer-exchange reactor is disabled.
seed_mode = false
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
private_peer_ids = ""
# Toggle to disable guard against peers connecting from the same ip.
allow_duplicate_ip = false
# Peer connection configuration.
handshake_timeout = "20s"
dial_timeout = "3s"
#######################################################
### Mempool Configuration Option ###
#######################################################
[mempool]
# The type of mempool for this node to use.
#
# Possible types:
# - "flood" : concurrent linked list mempool with flooding gossip protocol
# (default)
# - "nop" : nop-mempool (short for no operation; the ABCI app is responsible
# for storing, disseminating and proposing txs). "create_empty_blocks=false" is
# not supported.
type = "flood"
# Recheck (default: true) defines whether CometBFT should recheck the
# validity for all remaining transaction in the mempool after a block.
# Since a block affects the application state, some transactions in the
# mempool may become invalid. If this does not apply to your application,
# you can disable rechecking.
recheck = true
# Broadcast (default: true) defines whether the mempool should relay
# transactions to other peers. Setting this to false will stop the mempool
# from relaying transactions to other peers until they are included in a
# block. In other words, if Broadcast is disabled, only the peer you send
# the tx to will see it until it is included in a block.
broadcast = true
# WalPath (default: "") configures the location of the Write Ahead Log
# (WAL) for the mempool. The WAL is disabled by default. To enable, set
# wal_dir to where you want the WAL to be written (e.g.
# "data/mempool.wal").
wal_dir = ""
# Maximum number of transactions in the mempool
size = 5000
# Limit the total size of all txs in the mempool.
# This only accounts for raw transactions (e.g. given 1MB transactions and
# max_txs_bytes=5MB, mempool will only accept 5 transactions).
max_txs_bytes = 1073741824
# Size of the cache (used to filter transactions we saw earlier) in transactions
cache_size = 10000
# Do not remove invalid transactions from the cache (default: false)
# Set to true if it's not possible for any invalid transaction to become valid
# again in the future.
keep-invalid-txs-in-cache = false
# Maximum size of a single transaction.
# NOTE: the max size of a tx transmitted over the network is {max_tx_bytes}.
max_tx_bytes = 1048576
# Maximum size of a batch of transactions to send to a peer
# Including space needed by encoding (one varint per transaction).
# XXX: Unused due to https://github.com/tendermint/tendermint/issues/5796
max_batch_bytes = 0
#######################################################
### State Sync Configuration Options ###
#######################################################
[statesync]
# State sync rapidly bootstraps a new node by discovering, fetching, and restoring a state machine
# snapshot from peers instead of fetching and replaying historical blocks. Requires some peers in
# the network to take and serve state machine snapshots. State sync is not attempted if the node
# has any local state (LastBlockHeight > 0). The node will have a truncated block history,
# starting from the height of the snapshot.
enable = false
# RPC servers (comma-separated) for light client verification of the synced state machine and
# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
# header hash obtained from a trusted source, and a period during which validators can be trusted.
#
# For Cosmos SDK-based chains, trust_period should usually be about 2/3 of the unbonding time (~2
# weeks) during which they can be financially punished (slashed) for misbehavior.
rpc_servers = ""
trust_height = 0
trust_hash = ""
trust_period = "168h0m0s"
# Time to spend discovering snapshots before initiating a restore.
discovery_time = "15s"
# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp).
# Will create a new, randomly named directory within, and remove it when done.
temp_dir = ""
# The timeout duration before re-requesting a chunk, possibly from a different
# peer (default: 1 minute).
chunk_request_timeout = "10s"
# The number of concurrent chunk fetchers to run (default: 1).
chunk_fetchers = "4"
#######################################################
### Block Sync Configuration Options ###
#######################################################
[blocksync]
# Block Sync version to use:
#
# In v0.37, v1 and v2 of the block sync protocols were deprecated.
# Please use v0 instead.
#
# 1) "v0" - the default block sync implementation
version = "v0"
#######################################################
### Consensus Configuration Options ###
#######################################################
[consensus]
wal_file = "data/cs.wal/wal"
# How long we wait for a proposal block before prevoting nil
timeout_propose = "3s"
# How much timeout_propose increases with each round
timeout_propose_delta = "500ms"
# How long we wait after receiving +2/3 prevotes for "anything" (i.e. not a single block or nil)
timeout_prevote = "1s"
# How much the timeout_prevote increases with each round
timeout_prevote_delta = "500ms"
# How long we wait after receiving +2/3 precommits for "anything" (i.e. not a single block or nil)
timeout_precommit = "1s"
# How much the timeout_precommit increases with each round
timeout_precommit_delta = "500ms"
# How long we wait after committing a block, before starting on the new
# height (this gives us a chance to receive some more precommits, even
# though we already have +2/3).
timeout_commit = "5s"
# How many blocks to look back to check the existence of the node's consensus votes before joining consensus.
# When non-zero, the node will panic on restart
# if the same consensus key was used to sign the last {double_sign_check_height} blocks.
# To avoid the panic, validators should stop the state machine, wait for some blocks, and then restart it.
double_sign_check_height = 0
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
skip_timeout_commit = false
# EmptyBlocks mode and possible interval between empty blocks
create_empty_blocks = false
create_empty_blocks_interval = "0s"
# Reactor sleep duration parameters
peer_gossip_sleep_duration = "100ms"
peer_query_maj23_sleep_duration = "2s"
#######################################################
### Storage Configuration Options ###
#######################################################
[storage]
# Set to true to discard ABCI responses from the state store, which can save a
# considerable amount of disk space. Set to false to ensure ABCI responses are
# persisted. ABCI responses are required for /block_results RPC queries and for
# reindexing events via the command-line tool.
discard_abci_responses = false
#######################################################
### Transaction Indexer Configuration Options ###
#######################################################
[tx_index]
# What indexer to use for transactions
#
# The application will set which txs to index. In some cases a node operator will be able
# to decide which txs to index based on configuration set in the application.
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 3) "psql" - the indexer service backed by PostgreSQL.
#
# When "kv" or "psql" is chosen, "tx.height" and "tx.hash" will always be indexed.
indexer = "kv"
# The PostgreSQL connection configuration, in the format:
# postgresql://<user>:<password>@<host>:<port>/<db>?<opts>
psql-conn = ""
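# Example (hypothetical credentials; only consulted when indexer = "psql"):
#
# psql-conn = "postgresql://cometbft:changeme@localhost:5432/cometbft?sslmode=disable"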
#######################################################
### Instrumentation Configuration Options ###
#######################################################
[instrumentation]
# When true, Prometheus metrics are served under /metrics on
# PrometheusListenAddr.
# Check out the documentation for the list of available metrics.
prometheus = false
# Address to listen for Prometheus collector(s) connections
prometheus_listen_addr = ":26660"
# Maximum number of simultaneous connections.
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = 3
# Instrumentation namespace
namespace = "cometbft"
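# Example: to expose metrics on the default listen address, set:
#
# prometheus = true
# prometheus_listen_addr = ":26660"
#
# then add <node-ip>:26660 as a scrape target in your Prometheus configuration;
# metrics are served at http://<node-ip>:26660/metrics.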