The cheqd Cosmos CLI can be used to manage keys on a node. Keys are closely related to accounts and on-ledger authentication.

Account addresses on a cheqd node are an encoded version of a public key. Each account is linked with at least one public-private key pair. Multisig accounts can have more than one key pair associated with them.

To submit a transaction on behalf of an account, it must be signed with the account's private key.
Cosmos supports multiple keyring backends for the storage and management of keys. Each node operator is free to use the key management method they prefer.

By default, the `cheqd-noded` binary is configured to use the `os` keyring backend, as it is a safer default compared to file-based key management methods.

For test networks or local networks, this can be overridden to the `test` keyring backend, which is less secure: it uses a file-based storage mechanism where the keys are stored unencrypted. To use the `test` keyring backend, append `--keyring-backend test` to each command that is related to key management or usage.
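For example, listing the keys held in the unencrypted `test` backend might look like the following (a sketch; run this on a machine where `cheqd-noded` is installed):

```shell
# List keys stored in the unencrypted "test" keyring backend
cheqd-noded keys list --keyring-backend test
```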
Each cheqd validator node has at least two keys:

- **Node key**: Used for peer-to-peer communication. Default location is `$HOME/.cheqdnode/config/node_key.json`.
- **Validator key**: Used to sign consensus messages. Default location is `$HOME/.cheqdnode/config/priv_validator_key.json`.
When a new key is created, an account address and a mnemonic backup phrase will be printed. Keep the mnemonic safe: it is the only way to restore access to the account if the keyring cannot be recovered.
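A minimal sketch of creating a new key (the alias `alice` is a hypothetical example):

```shell
# Create a new key pair; prints the account address and mnemonic phrase
cheqd-noded keys add alice
```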
Allows restoring a key from a previously-created BIP39 mnemonic phrase.
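A sketch of restoring a key from a mnemonic (the alias `alice` is hypothetical; you will be prompted to enter the mnemonic phrase):

```shell
# Restore an existing key pair from its BIP39 mnemonic
cheqd-noded keys add alice --recover
```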
Most transactions require the `--from <key-alias>` parameter, which is the name or address of the private key used to sign the transaction.
A `cheqd-node` instance can be controlled and configured using the cheqd Cosmos CLI.
This document contains the commands for node operators that relate to node management, configuration, and status.
The node ID (or node address) is part of the peer info. It is calculated from the node's `pubKey` as `hex(address(nodePubKey))`. To get the node ID, run the following command on the node's machine:
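A sketch using the standard Tendermint subcommand:

```shell
# Print the Tendermint node ID
cheqd-noded tendermint show-node-id
```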
The validator address is a function of the validator's public key. To get the `bech32`-encoded validator address, run this command on the node's machine:
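A sketch using the standard Tendermint subcommand:

```shell
# Print the bech32-encoded validator (consensus) address
cheqd-noded tendermint show-address
```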
There are several ways to get the hex-encoded validator address:

- Convert from `bech32`
- Query the node using the CLI and look for `"ValidatorInfo":{"Address":"..."}` in the output
The validator public key is used in `create-validator` transactions. To get the `bech32`-encoded validator public key, run the following command on the node's machine:
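A sketch using the standard Tendermint subcommand:

```shell
# Print the bech32-encoded validator public key
cheqd-noded tendermint show-validator
```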
Peer info is used to connect to peers when setting up a new node. It has the format `<node-id>@<ip-address-or-dns-name>:<p2p-port>`. Using this information, other participants will be able to connect to your node.
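A sketch of what a peer info entry looks like (the node ID, hostname, and port below are hypothetical placeholders):

```
d0f86b9d23e1e7a8e46041c0b4b0c6d3a1b2c3d4@node1.example.com:26656
```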
cheqd is a public self-sovereign identity (SSI) network for building secure 🔐 and private 🤫 self-sovereign identity systems on Cosmos 💫. Our core vision is to add viable commercial models to decentralised digital identity 🆔.
`cheqd-node` is the ledger/node component of the cheqd network tech stack, built using Cosmos SDK and Tendermint.
Join our cheqd Discord Server for help, questions, and support if you are looking to join our mainnet or testnet. Either the cheqd team, or one of your fellow node operators will be happy to offer some guidance.
Getting started as a node operator on the cheqd network mainnet is as simple as...
Install the latest stable release of `cheqd-node` software (currently `v2.x.x`) on a hosting platform of your choice by following the setup guide.
Once you have acquired CHEQ tokens, promote your node to a validator.

If successfully configured, your node will become the latest validator on the cheqd mainnet. Welcome to the new digital ID revolution!
Our testnet is the easiest place for developers and node operators to get started if you're not quite ready yet to dive into building apps on our mainnet. To get started...
Install the latest stable release of `cheqd-node` software (currently `v2.x.x`) on a hosting platform of your choice by following the setup guide.
Acquire testnet CHEQ tokens through our testnet faucet.
Once you have acquired CHEQ tokens, promote your node to a validator
Once installed, `cheqd-node` can be controlled using the cheqd Cosmos CLI guide.
Basic token functionality for holding and transferring tokens to other accounts on the same network
Creating, managing, and configuring accounts and keys on a cheqd node
Staking and participating in public-permissionless governance
Governance framework for public-permissionless self-sovereign identity networks
Creating did:cheqd
method DIDs, DID Documents ("DIDDocs")
Querying DIDs/DIDDocs using our Universal Resolver driver
Creating and managing Verifiable Credentials anchored to DIDs on cheqd mainnet
Creating on-ledger DID-Linked "resources" (e.g., schemas, visual representations of credentials, etc.) that can be used in DIDDocs and Verifiable Credentials. This is used to support AnonCreds on cheqd.
Custom pricing for DIDs and Resources, with burns to manage inflation
Revocation registry/list support to revoke issued credentials
Trust registry support to manage accreditations between organisations, using DIDs and DID-Linked Resources
Holder-pays-issuer and verifier-pays-issuer payment rails for Verifiable Credential exchange.
We plan on adding new functionality rapidly and on a regular basis and welcome feedback on our cheqd Discord server.
See our detailed Product Docs below for more information:
`cheqd-node` is written in Go and built using Cosmos SDK. The Cosmos SDK Developer Guide explains a lot of the basic concepts of how the cheqd network functions.
If you want to build a node from source or contribute to the code, please read our guide to building and testing.
If you are building from source, or otherwise interested in running a local network, we have instructions on how to set up a new network for development purposes.
If you notice anything not behaving how you expected, or would like to make a suggestion / request for a new feature, please create a new issue and let us know.
The cheqd Discord is our primary chat channel for the open-source community, software developers, and node operators.
Please reach out to us there for discussions, help, and feedback on the project.
We provide instructions for installation by those who want to run on Docker-based systems.

Docker-based installations are useful when running non-validator (observer) nodes that can be auto-scaled according to demand, or if you're a developer who wants to set up a localnet / access the node CLI without running a node.
⚠️ It is NOT recommended to run a validator node using Docker since you need to be absolutely certain about not running two Docker containers as validator nodes simultaneously. If two Docker containers with the same validator keys are active at the same time, this could be perceived by the network as a validator compromise / double-signing infraction and result in jailing / slashing.
Install the pre-requisites below, either as individual installs or bundled (if running on a developer machine):

- Docker Engine v20.10.x and above (use `docker -v` to check)
- Docker Compose v2.3.x and above (use `docker compose version` to check)
Our Docker Compose files use Docker Compose's newer implementation. The primary difference in usage is that it uses `docker compose` commands (with a space), rather than the legacy `docker-compose`, although they are supposed to be drop-in replacements for each other. If your issues are specifically with Docker Compose, make sure the command used is `docker compose` (with a space).
Both of the `.env` files are signposted with the `REQUIRED` and `OPTIONAL` parameters that can be defined. You must fill out the required configuration parameters.
Once the environment variable files are edited, bringing up a Docker container is as simple as:
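A minimal sketch, assuming you run this from the directory containing the Compose file and your edited `.env` files:

```shell
# Start the node container in detached (background) mode
docker compose up --detach
```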
If you decide not to use the Docker Compose method, you'll need to configure node settings and volumes for the container manually.
Once you've configured these manually, start the container using `docker run`.
Alternatively, if you want to just start with the bash terminal without actually starting a node, you could use:
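A sketch of dropping into a shell without starting the node process (the image name and tag below are assumptions; substitute the image you pulled):

```shell
# Open an interactive bash shell in the container instead of starting a node
docker run -it --rm --entrypoint bash ghcr.io/cheqd/cheqd-node:latest
```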
To stop a detached container that was started using Docker Compose, use:
If you also want to remove the container volumes when stopping, add the `--volumes` flag to the command:
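The stop commands described above can be sketched as:

```shell
# Stop the detached container
docker compose down

# Stop the container AND delete its volumes (destroys node/validator keys!)
docker compose down --volumes
```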
Be careful with removing volumes, since critical data such as node/validator keys will also be removed when volumes are removed. There's no way to get these back, unless you've backed them up independently.
We have additional guides for the following advanced usage scenarios:
Pull a pre-built image (replace `latest` with a different version tag if you want to pull something other than the latest version):
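A sketch of the pull command (the image name below is an assumption based on cheqd's GitHub Container Registry; check the official docs for the canonical image):

```shell
# Pull the latest pre-built cheqd-node image
docker pull ghcr.io/cheqd/cheqd-node:latest
```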
We provide a Docker Compose setup. This is broken down into three files that need to be modified with the configuration parameters:

- Docker Compose file
- Environment variables used in `docker-compose.yml`
- Environment variables used inside the `cheqd-node` container
Note: The file paths above for the `-f` and `--env-file` parameters are relative paths. Please modify the file paths to the correct relative/absolute paths on the system where you are executing the commands.
- How to create custom images
- Running with multiple nodes to simulate a network
Product Docs
Dive into our product docs to learn more about cheqd Studio, Creds and our identity packages & SDKs, including Credo, Veramo, the Universal Resolver and Registrar.
There are two command line interface (CLI) tools for interacting with a running `cheqd-node` instance:
cheqd Cosmos CLI: This is intended for node operators. Typically for node configuration, setup, and Cosmos keys.
Identity SDKs: Such as Veramo SDK plugin for cheqd, for identity transactions with DIDs, Verifiable Credentials, and DID-Linked Resources.
This document is focussed on providing guidance on how to use the cheqd Cosmos CLI.
A `cheqd-node` instance can be controlled and configured using the cheqd Cosmos CLI.
This document contains the commands for reading and writing token transactions.
- `--node`: IP address or URL of the node to send the request to

- `--node`: IP address or URL of the node to send the request to
- `--chain-id`: e.g., `cheqd-testnet-6`
- `--fees`: Maximum fee limit that is allowed for the transaction
Pay attention to the returned status code. It should be `0` if the transaction is submitted successfully; otherwise, an error message may be returned.
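A hedged sketch of a token transfer (the key alias, recipient address, amounts, and RPC endpoint below are hypothetical placeholders; substitute your own values):

```shell
# Send 1,000,000,000 ncheq (1 CHEQ) from the "alice" key to a recipient address
cheqd-noded tx bank send alice cheqd1recipientaddressxxxxxxxxxxxxxxxxxxxxx 1000000000ncheq \
  --node https://rpc.cheqd.net:443 \
  --chain-id cheqd-mainnet-1 \
  --fees 10000000ncheq
```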
Cosmos SDK and Tendermint have a concept of pruning, which allows reducing the disk utilisation and storage required on a node.
There are two kinds of pruning controls available on a node:

- **Tendermint pruning**: This impacts the `~/.cheqdnode/data/blockstore.db/` folder by only retaining the last *n* specified blocks. Controlled by the `min-retain-blocks` parameter in `~/.cheqdnode/config/app.toml`.
- **Cosmos SDK pruning**: This impacts the `~/.cheqdnode/data/application.db/` folder and prunes Cosmos SDK app-level state (a logical layer higher than Tendermint, which is just peer-to-peer). These are set by the `pruning` parameters in the `~/.cheqdnode/config/app.toml` file.
This can be done by modifying the pruning parameters inside the `/home/cheqd/.cheqdnode/config/app.toml` file.
⚠️ In order for either type of pruning to work, your node should be running the latest stable release of cheqd-node (at least v1.3.0+).
You can check which version of `cheqd-noded` you're running using:
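A sketch of the version check:

```shell
# Print the installed cheqd-noded version
cheqd-noded version
```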
The output should be a version higher than v1.3.0. If you're on a lower version, you can either manually upgrade the node binary or use the interactive installer to execute an upgrade while retaining settings.
The instructions below assume that the home directory for the `cheqd` user is set to the default value of `/home/cheqd`. If this is not the case for your node, please modify the commands below to the correct path.
Follow similar instructions as mentioned in the installer guide to check the systemd service status. (Substitute with `cheqd-noded.service` if you're running a standalone node rather than with Cosmovisor.)
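A sketch of the status check for a Cosmovisor-based install:

```shell
# Check that the node service is running
systemctl status cheqd-cosmovisor.service
```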
Switch to the `cheqd` user and then to the `.cheqdnode/config/` directory.
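A sketch of these two steps:

```shell
# Switch to the cheqd user (opens a new shell as that user)
sudo su cheqd

# Then, inside the new shell, change to the node configuration directory
cd ~/.cheqdnode/config/
```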
Before you make changes to the pruning configuration, you may want to capture the existing disk usage first (only copy the command itself, not the full prompt line):
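A sketch of the disk usage check:

```shell
# Show disk usage per top-level folder under the node data directory
du -h -d 1 ~/.cheqdnode/data/
```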
The `du -h -d 1 ...` command above prints the disk usage for the specified folder down to one folder level of depth (the `-d 1` parameter) and prints the output in GB/MB (the `-h` parameter, which prints human-readable values).
Open the `app.toml` file for editing. Once you've switched to the `~/.cheqdnode/config/` folder, open the file using your preferred text editor, such as `nano`:
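A sketch using `nano`:

```shell
# Open the app configuration file for editing
nano ~/.cheqdnode/config/app.toml
```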
Instructions on how to use text editors such as `nano` are out of the scope of this document. If you're unsure how to use it, consider following a guide on how to use `nano`.
⚠️ If your node was configured to work with release version v1.2.2 or earlier, you may have been advised to run in `pruning="nothing"` mode due to a bug in Cosmos SDK. Ensure you've upgraded to the latest stable release (using the installer or otherwise). When running a validator node, you're recommended to change this value to `pruning="default"`.
The file should already be populated with values. Edit the `pruning` parameter value to one of the following:

- `pruning="nothing"` (highest disk usage): Disables Cosmos SDK pruning and sets your node to behave like an "archive" node. This mode consumes the most disk space.
- `pruning="default"` (recommended, moderate disk usage): Keeps the last 100 states in addition to every 500th state, and prunes on 10-block intervals. This configuration is safe to use on all types of nodes, especially validator nodes.
- `pruning="everything"` (lowest disk usage): Not recommended when running validator nodes. Keeps only the current state and prunes on 10-block intervals. This setting is useful for nodes such as seed/sentry nodes, as long as they are not used to serve RPC/REST API requests.
- `pruning="custom"` (custom disk usage): If you set the `pruning` parameter to `custom`, you will have to modify two additional parameters:
  - `pruning-keep-recent`: Defines how many recent states are kept, e.g., `250` (contrast this against `default`).
  - `pruning-interval`: Defines how often state pruning happens, e.g., `50` (contrast against `default`, which does it every 10 blocks).
  - `pruning-keep-every`: This parameter is deprecated in newer versions of Cosmos SDK. You can delete this line if it's present in your `app.toml` file.

Although the parameters named `pruning-*` are only supposed to take effect if the pruning strategy is `custom`, in practice it seems that in Cosmos SDK v0.46.10 these settings still impact pruning. Therefore, you're advised to comment out these lines when using `default` pruning.
Example configuration file with recommended settings:
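A sketch of the relevant `app.toml` section (parameter names follow Cosmos SDK conventions; the custom pruning parameters are commented out as advised above):

```toml
# Recommended Cosmos SDK pruning strategy for validator nodes
pruning = "default"

# Only used with pruning = "custom"; commented out to avoid side effects
# pruning-keep-recent = "0"
# pruning-interval = "0"
```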
Configuring the `min-retain-blocks` parameter to a non-zero value activates Tendermint pruning, which specifies the minimum block height to retain. By default, this parameter is set to `0`, which disables this feature.
Enabling this feature can reduce disk usage significantly. Be careful when setting a value, as it must be at least higher than the ~250,000 blocks calculated below:

- Unbonding time (14 days) converted to seconds = ~1,210,000 seconds
- ...divided by the average block time (approx. 6s/block) = approx. 210,000 blocks
- Adding a safety margin (in case average block time goes down) = approx. 250,000 blocks

Therefore, this setting must always be carefully updated to a valid value in case the unbonding time on the network you're running on is different. (E.g., this value is different on mainnet vs testnet due to different unbonding periods.)
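The arithmetic above can be sketched as follows (the 14-day unbonding time and ~6-second average block time are taken from the text; check your network's actual values before relying on the result):

```python
# Minimum safe value for min-retain-blocks, derived from the unbonding time
unbonding_seconds = 14 * 24 * 60 * 60   # 14 days -> 1,209,600 seconds
avg_block_time = 6                      # approx. seconds per block

# Blocks produced during the unbonding period
unbonding_blocks = unbonding_seconds // avg_block_time   # approx. 201,600 blocks

# Add headroom in case the average block time drops
safety_margin = 1.25
min_retain_blocks = round(unbonding_blocks * safety_margin)   # approx. 252,000

print(unbonding_blocks, min_retain_blocks)
```

The 1.25 safety margin is an illustrative assumption; the document's recommended round figure is 250,000 blocks.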
Using the recommended values, on the current cheqd mainnet this section would look like the following:
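A sketch of this `app.toml` setting with the recommended value:

```toml
# Retain at least the last 250,000 blocks (> unbonding period + safety margin)
min-retain-blocks = 250000
```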
Save and exit from the `app.toml` file. Working with text editors is outside the scope of this document, but in general under `nano` this would be `Ctrl+X`, "yes" to `Save modified buffer`, then `Enter`.
ℹ️ NOTE: You need root, or at least a user with super-user privileges using the `sudo` prefix, for the commands below when interacting with systemd.
If you switched to the `cheqd` user, exit back out to a root/super-user account. Usually, this will switch you back to `root` or another super-user (e.g., `ubuntu`).
Restart the systemd service (substitute with `cheqd-noded.service` below if you're running without Cosmovisor):
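A sketch of the restart for a Cosmovisor-based install:

```shell
# Restart the node service so the new pruning settings take effect
sudo systemctl restart cheqd-cosmovisor.service
```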
Check the systemd service status and confirm that it's running:
Our installer guide has a section on how to check service status.
If you activate/modify any pruning configuration above, the changes to disk usage are NOT immediate. Typically, it may take 1-2 days over which the disk usage reduction is progressively applied.
If you've gone from a higher disk usage setting to a lower disk usage setting, re-run the disk usage command to compare the breakdown of disk usage in the node data directory:
The output should show a difference in disk usage from the previous run for the `application.db` folder (if the `pruning` parameters were changed) and/or the `blockstore.db` folder (if `min-retain-blocks` was changed).
This document describes how to install and configure a new instance of `cheqd-node` using an interactive installer, which supports the following functionality:
Setup a new observer/validator node from scratch
Configure a node to work with the testnet/mainnet network
Upgrade an existing node installation
Alternatively, if you want to manage network upgrades manually, you can also opt for a standalone installation.
⚠️ Read our guidance on hardware, software, and networking pre-requisites for nodes before you get started!
This document specifies the CPU/RAM requirements, firewall ports, and operating system requirements for running cheqd-node.
The interactive installer is written in Python 3 and is designed to work on Ubuntu Linux 20.04 LTS systems. The script has been written to work with the pre-installed Python 3.x libraries generally available on Ubuntu 20.04.
Cosmovisor (default, but can be skipped): The installer configures Cosmovisor by default, which is a standard Cosmos SDK tool that makes network upgrades happen in an automated fashion. This makes the process of upgrading to new releases for network-wide upgrades easier.
`cheqd-noded` binary (mandatory): This is the main piece of ledger-side code each node runs.
Dependencies: In case you request the installer to restore from a snapshot, dependencies such as `pv` will be installed so that a progress bar can be shown during snapshot extraction. Otherwise, no additional software is installed by the installer.
Github.com: Fetch latest releases, configuration files, and network settings.
Cloudflare DNS (optional): Used to fetch an externally-resolvable IP address, if this option is selected during install.
Network snapshot server (optional): If requested by the user, the script will fetch latest network snapshots published on snapshots.cheqd.net and then download snapshot files from the snapshot CDN endpoint (snapshots-cdn.cheqd.net)
⚠️ The guidance below is intended for straightforward new installations or upgrades.
If your scenario is more complex, such as in case of upgrading a validator or moving a validator, please review the guidance under our validator guide.
By default, the installer will attempt to create a backup of the `~/.cheqdnode/config/` directory and important files under `~/.cheqdnode/data/` before making any destructive changes. These backups are created under the cheqd user's home directory in a folder called `backup` (default location: `/home/cheqd/backup`). However, for safety, you're recommended to also make manual backups when upgrading a node.
If you're setting up a new node from scratch, you can safely ignore the advice above.
Stop the running services related to your node. If running via Cosmovisor:
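A sketch of the stop command for a Cosmovisor-based install:

```shell
# Stop the node service before upgrading
sudo systemctl stop cheqd-cosmovisor.service
```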
Or if running standalone:
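A sketch of the stop command for a standalone install:

```shell
# Stop the standalone node service before upgrading
sudo systemctl stop cheqd-noded.service
```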
To get started, download the interactive installer script:
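A sketch of fetching the installer (the URL below is an assumption based on the cheqd-node GitHub repository layout; check the official docs for the canonical location):

```shell
# Download the interactive installer script
wget -O installer.py https://raw.githubusercontent.com/cheqd/cheqd-node/main/installer/installer.py
```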
Then, start the interactive installer:
ℹ️ NOTE: You need to execute this as root, or at least as a user with super-user privileges using the `sudo` prefix to the command.
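A sketch of launching the installer (assuming the script was saved as `installer.py`):

```shell
# Run the interactive installer with super-user privileges
sudo python3 installer.py
```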
The interactive installer guides you through setting up and configuring a node installation by asking a series of questions.
All the questions specify the default answer/value for that question in square brackets (`[]`), for example, `[default: 1]`. If a default value exists, you can just press `Enter` without needing to type the whole answer.
Binary release version to install, automatically fetched from Github. The first release displayed in the list will always be the latest stable version. Other versions displayed below it are pre-release/beta versions.
By default, a new user/group called `cheqd` will be created, along with a home directory for it. The default location is `/home/cheqd`, with any configuration and data directories created under this path at `/home/cheqd/.cheqdnode`.
Join either the existing mainnet (chain ID: `cheqd-mainnet-1`) or testnet (chain ID: `cheqd-testnet-6`) network.
The next few questions are used to configure Cosmovisor-related options. Read an explanation of Cosmovisor configuration options in Cosmos SDK documentation, or choose to install with the default settings.
- `Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]`: Use Cosmovisor to run the node.
- `Do you want Cosmovisor to automatically download binaries for scheduled upgrades? (yes/no) [default: yes]`: By default, Cosmovisor will attempt to automatically download new binaries that have passed software upgrade proposals voted on the network. You can choose to do this manually if you want more control.
- `Do you want Cosmovisor to automatically restart after an upgrade? (yes/no) [default: yes]`: By default, Cosmovisor will automatically restart the node after an upgrade height is reached and an upgrade is carried out.

You can also choose `no` to installing with Cosmovisor on the first question, in which case a standalone binary installation is carried out.
The next set of questions sets common node configuration parameters. These are the minimal configuration parameters necessary for a node to function, but advanced users can later customise other settings.
Answers to these prompts are saved in the `app.toml` and `config.toml` files, which are written under `/home/cheqd/.cheqdnode/config/` by default (the path can differ if a different home directory was set above). An explanation of some of these settings is available in the requirements for running a node and the validator guide.
- `Provide a moniker for your cheqd-node [default: <hostname>]`: The moniker is a human-readable name for your cheqd-node. This is NOT the same as your validator name, and is only used to uniquely identify your node in the Tendermint P2P address book.
- `What is the externally-reachable IP address or DNS name for your cheqd-node? [default: Fetch automatically via DNS resolver lookup]`: The external address is the publicly accessible IP address or DNS name of your cheqd-node. This is used to advertise your node's P2P address to other nodes in the network. If you are running your node behind a NAT, you should set this to your public IP address or DNS name. If you are running your node on a public IP address, you can leave this blank to automatically fetch your IP address via a DNS resolver lookup. (Automatic fetching sends a `dig` request to `whoami.cloudflare.com`.)
- `Specify your node's P2P port [default: 26656]`: Tendermint peer-to-peer traffic port.
- `Specify your node's RPC port [default: 26657]`: Tendermint RPC port.
- `Specify persistent peers [default: none]`: Persistent peers are nodes that you want to always keep connected to. Values for persistent peers should be specified in the format `<nodeID>@<IP>:<port>,<nodeID>@<IP>:<port>`.
- `Specify minimum gas price [default: 50ncheq]`: The minimum gas price is the price you are willing to accept as a validator to process a transaction. Values should be entered in the format `<number>ncheq` (e.g., `50ncheq`).
- `Specify log level (trace|debug|info|warn|error|fatal|panic) [default: error]`: The default log level of `error` is generally recommended for normal operation. You may temporarily need to change to more verbose logging levels if trying to diagnose issues when the node isn't behaving correctly.
- `Specify log format (json|plain) [default: json]`: The JSON log format allows parsing log files more easily if there's an issue with your node, hence it's set as the default.
When setting up a new node, you typically need to download all past blocks on the network, including any upgrades that were done along the way with the specific binary releases those upgrades went through.
Since this can be quite cumbersome and take a really long time, the installer offers the ability to download a recent blockchain snapshot for the selected network from snapshots.cheqd.net.
If you skip this step, you'll need to manually synchronise with the network.
⚠️ Chain snapshots can range from 10 GBs (for testnet) to 100 GBs (for mainnet). Therefore, this step can take a long time.
If you choose this option, you can step away and return to the installer while it works in the background to complete the rest of the installation. You might want to change settings in your SSH client / server to keep SSH connections alive, since some hosts terminate connection due to inactivity.
If you're running the installer on a machine where an existing installation is already present, you'll be prompted whether you want to update/upgrade the existing installation:
If you choose `no`, this will treat the installation as if installing from scratch and prompt with the questions in the section above.

If you choose `yes`, this will retain the existing node configuration and prompt with a different set of questions, as outlined below. Choosing "yes" is the default since, in most cases, you would want to retain the existing configuration while updating the node binary to a newer version.
Choose binary release version to upgrade the node to.
If Cosmovisor is detected as installed, you'll be offered the option to bump it to the latest default version. Otherwise, you will be given the option of installing it.
The next section allows you to customise Cosmovisor settings. The explanations of the options are the same as those given above.
By default, the installer will update the `systemd` system service settings for the following:

- `cheqd-cosmovisor.service` (if installed with Cosmovisor) or `cheqd-noded.service` (if installed without Cosmovisor): This is the service that runs the node in the background 24/7.
- `rsyslog.service`: Configures node-specific logging directories and settings.
- `logrotate.service` and `logrotate.timer`: Configures log rotation for the node service to limit the duration/size of retained logs to sensible values. By default, this keeps 7 days' worth of logs, and compresses logs if they grow larger than 100 MB in size.
Once all prompts have been answered, the installer attempts to carry out the changes requested. This includes:
- Setting up a new `cheqd` user/group.
- Downloading `cheqd-noded` and Cosmovisor binaries, as applicable.
- Setting environment variables required for the node binary / Cosmovisor to function.
- Creating directories for node data and configuration.
- If present, backing up existing node directories and configuration.
- Downloading and extracting snapshots (if requested).
The installer is designed to terminate the installation process and stop making changes if it encounters an error. If this happens, please reach out to us on our community Slack or Discord for how to proceed and recover from errors (if any).
If the installer finishes successfully, it will exit with a success message:
Otherwise, if the installation failed, it will exit with an error message which elaborates on the specific error encountered during setup.
The following steps are only recommended if installation has been successful.

Check that the node-related `systemd` service is `enabled`. This ensures that the node service automatically restarts, even if the service fails or if the machine is rebooted.
If installed with Cosmovisor:
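A sketch of the status check:

```shell
# Check that the Cosmovisor-based node service is enabled and running
systemctl status cheqd-cosmovisor.service
```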
The output line after the `systemctl status cheqd-cosmovisor.service` command should say `enabled` after the `Loaded` path and `vendor preset`.
If installed without Cosmovisor (standalone binary install):
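A sketch of the status check for a standalone install:

```shell
# Check that the standalone node service is enabled and running
systemctl status cheqd-noded.service
```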
The output line after the `systemctl status cheqd-noded.service` command should say `enabled` after the `Loaded` path and `vendor preset`.
Once the node is installed/upgraded, restart the `systemd` service to get the node running. These steps require `root` or super-user privileges as a pre-requisite.
If installed with Cosmovisor:
If installed without Cosmovisor:
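A sketch of both variants:

```shell
# Start the node service (Cosmovisor-based install)
sudo systemctl start cheqd-cosmovisor.service

# ...or, for a standalone binary install
sudo systemctl start cheqd-noded.service
```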
The command above should start the node service. Ideally, the node service should start running and remain running. You can check this by running the command below a couple of times in succession and checking that the output line remains as `Active: running` rather than any other status.
(Previous commands can be recalled in bash by pressing the `up` arrow key on your keyboard to repeat or cycle through previous commands.)
If installed with Cosmovisor, the output line after the `systemctl status cheqd-cosmovisor.service` command should say `enabled` after the `Loaded` path and `vendor preset`. The same applies for `cheqd-noded.service` if installed without Cosmovisor (standalone binary install).
Once the `systemd` service is confirmed as running, check that the node is catching up on new blocks by repeating this command 3-5 times:
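A sketch of the sync check:

```shell
# Check node sync status; look at the "catching_up" and "latest_block_height" fields
cheqd-noded status
```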
(Previous commands can be recalled in bash by pressing the `up` arrow key on your keyboard to repeat or cycle through previous commands.)
Note: The `cheqd-noded status` command may not return a successful response immediately after starting the `systemd` service. For instance, you might get the following output:
If you encounter the output above, as long as `systemctl status ...` returns `Active`, this "error" is completely normal. This is because it takes a few minutes after `systemctl start` for the node services to properly start running. Please wait a few minutes, and then re-run the `cheqd-noded status` command.
The output might say `catching_up: true` if the node is still catching up, or `catching_up: false` if it's fully caught up.
If the node is catching up, the time needed to fully catch up will depend on how far behind your node is. The `latest_block_height` value in the output shown above indicates how far behind the node is. This number should display a larger value every time you re-run the command.
❓ The absolute newest block height across the entire network is displayed in the block explorer. Check the mainnet explorer or the testnet explorer (depending on which network you've joined) to understand the network-wide latest block height vs your node's delta.
If you're configuring a validator, check out our validator guide for further configuration steps to carry out.
A `cheqd-node` instance can be controlled and configured using the cheqd Cosmos CLI.

This document contains the commands for account management.
`from` can be either a key alias or an address. If it's an address, the corresponding key should be in the keyring.
Validator nodes can get "jailed", with a penalty imposed through their stake getting slashed. Unlike proof-of-work (PoW) networks such as Bitcoin, proof-of-stake (PoS) networks (such as the cheqd network, built using Cosmos SDK) use stake bonded by validators to secure the network.
There are two scenarios in which a validator could be jailed, one of which has more serious consequences than the other.
When a validator "misses" blocks or doesn't participate in consensus, it can get temporarily jailed. By enforcing this check, PoS networks like ours ensure that validators are actively participating in the operation of the network, ensuring that their nodes remain secure and up-to-date with the latest software releases, etc.
How this is calculated is defined in the network's genesis parameters. Jailing occurs based on a sliding time window (called the signed blocks window), calculated as follows.
The signed_blocks_window
(set to 25,920 blocks on mainnet) defines the time window that is used to calculate downtime.
Within this window of 25,920 blocks, at least 50% of the blocks must be signed by a validator. This is defined in the genesis parameter min_signed_per_window
(set to 0.5
for mainnet).
Therefore, if a validator misses 12,960 blocks within the last 25,920 blocks it meets the criteria for getting jailed.
To convert this block window to a time period, consider the block time of the network, i.e., the frequency at which a new block is created. The current block time can be checked on the cheqd block explorer, or any other explorer configured for the cheqd network.
Let's assume the block time was 6 seconds. This equates to 12,960 * 6 = 77,760 seconds = ~21.6 hours. This means if the validator is not participating in consensus for more than ~21.6 hours (in this example), it will get temporarily jailed.
Since the block time of the network varies with the number of nodes participating, network congestion, etc., it's important to calculate the time period using the latest block time figures.
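The arithmetic above can be sketched as follows (the window and threshold come from the mainnet genesis parameters quoted earlier; the 6-second block time is an assumption for illustration — check a live explorer for the real figure):

```shell
# Jailing-threshold arithmetic (mainnet genesis values; block time assumed)
SIGNED_BLOCKS_WINDOW=25920
MISSABLE=$(( SIGNED_BLOCKS_WINDOW / 2 ))   # min_signed_per_window = 0.5
BLOCK_TIME=6                               # seconds; assumed average
JAIL_SECONDS=$(( MISSABLE * BLOCK_TIME ))
echo "Missing ${MISSABLE} blocks (~$(( JAIL_SECONDS / 3600 )) hours) meets the jailing criteria"
```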
1% of all of the stake delegated to the node is slashed, i.e., burned and disappears forever. This includes any stake delegated to the node by external parties. (If a validator gets jailed, delegators may decide to switch whom they delegate to.) The percentage of stake to be slashed is defined in the slash_fraction_downtime
genesis parameter.
During the downtime of a Validator Node, it is common for the Node to miss important software upgrades, since they are no longer in the active set of nodes on the main ledger.
Therefore, the first step is checking that your node is up to date. You can execute the command
The expected response will be the latest cheqd-noded software release. At the time of writing, the expected response would be
Once again, check if your node is up to date, following Step 1.
Expected response: In the output, look for the text latest_block_height
and note the value. Execute the status command above a few times and make sure the value of latest_block_height
has increased each time.
The node is fully caught up when the parameter catching_up
returns the output false.
Additionally, you can check this has worked: the response shows a page with the field "version": "0.6.0".
If everything is up to date, and the node has fully caught up, you can now unjail your node using this command in the cheqd CLI:
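As a sketch, the unjail transaction takes the following shape. The key alias is a placeholder, and the fee flags follow the --gas recommendations given elsewhere in this guide:

```shell
# Placeholder values: substitute your own key alias before running on the node.
UNJAIL_CMD='cheqd-noded tx slashing unjail --from <key-alias> --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --gas-prices 50ncheq'
echo "$UNJAIL_CMD"
```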
This document offers guidance for validators looking to move their node to another instance, for example when changing VPS provider.
The main tool required for this is cheqd's interactive installer.
Before completing the move, ensure the following checks are completed:
Copy your config
directory and data/priv_validator_state.json
to a safe place, where they cannot be affected by the migration.
If you are using Cosmovisor, use systemctl stop cheqd-cosmovisor
For all other cases, use systemctl stop cheqd-noded
.
This step is of the utmost importance.
If your node is not stopped correctly and two nodes are running with the same private keys, this will lead to a double-signing infraction, which results in your node being permanently jailed (tombstoned) and a 5% slash of staked tokens.
You will also be required to complete a fresh setup of your node.
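A minimal sketch of the backup step described above. The demo below uses throwaway directories created on the spot so it can be run anywhere; on a real node the source would be your actual node home, typically /home/cheqd/.cheqdnode:

```shell
# Throwaway stand-ins for the real node home and backup location (demo only)
NODE_HOME="$(mktemp -d)"
BACKUP_DIR="$(mktemp -d)"
mkdir -p "${NODE_HOME}/config" "${NODE_HOME}/data"
echo '{}' > "${NODE_HOME}/config/node_key.json"
echo '{}' > "${NODE_HOME}/data/priv_validator_state.json"

# The actual backup: the whole config directory plus the validator state file
cp -r "${NODE_HOME}/config" "${BACKUP_DIR}/"
cp "${NODE_HOME}/data/priv_validator_state.json" "${BACKUP_DIR}/"
ls "${BACKUP_DIR}"
```

On a real migration, point NODE_HOME at your live node home and BACKUP_DIR at storage that the migration cannot touch.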
The installation should begin only after you have completed the preparation steps and shut down the previous node.
Once this has been completed, you will be able to move your existing keys and settings back.
Suggested answers to the installer's questions are below.
For the version prompt, pick whichever version you want.
Set path for cheqd user's home directory [default: /home/cheqd]:
.
This is essentially a question about where the node's home directory, .cheqdnode
, is located or will be.
It is up to the operator where they want to store the data
, config
and log
directories.
Do you want to setup a new cheqd-node? (yes/no) [default: yes]:
Here the expected answer is No
.
The main idea is that our old config
directory will be used and data
will be restored from the snapshot.
We don't need to set up a new one.
Select cheqd network to join (testnet/mainnet) [default: mainnet]:
For now, we have 2 networks, testnet
and mainnet
.
Type whichever chain you want to use, or keep the default by pressing Enter
.
Install cheqd-noded using Cosmovisor? (yes/no) [default: yes]:
.
This is also up to the operator.
CAUTION: Downloading a snapshot replaces your existing copy of chain data. Usually safe to use this option when doing a fresh installation. Do you want to download a snapshot of the existing chain to speed up node synchronisation? (yes/no) [
default: yes
].
On this question we recommend answering Yes
, because the snapshot helps you catch up with the other nodes in the network much faster. That is the main feature of this installer.
Copy config
directory to the CHEQD_HOME_DIRECTORY/.cheqdnode/
Copy data/priv_validator_state.json
to the CHEQD_HOME_DIRECTORY/.cheqdnode/data
Make sure that permissions are cheqd:cheqd
for the CHEQD_HOME_DIRECTORY/.cheqdnode
directory. The following command sets this: $ sudo chown -R cheqd:cheqd CHEQD_HOME_DIRECTORY/.cheqdnode
Where CHEQD_HOME_DIRECTORY
is the home directory for cheqd
user. By default it's /home/cheqd
or whatever path you provided for the second installer question.
You need to specify the node's new external address by running the following command as the cheqd
user:
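If you prefer to do it by hand, the same result can be achieved by editing the external_address property in config.toml directly. The sketch below operates on a throwaway stand-in file so it can be run anywhere; the real file lives at CHEQD_HOME_DIRECTORY/.cheqdnode/config/config.toml, and 203.0.113.10 is a placeholder IP:

```shell
CONFIG="$(mktemp)"                  # stand-in for .cheqdnode/config/config.toml
echo 'external_address = ""' > "$CONFIG"
# Replace the external_address line with your node's public IP and P2P port
sed -i 's|^external_address = .*|external_address = "203.0.113.10:26656"|' "$CONFIG"
grep '^external_address' "$CONFIG"
```

Remember to restart the node service after changing config.toml so the new address takes effect.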
The last step in this doc is to start the service and check that everything works.
where <service-name>
is the name of the service, depending on whether Install Cosmovisor
was selected or not.
cheqd-cosmovisor
if Cosmovisor was installed.
cheqd-noded
if you kept cheqd-noded
standalone, as with the Debian package approach.
To check that the service works, run the following command:
where <service-name>
has the same meaning as above.
The status should be active (running)
When you set up your Validator node, it is recommended that you only stake a very small amount from the actual Validator node. This is to minimise the tokens that could be locked in an unbonding period, were your node to experience significant downtime.
You should delegate the rest of your tokens to your Validator node from a different key alias.
How do I do this?
You can add as many additional keys as you want using the function:
When you create a new key, a mnemonic phrase and account address will be printed. Keep the mnemonic phrase safe, as this is the only way to restore access to the account if the keyring cannot be recovered.
You can view all created keys using the function:
You are able to transfer tokens between key accounts using the function.
You can then delegate to your Validator Node, using the function
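The sequence above can be sketched as follows. The alias delegator-key and the bracketed addresses are placeholders, amounts are in ncheq (1 CHEQ = 1,000,000,000 ncheq), and the fee flags follow this guide's recommendations:

```shell
# Hypothetical alias and placeholder addresses throughout
ADD_KEY='cheqd-noded keys add delegator-key'
SEND='cheqd-noded tx bank send <from-address> <to-address> 1000000000ncheq --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --gas-prices 50ncheq'
DELEGATE='cheqd-noded tx staking delegate <validator-operator-address> 1000000000ncheq --from delegator-key --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --gas-prices 50ncheq'
printf '%s\n' "$ADD_KEY" "$SEND" "$DELEGATE"
```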
We use a second/different virtual machine to create these new accounts/wallets. In this instance, you only need to install cheqd-noded as a binary; you don't need to run it as a full node.
Since this VM is not running a node, you can append the --node parameter to any request and target the RPC port of the VM running the actual node.
That way:
The second node doesn't need to sync the full blockchain; and
You can separate out the keys/wallets, since the IP address of your actual node will be public by definition and people can attack it or try to break in
I’d recommend at least 250 GB at the current chain size. You can choose to go higher, so that you don’t need to revisit this. Within our team, we set alerts on our cloud providers/Datadog to raise alerts when nodes reach 85-90% storage used which allows us to grow the disk storage as and when needed, as opposed to over-provisioning.
Here’s the relevant section in the file:
Green: 90-100% blocks signed
Amber: 70-90% blocks signed
Red: 1-70% blocks signed
Please join the channel 'mainnet-alerts' on the cheqd community slack.
Yes! Here are a few other suggestions:
You can check the current status of disk storage used on all mount points manually through the output of df -hT
The default storage path for cheqd-node is on /home/cheqd
. By default, most hosting/cloud providers will set this up on a single disk volume under the /
(root) path. If you move and mount /home
on a separate disk volume, this will allow you to expand the storage independent of the main volume. This can sometimes make a difference, because if you leave /home
tree mounted on /
mount path, many cloud providers will force you to bump the whole virtual machine category - including the CPU and RAM - to a more expensive tier in order to get additional disk storage on /
. This can also result in over-provisioning since the additional CPU/RAM is likely not required.
You can also optimise the amount of logs stored, in case the logs are taking up too much space. There are a few techniques here:
In config.toml
you can set the logging level to error
for less logging than the default which is info
. (The other possible value for this is debug
)
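As a sketch, the relevant line in config.toml looks like this:

```toml
# config.toml — reduce log verbosity ("info" is the default; "debug" is most verbose)
log_level = "error"
```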
Set the log rotation configuration to use different/custom parameters, such as what file size to rotate at, the number of days to retain, etc.
As a Validator Node, you should be familiar with the concept of commission. This is the percentage of tokens that you take as a fee for running the infrastructure on the network. Token holders are able to delegate tokens to you, with an understanding that they can earn staking rewards, but as consideration, you are also able to earn a flat percentage fee of the rewards on the delegated stake they supply.
There are three commission values you should be familiar with:
The first is the maximum rate of commission that you will be able to move upwards to.
Please note that this value cannot be changed once your Validator Node is set up, so be careful and do your research.
The second parameter is the maximum amount of commission you will be able to increase by within a 24 hour period. For example if you set this as 0.01, you will be able to increase your commission by 1% a day.
The third value is your current commission rate.
Points to note: lower commission rate = higher likelihood of more token holders delegating tokens to you, because they will earn more rewards. However, with a very low commission rate, you might find in the future that the gas fees on the network outweigh the rewards made through commission.
higher commission rate = you earn more tokens from the existing stake + delegated tokens. But the tradeoff being that it may appear less desirable for new delegators when compared to other Validators.
When setting up the Validator, the Gas parameter is the amount of tokens you are willing to spend on gas.
For simplicity, we suggest setting:
AND setting:
These parameters, together, make it highly likely that the transaction will go through and not fail. Setting the gas to auto without the gas adjustment risks the transaction failing if gas prices increase.
Gas prices also come into play here: the lower your gas price, the more likely that your node will be considered in the active set for rewards.
We suggest the gas price you set should fall within this recommended range:
Low: 25ncheq
Medium: 50ncheq
High: 100ncheq
Your public name is also known as your moniker.
You are able to change this, as well as the description of your node using the function:
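As a sketch (moniker and key alias are placeholders; fee flags follow this guide's recommendations), the command takes this shape:

```shell
# Placeholder moniker, description, and key alias — substitute your own values
EDIT_CMD='cheqd-noded tx staking edit-validator --moniker "MyNewMoniker" --details "An updated description" --from <key-alias> --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --gas-prices 50ncheq'
echo "$EDIT_CMD"
```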
Yes, this is how you should do it. Since it's a public permissionless network, there's no way of pre-determining what the set of IP addresses will be, as entities may leave and join the network. We suggest using a TCP/network load balancer and keeping your VM/node in a private subnet for security reasons. The load balancer then becomes your network edge, which, if you're hosting on a cloud provider, they manage/patch/run.
This document provides guidance on how to configure and promote a cheqd node to validator status. Having a validator node is necessary to participate in staking rewards, block creation, and governance.
You must already have a running cheqd-node
instance installed using one of the supported methods.
Please also ensure the node is fully caught up with the latest ledger updates.
(recommended method)
Follow the guidance on to create a new account key.
When you create a new key, a new account address and mnemonic backup phrase will be printed. Keep the mnemonic phrase safe, as this is the only way to restore access to the account if the keyring cannot be recovered.
P.S.: if you are using a Ledger Nano device, it may be helpful to use
Get your node ID
Follow the guidance on to fetch your node ID.
Get your validator account address
The validator account address is generated in Step 1 above when a new key is added. To show the validator account address, follow the .
(The assumption above is that there is only one account / key that has been added on the node. In case you have multiple addresses, please jot down the preferred account address.)
Ensure your account has a positive balance
Get your node's validator public key
Promote your node to validator status by staking your token balance
You can decide how many tokens you would like to stake from your account balance. For instance, you may want to leave a portion of the balance for paying transaction fees (now and in the future).
To promote your node to validator status, submit a create-validator
transaction to the network:
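As a hedged sketch, the transaction takes the following shape. Every value below is a placeholder or example only (the commission figures in particular are illustrative, not recommendations); the parameters are described in detail afterwards:

```shell
# All values are examples/placeholders — substitute your own before submitting
CREATE_CMD='cheqd-noded tx staking create-validator --amount 1000000000ncheq --from <key-alias> --moniker <your-moniker> --min-self-delegation 1 --pubkey <bech32-validator-pubkey> --commission-rate 0.05 --commission-max-rate 0.20 --commission-max-change-rate 0.01 --chain-id cheqd-mainnet-1 --gas auto --gas-adjustment 1.4 --gas-prices 50ncheq'
echo "$CREATE_CMD"
```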
Parameters required in the transaction above are:
amount
: Amount of tokens to stake. You should stake at least 1 CHEQ (= 1,000,000,000ncheq) to successfully complete a staking transaction.
from
: Key alias of the node operator account that makes the initial stake
min-self-delegation
: Minimum amount of tokens that the node operator promises to keep bonded
pubkey
: Node's bech32
-encoded validator public key from the previous step
commission-rate
: Validator's commission rate. The minimum is set to 0.05
.
commission-max-rate
: Validator's maximum commission rate, expressed as a number with up to two decimal points. The value for this cannot be changed later.
commission-max-change-rate
: Maximum rate of change of a validator's commission rate per day, expressed as a number with up to two decimal points. The value for this cannot be changed later.
chain-id
: Unique identifier for the chain.
For cheqd's current mainnet, this is cheqd-mainnet-1
For cheqd's current testnet, this is cheqd-testnet-6
gas
: Maximum gas to use for this specific transaction. Using auto
uses Cosmos's auto-calculation mechanism, but can also be specified manually as an integer value.
gas-adjustment (optional): If you're using auto
gas calculation, this parameter multiplies the auto-calculated amount by the specified factor, e.g., 1.4
. This is recommended so that it leaves enough margin of error to add a bit more gas to the transaction and ensure it successfully goes through.
gas-prices
: Maximum gas price set by the validator. Default value is 50ncheq
.
Please note the parameters below are just an “example”.
You will see the commission they set, the max rate they set, and the rate of change. Please use this as a guide when thinking of your own commission configurations. This is important to get right, because the commission-max-rate
and commission-max-change-rate
cannot be changed after they are initially set.
Check that your validator node is bonded
You can check that the validator is correctly bonded via any node:
Find your node by moniker
and make sure that status
is BOND_STATUS_BONDED
.
Check that your validator node is signing blocks and taking part in consensus
Query the latest block. Open <node-address:rpc-port>/block
in a web browser. Make sure that there is a signature with your validator address in the signature list.
To use your Ledger Nano you will need to complete the following steps:
Set-up your wallet by creating a PIN and passphrase, which must be stored securely to enable recovery if the device is lost or damaged.
Connect your device to your PC and update the firmware to the latest version using the Ledger Live application.
Install the Cosmos application using the software manager (Manager > Cosmos > Install).
Adding a new key
In order to use the hardware wallet address with the CLI, the user must first add it via cheqd-noded
. This process only records the public information about the key.
To import the key first plug in the device and enter the device pin. Once you have unlocked the device navigate to the Cosmos app on the device and open it.
To add the key use the following command:
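As a sketch (the key alias is a placeholder), the command takes this shape:

```shell
# <key-alias> is a placeholder; --index 0 selects the first HD index
LEDGER_CMD='cheqd-noded keys add <key-alias> --ledger --index 0'
echo "$LEDGER_CMD"
```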
Note
The --ledger
flag tells the command line tool to talk to the ledger device and the --index
flag selects which HD index should be used.
When running this command, the Ledger device will prompt you to verify the generated address. Once you have done this, you will get an output in the following form:
On completion of the steps above, you will have successfully bonded a node as a validator to the cheqd testnet and will be participating in staking/consensus.
This document describes the hardware and software pre-requisites for setting up a new cheqd-node
instance and joining the existing testnet/mainnet. The recommended installation method is to use the .
Alternative installation methods and a developer guide to building from scratch are covered at the end of this page.
For most nodes, the RAM/vCPU requirements are relatively static and do not change over time. However, the disk storage space needs to grow as the chain grows and will evolve over time.
It is recommended to mount disk storage for blockchain data as an expandable volume/partition separate from your root partition. This allows you to mount the node data/configuration path on /home
(for example) and increase the storage if necessary independent of the root /
partition since hosting providers typically force an increase in CPU/RAM specifications to grow the root partition.
Extended information on . The figures below have been updated from the default Tendermint recommendations to account for current cheqd network chain size, real-world usage accounting for requests nodes need to handle, etc.
4-8 GB RAM (2 GB RAM minimum)
x64 2.0 GHz 2-4 vCPU or equivalent (x64 1.4 GHz 1 vCPU or equivalent minimum)
650 GB SSD (500 GB minimum)
⚠️ Storage requirements for the blockchain grows with time. Therefore, these minimum storage figures are expected to increase over time. Read our validator guide for "pruning" settings to optimise storage consumed.
We recommend using a storage path that can be kept persistent and restored/remounted (if necessary) for the configuration, data, and log directories associated with a node. This allows a node to be restored along with configuration files such as node keys and for the node's copy of the ledger to be restored without triggering a full chain sync.
We plan on supporting other operating systems in the future, based on demand for specific platforms by the community.
To function properly, cheqd-node
requires two types of ports to be configured. Depending on the setup, you may also need to configure firewall rules to allow the correct ingress/egress traffic.
Node operators should ensure there are no existing services running on these ports before proceeding with installation.
The P2P port is used for peer-to-peer communication between nodes. This port is used for your node to discover and connect to other nodes on the network. It should allow traffic to/from any IP address range.
By default, the P2P port is set to 26656
.
Inbound TCP connections on port 26656
(or your custom port) should be allowed from any IP address range.
Outbound TCP connections must be allowed on all ports to any IP address range.
The default P2P port can be changed in $HOME/.cheqdnode/config/config.toml
.
The RPC port is intended to be used by client applications as well as the cheqd-node CLI. Your RPC port must be active and available on localhost to be able to use the CLI. It is up to a node operator whether they want to expose the RPC port to public internet.
By default, the RPC port is set to 26657
Inbound and outbound TCP connections should be allowed from destinations desired by the node operator. The default is to allow this from any IPv4 address range.
TLS for the RPC port can also be setup separately. Currently, TLS setup is not automatically carried out in the install process described below.
The default RPC port can be changed in $HOME/.cheqdnode/config/config.toml
.
In addition to the P2P/RPC ports above, you need to allow the following ports in terms of firewall access for the node to function correctly:
Domain Name System (DNS): Allow outbound traffic on TCP & UDP port 53 which allows your node to make DNS queries. Without this, your node will fail to make DNS lookups necessary to reach the peer-to-peer traffic ports for other nodes.
While this setup is not compulsory, node operators with higher stakes or a need to have more robust network security may consider setting up a sentry-validator node architecture.
Blockchain applications (especially when running validator nodes) are atypical compared to "traditional" web server applications, because their performance characteristics tend to differ in the ways specified below:
Tend to be more disk I/O heavy: Traditional web apps typically offload data storage to persistent stores such as a database. In the case of a blockchain/validator node, the database is on the machine itself, rather than offloaded to a separate machine with a standalone engine. Many blockchains use embedded databases for their local data copies. (In Cosmos SDK apps, such as cheqd, this is the , but can also be , , etc.) The net result is the same as if you were trying to run a database engine on a machine: the system needs to have fast read/write performance characteristics.
Validator nodes cannot easily be auto-scaled: Many traditional applications can be horizontally (i.e., add more machines) or vertically (i.e., make current machine beefier) scaled. While this is possible for validator nodes, it must be done with extreme caution to ensure there aren't two instances of the same validator active simultaneously. This can be perceived by network consensus as a sign of compromised validator keys and lead to the . These concerns are less relevant for non-validating nodes, since they have a greater tolerance for missed blocks and can be scaled horizontally/vertically.
Docker/Kubernetes setups are not recommended for validators (unless you really know what you're doing): Primarily due to the double-signing risk, running a validator with our Docker setup (../setup-and-configure/docker.md) is not recommended unless you have a strong DevOps practice. The other reason is related to the first point, i.e., a Docker setup adds an abstraction layer between the actual underlying file storage and the Docker volume engine. Depending on the Docker (or similar abstraction) storage drivers used, you may need to tune the storage driver for optimal performance.
⚠️ Please ensure you are running the since they may contain fixes/patches that improve node performance.
If you've got monitoring built in for your machine, a memory (RAM) leak would look like a graph where memory usage grows to 100%, falls off a cliff, grows to 100% again (the process repeats itself).
Normal memory usage may grow over time, but will not max out the available memory up to 100%. The graph below is taken from a server run by the cheqd team, over a 14-day period:
Figure 1: Graph showing normal memory usage on a cheqd-node server
A "CPU leak", i.e., where one or more process(es) consume increasing amounts of CPU is rarer, but could also happen if your machine has too few vCPUs and/or underpowered CPUs.
Figure 2: Graph showing normal CPU usage on a cheqd-node server
Figure 4: Graph showing CPU usage on Hetzner cloud, adding up to more than 100%
Check what accounting metric your monitoring tool uses to get a realistic idea of whether your CPU is overloaded or not.
If you don't have a monitoring application installed, you could use the built-in top
or htop
command.
Figure 2: Output of htop
showing CPU and memory usage
htop
is visually easier to understand than top
since it breaks down usage per-CPU, as well as memory usage.
The net result of your system clock being out of sync is that your node:
Constantly tries to dial peers to try and fetch new blocks
Connection gets rejected by some/all of them
Keeps retrying the above until CPU/memory get exhausted, or the node process crashes
To check if your system clock is synchronised, use the following command (note: only copy the command, not the sample output):
The timezone your machine is based in doesn't matter. You should check whether it reports System clock synchronized: yes
and NTP service: active
.
You may also need to allow outbound UDP traffic on port 123 explicitly, depending on your firewall settings. This port is used by the Network Time Protocol (NTP) service.
The JSON output should be similar to below:
Look for the n_peers
value at the beginning: this shows the number of peers your node is connected to. A healthy node would typically be connected to anywhere between 5-50 peers.
Next, search the results for the term is_outbound
. The number of matches for this term should exactly be the same as the value of n_peers
, since this is printed once per peer. The value of is_outbound
may either be true
or false
.
A healthy node should have a mix of is_outbound: true
as well as is_outbound: false
. If your node reports only one of these values, it's a strong indication that your node is unidirectionally connected/reachable, rather than bidirectionally reachable.
Unidirectional connectivity may cause your node to work overtime to stay synchronised with the latest blocks on the network. You may get by just fine - until there's a loss of connectivity to a critical mass of peers, and then your node goes offline.
Furthermore, your node might fetch the address book from seed nodes, and then try to resolve/contact them (and fail) due to connectivity issues.
Ideally, the IP address or DNS name set in external_address
property in your config.toml
file should be externally reachable.
Once you have tcptraceroute
installed, from this external machine you can execute the following command in tcptraceroute <hostname> <port>
format (note: only copy the actual command, not sample output):
A successful run would result in tcptraceroute
reaching the destination server on the required port (e.g., 26656) and then hanging up. If the connection times out consistently at any of the hops, this could indicate there's a firewall / router in the path dropping or blocking connections.
Inbound TCP traffic on at least port 26656 (or custom P2P port)
Optionally, inbound TCP traffic on other ports (RPC, gRPC, gRPC Web)
Outbound TCP traffic on all ports
Besides firewalls, depending on your network infrastructure, your connectivity issue instead might lie in a router or Network Address Translation (NAT) gateway.
In addition to infrastructure-level firewalls, Ubuntu machines also come with a firewall on the machine itself. Typically, this is either disabled or set to allow all traffic by default.
Another common reason for unidirectional node connectivity occurs when the correct P2P inbound/outbound traffic is allowed in firewalls, but DNS traffic is blocked by a firewall.
Your node needs the ability to lookup DNS queries to resolve nodes with DNS names as their external_address
property to IP addresses, since other peers may advertise their addresses as a DNS name. Seed nodes set in config.toml
are a common example of this, since these are advertised as DNS names.
Your node may still scrape by if DNS resolution is blocked, for example, by obtaining an address book from a peer that has already done DNS -> IP resolution. However, this approach can be liable to break down if the resolution is incorrect or entries outdated.
To enable DNS lookups, your infrastructure/OS-level firewalls should allow:
Outbound UDP traffic on port 53: This is the most commonly-used port/protocol.
If the lookup fails, that could indicate DNS queries are blocked, or that there are no externally-resolvable IPs where the node can be reached.
Typically, this problem is seen if you (non-exhaustive list):
Have only one CPU (bump to at least two CPU)
Only 1-2 GB of RAM (bump to at least 4 GB)
Most cloud providers should allow dynamically scaling these two factors without downtime. Monitor - especially over a period of days/weeks - whether this improves the situation or not. If the CPU/memory load behaviour remains similar, that likely indicates the issue is different.
Scaling CPU/memory without downtime may be different if you're running a physical machine, or if your cloud provider doesn't support it. Please follow the guidance of those hosting platforms.
If your node is not up to date, please
In general, the installer allows you to install the binary and download/extract the latest snapshot from .
If the installation process was successful, the next step is to get back the configurations from :
Yes, you can. You can do this by changing the pruning settings to more aggressive parameters in the app.toml
file.
Please also see this thread on the trade-offs involved. This will help to some extent, but please note that this is a general property of all blockchains that the chain size will grow. E.g., out of the gate. We recommend using alerting policies to grow the disk storage as needed, which is less likely to require higher spend due to over-provisioning.
One of the simplest ways to do this is to , and with a more detailed view on the per-validator page (, for example). The condition is scored based on :
We have also built an internal tool that takes the condition score output from the block explorer GraphQL API and makes it available as a simple REST API, which can be used to send alerts on Slack, Discord, etc.; we have set this up on our own Slack/Discord.
In addition to that, (for those who already use it for monitoring/want to set one up) that has metrics for monitoring node status (and a lot more).
You can have a look at other projects on Cosmos to get an idea of the percentages that nodes set as commission.
Follow the guidance on to check that your account is correctly showing the CHEQ testnet tokens provided to you.
The node validator public key is required as a parameter for the next step. More details on validator public key is mentioned in the .
When setting parameters such as the commission rate, a good benchmark is to consider the .
Find out your and look for "ValidatorInfo":{"Address":"..."}
:
Learn more about what you can do with your new validator node in the .
The default directory location for cheqd-node
installations is $HOME/.cheqdnode
, which computes to /home/cheqd/.cheqdnode
when . Custom paths can be defined if desired.
Our are currently compiled and tested for Ubuntu 20.04 LTS
, which is the recommended operating system for installation using interactive installer or binaries.
For other operating systems, we recommend using .
Further details on .
The provide REST, JSONRPC over HTTP, and JSONRPC over WebSockets. These API endpoints can provide useful information for node operators, such as healthchecks, network information, validator information etc.
Network Time Protocol (NTP): Allow outbound traffic on UDP port 123, which allows the NTP service to keep your system clock synchronised. Without this, , it can reject peer-to-peer connectivity.
Tendermint allows more complex setups in production, where the ingress/egress to a validator node is .
The is designed to setup/configure node installation as a service that runs on a virtual machine. This is the recommended setup for most scenarios when running as a validator. A validator node is expected to run 24/7 for network stability and security, and therefore cannot be autoscaled up/down across multiple instances.
If you're not running a validator node, or if you want more advanced control on your setup, is also supported. This method is also useful for running a localnet or when running a node on non-Linux systems for development purposes.
Tendermint documentation has .
; however in our testing so far this method has not been reliable and is therefore currently not recommended.
There's a catch here: depending on your monitoring tool, "100% CPU" could be measured differently! The graph above is from .
Other monitoring tools, such as , count each CPU as "100%", thus making the overall figure displayed in the graph (shown below) add up to number of CPUs x 100%.
, regardless of the CPU usage.
Unfortunately, this only provides the real-time usage, rather than historical usage over time. Historical usage typically requires an external application, which many cloud providers provide, or through 3rd party monitoring tools such as , etc.
, in case you already have a Prometheus instance you can use or are comfortable with using the software. This can allow alerting based on actual metrics emitted by the node, rather than just top-level system metrics, which are a blunt instrument and don't go into detail.
If your , this could cause Tendermint peer-to-peer connections to be rejected. This is similar to in a normal browser when accessing secure (HTTPS) sites.
If either of these are not true, chances are that your system clock has fallen out of sync, and may be the root cause of CPU/memory leaks. Follow to resolve the issue, and then monitor whether it fixes high utilisation.
Properly-configured nodes should have bidirectional connectivity for network traffic. To check whether this is the case, open <node-ip-address-or-dns-name:rpc-port>/net_info
in your browser, for example, .
Accessing this endpoint via your browser will only work and/or if you're accessing from an allowed origin. If this is not the case, you can also view the results for this endpoint from the same machine where your node service is running through the command line:
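A minimal sketch using curl (assuming the default RPC port 26657; jq is optional but useful for filtering):

```shell
# Fetch peer connection information from the local RPC endpoint
curl -s http://localhost:26657/net_info

# With jq installed, show just the number of connected peers
curl -s http://localhost:26657/net_info | jq '.result.n_peers'
```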
To determine whether this is true, from a machine other than your node, . Unlike ping
which uses ICMP packets, tcptraceroute
uses TCP, i.e., the actual protocol used for Tendermint P2P to see if the destination is reachable. Success or failure in connectivity using ping
doesn't prove whether your node is reachable, since firewalls along the path may have different rules for ICMP vs TCP.
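For instance, to probe the default Tendermint P2P port (26656) from a machine other than your node (the hostname is illustrative):

```shell
# Install tcptraceroute if needed (Debian/Ubuntu)
sudo apt install tcptraceroute

# Trace a TCP path to the node's P2P port
tcptraceroute node.example.com 26656
```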
Your firewall rules on the machine and/or infrastructure (cloud) provider could cause connectivity issues. Ideally, :
Outbound TCP traffic is the default mode on many systems, since the port through which traffic gets routed out is dynamically determined during TCP connection establishment. In some cases, e.g., when , you may require more complex configuration (outside the scope of this document).
Configuring OS-level firewalls is outside the scope of this document, but can generally be :
If ufw status
reports active, follow to allow traffic on the required ports (customise the ports to the required ports).
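A sketch of ufw rules, assuming the default cheqd ports (26656 for P2P, 26657 for RPC); adjust to your own port configuration:

```shell
# Allow inbound Tendermint P2P traffic
sudo ufw allow 26656/tcp

# Optionally allow inbound RPC traffic (consider restricting this in production)
sudo ufw allow 26657/tcp

# Apply and verify the rules
sudo ufw reload
sudo ufw status verbose
```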
Outbound TCP traffic on port 853 (explicit rule not needed if you already allow TCP outbound on all ports): Modern DNS servers also allow , which secures the connection using TLS to the DNS server. This can prevent malicious DNS servers from intercepting queries and giving spurious responses.
Outbound TCP traffic on port 443 (explicit rule not needed if you already allow TCP outbound on all ports): Similar to above, this enables , if supported by your DNS resolver.
To check that DNS resolution works, try running a DNS query and see if it returns a response. The following command will use the dig
utility to look up and report your node's externally resolvable IP address via (note: only copy the command, not the sample output):
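One widely-used approach queries the OpenDNS resolvers (other resolvers offer similar special-purpose lookups):

```shell
# Ask OpenDNS which IP address this machine appears as externally
dig +short myip.opendns.com @resolver1.opendns.com
```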
If your machine is provisioned with , you might find that the node struggles during times of high load, or slowly degrades over time. The minimum figures are recommended for a developer setup, rather than a production-grade node.
Updates to the ledger code running on cheqd mainnet/testnet are voted in via on-chain governance proposals for "breaking" software changes.
We use semantic versioning to define our software release versions. The latest software version running on chain is in the v1.x.x family. Any new release that bumps only the minor/patch version digits (the second and third parts of the release version number) is intended to be compatible within the family and does not require an on-chain upgrade proposal to be made.
Network-wide software upgrades are typically initiated by the core development team behind the cheqd project. The process followed for the network upgrade is defined in our guide on creating a Software Upgrade proposal via network governance.
This section lists previous software upgrade proposals which are no longer valid for the active mainnet/testnet.
ℹ️ We provide installation instructions using pre-built Docker images if you just want to setup and use a Docker-based node.
These advanced instructions are intended for developers who want to build their own custom Docker image. You can also build a binary using Golang, or run a Docker-based localnet.
Install Docker pre-requisites below, either as individual installs or using Docker Desktop (if running on a developer machine):
Docker Engine v20.10.x and above (use docker -v
to check)
Docker Compose v2.3.x and above (use docker compose version
to check)
Our Docker Compose files use Compose v2 syntax. The primary difference in usage is that Docker Compose's new implementation uses docker compose
commands (with a space), rather than the legacy docker-compose
although they are supposed to be drop-in replacements for each other.
Most issues with Docker that get raised with us are typically with developers running Mac OS with Apple M-series chips, which Docker has special guidance for.
Other issues are due to developers using the legacy docker-compose
CLI rather than the new docker compose
CLI. If your issues are specifically with Docker Compose, make sure the command used is docker compose
(with a space).
Clone the cheqd-node
repository from Github. (Github has instructions on how to clone a repo.)
Inspect the Dockerfile to understand build arguments and variables. This is only really necessary if you want to modify the Docker build.
Or, If you want to use Docker buildx
engine, look at the usage/configuration in our Github build workflow.
Note: If you're building on a Mac OS system with Apple M-series chips, you should modify the
FROM
statement in the Dockerfile to FROM --platform=linux/amd64 golang:1.18-alpine AS builder
. Otherwise, Docker will try to download the Mac OS darwin
image for the base Golang image and fail during the build process.
If you're planning on passing/modifying a lot of build arguments from their defaults, you can modify the Docker Compose file and the associated environment files to define the build/run-time variables in one place. This is the recommended method.
Note that a valid Docker Compose file will only have one build
and image
section, so modify/comment this as necessary. See our instructions for how to use Docker Compose for mainnet/testnet to understand how this works.
Sample command (modify as necessary):
If you don't want to use docker compose build
, or build using docker build
and then run it using Docker Compose, a sample command you could use is (modify as necessary)
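A hedged sketch of such a build command, run from the repository root (the image tag is illustrative):

```shell
# Build a custom image from the repository's Dockerfile
docker build --tag cheqd-node:custom-build .

# Verify the image was created
docker images cheqd-node
```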
Once you've built a Docker image, you can:
Configure a Docker-based node installation for mainnet/testnet
Run a localnet using this custom Docker image
If you're running a validator node, it's important to backup your validator's keys and state - especially before attempting any updates or shifting nodes.
Each validator node has three files/secrets that must be backed up in case you want to restore or move a node. Anything not listed below can easily be restored from a snapshot or replaced with fresh copies; this list is therefore the bare minimum that needs to be backed up.
$CHEQD_HOME
is the data directory for your node, which defaults to/home/cheqd/.cheqdnode
The validator private key is one of the most important secrets that uniquely identifies your validator, and what this node uses to sign blocks, participate in consensus etc. This file is stored under $CHEQD_HOME/config/priv_validator_key.json
.
In the same folder as your validator private key, there's another key called $CHEQD_HOME/config/node_key.json
. This key is used to derive the node ID for your validator.
Backing up this key means that if you move or restore your node, you don't have to change the node ID in the configuration files of any peers. This is (usually) only relevant if you're running multiple nodes, e.g., a sentry or seed node.
For most node operators who run a singular validator node, this node key is NOT important and can be refreshed/created as new. It is only used for Tendermint peer-to-peer communication. Hypothetically, if you created a new node key (say when moving a node from one machine to another), and then restored the priv_validator_key.json
, this is absolutely fine.
The validator private state is stored in the data
folder, not the config
folder where most other configuration files are kept - and therefore often gets missed by validator operators during backup. This file is stored at $CHEQD_HOME/data/priv_validator_state.json
.
This file stores the last block height signed by the validator and is updated every time a new block is created. Therefore, this should only be backed up after stopping the node service, otherwise, the data stored within this file will be in an inconsistent state. An example validator state file is shown below:
If you forget to restore the validator state file when restoring a node, or when restoring a node from a snapshot, your validator will double-sign blocks it has already signed and get jailed permanently ("tombstoned"), with no way to re-establish the validator.
The software upgrades and the block heights they were applied at are stored in $CHEQD_HOME/data/upgrade-info.json
. This file is used by Cosmovisor to track automated updates, and informs it whether it should attempt an upgrade/migration or not.
The simplest way to backup your validator secrets listed above is to display them in your terminal:
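For example, assuming the default home directory of /home/cheqd/.cheqdnode and the default Cosmovisor service name:

```shell
# Stop the node first so priv_validator_state.json is consistent
sudo systemctl stop cheqd-cosmovisor.service

# Display each secret so it can be copied off the server
cat /home/cheqd/.cheqdnode/config/priv_validator_key.json
cat /home/cheqd/.cheqdnode/config/node_key.json
cat /home/cheqd/.cheqdnode/data/priv_validator_state.json
```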
You can copy the contents of the file displayed in terminal off the server and store it in a secure location.
To restore the files, open the equivalent file on the machine where you want to restore the files to using a text editor (e.g., nano
) and paste in the contents:
HashiCorp Vault is an open-source project that allows server admins to run a secure, access-controlled off-site backup for secrets. You can either run a self-managed HashiCorp Vault server or use a hosted/managed HashiCorp Vault service.
Before you get started with this guide, make sure you've installed HashiCorp Vault CLI on the validator you want to run backups from.
You also need a running HashiCorp Vault server cluster you can use to proceed with this guide.
Setting up a HashiCorp Vault cluster is outside the scope of this documentation, since it can vary a lot depending on your setup. If you don't already have this set up, HashiCorp Vault documentation and Vault tutorials are the best places to get started.
Once you have Vault CLI set up on the validator, you need to set up environment variables in your terminal to configure which Vault server the secrets need to be backed up to.
Add the following variables to your terminal environment. Depending on which terminal you use (e.g., bash, shell, zsh, fish etc), you may need to modify the statements accordingly. You'll also need to modify the values according to your validator and Vault server configuration.
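A bash-style sketch with placeholder values (the Vault CLI reads VAULT_ADDR and VAULT_TOKEN; the backup script may expect additional variables, so check the script itself):

```shell
# Address of your Vault server (placeholder)
export VAULT_ADDR="https://vault.example.com:8200"

# Authentication token for the Vault CLI (placeholder)
export VAULT_TOKEN="s.xxxxxxxxxxxxxxxx"
```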
Download the vault-backup.sh
script from Github:
Make the script executable:
We recommend that you open the script using an editor such as nano
and confirm that you're happy with the environment variables and settings in it.
Before backing up your secrets, it's important to stop the cheqd node service or Cosmovisor service; otherwise, the validator private state will be left in an inconsistent state and result in an incorrect backup.
If you're running via Cosmovisor (the default option), this can be stopped using:
Or, if running as a standalone service:
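Assuming the default service names, the commands would be along these lines:

```shell
# Cosmovisor-managed service (default installation)
sudo systemctl stop cheqd-cosmovisor.service

# Or, if running as a standalone service
sudo systemctl stop cheqd-noded.service
```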
Once you've confirmed the cheqd service is stopped, execute the Vault backup script:
We use HashiCorp Vault KV v2 secrets engine. Please make sure that it's enabled and mounted under
cheqd
path.
To restore backed-up secrets from a Vault server, you can use the same script using the -r
("restore") flag:
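For example (the script path is assumed to be the current directory):

```shell
# Restore previously backed-up secrets from the Vault server
./vault-backup.sh -r
```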
If you're restoring to a different machine than the original machine the backup was done from, you'll need to go through the pre-requisites, CLI setup step, and download the Vault backup script to the new machine as well.
In this scenario, you're also recommended to disable the service (e.g., cheqd-cosmovisor
) on the original machine. This ensures that if the (original) machine gets restarted, systemd
does not try to start the node service, as this could result in two validators running with the same validator keys (which will result in tombstoning).
Once you've successfully restored, you can enable the service (e.g., cheqd-cosmovisor
) on the new machine:
This document offers the information and instructions required for node operators to complete an upgrade with a fresh installation.
We decided to remove the Debian package as an installation artifact and use our own installation tool. The main reason for this is to help our current and future node operators join the cheqd network or complete an upgrade in a more intuitive, simpler, and less time-intensive way.
For this upgrade from 0.5.0
to 0.6.0
there are two possible scenarios:
upgrade by installing Cosmovisor
upgrade only cheqd-noded
binary.
Cosmovisor is a tool from the cosmos-sdk
team which can perform upgrades fully automatically. It can download and swap the binary without any action from a node operator. Beginning with version 0.6.0
, and with all subsequent versions, we will leverage Cosmovisor to handle our upgrade process.
For more information about the interactive installer, please see this documentation. You will find answers to common questions within this document; however, feel free to reach out to the team on Slack or Discord.
As installing and setting up Cosmovisor can be difficult, and requires some additional steps for setting up the systemd
service, we have built these steps into our interactive installer.
The installation flow is:
systemd
serviceMake sure that it was definitely stopped by using:
The output should be:
the main focus here: Active: inactive (dead)
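For example (the service name may differ depending on your installation):

```shell
# Check the service state; look for "Active: inactive (dead)" in the output
systemctl status cheqd-noded.service
```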
cheqd-noded
binary with the Cosmovisor installationIMPORTANT For running an Upgrade scenario
you'll need to provide the current home directory of the cheqd
user as the answer to the question Set path for cheqd user's home directory [default: /home/cheqd]:
. This is because the upgrade scenario will only be used if this directory exists.
WARNING Please make sure that you answered yes
for questions about overwriting existing configuration. It's very important when making a new installation with Cosmovisor.
systemd
serviceIf you are updating a current installation the next steps can be used:
systemd
serviceand make sure that it was really stopped by:
Output should be like:
the main focus here: Active: inactive (dead)
IMPORTANT For running an Upgrade scenario
you'll need to provide the current home directory of the cheqd
user as the answer to the question Set path for cheqd user's home directory [default: /home/cheqd]:
. This is because the upgrade scenario will only be used if this directory exists.
WARNING: If you are keeping just a standalone cheqd-noded
, without Cosmovisor, it's crucial you keep your systemd
service files without overwriting them. Please make sure that your answers were no
.
Prerequisites:
Install Go (currently, our builds are done for Golang v1.18)
To build the executable, run:
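A hedged sketch, assuming Go 1.18 is installed and you are in the repository root (the package path and output location are assumptions; check the repository's Makefile for the canonical build target):

```shell
# Build the node binary from source
go build -o build/cheqd-noded ./cmd/cheqd-noded

# Confirm the binary runs
./build/cheqd-noded version
```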
We have an in-depth guide for making custom Docker image builds.
We also have an in-depth guide on running a localnet with multiple nodes using Docker / Docker Compose.
Indexers, in a broad context, play a fundamental role in organising and optimising data retrieval within various systems. These tools act as navigational aids, allowing efficient access to specific information by creating structured indexes. In the realm of databases and information management, indexers enhance query performance by creating a roadmap to swiftly locate data entries.
In the context of blockchain and dApps, indexers go beyond traditional databases, facilitating streamlined access to on-chain data. This includes transaction histories, smart contract states, and event logs. In the dynamic and decentralised world of blockchain, indexers contribute to the efficiency of data queries, supporting real-time updates and ensuring the seamless functionality of diverse applications and platforms.
There are several indexer solutions available, each offering different levels of decentralisation, ease of development, and performance for you to consider. These solutions serve as intermediaries to assist in indexing Cheqd.
This document provides instructions on how to run a localnet with multiple validator/non-validator nodes. This can be useful if you are developing applications to work on cheqd network, or in automated testing pipelines.
The techniques described here are used in CI/CD contexts, for example, in this repository itself in the test.yml
Github workflow.
A clone of the cheqd-node
repository
Either a pre-built Docker image downloaded from Github Container Registry, or a custom-built Docker image (the latter is mandatory if you've modified any code in the repository clone).
Docker Engine and Docker Compose (same versions as described in configuration instructions for Docker setup)
A cheqd-node binary to run the network config generation script below.
Our localnet setup instructions are designed to set up a local network with the following node types:
3x validator nodes
1x non-validator/observer node
1x seed node
The definition for this network is described in a localnet Docker Compose file, which can be modified as required for your specific use case. Since it's not possible to cover all possible localnet setups, the following instructions describe the steps necessary to execute a setup similar to the one used in our Github test
workflow.
Execute the bash script gen-network-config.sh
to generate validator keys and node configuration for the node types above.
You may modify the output if you want a different mix of node types.
Import the keys generated using the import-keys.sh
bash script:
Modify the docker-compose.yml
file if necessary, along with the per-container environment variables under the container-env
folder.
The default Docker localnet will configure a running network with a pre-built image or custom image.
The five nodes and corresponding ports set up by the default Docker Compose setup will be:
Validator nodes
validator-0
P2P: 26656
RPC: 26657
validator-1
P2P: 26756
RPC: 26757
validator-2
P2P: 26856
RPC: 26857
validator-3
P2P: 26956
RPC: 26957
Seed node
seed-0
P2P: 27056
RPC: 27057
Observer node
observer-0
P2P: 27156
RPC: 27157
You can test the connection to a node using a browser: http://localhost:<rpc_port>
. Example for the first node: http://localhost:26657
.
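Once configured, the localnet itself can typically be brought up and down with Docker Compose (using the docker-compose.yml referenced above):

```shell
# Start all localnet containers in the background
docker compose -f docker-compose.yml up -d

# Check that all five containers are running
docker compose -f docker-compose.yml ps

# Tear the localnet down when finished
docker compose -f docker-compose.yml down
```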
Keys and corresponding accounts will be placed in the network config folder by the import-keys.sh
script, which are used within the nodes configured above.
When connecting using CLI, provide the --home
parameter to any CLI command to point to the specific home directory of the corresponding node: --home network-config/validator-x
.
See the cheqd CLI guide to learn about the most common CLI commands.
This page lists the ADRs for cheqd-node that have been Accepted, Proposed, or in Draft stage.
The following ADRs have been moved to the cheqd identity documentation site
SubQuery is a leading blockchain data indexer that provides developers with fast, flexible, universal, open source and decentralised APIs for web3 projects. SubQuery SDK allows developers to get rich indexed data and build intuitive and immersive decentralised applications in a faster and more efficient way. SubQuery supports 150+ ecosystems including Cheqd, Cosmos, Ethereum, Near, Polygon, Polkadot, Algorand, and Avalanche.
Another one of SubQuery's competitive advantages is the ability to aggregate data not only within a chain but across multiple blockchains all within a single project. This allows the creation of feature-rich dashboard analytics, multi-chain block scanners, or projects that index IBC transactions across zones.
SubQuery Docs: SubQuery Academy (Documentation)
Intro Quick Start Guide: 1. Create a New Project
For technical questions and support, reach out to us at start@subquery.network
SubQuery is open-source, meaning you have the freedom to run it in the following three ways:
Locally on your own computer (or a cloud provider of your choosing), view the instructions on how to run SubQuery Locally.
You can publish it to SubQuery's enterprise-level Managed Service, where we'll host your SubQuery project in production-ready services for mission-critical data with zero-downtime blue/green deployments. There is even a generous free tier. Find out how.
You can publish it to the decentralised SubQuery Network, the most open, performant, reliable, and scalable data service for dApp developers. The SubQuery Network indexes and services data to the global community in an incentivised and verifiable way and supports Cheqd from launch.
Ubuntu's standard support for Ubuntu 20.04 LTS ("Long Term Support") version ends in April 2025. Ubuntu 20.04 LTS is currently the default Linux operating system used to build and run the binaries for our cheqd-node. To ensure continued stability and security, we need to upgrade to the latest Long-Term Support (LTS) version, Ubuntu 24.04
.
Additionally, to ensure compatibility with Ubuntu 24.04, you'll need to upgrade to cheqd-noded v2.0.2
. This upgrade will address any existing security vulnerabilities and ensure that we are running on the most up-to-date software and distribution, providing a secure and reliable environment.
Please follow the guide to transition both the operating system and our node software smoothly.
Backup critical data (e.g., keys, config files).
Be prepared for the expected downtime of your node (approx 1 hour)
Ensure all dependencies and software running on your node (except cheqd-node) support the latest Ubuntu LTS.
Stop and disable the cheqd-cosmovisor.service:
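Assuming the default service name, the commands would be:

```shell
# Stop the node and prevent it from restarting during the OS upgrade
sudo systemctl stop cheqd-cosmovisor.service
sudo systemctl disable cheqd-cosmovisor.service
```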
Update sources list and upgrade existing packages:
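For example:

```shell
# Refresh package lists and upgrade installed packages
sudo apt update && sudo apt upgrade -y
```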
You'll probably need to restart your server after this step (you'll see the message if you run the do-release-upgrade command):
Trigger distribution upgrade
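The distribution upgrade is started with:

```shell
# Begin the interactive Ubuntu release upgrade
sudo do-release-upgrade
```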
During the process, you'll be prompted with a number of questions; here are the most important ones:
Feel free to continue over ssh, since this shouldn't cause any issues for most of the cloud providers.
This is required, so continue.
After you agree with this, the actual upgrade process will start.
There will be some configuration changes and you'll probably be asked if you want to install a new version of certain software (see the screenshots and examples below). In most cases, it is completely fine to overwrite configuration changes, but be cautious if you have some custom configurations, since these actions will overwrite them:
After that, you'll be asked if you want to remove obsolete packages. It's always a good idea to check details (by pressing d
), especially if you have other software running on your node. In our case, we agreed to remove unused packages:
Finally, you'll be asked to restart your system to apply the changes. Your server should automatically reboot, but it's good to check with your cloud provider documentation and make sure this action won't cause any data loss.
It's probable that the SSH host fingerprint will change after the upgrade, so you won't be able to SSH to your server immediately. To resolve this, run the following command on your localhost:
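For example (replace the address with your server's hostname or IP):

```shell
# Remove the stale host key so the new fingerprint can be accepted
ssh-keygen -R your-server.example.com
```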
Check the distribution version:
After you ssh again, you can check your distribution version, which should match Ubuntu 22.04 LTS if the upgrade was performed successfully.
Trigger another distribution upgrade:
The remaining steps should be the same as in Step 2, although you won't be asked the same questions as in the first upgrade.
Note that it's possible that you won't be able to proceed with this due to some package incompatibility:
On one of our nodes, it was the postgres-client-15
.
We resolved this issue by removing this package and installing it again after a successful upgrade:
After you complete the upgrade and restart your node again, you should ssh to your server again and check the distribution version. If everything was successful, you should get output like this:
Download the latest version of the interactive installer:
Run the installer:
Select the first option UPGRADE existing installation and proceed by choosing the v2.0.2 version from the list.
For the remaining questions, answer based on your previous setup and complete the binary upgrade.
Check the version of your cheqd-node:
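For example:

```shell
# Print the installed node version; it should report v2.0.2
cheqd-noded version
```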
Re-enable cheqd-cosmovisor.service and start your node:
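Assuming the default service name:

```shell
# Re-enable the service and start the node
sudo systemctl enable cheqd-cosmovisor.service
sudo systemctl start cheqd-cosmovisor.service
```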
The final step would be to make sure that your node is up and running and that it managed to sync with the network.
Run cheqd-noded status
and check for "catching_up":false
and also check if the latest_block_height
and latest_block_time
looks good to you.
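A quick sketch using jq to filter the relevant fields (field names follow the standard Cosmos SDK status output; depending on the version, the status command may print to stderr):

```shell
# Show sync status; "catching_up": false means the node is in sync
cheqd-noded status 2>&1 | jq '.SyncInfo | {latest_block_height, latest_block_time, catching_up}'
```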
This is the suggested template to be used for ADRs on the cheqd-node project.
Category | Status |
---|
The aim of this ADR is to define the smallest fraction for CHEQ tokens.
Cosmos SDK doesn't provide native support for token fractions. The lowest denomination out-of-the-box that can be used in transactions is 1token
.
To address this issue, similar Cosmos networks assume that they use N digits after the decimal point and multiply all values by 10^(-N) in UI.
Popular Cosmos networks were compared to check how many digits after the decimal point are used by them:
Cosmos: 6
IRIS: 6
Fetch.ai: 18
Binance: 8
It was decided to go with 10^-9 as the smallest fraction, with the whole number token being 1 CHEQ. Based on the SI prefix system, the lowest denomination would therefore be called "nanocheq".
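As a quick sanity check on the arithmetic, 1 CHEQ equals 10^9 ncheq, so converting whole CHEQ to ncheq is a simple multiplication:

```shell
# Convert whole CHEQ to ncheq (1 CHEQ = 1,000,000,000 ncheq)
cheq=5
echo $(( cheq * 1000000000 ))   # prints 5000000000
```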
There is no backward compatibility. To adjust the number of digits after the decimal point (lowest token denomination), the network should be restarted.
The power of 10 chosen for the lowest denomination of CHEQ tokens is more precise than for Cosmos ATOMs, which allows transactions to be defined in smaller units.
This decision is hard to change in the future, as changes to denominations require significant disruption when a network is already up and running.
N/A
Category | Status |
---|
Due to the nature of the cheqd project merging concepts from the and self-sovereign identity (SSI), there are two potential options for creating Command Line Interface (CLI) tools for developers to use:
Cosmos-based CLI: Most likely route for Cosmos projects for their node application. Most existing Cosmos node validators will be familiar with this method of managing their node.
VDR CLI: Traditionally, a lot of SSI networks have used and therefore the Indy CLI tool for managing and interacting with the ledger. This has now been renamed to and is the tool that most existing SSI node operators ("stewards") would be familiar with.
Ideally, the cheqd-node
project would provide a consistent set of CLI tools rather than two separate tools with varying feature sets between them.
This ADR will focus on the CLI tool architecture choice for cheqd-node
.
Any CLI tool architecture chosen should not increase the likelihood of introducing bugs, security vulnerabilities, or design pattern deviations from upstream Cosmos SDK.
Actions that are carried out on ledger through a CLI tool in cheqd-node
now include token functionality as well as identity functionality. E.g., if a DID gets compromised, there could be mechanisms to recover or signal that fact to issuers/verifiers. If tokens or the staking balance of node operators get compromised, this may potentially have more severe consequences for them.
Would node operators want a single CLI to manage everything?
This might be the case with node operators from an SSI / digital identity background, or node operators familiar with Hyperledger Indy CLI / VDR Tools CLI.
A “single CLI” could be a single tool as far as the user sees, but actually consist of multiple modules beneath it in how it’s implemented.
Would node operators be okay with having two separate CLIs?
One for Cosmos-ledger functions, and one for identity-specific functions.
Unlike existing Hyperledger Indy networks, it is anticipated that some of the node operators on the cheqd network will have experience running Cosmos validator nodes. For this group, having to learn a new “single” CLI tool could cause a steeper learning curve and a worse user experience than what they have now.
Node operators may want one/separate CLIs for security and operational reasons, i.e., for a separation of concerns in terms of functionality.
Pros:
Simple to do, no changes needed in code developed.
Differences in functionality between the two CLIs can be explained in documentation.
Node operators with good technical skills will understand the difference.
Cosmos CLI design patterns would be consistent with the wider Cosmos open source ecosystem.
No steep learning curve for potential node operators who only want to run a node, without implementing SSI functionality in apps.
Cons:
Key storage for Cosmos accounts may need to be done in two different keystores.
Potentially confusing for node operators who use both CLIs to know which one to use for what purpose.
Potentially a steeper learning curve for existing SSI node operators.
Pros:
Both Cosmos CLI and VDR Tools CLI would have native support for identity as well as token transactions.
Node operators/developers could pick their preferred CLI tool.
Cons:
Significant development effort required to implement very similar functionality two separate times, with little difference to the end user in actions that can be executed.
VDR Tools CLI has DID / VC modules that would take significant effort to recreate in Cosmos CLI
Cosmos CLI has token related functionality that would take significant development effort to replicate in VDR Tools CLI, and opens up the possibility that errors in implementation could introduce security vulnerabilities.
Commands in the Cosmos CLI could be made available as aliases in the VDR Tools CLI, or vice versa.
Pros:
Single CLI tool to learn and understand for node operators.
Development effort is simplified, as overlapping functionality is not implemented in two separate tools.
Cons:
Less development effort required than Option 2, but greater than Option 1.
Opens up the possibility that there's deviation in feature coverage between the two CLIs if aliases are not created to make 1:1 feature parity in both tools.
Based on the options considerations above and an analysis of development required, the decision was taken to maintain two separate CLI tools:
cheqd-node
Cosmos CLI: Any Cosmos-specific features, such as network & node management, token functionality required by node operators, etc.
VDR Tools CLI: Any identity-specific features required by issuers, verifiers, holders on SSI networks.
Faster time-to-market on the CLI tools, while freeing up time to build out user-facing functionality.
Cosmos account keys may need to be replicated in two separate key storages. A potential resolution for this in the future is to integrate the ability to use a single keyring for both CLI tools.
Seek feedback from cheqd's open source community and node operators during testnet phase on whether the documentation and user experience is easy to understand and appropriate tools are available.
Category | Status |
---|
The aim of this document is to define the genesis parameters that will be used in cheqd network testnet and mainnet.
Cosmos v0.44.3 parameters are described.
Genesis consists of Tendermint consensus engine parameters and Cosmos app-specific parameters.
Tendermint requires certain consensus parameters to be defined for basic consensus conditions on any Cosmos network.
auth module
bank module
cheqd module (DID module)
crisis module
distribution module
gov module
mint module
resource module
slashing module
staking module
ibc module
ibc-transfer module
The parameters above were agreed separately for the cheqd mainnet and testnet. We have bolded the testnet parameters that differ from mainnet.
The token denomination has been changed to make the smallest denomination 10^-9 CHEQ (= 1 ncheq) instead of 1 CHEQ. This is a breaking change from the earliest versions of the cheqd testnet, which required new tokens to be issued and transferred to testnet node operators.
Inflation allows fees to be collected from block rewards in addition to transaction fees.
In production/mainnet, parameters can only be changed via a majority vote that is not defeated by veto, in accordance with the cheqd network governance principles. This allows for more democratic governance frameworks to be created for a self-sovereign identity network.
Existing node operators will need to re-establish staking with new staking denomination and staking parameters.
The unbonding period and deposit period have both been reduced to 2 weeks, to balance the speed at which decisions can be reached against giving validators enough time to participate.
This is a location to record all high-level architecture decisions for cheqd-node
, the server/node portion of a purpose-built network for decentralised identity.
An Architectural Decision (AD) is a software design choice that addresses a functional or non-functional requirement that is architecturally significant.
An Architectural Decision Record (ADR) captures a single AD, such as often done when writing personal notes or meeting minutes; the collection of ADRs created and maintained in a project constitute its decision log.
ADRs are intended to be the primary mechanism for proposing new feature designs and new processes, for collecting community input on an issue, and for documenting the design decisions. An ADR should provide:
Context on the relevant goals and the current state
Proposed changes to achieve the goals
Summary of pros and cons
References
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and justification for a change in architecture, or for the architecture of something new. The spec is a much more compressed and streamlined summary of everything as it stands today.
If recorded decisions turn out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Use the ADR template when creating a new ADR.
Category | Status |
---|---|
This ADR describes how cheqd/Cosmos account keys can be imported/exported into identity wallet applications built on Evernym VDR Tools SDK.
Client SDK applications such as need to work with cheqd accounts in identity wallets to be able to interact with the cheqd network ledger.
For example, an identity wallet application or backend application would need to pay network transaction fees for writing . This may also need to be extended in the future to support .
Cosmos SDK uses mnemonic-based (BIP39-style) key derivation. This can be replicated using standard crypto libraries to carry out the same steps as in Cosmos SDK:
Functionality will be added to VDR Tools SDK to import/export cheqd accounts using mnemonics paired with the --recover
flag as done with Cosmos wallets.
Not applicable, since this is an entirely new feature in VDR Tools SDK for integration with the new blockchain framework.
Adding/recovering cheqd accounts in VDR Tools SDK will follow a similar, familiar process that users have for Cosmos wallets.
N/A
N/A
Fractions of CHEQ tokens will be referred to by their metric prefix, based on the power of 10 of CHEQ tokens being referred to in context. This notation system is common across other Cosmos networks as well.
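To make the notation concrete, the conversion between CHEQ and its smallest denomination can be sketched as follows (a minimal illustration, using the 1 CHEQ = 10^9 ncheq relationship defined for the network; the function names are ours):

```python
# 1 CHEQ = 10^9 ncheq, i.e. the smallest on-ledger denomination is 10^-9 CHEQ.
NCHEQ_PER_CHEQ = 10**9

def cheq_to_ncheq(cheq: int) -> int:
    """Convert whole CHEQ to ncheq, the base denomination used in fees."""
    return cheq * NCHEQ_PER_CHEQ

def ncheq_to_cheq(ncheq: int) -> float:
    """Convert ncheq back to CHEQ."""
    return ncheq / NCHEQ_PER_CHEQ

# e.g. a 50 CHEQ DID-creation fee is expressed on-ledger as:
print(cheq_to_ncheq(50))  # 50000000000
```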
Only available in Cosmos CLI | In both Cosmos CLI and VDR CLI | Only available in VDR CLI |
---|---|---|
(in Excalidraw format)
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
A Cosmos application is divided into modules. Each module has parameters that help to adjust the module's behaviour.
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
Parameter | Description | Mainnet | Testnet |
---|---|---|---|
The mnemonic above is assumed to be a pre-existing one. The "passphrase" above is user-defined, and defaults to blank if not defined.
Mnemonic import/export can be achieved using pre-existing packages.
Using these pre-existing libraries, cheqd accounts can be recovered using the standard key-derivation process for Cosmos SDK chains described below:
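As a rough illustration of the first of those steps, the standard BIP39 seed derivation can be reproduced with stdlib primitives alone. This is a sketch of the generic BIP39 step, not VDR Tools SDK code; the subsequent BIP32/BIP44 path derivation that produces the actual account key is omitted here.

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte BIP39 seed from a mnemonic.

    Reproduces the standard BIP39 step: PBKDF2-HMAC-SHA512 over the
    NFKD-normalised mnemonic, salted with "mnemonic" + passphrase,
    2048 iterations. The passphrase defaults to blank, matching the
    behaviour described above.
    """
    m = unicodedata.normalize("NFKD", mnemonic).encode("utf-8")
    salt = ("mnemonic" + unicodedata.normalize("NFKD", passphrase)).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha512", m, salt, 2048, dklen=64)

# Cosmos SDK then derives the account key from this seed along an HD
# (BIP32/BIP44) path; that step needs an external library and is not shown.
seed = bip39_seed("abandon abandon abandon abandon abandon abandon "
                  "abandon abandon abandon abandon abandon about")
assert len(seed) == 64
```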
Authors | Alexandr Kolesov |
ADR Stage | ACCEPTED |
Implementation Status | Implemented |
Start Date | 2021-09-08 |
Cosmos transactions + Queries | Signing service + Key storage | Identity transactions + Queries |
MultiSig | (Transaction + Query) sending + Proof validation | DIDs + VCs (+ DID storage) |
Network bootstrapping commands | | |
| This sets the maximum size, in total bytes, that can be committed in a single block. | 200,000 (~200 KB) | 200,000 (~200 KB) |
| This sets the maximum gas that can be used in any single block. | 200,000 | 200,000 |
| Unused. This has been deprecated and will be removed in a future version of Cosmos SDK. | 1,000 (1 second) | 1,000 (1 second) |
| Maximum age of evidence, in blocks. The basic formula for calculating this is the maximum age of evidence (in time) divided by the average block time. | 12,100 | 25,920 |
| Maximum age of evidence, in time. It should correspond with an app's "unbonding period". | 1,209,600,000,000,000 (expressed in nanoseconds, ~2 weeks) | 259,200,000,000,000 (expressed in nanoseconds, ~72 hours) |
| This sets the maximum size of total evidence in bytes that can be committed in a single block and should fall comfortably under | 50,000 (~ 50 KB) | 5,000 (~ 5 KB) |
| Types of public keys supported for validators on the network. | Ed25519 | Ed25519 |
| Maximum number of characters in the memo field | 512 | 512 |
| Max number of signatures | 7 | 7 |
| Gas cost of transaction byte | 10 | 10 |
| Cost of verifying an ed25519 signature | 590 | 590 |
| Cost of verifying a secp256k1 signature | 1,000 | 1,000 |
| Default value for whether send transfers are enabled for all coin denominations | True | True |
| The specified transaction fee for creating a Decentralized Identifier (DID) on the cheqd network | 50,000,000,000 ncheq (50 CHEQ) | 50,000,000,000 ncheq (50 CHEQ) |
| The specified transaction fee for updating an existing Decentralized Identifier (DID) on the cheqd network | 25,000,000,000 ncheq (25 CHEQ) | 25,000,000,000 ncheq (25 CHEQ) |
| The specified transaction fee for deactivating an existing Decentralized Identifier (DID) on the cheqd network | 10,000,000,000 ncheq (10 CHEQ) | 10,000,000,000 ncheq (10 CHEQ) |
| The percentage of the transaction fee that is burnt | 50% | 50% |
| The percent of rewards that goes to the community fund pool | 0.02 (2%) | 0.02 (2%) |
| Base reward that proposer of a block receives | 0.01 (1%) | 0.01 (1%) |
| Bonus reward that proposer gets for proposing block. This depends on the number of pre-commits included to the block | 0.04 (4%) | 0.04 (4%) |
| Whether withdrawal address can be changed or not. By default, it's the delegator's address. | True | True |
| The minimum deposit for a proposal to enter the voting period. | [{ "denom": "ncheq", "amount": "8,000,000,000,000" }] (8,000 CHEQ) | [{ "denom": "ncheq", "amount": "8,000,000,000,000" }] (8,000 CHEQ) |
| The maximum period for CHEQ holders to deposit on a proposal. | 604,800s (1 week) | 172,800s (48 hours) |
| The defined period for an on-ledger vote from start to finish. | 259,200s (3 days) | 86,400s (1 day) |
| Minimum percentage of total stake needed to vote for a result to be considered valid. | 0.334 (33.4%) | 0.334 (33.4%) |
| Minimum proportion of Yes votes for a proposal to pass. | 0.5 (50%) | 0.5 (50%) |
| The minimum ratio of veto votes to total votes for a proposal to be vetoed. Default value: 1/3. | 0.334 (33.4%) | 0.334 (33.4%) |
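The interaction of these three thresholds (quorum, threshold, veto threshold) can be sketched as follows. This is an illustrative approximation of the standard Cosmos gov tally, not the module's actual implementation, and the vote amounts are hypothetical.

```python
# Governance parameters from the table above.
QUORUM = 0.334          # min share of total stake that must vote
THRESHOLD = 0.5         # min Yes share of non-abstain votes
VETO_THRESHOLD = 0.334  # NoWithVeto share that defeats a proposal

def tally(yes: float, no: float, no_with_veto: float, abstain: float,
          total_bonded: float) -> str:
    voted = yes + no + no_with_veto + abstain
    if total_bonded == 0 or voted / total_bonded < QUORUM:
        return "rejected: quorum not reached"
    if voted and no_with_veto / voted > VETO_THRESHOLD:
        return "rejected: vetoed"
    non_abstain = voted - abstain
    if non_abstain and yes / non_abstain > THRESHOLD:
        return "passed"
    return "rejected"

# 100 of 200 bonded tokens voted (quorum met); Yes carries 60/85 of
# non-abstain votes and the veto share is only 5%, so the proposal passes.
print(tally(yes=60, no=20, no_with_veto=5, abstain=15, total_bonded=200))
```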
| Name of the smallest CHEQ denomination | ncheq | ncheq |
| Maximum inflation rate change per year. | 0.045 (4.5%) | 0.045 (4.5%) |
| Inflation trends towards this value if the percentage of bonded tokens is below the goal | 0.04 (4%) | 0.04 (4%) |
| Inflation trends towards this value if the percentage of bonded tokens is above the goal | 0.01 (1%) | 0.01 (1%) |
| Percentage of bonded tokens at which inflation rate will neither increase nor decrease | 0.60 (60%) | 0.60 (60%) |
| Number of blocks generated per year | 3,155,760 (1 block every ~10 seconds) | 3,155,760 (1 block every ~10 seconds) |
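How these parameters interact in the standard Cosmos mint module's per-block inflation adjustment can be sketched as below. This is an approximation for illustration, not the module's exact code; the function name is ours.

```python
# Mint parameters from the table above.
INFLATION_RATE_CHANGE = 0.045   # max inflation rate change per year
INFLATION_MAX = 0.04
INFLATION_MIN = 0.01
GOAL_BONDED = 0.60
BLOCKS_PER_YEAR = 3_155_760

def next_inflation(current: float, bonded_ratio: float) -> float:
    """One block's inflation adjustment, following the standard mint formula.

    Inflation drifts up when the bonded ratio is below the goal and down
    when it is above, clamped to [inflation_min, inflation_max].
    """
    change_per_year = (1 - bonded_ratio / GOAL_BONDED) * INFLATION_RATE_CHANGE
    inflation = current + change_per_year / BLOCKS_PER_YEAR
    return min(max(inflation, INFLATION_MIN), INFLATION_MAX)
```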
| The specified transaction fee for creating an image as a DID-Linked Resource on the cheqd network | 10,000,000,000 ncheq (10 CHEQ) | 10,000,000,000 ncheq (10 CHEQ) |
| The specified transaction fee for creating a JSON file as a DID-Linked Resource on the cheqd network | 2,500,000,000 ncheq (2.5 CHEQ) | 2,500,000,000 ncheq (2.5 CHEQ) |
| The specified transaction fee for creating any other type of DID-Linked Resource on the cheqd network, other than images or JSON files | 5,000,000,000 ncheq (5 CHEQ) | 5,000,000,000 ncheq (5 CHEQ) |
| The percentage of the transaction fee for | 50% | 50% |
| Window, in blocks, over which a validator's signing of blocks is measured for downtime slashing. | 25,920 (expressed in blocks, equates to 259,200 seconds or ~3 days) | 17,280 (expressed in blocks, equates to 172,800 seconds or ~2 days) |
| This percentage of blocks must be signed within the window. | 0.50 (50%) | 0.50 (50%) |
| The minimum time a validator has to stay in jail | 600s (~10 minutes) | 600s (~10 minutes) |
| Slashed amount as a percentage for a double sign infraction | 0.05 (5%) | 0.05 (5%) |
| Slashed amount as a percentage for downtime | 0.01 (1%) | 0.01 (1%) |
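The downtime arithmetic implied by the window parameters can be illustrated as follows, using the mainnet values from the table above (the function name is ours):

```python
# Slashing parameters (mainnet column above).
SIGNED_BLOCKS_WINDOW = 25_920
MIN_SIGNED_PER_WINDOW = 0.50

def max_missable_blocks(window: int = SIGNED_BLOCKS_WINDOW,
                        min_signed: float = MIN_SIGNED_PER_WINDOW) -> int:
    """Blocks a validator can miss within the window before downtime slashing."""
    return int(window * (1 - min_signed))

# With a 25,920-block window and 50% minimum signed, a validator may miss
# up to half the window before the 1% downtime slash applies.
print(max_missable_blocks())  # 12960
```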
| A delegator must wait this time after unbonding before tokens become available | 1,210,000s (~2 weeks) | 259,200s (~3 days) |
| The maximum number of validators in the network | 125 | 125 |
| Maximum number of unbonding/redelegation operations in progress per account | 7 | 7 |
| Number of unbonding/redelegation entries to store | 10,000 | 10,000 |
| Denomination used in staking | ncheq | ncheq |
| Enables or disables all cross-chain token transfers from this chain | true | true |
| Enables or disables all cross-chain token transfers to this chain | true | true |
Authors | Alexandr Kolesov |
ADR Stage | ACCEPTED |
Implementation Status | Implemented |
Start Date | 2021-09-10 |
Authors | Alexandr Kolesov, Ankur Banerjee, Alex Tweeddale |
ADR Stage | ACCEPTED |
Implementation Status | Implemented |
Start Date | 2021-09-15 |
Last Updated | 2022-12-08 |
Authors | Andrew Nikitin, Ankur Banerjee |
ADR Stage | ACCEPTED |
Implementation Status | Implemented |
Start Date | 2021-09-23 |
We would love for you to contribute to cheqd and help make it even better than it is today! As a contributor, here are the guidelines we would like you to follow.
Help us keep cheqd open and inclusive. Please read and follow our Code of Conduct
You can get help for any questions or problems that you have through the following channels:
Check if your question/problem is already covered under our documentation sites:
Learn about cheqd (basics for general audience)
Raise a bug report or feature request using the "Issue" tab on Github
Ask the question on our Community Slack or Discord
If you find a bug in the source code, you can help us by submitting an issue to our GitHub Repository. Even better, you can submit a Pull Request with a fix.
You can request a new feature by submitting an issue to our GitHub Repository. If you would like to implement a new feature, please consider the size of the change in order to determine the right steps to proceed:
For a Major Feature, first open an issue and outline your proposal so that it can be discussed. This process allows us to better coordinate our efforts, prevent duplication of work, and help you to craft the change so that it is successfully accepted into the project.
Small Features can be crafted and directly submitted as a Pull Request.
Before you submit an issue, please search the issue tracker. An issue for your problem might already exist and the discussion might inform you of workarounds readily available.
We want to fix all the issues as soon as possible, but before fixing a bug, we need to reproduce and confirm it. In order to reproduce bugs, we require that you provide a minimal reproduction. Having a minimal reproducible scenario gives us a wealth of important information without going back and forth to you with additional questions.
A minimal reproduction allows us to quickly confirm a bug (or point out a coding problem) as well as confirm that we are fixing the right problem.
We require a minimal reproduction to save maintainers' time and ultimately be able to fix more bugs. Often, developers find coding problems themselves while preparing a minimal reproduction. We understand that sometimes it might be hard to extract essential bits of code from a larger codebase but we really need to isolate the problem before we can fix it.
Unfortunately, we are not able to investigate / fix bugs without a minimal reproduction, so if we don't hear back from you, we are going to close an issue that doesn't have enough info to be reproduced.
You can file new issues by selecting from our new issue templates and filling out the issue template.
Before you submit your Pull Request (PR) consider the following guidelines:
Search GitHub for an open or closed PR that relates to your submission. You don't want to duplicate existing efforts.
Be sure that an issue describes the problem you're fixing, or documents the design for the feature you'd like to add. Discussing the design upfront helps to ensure that we're ready to accept your work.
Fork the cheqd-node repository.
In your forked repository, make your changes in a new git branch:
Create your patch, including appropriate test cases.
Check that all workflow actions for linting / build / test pass.
Commit your changes using a descriptive commit message that follows Conventional Commits convention
Push your branch to GitHub:
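The branching and pushing steps above can be sketched as shell commands. The example below runs against a scratch repository; the branch name `my-fix-branch` is illustrative, and the final push (left commented) assumes your fork is configured as the `origin` remote.

```shell
# Illustrative workflow in a throwaway repository.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git checkout -q -b develop
git -c user.name=example -c user.email=example@example.com \
    commit -q --allow-empty -m "chore: initial commit"

# Create your working branch off develop (name is an example):
git checkout -q -b my-fix-branch develop
git branch --show-current            # prints: my-fix-branch

# When your patch is ready, push the branch to your fork:
# git push origin my-fix-branch
```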
In GitHub, send a pull request to cheqd-node:develop
. The develop
branch is where all pending PRs should be targeted for inclusion in the next release.
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.
d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike.
h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
j. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
a. License grant.
Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
Term. The term of this Public License is specified in Section 6(a).
Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
Downstream recipients.
A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. Other rights.
Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
Patent and trademark rights are not licensed under this Public License.
To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. Attribution.
If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
b. ShareAlike.
In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.
You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
Creative Commons may be contacted at creativecommons.org
If you think you have discovered a security issue in any of cheqd's projects, we'd love to hear from you.
We take all security bugs seriously. If confirmed upon investigation, we will patch the issue within a reasonable amount of time and release a public security bulletin discussing the impact and crediting the discoverer.
There are two ways to report a security bug:
Email us at security-github@cheqd.io
Join our cheqd Community Slack and post a message on the #security channel
| 10,000,000,000,000 ncheq (10,000 CHEQ) | 10,000,000,000,000 ncheq (10,000 CHEQ) |
| Maximum expected time per block, used to enforce block delay. This parameter should reflect the largest amount of time that the chain might reasonably take to produce the next block under normal operating conditions. A safe choice is 3-5x the expected time per block. | 30,000,000,000 (expressed in nanoseconds, ~ 30 seconds) | 30,000,000,000 (expressed in nanoseconds, ~ 30 seconds) |
| [ "06-solomachine", "07-tendermint" ] | [ "06-solomachine", "07-tendermint" ] |
We as members, contributors, and leaders pledge to make participation in the cheqd community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, country of origin, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Examples of behaviour that contributes to a positive environment for our community include:
Demonstrating empathy and kindness toward other people
Being respectful of differing opinions, viewpoints, and experiences
Giving and gracefully accepting constructive feedback
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behaviour include:
The use of sexualized language or imagery, and sexual attention or advances of any kind
Use of inappropriate or non-inclusive language or other behaviour deemed unprofessional or unwelcome in the community
Trolling, insulting or derogatory comments, and personal or political attacks
Public or private harassment
Publishing others' private information, such as a physical or email address, without their explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
Community leaders are responsible for clarifying and enforcing our standards of acceptable behaviour and will take appropriate and fair corrective action in response to any behaviour that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, messages, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
Community Impact: Use of inappropriate language or other behaviour deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behaviour was inappropriate. A public apology may be requested.
Community Impact: A violation through a single incident or series of actions. Any Community Impact assessment should take into account:
The severity and/or number of incidents/actions
Non-compliance with previous private warnings from community leaders (if applicable)
Consequence: A warning with consequences for continued behaviour. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
Community Impact: A serious violation of community standards, including sustained inappropriate behaviour.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behaviour, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
This is the suggested template to be used for ADRs on the cheqd-node project.
Category | Status |
---|---|
The aim of this ADR is to define how "community tax", as described in the Cosmos blockchain framework, will work on the cheqd network.
communityTax
is a value set in genesis for each Cosmos network and defined as a percentage that is applied to the fees collected in each block.
Tokens collected through this process accumulate in the community pool. The percentage charged as communityTax
can be changed by making proposals on the network and voting for acceptance by the network.
From Cosmos SDK documentation, distribution
module:
The community pool gets
community_tax * fees
, plus any remaining dust after validators get their rewards that are always rounded down to the nearest integer value.
To spend tokens from the community pool:
community-pool-spend
proposal can be submitted on the network.
Recipient address and amount of tokens should be specified.
The purpose for which the requested community pool tokens will be spent should be described.
If the proposal is approved through the voting process, the specified recipient address receives the requested tokens.
The expectation on the recipient is that they spend the tokens for the purpose specified in their proposal.
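As a sketch of how such a proposal is submitted (the proposal file contents, recipient address, amounts, and flag names are illustrative placeholders and may differ between Cosmos SDK versions):

```shell
# proposal.json describes the recipient, the requested amount, and the
# purpose that voters are asked to approve (all values are placeholders)
cat <<'EOF' > proposal.json
{
  "title": "Community pool spend",
  "description": "Funding for <purpose described to voters>",
  "recipient": "cheqd1<recipient-address>",
  "amount": "1000000000ncheq",
  "deposit": "8000000000000ncheq"
}
EOF

# Submit the community-pool-spend proposal for voting
cheqd-noded tx gov submit-proposal community-pool-spend proposal.json \
  --from <key-alias> --chain-id <chain_id>
```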
More information about fee distribution is available in the End Block section of Cosmos's distribution
module documentation.
cheqd's network will keep the communityTax
parameter enabled, i.e., non-zero.
The value of communityTax
, based on a review of similar Cosmos networks will be set to 2%
.
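As a worked example of how the 2% rate applies to block fees (the fee amount is illustrative; integer arithmetic mirrors the rounding-down behaviour described above):

```shell
# Illustrative: total fees collected in one block, in ncheq
FEES=1000000000

# communityTax is 2%, so the community pool receives community_tax * fees
COMMUNITY_POOL_SHARE=$((FEES * 2 / 100))
VALIDATOR_SHARE=$((FEES - COMMUNITY_POOL_SHARE))

echo "community pool: ${COMMUNITY_POOL_SHARE} ncheq"   # 20000000 ncheq
echo "validators:     ${VALIDATOR_SHARE} ncheq"        # 980000000 ncheq
```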
The behavior of communityTax
is the same across Cosmos SDK v0.42 and v0.43.
The cheqd network will have a pool of tokens that can be used to spend on initiatives valued by the community.
N/A
cheqd's Governance Framework should provide guidance on how to submit proposals and recommended areas of investment in community efforts.
Category | Status |
---|---|
The Hyperledger Aries protocol describes a payment mechanism that can be used to pay for the issuance of credentials.
It is necessary to establish which public APIs from Hyperledger Aries can be implemented in cheqd-node
to provide an implementation of payments with CHEQ tokens using a well-understood SSI protocol.
Hyperledger Aries protocol has the concept of payment "decorators" ~payment_request
and ~payment_receipt
in requests, which can be used to pay for credential issuance using tokens.
A message is sent by the Issuer to the potential Holder, describing the credential they intend to offer and optionally, the price the issuer would be expected to be paid for said credential. This is based on the Hyperledger Aries credential offer RFC.
A payment request can then be defined using the Hyperledger Aries Payment Decorator to add information about an issuing price and address where payment should be sent.
details.id
field contains an invoice number that unambiguously identifies a credential for which payment is requested. When paying, this value should be placed in the memo
field for the cheqd payment transaction.
payeeId
field contains a cheqd account address in the correct format for cheqd network.
The payment flow can be broken down into five steps:
Build a request for transferring tokens. Example: cheqd_ledger::bank::build_msg_send(from_account, to_account, amount_for_transfer, denom)
from_account
: The prospective credential holder's cheqd account address
to_account
: Same as payeeId
from the Payment Request
amount_for_transfer
: Price of credential issuance defined as details.total.amount.value
from the Payment Request
denom
: Defined in details.total.amount.currency
from the Payment Request
Build a transaction with the request from the previous step Example: cheqd_ledger::auth::build_tx(pool_alias, pub_key, builded_request, account_number, account_sequence, max_gas, max_coin_amount, denom, timeout_height, memo)
memo
: This should be the same as details.id
from the Payment Request
Sign the transaction Example: cheqd_keys::sign(wallet_handle, key_alias, tx)
.
Broadcast the signed transaction Example: cheqd_pool::broadcast_tx_commit(pool_alias, signed)
.
Key fields in the response above are:
hash
: Transaction hash
height
: Ledger height
This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential after tokens are transferred to the nominated account using a Payment Transaction.
request_id
should be the same as details.id
from Payment Request and memo
from Payment Transaction.
Issuer receives Credential Request + payment_receipt
with payment transaction_id
. It allows the Issuer to:
Get the payment transaction by hash from cheqd network ledger using get_tx_by_hash(hash)
method, where hash
is transaction_id
from previous steps.
Check that memo
field from received transaction contains the correct request_id
.
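A sketch of this check with the CLI (the JSON path to the memo is an assumption based on the standard Cosmos SDK transaction structure and may differ by version):

```shell
# Fetch the payment transaction by the transaction_id (hash) from the
# payment_receipt, then extract its memo field
cheqd-noded query tx <transaction_id> --output json | jq -r '.tx.body.memo'
# The printed memo should equal the request_id from the Credential Request
```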
If steps 1-4 are successful, the Issuer is able to confirm that the requested payment has been made using CHEQ tokens. The credential issuing process can then proceed using standard Hyperledger Aries protocol procedures.
Editable version available on swimlanes.io or as text for compatible UML diagram generators below:
Credential issuance outside of the payment flow is compatible with and carried out using existing Hyperledger Aries protocol procedures. This should provide a level of compatibility with existing apps/SDKs that implement Aries protocol.
Defining the transaction in CHEQ tokens is specific to the cheqd network.
By defining the payment mechanism using Hyperledger Aries protocols, this allows the possibility in the future to support payments on multiple networks.
Existing SSI app developers should already be familiar with Hyperledger Aries (if building on Hyperledger Indy) and provides a transition path to add new functionality.
Hyperledger Aries may not be a familiar protocol for other Cosmos projects.
Using the Payment Decorator in practice means there could be interoperability challenges in implementations that impact credential issuance and exchange.
N/A
Category | Status |
---|---|
Issued credentials need to be revocable by their issuers. Revocation needs to be straightforward and fast. Testing of revocation needs to preserve privacy (be non-correlating), and it should be possible to do without contacting the issuer.
This has obvious use cases for professional credentials being revoked for fraud or misconduct, e.g., a driver’s license could be revoked for criminal activity. However, it’s also important if a credential gets issued in error (e.g., has a typo in it that misidentifies the subject). The latter case is important even for immutable and permanent credentials such as a birth certificate.
In addition, it seems likely that the data inside credentials will change over time (e.g., a person's mailing address or phone number updates). This is likely to be quite common; revocation can be used to guarantee the currency of credential data when it happens. In other words, revocation may be used to force updated data, not just to revoke authorization.
Adds a Revocation Registry Definition, which the Issuer creates and publishes for a particular Credential Definition. It contains public keys, the maximum number of credentials the registry may contain, a reference to the Credential Definition, plus some revocation-registry-specific data.
value
(dict):
Dictionary with Revocation Registry Definition's data:
max_cred_num
(integer): The maximum number of credentials the Revocation Registry can handle
tails_hash
(string): Tails file digest
tails_location
(string): Tails file location (URL)
issuance_type
(string enum): Defines credential revocation strategy. Can have the following values:
ISSUANCE_BY_DEFAULT
: All credentials are assumed to be issued and active initially, so that Revocation Registry needs to be updated (REVOC_REG_ENTRY
transaction sent) only when revoking. Revocation Registry stores only revoked credentials indices in this case. Recommended to use if expected number of revocation actions is less than expected number of issuance actions.
ISSUANCE_ON_DEMAND
: No credentials are issued initially, so that Revocation Registry needs to be updated (REVOC_REG_ENTRY
transaction sent) on every issuance and revocation. Revocation Registry stores only issued credentials indices in this case. Recommended to use if expected number of issuance actions is less than expected number of revocation actions.
public_keys
(dict): Revocation Registry's public key
id
(string): Revocation Registry Definition's unique identifier (a key from state trie is currently used) owner:cred_def_id:revoc_def_type:tag
cred_def_id
(string): The corresponding Credential Definition's unique identifier (a key from state tree is currently used)
revoc_def_type
(string enum): Revocation Type. CL_ACCUM
(Camenisch-Lysyanskaya Accumulator) is the only supported type now.
tag
(string): A unique tag to have multiple Revocation Registry Definitions for the same Credential Definition and type issued by the same DID.
Note: REVOC_REG_DEF can be updated.
(owner, cred_def_id, revoc_def_type, tag) -> {data, tx_hash, tx_timestamp }
Request Example:
Reply Example:
The Revocation Registry Entry contains the new accumulator value and issued/revoked indices. This is just a delta of indices, not the whole list. It can be sent each time a new credential is issued/revoked.
value
(dict):
Dictionary with revocation registry's data:
accum
(string): The current accumulator value
prev_accum
(string): The previous accumulator value. It is compared with the current value, and the transaction is rejected if they don't match. This is necessary to avoid dirty writes and updates of the accumulator.
issued
(list of integers): An array of issued indices (may be absent/empty if the type is ISSUANCE_BY_DEFAULT
). This is a delta and will be accumulated in the state.
revoked
(list of integers): An array of revoked indices. This is a delta and will be accumulated in the state.
revoc_reg_def_id
(string): The corresponding Revocation Registry Definition's unique identifier (a key from state trie is currently used)
revoc_def_type
(string enum): Revocation Type. CL_ACCUM
(Camenisch-Lysyanskaya Accumulator) is the only supported type now.
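An illustrative payload for such a delta, following the fields described above (accumulator values and indices are placeholders; the empty issued array reflects the ISSUANCE_BY_DEFAULT strategy, where only revocations are recorded):

```json
{
  "value": {
    "accum": "<current-accumulator-value>",
    "prev_accum": "<previous-accumulator-value>",
    "issued": [],
    "revoked": [10, 36, 42]
  },
  "revoc_reg_def_id": "<owner>:<cred_def_id>:CL_ACCUM:<tag>",
  "revoc_def_type": "CL_ACCUM"
}
```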
Note: REVOC_REG_ENTRY can be updated.
MARKER_REVOC_REG_ENTRY_ACCUM:revoc_reg_def_id -> {data, tx_hash, tx_timestamp }
MARKER_REVOC_REG_ENTRY:revoc_reg_def_id -> {data, tx_hash, tx_timestamp }
Request Example:
Reply Example:
This is the suggested template to be used for ADRs on the cheqd-node project.
Category | Status |
---|---|
What is the status, such as proposed, accepted, rejected, deprecated, superseded, etc.?
A short (~100 word) description of the issue being addressed. "If you can't explain it simply, you don't understand it well enough." Provide a simplified and layman-accessible explanation of the ADR.
This section describes the forces at play, such as business, technological, social, and project local. These forces are probably in tension, and should be called out as such. The language in this section is value-neutral. It is simply describing facts. It should clearly explain the problem and motivation that the proposal aims to resolve.
This section describes the implementation and/or architecture approach for the proposed changes in detail.
This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
{positive consequences}
{negative consequences}
{neutral consequences}
{reference link}
{list of questions or action items}
Category | Status |
---|---|
This ADR will define how Verifiable Credential schemas can be represented through the use of a DID URL, which, when dereferenced, fetches the credential schema as a resource. The identity entities and transactions for the cheqd network are designed to support usage scenarios and functionality currently supported by Hyperledger Indy.
Hyperledger Indy is a verifiable data registry (VDR) built for DIDs with a strong focus on privacy-preserving techniques. It is one of the most widely-adopted SSI blockchain ledgers. Most notably, Indy is used by the Sovrin Network.
Our aim is to support the functionality enabled by identity-domain transactions in Hyperledger Indy within cheqd-node
. This will partly enable the goal of allowing use cases of existing SSI networks on Hyperledger Indy to be supported by the cheqd network.
The following identity-domain transactions from Indy were considered:
NYM
: Equivalent to "DIDs" on other networks
ATTRIB
: Payload for DID Document generation
SCHEMA
: Schema used by a credential
CRED_DEF
: Credential definition by an issuer for a particular schema
REVOC_REG_DEF
: Credential revocation registry definition
REVOC_REG_ENTRY
: Credential revocation registry entry
Revocation registries for credentials are not covered under the scope of this ADR. This topic is discussed separately in ADR 007: Revocation registry as there is ongoing research by the cheqd project on how to improve the privacy and scalability of credential revocations.
CL-Schema resource can be created via CreateResource
transaction with the follow list of parameters:
MsgCreateResource:
Collection ID: UUID ➝ (did:cheqd:...:) ➝ Parent DID identifier without a prefix
ID: UUID ➝ specific to resource, also effectively a version number (supplied client-side)
Name: String (e.g., CL-Schema1
) ➝ Schema name
ResourceType ➝ CL-Schema
MimeType ➝ application/json
Data: Byte[] ➝ JSON string with the follow structure:
attrNames: Array of attribute name strings (125 attributes maximum)
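For example, the Data field for a simple schema might carry the following JSON (the attribute names are illustrative):

```json
{
  "attrNames": ["name", "age", "degree"]
}
```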
CLI Example:
[TODO: explain that a Cred Def is simply an additional property inside of the Issuer's DID Doc]
Adds a Credential Definition (in particular, public key), which is created by an Issuer and published for a particular Credential Schema.
It is not possible to update Credential Definitions. If a Credential Definition needs to be evolved (for example, a key needs to be rotated), a new Credential Definition needs to be created for a new Issuer DIDDoc. Credential Definitions are added to the ledger as verification methods for the Issuer DIDDoc.
id
: DID as base58-encoded string for 16 or 32 byte DID value with Cheqd DID Method prefix did:cheqd:<namespace>:
and a resource type at the end.
value
(dict): Dictionary with Credential Definition's data if signature_type
is CL
:
primary
(dict): Primary credential public key
revocation
(dict, optional): Revocation credential public key
schemaId
(string): id
of a Schema the credential definition is created for.
signatureType
(string): Type of the credential definition (that is credential signature). CL-Sig-Cred_def
(Camenisch-Lysyanskaya) is the only supported type now. Other signature types are being explored for future releases.
tag
(string, optional): A unique tag to have multiple public keys for the same Schema and type issued by the same DID. A default tag tag
will be used if not specified.
controller
: DIDs list of strings or only one string of a credential definition controller(s). All DIDs must exist.
CRED_DEF
entity transaction format:
Don't store Schema DIDDoc in the State.
CredDef URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue
CredDef Entity URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue?service=CL-CredDef
CRED_DEF
DID Document transaction format:
CRED_DEF
state format:
"credDef:<id>" -> {CredDefEntity, txHash, txTimestamp}
Note: CRED_DEF
cannot be updated.
Schema. Option 2
Schema URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue#<schema_entity_id>
SCHEMA
DID Document transaction format:
Schema. Option 3
Schema URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue
SCHEMA
DID Document transaction format:
Schema. Option 4
Schema URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue#<schema_entity_id>
SCHEMA
DID Document transaction format:
SCHEMA
State format:
"schema:<id>" -> {SchemaEntity, txHash, txTimestamp}
id
example: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue
Cred Def. Option 2
Store inside Issuer DID Document
CredDef URL: did:cheqd:mainnet-1:N22KY2Dyvmuu2PyyqSFKue#<cred_def_entity_id>
CRED_DEF
DID Document transaction format:
CRED_DEF
state format:
Positive
A Credential Definition is a set of Issuer keys, so storing them in the Issuer's DIDDoc is reasonable.
Negative
The "Credential Definition" name implies that it contains more than just a key; the value
field provides this flexibility.
Adding all Cred Defs to Issuer's DIDDoc makes it too large. For every DIDDoc or Cred Def request a client will receive the whole list of Issuer's Cred Defs.
It is impossible to specify multiple controllers for a Cred Def.
In theory, we need to make Credential Definitions mutable.
Hyperledger Indy official project background on Hyperledger Foundation wiki
indy-node
GitHub repository: Server-side blockchain node for Indy (documentation)
indy-plenum
GitHub repository: Plenum Byzantine Fault Tolerant consensus protocol; used by indy-node
(documentation)
Indy DID method (did:indy
)
Hyperledger Aries official project background on Hyperledger Foundation wiki
aries
GitHub repository: Provides links to implementations in various programming languages
aries-rfcs
GitHub repository: Contains Requests for Comment (RFCs) that define the Aries protocol behaviour
Cosmos blockchain framework official project website
cosmos-sdk
GitHub repository (documentation)
libsovtoken
: Sovrin Network token library
The fee is used to verify the
Defines the list of allowed client state types. We allow connections from other chains using the , and with light clients using the .
Instances of abusive, harassing, or otherwise unacceptable behaviour may be reported to the community leaders responsible for enforcement at . All complaints will be reviewed and investigated promptly and fairly.
This Code of Conduct is adapted from the , version 2.0, available at .
Community Impact Guidelines were inspired by .
For answers to common questions about this code of conduct, see the FAQ at . Translations are available at .
Authors
Alexandr Kolesov
ADR Stage
ACCEPTED
Implementation Status
Implemented
Start Date
2021-09-08
Authors
Ankur Banerjee
ADR Stage
PROPOSED
Implementation Status
Not Implemented
Start Date
2021-09-01
Authors
Renata Toktar
ADR Stage
DRAFT
Implementation Status
Not Implemented
Start Date
2021-09-10
Authors
{Author or list of authors}
ADR Stage
{DRAFT | PROPOSED | ACCEPTED | REJECTED | SUPERSEDED by ADR-xxx | ABANDONED}
Implementation Status
{Implemented | In Progress | Not Implemented}
Start Date
{yyyy-mm-dd}
Last Updated
{yyyy-mm-dd}
Authors
Renata Toktar, Alexander Kolesov, Ankur Banerjee
ADR Stage
DRAFT
Implementation Status
Draft
Start Date
2022-06-23
The upgrade process includes two main parts:
Sending a SoftwareUpgradeProposal
to the network
Moving to the new binary manually
SoftwareUpgradeProposal
In general, this proposal is a document that provides operators with additional information, such as the improvements in the new application version or other remarks. There are no strict requirements for the proposal text, only recommendations and information for operators. Note that if the voting process succeeds, the proposal will be stored on the ledger.
The following steps describe the general flow for making a proposal:
Send the proposal command to the pool;
Once the proposal is received, it enters PROPOSAL_STATUS_DEPOSIT_PERIOD;
After the first deposit is sent by one of the operators, the proposal status moves to PROPOSAL_STATUS_VOTING_PERIOD and the voting period (currently 2 weeks) starts;
During the voting period, operators should send their votes to the pool and download and install the new binary;
After the voting period ends (currently 2 weeks), if the vote succeeds, the proposal moves to PROPOSAL_STATUS_PASSED;
The next step is waiting for the height that was proposed for the upgrade.
At the proposed height, the node halts until the new binary is installed and set up.
The main parameters here are:
upgrade_to_0.3
- the name of the proposal, which will be used by the UpgradeHandler in the new application,
--upgrade-height
- the height at which the upgrade will occur,
--from
- the alias of the key used to sign the proposal,
<chain_id>
- the identifier of the chain, set when the blockchain was created.
If the proposal is submitted successfully, the following command can be used to get the proposal_id
:
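A likely form of this query (flag names may vary by version):

```shell
# List all governance proposals on the network
cheqd-noded query gov proposals --output json
```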
This command returns a list of all proposals. Find the most recent one with the corresponding name and title. Note this proposal_id, as it will be used in the next steps.
The following command is also useful for getting information about the proposal status:
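A sketch of the status query (assuming the standard Cosmos SDK gov module CLI):

```shell
# Show the details and current status of a single proposal
cheqd-noded query gov proposal <proposal_id>
```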
The expected result at this stage is PROPOSAL_STATUS_DEPOSIT_PERIOD, which means the pool is waiting for the first deposit.
Once the proposal is received, the DEPOSIT should be sent to the pool. It will be returned after the voting period finishes. To set the deposit, the following command is used:
Parameters:
<proposal_id>
- the proposal identifier from the [proposal submission step](#Command for sending proposal). In this example, the deposit amount is equal to the current min-deposit value.
After the deposit from the previous step is received, the VOTING_PERIOD starts. There are currently 2 weeks for discussion and for collecting the needed number of votes. To cast a vote, the following command can be used:
The main parameters here:
<proposal_id>
- the proposal identifier from the [proposal submission step](#Command for sending proposal)
Votes can be queried with the following request:
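A likely form of the votes query (flag names may vary by version):

```shell
# List all votes cast on the proposal so far
cheqd-noded query gov votes <proposal_id>
```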
At the end of the voting period, after voting_end_time, the state of the proposal with <proposal_id> should change to PROPOSAL_STATUS_PASSED if there were enough votes. Deposits are then refunded to the operators.
Once the proposal status is passed, the upgrade plan becomes active. It can be queried with:
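A sketch of the upgrade-plan query (assuming the standard Cosmos SDK upgrade module CLI):

```shell
# Show the currently scheduled upgrade plan, including its name and height
cheqd-noded query upgrade plan
```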
This means that at height 1000 the BeginBlocker will be set, and the node will drop out of consensus and wait to move to the new application version.
The node will stop, and the following log messages are expected:
After the new application version is set up, the node will resume participating in consensus.
Instructions on setting up a new node/version can be found in our installation guide.