We built a network-wide validator status monitoring tool that sends out alerts if your node starts missing blocks. Join/follow the #mainnet-alerts channel to get these updates.
Our block explorer (BigDipper) exposes a GraphQL endpoint. This GraphQL endpoint is what the block explorer front-end uses when showing validator conditions on the overall/per-validator page.
To simplify parsing the output from the GraphQL endpoint, we wrote a simple wrapper using Cloudflare Workers that parses the GraphQL output into simple JSON.
To simplify the task of alerting via various channels (and to keep it extensible to other channels), we take the output of our validator status API and parse it via Zapier. This is done as a two-stage process via two separate “Zaps”.
Collating a list of validators missing blocks
Schedule by Zapier to wake up the “Zap” every hour
Run custom JavaScript code using Code by Zapier to parse the JSON output
Filter by Zapier to check if there are entries generated in the compromised validator list. If not, then terminate Zap execution.
If execution has reached this stage, run Looping by Zapier with the following sub-steps:
Formatter by Zapier to carry out text/number formatting.
Digest by Zapier to push the formatted bullet-list item with validator details into a compiled digest.
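The "Code by Zapier" step above can be sketched as follows. The JSON field names (`moniker`, `operatorAddress`, `missedBlocks`) and the threshold are assumptions for illustration, not the actual schema of our validator status API.

```javascript
// Sketch of the "Code by Zapier" parsing step (field names and threshold are
// assumptions, not the actual cheqd validator status API schema).
function findCompromisedValidators(apiResponse, missedBlockThreshold = 50) {
  const validators = JSON.parse(apiResponse);
  return validators
    .filter((v) => v.missedBlocks >= missedBlockThreshold)
    .map((v) => ({
      moniker: v.moniker,
      operatorAddress: v.operatorAddress,
      missedBlocks: v.missedBlocks,
    }));
}

// Example input, mimicking the JSON the Cloudflare Worker wrapper might return
const sample = JSON.stringify([
  { moniker: 'validator-a', operatorAddress: 'cheqdvaloper1aaa', missedBlocks: 120 },
  { moniker: 'validator-b', operatorAddress: 'cheqdvaloper1bbb', missedBlocks: 0 },
]);

console.log(findCompromisedValidators(sample));
```

The downstream Filter step then only needs to check whether the returned list is empty.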
Sending an alert to designated alert channel: A separate Zap triggers and sends alerts to designated channels. Right now, our setup sends these details to the cheqd Community Slack and the cheqd Community Discord.
Similar to above, Schedule by Zapier to wake the Zap up every hour
“Release” any unreleased digests by using the manual release feature in Digest by Zapier.
Filter by Zapier to check if there are any entries populated in the digest. If not, terminate execution of any further steps at this stage.
If execution has proceeded to this step, use the Zapier App for Slack and Zapier App for Discord to send a message (with formatting) to designated alert channels.
You can copy this Zap to configure a similar setup for other alert channels, such as SMS by Zapier or Email by Zapier.
🛠️ Github repository: cheqd/faucet-ui
The cheqd testnet faucet is a self-serve site that allows app developers and node operators who want to try out our identity functionality or node operations to request test CHEQ tokens, without having to spend money to acquire “real” CHEQ tokens on mainnet.
We built this using Cloudflare Pages as it provides a fast way to create serverless applications which are able to scale up and down dynamically depending on traffic, especially for something such as a testnet faucet which may not receive consistent levels of traffic. The backend for this faucet works using an existing CosmJS faucet app to handle requests, run using a Digital Ocean app wrapped in a Dockerfile.
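As a rough illustration of how a CosmJS faucet backend is driven, the sketch below builds a request to a faucet's `/credit` endpoint. The URL and the address are placeholders, and the request shape is an assumption based on the CosmJS faucet's HTTP API rather than the exact cheqd deployment.

```javascript
// Build a request for a CosmJS faucet's /credit endpoint.
// The URL and address below are placeholders for illustration only.
function buildCreditRequest(address, denom = 'ncheq') {
  return {
    url: 'https://faucet.example.com/credit', // placeholder, not the real faucet URL
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ denom, address }),
    },
  };
}

const req = buildCreditRequest('cheqd1exampleaddress');
console.log(req.options.body);
```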
This solution:
Helps keep the team focused on building, since we no longer need to dedicate time to manually responding to requests for tokens
Creates a far more cost-effective way of handling testnet token distributions
Can be utilised by developers to test cheqd functionality far more efficiently
Can be used by other Cosmos projects to reduce operational overheads and reduce headaches around distributing testnet tokens
Testnet Faucet
Click here to add test CHEQ tokens to your cheqd address and account.
🛠️ Github repository: cheqd/big-dipper-2.0-cosmos
Big Dipper is an open-source block explorer and token management tool serving over 10 proof-of-stake blockchains. It has been forked more than 100 times on GitHub and has served audiences from 140 countries and regions.
Check out our Block Explorers here for mainnet and testnet:
We have added cheqd transaction views to the visual UI of BigDipper, which enables users to see, for example:
createDID transactions
updateDID transactions
createResource transactions
🛠️ Github repository: cheqd/bdjuno
BDJuno (shorthand for BigDipper Juno) is the Juno implementation for BigDipper.
Indexing for cheqd network DIDs and DID-Linked Resources
Changes to workflows/pipelines
Optimised Dockerfile
Mainnet Block Explorer
Click here to view our mainnet block explorer for the cheqd network.
Testnet Block Explorer
Click here to view our testnet block explorer for the cheqd network.
List of tooling and APIs to communicate with cheqd Network
Over the course of cheqd's product development we have identified various tools required to help the cheqd team, cheqd validators, and community members better monitor the network.
All of these are open-source and available to be used within your project.
Please find a selection of tools and APIs to obtain more information about the cheqd Network or its associated services:
Ethereum Bridge
Understand how we have created an ERC20 wrapped token, and how this is made available
Cosmos SDK offers APIs for built-in modules using gRPC, REST, and Tendermint RPC. This project aims to provide simple REST APIs for data that default Cosmos SDK APIs can't provide.
This collection of custom APIs can be deployed as a Cloudflare Worker or compatible serverless platforms.
data-api.cheqd.io/supply/total (also has an API endpoint alias on /)
Returns just the total supply of tokens, in the main token denomination (CHEQ instead of ncheq in our case).
Cryptocurrency tracking websites such as CoinMarketCap and CoinGecko require an API endpoint for reporting the total supply of tokens in the main/primary token denomination.
While this figure is available from Cosmos SDK's built-in /cosmos/bank/v1beta1/supply/ncheq REST endpoint, this returns a JSON object in the lowest token denomination, which cannot be parsed by CoinMarketCap / CoinGecko.
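As a sketch of the conversion this API performs, assuming the response shape of the Cosmos SDK bank endpoint and that 1 CHEQ = 10^9 ncheq:

```javascript
// Convert the Cosmos SDK bank supply response (lowest denomination, ncheq)
// into a plain number in CHEQ. Assumes 1 CHEQ = 10^9 ncheq (nano-cheq).
function totalSupplyInCheq(bankSupplyResponse) {
  // Assumed shape of /cosmos/bank/v1beta1/supply/ncheq:
  // { "amount": { "denom": "ncheq", "amount": "<integer string>" } }
  const ncheq = BigInt(bankSupplyResponse.amount.amount);
  return Number(ncheq / 1000000000n); // whole-CHEQ figure, fractional part dropped
}

const sampleResponse = { amount: { denom: 'ncheq', amount: '1000000000000000000' } };
console.log(totalSupplyInCheq(sampleResponse)); // 1000000000 (CHEQ)
```

Returning a bare number, rather than a JSON object, is what lets CoinMarketCap / CoinGecko consume the endpoint directly.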
data-api.cheqd.io/supply/circulating
Circulating token supply, in main token denomination (CHEQ instead of ncheq in our case)
Cryptocurrency tracking websites such as CoinMarketCap and CoinGecko require an API endpoint for reporting the circulating supply of tokens in the main/primary token denomination.
This figure is not available from any Cosmos SDK API, because the criteria for determining circulating vs "non-circulating" accounts are defined by CoinMarketCap.
This API calculates the circulating supply by subtracting the account balances of a defined list of wallet addresses ("circulating supply watchlist") from the total supply.
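A minimal sketch of that calculation (the balances and watchlist here are placeholders, not the real figures or addresses):

```javascript
// Circulating supply = total supply minus the balances of a watchlist of
// non-circulating wallets. All figures below are placeholders, in whole CHEQ.
function circulatingSupply(totalSupply, watchlistBalances) {
  const nonCirculating = watchlistBalances.reduce((sum, b) => sum + b, 0n);
  return totalSupply - nonCirculating;
}

const total = 1000000000n; // placeholder total supply
const watchlist = [100000000n, 50000000n]; // placeholder watchlist balances
console.log(circulatingSupply(total, watchlist).toString()); // "850000000"
```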
data-api.cheqd.io/supply/staked
Overall tokens staked, in CHEQ.
Provides the overall amount staked pulled from the block explorer.
data-api.cheqd.io/balances/vesting/<address>
Tokens that are still vesting for continuous/delayed vesting accounts, in CHEQ.
There is no Cosmos SDK API that returns balances that are yet to be vested for continuous or delayed vesting accounts.
data-api.cheqd.io/balances/vested/<address>
Tokens that have already vested for continuous/delayed vesting accounts, in CHEQ.
There is no Cosmos SDK API that returns balances that are already vested for continuous or delayed vesting accounts.
data-api.cheqd.io/balances/liquid/<address>
Tokens in continuous/delayed vesting accounts that can be converted to liquid balances, in CHEQ.
Tokens in continuous or delayed vesting accounts that can be converted to liquid balances. This is calculated as the sum of the following figures:
"Delegated free" balance (from the /cosmos/auth/v1beta1/accounts/<address> REST API) or vested balance, whichever is higher
"Available" balance (if applicable)
"Reward" balance (if applicable)
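The calculation above can be sketched as follows (field names are illustrative, not the API's actual schema):

```javascript
// Liquid balance for a continuous/delayed vesting account, per the rule above:
// max(delegated-free, vested) + available + reward. Figures in whole CHEQ;
// field names are illustrative only.
function liquidBalance({ delegatedFree, vested, available = 0, reward = 0 }) {
  return Math.max(delegatedFree, vested) + available + reward;
}

console.log(liquidBalance({ delegatedFree: 100, vested: 250, available: 40, reward: 10 })); // 300
```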
data-api.cheqd.io/balances/total/<address>
Total account balance for specified account, in CHEQ.
The standard Cosmos SDK REST API for account balances returns JSON with the account balances along with its denomination, usually the lowest denomination. This is hard to parse in applications such as Google Sheets (e.g., to monitor the account balance by fetching a response from a REST API directly in Google Sheets). This API returns a plain number that can be directly plugged into such applications, without having to parse JSON.
Results filtered by threshold value: data-api.cheqd.io/arbitrage
Unfiltered results: data-api.cheqd.io/arbitrage/all
Returns current price of CHEQ token among different markets along with an evaluation of whether they are at risk of arbitrage opportunities.
The CHEQ token trades on multiple markets/exchanges (e.g., Osmosis, Gate.io, BitMart, LBank, Uniswap). This is typically established as CHEQ along with another token pair or currency.
Fluctuations in the exchange rate between CHEQ and other token pairs can give rise to arbitrage opportunities. Significant arbitrage across exchanges creates market inefficiencies; extreme market inefficiencies can result in market failure and deadweight loss.
Monitoring for arbitrage gives the cheqd community the opportunity to rectify potential liquidity issues and stay aware of exchange rate movements.
To alert on significant arbitrage for CHEQ listings across different exchanges, we pull the latest market data for cheqd's ticker page from the CoinGecko API via our Market Monitoring API. If an arbitrage threshold is exceeded, a webhook trigger is sent to Zapier for alerting via different channels (such as Slack).
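A sketch of the arbitrage check, using a simplified ticker shape and a hypothetical 5% threshold (not the actual Market Monitoring API logic or threshold):

```javascript
// Flag markets whose CHEQ price deviates from the cross-market average by more
// than a threshold. The ticker shape and 5% threshold are assumptions.
function arbitrageAlerts(tickers, thresholdPct = 5) {
  const avg = tickers.reduce((s, t) => s + t.priceUsd, 0) / tickers.length;
  return tickers
    .map((t) => ({ ...t, deviationPct: (100 * (t.priceUsd - avg)) / avg }))
    .filter((t) => Math.abs(t.deviationPct) > thresholdPct);
}

// Placeholder prices for illustration
const tickers = [
  { market: 'Osmosis', priceUsd: 0.1 },
  { market: 'Gate.io', priceUsd: 0.101 },
  { market: 'BitMart', priceUsd: 0.085 },
];
console.log(arbitrageAlerts(tickers)); // Gate.io and BitMart exceed the 5% threshold
```

Markets returned by this check would be the ones forwarded to Zapier via the webhook trigger.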
This frontend site was developed to work with Cloudflare Workers, a serverless and highly-scalable platform.
Originally, this project was discussed as potentially being deployed using a serverless platform such as AWS Lambda. However, AWS Lambda has a cold-start problem if the API doesn't receive much traffic or is only accessed infrequently. This can lead to start times ranging into single/double-digit seconds, which would be considered an API timeout by many client applications.
Using Cloudflare Workers, these APIs can be served in a highly scalable fashion with much lower cold-start times, typically under 10 milliseconds.
The airdrop tools, used for our community airdrop rewards site, are split into two repos: one for managing the actual distribution of airdrop rewards to wallets, and another for the frontend itself to handle claims.
In terms of the frontend, we learnt that airdrop reward sites need to be more resilient to traffic spikes than most websites: when an airdrop is announced, community members tend to flock to the site to claim their rewards, generating a large spike in traffic, followed by a period of much lower traffic.
This traffic pattern can make provisioning the server that hosts an airdrop claim website particularly difficult. For example, many projects choose to purchase large server capacity up front to prevent server lag, whilst others may simply become overwhelmed by the traffic.
To manage this, the frontend site was developed to work with Cloudflare Workers, a serverless and highly-scalable platform so that the airdrop reward site could handle these spikes in demand.
On the backend, we also needed to build something that could manage a surge in demand whilst providing a highly scalable and fast way of completing mass distributions. Initially, our implementation struggled with the number of claims, resulting in an excessive wait to receive rewards in the claimant's wallet. To improve this, we used two separate CosmJS-based Cloudflare Workers scripts: one that lined up claims in three separate queues (or more if we wanted to scale further), and a second distributor script that is instantiated depending on the number of queues (i.e. three queues require three distribution workers).
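The queueing idea can be sketched as a simple round-robin assignment of claims to queues (illustrative only, not the actual cheqd distribution code):

```javascript
// Round-robin assignment of airdrop claims into N queues, so that N distributor
// workers can drain them in parallel. A sketch of the queueing idea only.
function assignToQueues(claims, queueCount = 3) {
  const queues = Array.from({ length: queueCount }, () => []);
  claims.forEach((claim, i) => queues[i % queueCount].push(claim));
  return queues;
}

// Placeholder claimant addresses
const claims = ['cheqd1aaa', 'cheqd1bbb', 'cheqd1ccc', 'cheqd1ddd', 'cheqd1eee'];
console.log(assignToQueues(claims)); // three queues of sizes 2, 2, 1
```

Each queue would then be drained by its own distributor Worker, so adding queues scales throughput roughly linearly.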
There is no hiding that we ran into some hiccups, in part due to our Cloudflare Worker approach, during our Cosmos Community Mission 2 Airdrop. We have documented all of the issues we ran into during our airdrop and the lessons learnt in our airdrop takeaway blog post.
What is important to explain is that:
The reward site using Cloudflare Workers scaled very well in practice, with no hiccups;
We had problems with the way we collated data, but the fundamental Cloudflare Workers infrastructure we ended up with, after having to refactor for our initial mistakes, is battle tested, highly efficient and resilient.
Any project using the Cosmos SDK and looking to carry out an airdrop or community rewards programme can now use our open-sourced frontend UI and distribution repository to ensure a smooth and efficient process for the community, without any hiccups in server capacity or distribution mechanics.
We would much rather other projects did not make the same mistakes we did when we initially started our airdrop process. What we have come away with, in terms of infrastructure and lessons learned, should serve as an example of the dos and don'ts when carrying out a Cosmos-based airdrop.
To create an ERC20 representation of the Cosmos based CHEQ token we’ve used a bridge. A blockchain bridge or ‘cross-chain bridge’ enables users to transfer assets or any form of data seamlessly from one entirely separate protocol or ecosystem to another (i.e. Solana to Ethereum, or in our case Cosmos to Ethereum and vice versa).
You can also add the ERC20-wrapped CHEQ token to your MetaMask wallet through this link (go to profile summary > click 'more' > 'add token to MetaMask').
As we build payment rails for trusted data (more on that below), we want to offer issuers, verifiers (the receivers of trusted data), and holders a choice in the means of settlement. We expect a preference for settlement options that eliminate volatility in either pricing or settling payments for trusted data.
Airdrop UI (frontend)
Check out the repository for our frontend Airdrop UI here.
Airdrop Distribution (backend)
Check out the repository for our backend Airdrop Distribution helper here.
🛠️ Github repository: cheqd/cosmjs-cli-converter
There is an assumption in the Cosmos ecosystem that wallet addresses across different chains, such as Cosmos (ATOM), Osmosis (OSMO), and cheqd (CHEQ), are all identical, because they look very similar. However, each chain's wallet address is actually unique.
Interestingly, each network's wallet address can be derived from the Cosmos Hub wallet address using a common derivation path. Using one derivation path (BIP44) means that users can use one secret recovery phrase and core account to interact with multiple networks.
Our cross-chain address convertor automates the derivation of any chain's address from another Cosmos address in bulk. We've seen some examples of this previously, but they are mostly designed to do one-off conversions in a browser rather than large-scale batch conversions. Our converter can process 200k+ addresses in a few minutes; doing this with existing CLI tools or shell scripts can take hours.
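Under the hood, converting an address between chains that share a BIP44 coin type amounts to re-encoding the same bech32 data part with a different prefix. Below is a minimal bech32 sketch of that idea (per BIP-173); production code should use a vetted library such as the bech32 npm package instead.

```javascript
// Minimal bech32 (BIP-173) encode/decode, for illustrating prefix conversion.
const CHARSET = 'qpzry9x8gf2tvdw0s3jn54khce6mua7l';
const GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3];

function polymod(values) {
  let chk = 1;
  for (const v of values) {
    const top = chk >> 25;
    chk = ((chk & 0x1ffffff) << 5) ^ v;
    for (let i = 0; i < 5; i++) if ((top >> i) & 1) chk ^= GEN[i];
  }
  return chk;
}

function hrpExpand(hrp) {
  const out = [];
  for (const c of hrp) out.push(c.charCodeAt(0) >> 5);
  out.push(0);
  for (const c of hrp) out.push(c.charCodeAt(0) & 31);
  return out;
}

function encode(hrp, data) {
  const values = hrpExpand(hrp).concat(data);
  const pm = polymod(values.concat([0, 0, 0, 0, 0, 0])) ^ 1;
  const checksum = [];
  for (let i = 0; i < 6; i++) checksum.push((pm >> (5 * (5 - i))) & 31);
  return hrp + '1' + data.concat(checksum).map((d) => CHARSET[d]).join('');
}

function decode(addr) {
  const pos = addr.lastIndexOf('1');
  const hrp = addr.slice(0, pos);
  const data = [...addr.slice(pos + 1)].map((c) => CHARSET.indexOf(c));
  if (polymod(hrpExpand(hrp).concat(data)) !== 1) throw new Error('bad checksum');
  return { hrp, data: data.slice(0, -6) };
}

// Re-encode an address under a new prefix (e.g. cosmos -> cheqd). This only
// yields the same account when both chains use the same BIP44 coin type.
function convertPrefix(address, newPrefix) {
  return encode(newPrefix, decode(address).data);
}
```

Batch conversion is then just a map over a list of addresses, which is why it scales so easily compared with one-off browser tools.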
This is valuable since it can automate airdrops or distributions to any account, just from a Cosmos Hub address in bulk, making data calculations far more efficient.
For new chains in the Cosmos Ecosystem, this makes it much easier for the core team and Cosmonauts to discover and utilise their account addresses and carry out distributions.