# 🦀 rindexer 🦀

> rindexer is a lightning-fast multi-chain indexing solution written in Rust

## Changelog

#### Breaking changes

***

* breaking: **Generated code now returns the provider as a trait object behind `Arc` instead of the concrete provider type** — Rust-mode projects that call concrete-type methods (e.g. `get_inner_provider()`) through the generated provider binding will need to use the network-specific accessors (`get_ethereum_provider_cache()`, etc.), which still return the concrete type. After upgrading, run `rindexer codegen typings` to regenerate.

#### Bug fixes

***

* fix: Handle int/uint array types (e.g. `uint256[]`) in `parse_solidity_integer_type` to prevent a ParseIntError during codegen
* fix: ClickHouse hash serialization so String values are stored as proper strings instead of corrupted scientific-notation values
* fix: native-transfer indexing no longer panics on trace batches containing only block entries (empty blocks)
* fix: **Live-indexing heartbeat log replaced** — the previous `"No new blocks published in the last 5 minutes"` info log fired spuriously whenever the RPC provider returned a cached tip (every ~half block-time) and could not distinguish a healthy cache hit from a genuinely stuck RPC. It is replaced with a tip-advance heartbeat that emits either `"Indexing alive - chain tip X"` (info) when the tip has advanced within the interval, or `"RPC tip has not advanced past block X in the last 5 minutes"` (warn) when the tip is genuinely stuck. Log-based alerts that grepped the old message must be updated to match the new warn text.

#### Features

***

* feat: integrate PagerDuty and OpsGenie alerts
* feat: **Parallel historical backfill** — new `fetch_concurrency` network config splits historic block ranges across N concurrent workers for faster backfills. Workers share the global adaptive concurrency controller, so a 429 from any worker shrinks in-flight concurrency across all events. Defaults to sequential (opt-in); factory contracts stay sequential.
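`fetch_concurrency` is a network-level YAML setting. A minimal sketch, assuming a standard rindexer network block — the network name and RPC URL are placeholders, and exact key placement should be checked against the config reference:

```yaml
networks:
  - name: ethereum                    # placeholder network name
    chain_id: 1
    rpc: https://eth.example.com      # placeholder RPC endpoint
    # Opt-in: split historic block ranges across 8 concurrent backfill workers.
    # Leave unset to keep the default sequential backfill.
    fetch_concurrency: 8
```

All workers share the global adaptive concurrency controller, so tuning this higher does not bypass rate-limit backoff.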
* feat: **Reactive reorg handling** — rindexer now actively detects and recovers from chain reorganizations during live indexing. A per-network `ReorgCoordinator` validates the parent hash chain on every new block. When a reorg is detected, indexing pauses, stale events are deleted, block hashes and checkpoints are corrected, and indexing resumes — all within seconds. Three detection paths (RPC parent hash validation, removed-logs signal, reth ExEx notifications) converge on the same atomic recovery flow. Crash recovery is built in: if the process dies mid-recovery, startup validation re-detects the reorg on the next start.
* feat: **`reorg_handling` network config** — new `reorg_handling` section on network config with `enabled` (default `true` for live indexing) and `window_size` (default `256`) to control the sliding block hash window. The existing `reorg_safe_distance` on contracts is unchanged and works independently of or alongside the new system.
* feat: **Aggregation table rollback** — reorg recovery extends to no-code aggregation/derived tables (`Set`, `Add`, `Increment`, `Subtract`, `Max`, `Min`). Reversible operations (`Add`/`Subtract`/`Increment`/`Decrement`) use a pre-deletion snapshot with inverse arithmetic; non-reversible operations (`Set`/`Max`/`Min`) replay an operation journal to recalculate the correct value.
* feat: **Native transfer rollback** — native transfer events (`EvmTraces.NativeTransfer`) are automatically registered in the reorg coordinator and participate in the same rollback/re-index flow as user-defined contract events.
* feat: **Stream delivery modes** — new `delivery` option on stream configs: `instant` (default) delivers events immediately, with reorg notifications for reconciliation; `finalized` buffers events until they are past the safe distance before publishing. Every `EventMessage` payload now carries a `block_number: u64` field so finalized buffers can key per block — downstream consumers using strict schema validation on webhook/SNS/Redis/etc. payloads need to allow the new field. A `rindexer_stream_finalized_buffer_overflow_total { stream_type, network }` counter plus a warn log surfaces buffers growing past a 10k-event soft cap. Configuring `delivery: finalized` on a network without live indexing now errors at startup instead of silently buffering forever.
* feat: **`on_reorg` callback** — a callback registered per contract, fired after reorg recovery with the list of invalidated tx hashes. Supports multiple independent handlers per contract for flexible monitoring/alerting chains.
* feat: **`latest_blocks` internal table** — new `rindexer_internal.latest_blocks` table (PostgreSQL + ClickHouse) persists the block hash window for offline reorg detection on restart.
* feat: **SQL injection defenses** — all user-supplied identifiers from YAML specs are validated at startup before SQL interpolation: table/column names, network names, and filter conditions are checked against injection patterns.
* feat: **Reorg metrics** — four new Prometheus metrics: `rindexer_reorg_handling_duration_seconds`, `rindexer_reorg_events_deleted_total`, `rindexer_reorg_detection_source_total`, `rindexer_reorg_cascade_total`.
* feat: **Native-transfer-only reorg detection** — networks indexing only native transfers now get their own `ReorgCoordinator`. Mixed contract + native-transfer configs share one coordinator, so a reorg detected by either pipeline rolls back both.
* feat: **`native_transfers.tables`** — new `tables` field mirroring `contracts[].tables`. Aggregation tables keyed on `NativeTransfer` events participate in reorg rollback.
* feat: **`on_reorg` for native transfers** — register reorg callbacks via `TraceCallbackRegistry::register_on_reorg`, matching the `EventCallbackRegistry` API.
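A hedged sketch of how the reorg and delivery options above could sit in a project YAML. The `reorg_handling.enabled`, `reorg_handling.window_size`, and `delivery` keys and their defaults come from the notes above; the surrounding network and stream shapes are illustrative placeholders, not the exact rindexer schema:

```yaml
networks:
  - name: ethereum                    # placeholder
    chain_id: 1
    rpc: https://eth.example.com      # placeholder
    reorg_handling:
      enabled: true                   # default true for live indexing
      window_size: 256                # sliding block-hash window (default 256)

streams:                              # placeholder stream shape
  webhooks:
    - endpoint: https://example.com/hook
      # instant (default): deliver immediately; reorg notifications follow
      # finalized: buffer until past the safe distance, then publish
      delivery: finalized
```

Note that `delivery: finalized` on a network without live indexing is a startup error, since the buffer would otherwise never flush.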
* feat: **Enriched `__rindexer_reorg` payload** — adds an `events_deleted` total and an `affected_events` array (`{indexer, contract, event, schema, table, rows_deleted}` per source table). Additive JSON.
* feat: **Reorg notifications fan out across all streams on a network** — `__rindexer_reorg` reaches every configured stream on the network regardless of which pipeline detected the reorg. Dispatched in parallel.

### Releases

***

All release branches are deployed through `release/VERSION_NUMBER` branches.

## 0.39.0-beta - 14th April 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.39.0](https://github.com/joshstevens19/rindexer/tree/release/0.39.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.39.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: Add Twilio SMS alerts support — send SMS notifications for on-chain events via the Twilio API
* feat: **`RINDEXER_CLICKHOUSE_BATCH_SIZE` env var** — configure the ClickHouse batch chunk size used for no-code/custom table writes. Defaults to `1000`. Increasing it reduces the number of sequential `INSERT` statements for high-volume streams, at the cost of larger per-request payloads.
## 0.38.0-beta - 10th April 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.38.0](https://github.com/joshstevens19/rindexer/tree/release/0.38.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.38.0/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: **ClickHouse no-code tables now use native types** — `uint256` columns map to `UInt256` (was `String`), `uint128` to `UInt128`, `int256` to `Int256`, `int128` to `Int128`, and `address` to `FixedString(42)` (was `String`). Custom table types now match raw event table types (`solidity_type_to_clickhouse_type`), ensuring consistent schemas across the same rindexer project.
* fix: **ClickHouse serialization no longer panics on PG-specific type wrappers** — `U256Numeric`, `I256Numeric`, `U256Bytes`, `I256Bytes`, `AddressBytes`, and their Nullable/Vec variants are now serialized correctly for ClickHouse instead of panicking with `"Clickhouse in no-code should never encounter these types"`. This enables no-code tables with `$if()` conditions, arithmetic expressions, and direct `uint256`/`address` field references to work with the ClickHouse storage backend. Previously, any computed value or conditional expression in a no-code table would crash when writing to ClickHouse.

#### Features

***

* feat: **`database` field on custom tables** — optional YAML field that directs a custom table to a specific ClickHouse database (or PostgreSQL schema) instead of the default `{project}_{contract}` naming. Enables multiple contracts to write to a shared table (e.g., `database: indexer` → `indexer.events`).

## 0.37.2-beta - 1st April 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.37.2](https://github.com/joshstevens19/rindexer/tree/release/0.37.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.37.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: built-in rindexer variables (`$rindexer_block_timestamp`, `$rindexer_log_index`) now respect the target column type when used as direct values (not in arithmetic expressions). Previously `$rindexer_block_timestamp` always produced a `DateTime` wrapper and `$rindexer_log_index` always produced a `U256/VARCHAR` wrapper, causing `invalid sign in external "numeric" value` or `insufficient data left in message` errors when the custom table column was `uint256` (NUMERIC) or `uint64` (BIGINT). The direct variable access path now matches the column type: NUMERIC columns get `U256Numeric`, BIGINT columns get `U64BigInt`, TIMESTAMP columns get `DateTime`, and STRING columns get the decimal string representation.
* fix: `block_timestamp` NULL data integrity — when `prefetch_block_timestamps` fails (RPC unreachable), the old code silently wrote NULL timestamps to PostgreSQL, causing NOT NULL constraint violations. It now returns an error so the entire event batch retries instead of writing corrupt data. Added a `rindexer_block_timestamp_fetch_failures_total` Prometheus counter for observability.
* fix: a PostgreSQL dynamic-upsert bug where arithmetic custom-table updates (add/subtract) could be dropped by `rindexer_sequence_id` ordering, causing wrong final balances for repeated same-key mutations in no-code tables; arithmetic values now always accumulate while metadata fields still keep latest-by-sequence semantics
* fix: Add `DATABASE_POOL_SIZE`, allowing users to define the database pool size for rindexer

## 0.37.1-beta - 26th March 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.37.1](https://github.com/joshstevens19/rindexer/tree/release/0.37.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.37.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: /playground hardcoded localhost:3001 as the GraphQL endpoint

## 0.37.0-beta - 24th March 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.37.0](https://github.com/joshstevens19/rindexer/tree/release/0.37.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.37.0/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: UNNEST batch upsert now accumulates duplicate keys in the same batch — `add`/`subtract`/`max`/`min` actions use `GROUP BY` pre-aggregation instead of `DISTINCT ON` ([#383](https://github.com/joshstevens19/rindexer/issues/383))
* fix: migrate PostgreSQL TLS from native-tls to rustls
* fix: use a composite key in the progress tracker for multi-network events
* fix: pass providers, constants, and multicall addresses in the codegen template
* fix: YAML arithmetic

#### Features

***

* feat: add support for `tuple[]` (array of structs) via JSONB storage
* feat: `BlockIndexingCompleted` event — emitted when all events on a chain have indexed up to a block
* feat: Reorg detection & recovery — active reorg detection during live indexing via tip hash comparison, parent hash chain validation, removed log detection, and a background post-confirmation verifier

## 0.35.0-beta - 28th January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.35.0](https://github.com/joshstevens19/rindexer/tree/release/0.35.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.35.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: Production-grade Prometheus observability

## 0.34.0-beta - 23rd January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.34.0](https://github.com/joshstevens19/rindexer/tree/release/0.34.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.34.0/rindexer_win32-amd64.zip)

#### Breaking changes

***

* `StartDetails` now has a `cron_scheduler_handle`; set it to `None` if upgrading Rust projects

#### Bug fixes

***

* fix: validate that cols and sets are aligned in tables
* fix: add an edge case for `bytes32` name and symbol to solve issues like MKR metadata
* fix: graceful shutdown now aborts RPC batches immediately instead of waiting
* fix: reduced shutdown timeout from 60s to instant exit when tasks complete
* fix: checkpoint progress mid-batch when shutdown is detected during event processing
* fix: skip cache eviction during shutdown to exit faster

#### Features

***

* feat: factory cron indexing
* feat: adaptive concurrency for RPC batches — scales up to 200 on success, backs off 50% on rate limits
* feat: adaptive concurrency for block timestamp fetches using the same capacity tracking
* feat: cache eviction for `VIEW_CALL_CACHE` and `BLOCK_TIMESTAMP_CACHE` at 10k entries

## 0.33.0-beta - 21st January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.33.0](https://github.com/joshstevens19/rindexer/tree/release/0.33.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.33.0/rindexer_win32-amd64.zip)

#### Breaking changes

***

* breaking: Rename `rindexer_last_updated_block` to `rindexer_block_number`
* breaking: Rename `rindexer_last_updated_at` to `rindexer_block_timestamp`
* breaking: you now need `timestamp: true` to get timestamps in the database

#### Bug fixes

***

* fix: tx hash and block hash no longer parse into `char(42)`
* fix: injected columns `rindexer_tx_hash`, `rindexer_block_hash`, and `rindexer_contract_address` are now NOT NULL
* fix: remove unnecessary `DEFAULT 0` from `rindexer_block_number` and `rindexer_sequence_id`
* fix: remove legacy `block_timestamp` from schema sync exclusion — cleaned up the old migration exclusion for both PostgreSQL and ClickHouse schema sync. The new `rindexer_block_timestamp` column is now fully managed by the timestamp flag.
* fix: when the GraphQL port is in use, show an error message and don't start up GraphQL

#### Features

***

* feat: Add a `nullable: bool` option to table columns (columns are NOT NULL by default)
* feat: Add `$null` value support to explicitly set columns to SQL NULL
* feat: Add `$if(condition, trueValue, falseValue)` for conditional value assignment
* feat: Add a `timestamp: bool` option to tables — opt-in `rindexer_block_timestamp` column. When true, creates the column as TIMESTAMPTZ NOT NULL. When false (default), the column is not created, saving storage and avoiding RPC overhead.
* feat: Batch fetch and cache block timestamps — when `timestamp: true` and RPC metadata lacks timestamps, rindexer batch-fetches all unique blocks in a single RPC call per network and caches them globally for the entire indexing run. Deduplicates across events in the same block.
* feat: Schema sync support for the timestamp column — adding/removing `timestamp: true` triggers the appropriate schema migrations (add column or prompt for deletion).
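The `nullable`, `$null`, `$if(...)`, and `timestamp` options above combine in a custom table definition. An illustrative sketch, not a verified schema — the table/column names and the exact `type`/`value` key shapes are assumptions; only the four option names and their semantics are taken from the notes above:

```yaml
tables:
  - name: account_activity          # placeholder table name
    timestamp: true                 # opt-in rindexer_block_timestamp (TIMESTAMPTZ NOT NULL)
    columns:
      - name: owner
        type: address
        value: $from                # placeholder event-field reference
      - name: memo
        type: string
        nullable: true              # columns are NOT NULL by default
        value: $null                # explicitly store SQL NULL
      - name: direction
        type: string
        value: $if($value > 0, "in", "out")   # conditional value assignment
```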
* feat: better logs when shutting down
* feat: hook in multicall3 so `$call` can be 5x faster (yes, a 5x speed increase on index time by doing this)
* feat: historic cron indexing

## 0.32.0-beta - 20th January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.32.0](https://github.com/joshstevens19/rindexer/tree/release/0.32.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.32.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: Add an exponentiation operator and `$call()` support in arithmetic expressions
  * Add the `^` operator for exponentiation (e.g., `10 ^ $decimals`)
  * Support `$call()` view calls inside arithmetic expressions
  * Support `$constant()` references in `$call()` arguments
  * Use PostgreSQL `POWER()` for SQL generation, `U256::checked_pow` for evaluation
  * Parser precedence: additive → multiplicative → power → primary
  * Add documentation and a liquidation bot example with `total_usd_value`

#### Bug fixes

***

* fix: PostgreSQL GraphQL comments clash caused GraphQL to fail to start

## 0.31.0-beta - 20th January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.31.0](https://github.com/joshstevens19/rindexer/tree/release/0.31.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.31.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: handle constants in the YAML, allowing you to inject them in the YAML itself

## 0.30.0-beta - 19th January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.30.0](https://github.com/joshstevens19/rindexer/tree/release/0.30.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.30.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: `create_batch_clickhouse_operation` and `create_batch_postgres_operation` functions to do upsert/delete/insert tasks
* feat: `tables` supercharged is now live — build your tables, aggregate your data, index view functions, everything with a simple YAML file :)

## 0.29.0-beta - 6th January 2026

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.29.0](https://github.com/joshstevens19/rindexer/tree/release/0.29.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.29.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: upgrade to alloy 1.1.3
* feat: expose `RindexerEventStream` to allow subscribing to rindexer events

#### Bug fixes

***

* fix: add missing `--graphql` and `--indexer` flags for `rindexer start all`
* fix: resolve the bad ANSI escape on the logger causing logs to render the ANSI escape incorrectly

#### Breaking changes

***

* `IndexingDetails` now includes an optional `event_stream` for subscribing to rindexer events

## 0.28.2-beta - 5th November 2025
github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.28.2](https://github.com/joshstevens19/rindexer/tree/release/0.28.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.28.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: reorg safe range calculation now surfaces an error message when out of range
* fix: upgrade dependency minor versions to get the latest alloy fixes
* fix: gate kafka streams support behind the `kafka` feature

## 0.28.1-beta - 21st October 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.28.1](https://github.com/joshstevens19/rindexer/tree/release/0.28.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.28.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: allow https connections for ClickHouse
* fix: resolve issue with `""` on PostgreSQL insert

## 0.28.0-beta - 13th October 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.28.0](https://github.com/joshstevens19/rindexer/tree/release/0.28.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.28.0/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: only log an error when the current block number is lower than the last seen and the range is outside the chain reorg safe distance
* fix: unpin the `tracing-subscriber` version

#### Features

***

* feat: check that the RPC chain id matches the configured chain id in the YAML config on startup
* feat: add support for a `RINDEXER_LOG` environment variable to control the log level of rindexer
* feat: Add ClickHouse integration to rindexer Rust projects and no-code

## 0.27.1-beta - 6th October 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.27.1](https://github.com/joshstevens19/rindexer/tree/release/0.27.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.27.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: build break after alloy v2.10.0: remove `NamedChain::PolygonZkEvm` usage
* fix: resolved a race condition in event dependency indexing that caused issues on networks with high block production rates
* fix: support for the latest alloy

## 0.27.0-beta - 26th September 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.27.0](https://github.com/joshstevens19/rindexer/tree/release/0.27.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.27.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: add a health endpoint with comprehensive system status monitoring
* feat: add Cloudflare Queues to the streams

#### Bug fixes

***

* fix: optimisations

## 0.26.0-beta - 12th September 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.26.0](https://github.com/joshstevens19/rindexer/tree/release/0.26.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.26.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: expose `PostgresClient::raw_connection` so it's easier to do transactions

#### Bug fixes

***

* fix: start the indexer and GraphQL when a Rust project is started without any commands
* fix: take the correct default GraphQL endpoint when generating GraphQL files

## 0.25.3-beta - 8th September 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.25.3](https://github.com/joshstevens19/rindexer/tree/release/0.25.3)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.25.3/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: `contract_address` Postgres column changed from `char(66)` to `char(42)`
* fix: `PostgresClient` now only exposes `insert_bulk`, which handles internally whether to insert rows via INSERT or COPY
* fix: regenerated example projects to support the latest changes
* fix: index creation fails for filter contracts due to a schema name mismatch

#### Breaking changes

***

* `contract_address` Postgres column changed from `char(66)` to `char(42)`
* `PostgresClient` now only exposes `insert_bulk`, which handles internally whether to insert rows via INSERT or COPY

## 0.25.2-beta - 30th August 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.25.2](https://github.com/joshstevens19/rindexer/tree/release/0.25.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary -
[https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.25.2/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: inject block timestamp in the generate indexing code * fix: resolve timestamp override mapping to contract breaking change * fix: resolve bad logs ## 0.25.1-beta - 28th August 2025 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.25.1](https://github.com/joshstevens19/rindexer/tree/release/0.25.1) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.25.1/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: correct tuple wrapper slice indexing in map\_ethereum\_wrapper\_to\_json ## 0.25.0-beta - 27th August 2025 github 
branch - [https://github.com/joshstevens19/rindexer/tree/release/0.25.0](https://github.com/joshstevens19/rindexer/tree/release/0.25.0) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.25.0/rindexer_win32-amd64.zip) #### Features *** * feat: Adds timestamp config to the yaml config, allowing users to opt-in to block timestamps in logs * feat: Add chain\_id to TxInformation struct * feat: Add xtask with block timestamp encoding mechanism * feat: Adds nocode postgres migration "versioning system" * feat: Expose a rust project handler to process raw blocks with transactions if native indexing is enabled. 
* feat: bump alloy to 1.0.27

#### Bug fixes

***

* fix: compiler issue with solar-parse
* fix: move rust\_playground to examples
* fix: resolve telegram markdownv2

## 0.24.1-beta - 20th August 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.24.1](https://github.com/joshstevens19/rindexer/tree/release/0.24.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.24.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: graphql embedded binary

## 0.24.0-beta - 19th August 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.24.0](https://github.com/joshstevens19/rindexer/tree/release/0.24.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.24.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: support tuples and nested tuples as event inputs
* feat: bring graphql into the binary
* feat: support array in `input_name` factory filter configuration

#### Bug fixes

***

* fix: remove the requirement to install lsof when running the graphql server

## 0.23.0-beta - 4th August 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.23.0](https://github.com/joshstevens19/rindexer/tree/release/0.23.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.23.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: enhance filter conditions parsing and evaluation to handle complex expressions with logical operators
* feat: extend `is_known_zk_evm_compatible_chain` to some new chains

#### Bug fixes

***

* fix: codegen for events with irregular-width solidity integer types
* fix: logical operator precedence in filter conditions - [https://github.com/joshstevens19/rindexer/issues/225](https://github.com/joshstevens19/rindexer/issues/225)

#### Breaking changes

***

* `EthereumSqlTypeWrapper` `U64`, `U64Nullable` and `U64BigInt` are now a rust `u64` type
* `EthereumSqlTypeWrapper::VecU64` is now a rust `Vec<u64>` type
* `TxInformation` `block_number` and `transaction_index` are now a rust `u64` type

## 0.22.3-beta - 30th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.22.3](https://github.com/joshstevens19/rindexer/tree/release/0.22.3)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.22.3/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: do not try to write the streams file when csv is off

## 0.22.2-beta - 30th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.22.2](https://github.com/joshstevens19/rindexer/tree/release/0.22.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.22.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: remove custom panic handling as it was causing issues
* fix: reduce noise on the last seen live logs

## 0.22.1-beta - 29th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.22.1](https://github.com/joshstevens19/rindexer/tree/release/0.22.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.22.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: make rpc error logging only log out errors which are not already handled
* fix: generate proper snake case file names for factory contract handlers
* fix: drop rindexer\_internal.latest\_block when drop\_each\_run is defined

## 0.22.0-beta - 25th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.22.0](https://github.com/joshstevens19/rindexer/tree/release/0.22.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.22.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: add some extra logging extensions to see when rpcs are falling over
* feat: add more logging for when the node comes back up after the last seen block number
* feat: improve factory contract indexing by fully indexing the factory contract event

#### Bug fixes

***

* fix: consolidate async runtimes - [https://github.com/joshstevens19/rindexer/issues/271](https://github.com/joshstevens19/rindexer/issues/271)

## 0.21.2-beta - 16th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.21.2](https://github.com/joshstevens19/rindexer/tree/release/0.21.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.21.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve build issues with reth dependencies, and feature gate reth dependencies to avoid bloating the binary size

## 0.21.1-beta - 16th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.21.1](https://github.com/joshstevens19/rindexer/tree/release/0.21.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.21.1/rindexer_win32-amd64.zip)

#### Features

***

* feat: add a `Nullable` Numeric256 SQL type, a `Nullable` DateTime SQL type and a Uuid SQL type

#### Bug fixes

***

* fix: resolve issue where the last seen block was updated even if the event failed

## 0.21.0-beta - 15th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.21.0](https://github.com/joshstevens19/rindexer/tree/release/0.21.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.21.0/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve multi-network dependency issues

## 0.20.0-beta - 10th July 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.20.0](https://github.com/joshstevens19/rindexer/tree/release/0.20.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.20.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: support reth natively in rindexer [https://rindexer.xyz/docs/start-building/create-new-project/reth-mode](https://rindexer.xyz/docs/start-building/create-new-project/reth-mode) + [https://rindexer.xyz/docs/advanced/using-reth-exex](https://rindexer.xyz/docs/advanced/using-reth-exex)
* Index with `eth_getBlockByNumber` for more efficiency (while still retaining all debug/trace logic for future options)
* Add tx receipt endpoint as useful prep work for a raw transaction indexing option
* Add an ever-so-slightly backpressured queue per "network-event" for fairer scheduling, decreased write contention on the database, and improved memory utilisation; in some cases it uses a quarter of the memory with no throughput drop

#### Bug fixes

***

* fix: align the maximum addresses per log filter with geth's new limit of 1000 ([https://github.com/ethereum/go-ethereum/pull/31876](https://github.com/ethereum/go-ethereum/pull/31876))
* Use the correct provider base, `AnyProvider`, which can handle optimism, rollups, and all evm-chain-style responses
* Misc tweaks to optimise how native transfer fetching is done with batching, and pass the rpc provider to allow for better batching in other endpoints
* Remove the permits system entirely; it had serious problems with fair distribution (some events would consistently take more permits than others)
* Fix the "last seen block" not being written to the db when no events were found in a block range; this was a major problem for highly infrequent events
* Fix a pretty major bug with the "optimal log parsing regex": `BlockNumber::from_str()` wasn't actually parsing the hex, so it silently failed and used sub-optimal fallbacks
* Fix indexer codegen to use correct names and types, and fix formatting problems where rustfmt couldn't parse the nesting we were generating
* Other memory optimisations

## 0.19.1-beta - 17th June 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.19.1](https://github.com/joshstevens19/rindexer/tree/release/0.19.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.19.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: code generation for complex event names (with `_` separator)

## 0.19.0-beta - 17th June 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.19.0](https://github.com/joshstevens19/rindexer/tree/release/0.19.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.19.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: Numeric U256 Array EthereumSqlTypeWrapper
* feat: U64BigInt and I256Numeric

#### Bug fixes

***

* fix: dependency events handling after the regression introduced with factory filtering

## 0.18.0-beta - 13th June 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.18.0](https://github.com/joshstevens19/rindexer/tree/release/0.18.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.18.0/rindexer_win32-amd64.zip)

#### Features

***

* Add support for indexing events from a factory-deployed smart contract by introducing a `factory` filter option in the `contract` configuration.

## 0.17.3-beta - 9th June 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.17.3](https://github.com/joshstevens19/rindexer/tree/release/0.17.3)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.17.3/rindexer_win32-amd64.zip)

#### Bug fixes

***

* Improve RPC efficiency
* Make native transfer indexing a little gentler on startup, and a bit slower on failure, to prevent overloading apis with concurrency
* Improve startup speed significantly by reducing lots of redundant sequential rpc calls
* Respect the native transfer `enabled: false` setting
* Add some more debug logging and spans, and change some log levels

#### Breaking changes

***

* Add `block_poll_frequency` to the yaml config to allow better control over block polling behavior. This requires re-running codegen for rust projects as it breaks the existing `create_client` interface.
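The new `block_poll_frequency` setting lives on the network entry in `rindexer.yaml`. A minimal sketch of where it sits, assuming a typical network entry; the surrounding values and the value shown for `block_poll_frequency` are illustrative only, so check the rindexer network config reference for the exact accepted formats:

```yaml
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
  # Illustrative value: controls how often rindexer polls for new blocks.
  # See the rindexer docs for the supported value formats.
  block_poll_frequency: rapid
```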
### 0.17.2-beta - 30th May 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.17.2](https://github.com/joshstevens19/rindexer/tree/release/0.17.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.17.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: await the register call in event handlers generated for rust projects
* fix: resolve foundry compiler version mismatch

### 0.17.1-beta - 27th May 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.17.1](https://github.com/joshstevens19/rindexer/tree/release/0.17.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.17.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* Resolve Docker image crashes with "Illegal instruction (core dumped)" on various x86-64 processors

### 0.17.0-beta - 26th May 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.17.0](https://github.com/joshstevens19/rindexer/tree/release/0.17.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.17.0/rindexer_win32-amd64.zip)

#### Features

***

* Move the full toolchain to stable, drop some nightly rustfmt options and re-run fmt
* Improve the logging experience by including more `errors` and `network` info where possible
* Add a currently undocumented `CONTRACT_PERMITS` env var to manually control concurrency
* Add a `HasTxInformation` trait to allow working with generics over network contract events
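A trait like `HasTxInformation` lets you write one helper that works over every event type that carries transaction metadata. A minimal illustrative sketch of the pattern; the types below are stand-ins, not rindexer's actual definitions:

```rust
// Stand-in types illustrating the pattern; rindexer's real structs differ.
struct TxInformation {
    block_number: u64,
    transaction_index: u64,
}

// The trait exposes tx metadata so helpers can be generic over event types.
trait HasTxInformation {
    fn tx_information(&self) -> &TxInformation;
}

struct TransferEvent {
    tx: TxInformation,
}

impl HasTxInformation for TransferEvent {
    fn tx_information(&self) -> &TxInformation {
        &self.tx
    }
}

// One generic helper for any event exposing tx information.
fn latest_block<E: HasTxInformation>(events: &[E]) -> Option<u64> {
    events.iter().map(|e| e.tx_information().block_number).max()
}

fn main() {
    let events = vec![
        TransferEvent { tx: TxInformation { block_number: 10, transaction_index: 0 } },
        TransferEvent { tx: TxInformation { block_number: 42, transaction_index: 3 } },
    ];
    // max of the block numbers above
    assert_eq!(latest_block(&events), Some(42));
}
```

The same helper then works unchanged for any other event struct once it implements the trait.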
#### Bug fixes

***

* Fix a bug with broken binary copy in the Postgres client. `finish` should be called manually on a bad write.
* Fix a bug with breaking out of historical indexing on a log fetch error.
* Fix a native transfer indexing bug by correcting the reorg safe condition.
* Fix `fetch_logs` block range parsing to include a fallback string if no Err variant is found (fixes Lens indexing).

all release branches are deployed through `release/VERSION_NUMBER` branches

### 0.16.1-beta - 20th May 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.16.1](https://github.com/joshstevens19/rindexer/tree/release/0.16.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.16.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: alloy stable + alloy dependency mismatch

### 0.16.0-beta - 6th May 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.16.0](https://github.com/joshstevens19/rindexer/tree/release/0.16.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.16.0/rindexer_win32-amd64.zip)

#### Breaking changes

***

* alloy migration for rust projects - [https://rindexer.xyz/docs/start-building/rust-project-deep-dive/ethers-alloy-migration](https://rindexer.xyz/docs/start-building/rust-project-deep-dive/ethers-alloy-migration)

### 0.15.5-beta - 24th April 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.5](https://github.com/joshstevens19/rindexer/tree/release/0.15.5)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.5/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: native trace patches for rust projects
### 0.15.4-beta - 8th April 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.4](https://github.com/joshstevens19/rindexer/tree/release/0.15.4)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.4/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: issue if your yaml contract or event names are too long - the postgres max identifier length is 63 characters but it doesn't fail loudly, meaning the last indexed block number is never snapshotted

### 0.15.3-beta - 27th March 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.3](https://github.com/joshstevens19/rindexer/tree/release/0.15.3)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.3/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: rust project generation

### 0.15.2-beta - 27th March 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.2](https://github.com/joshstevens19/rindexer/tree/release/0.15.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve issue when an event is on the last block indexed

### 0.15.1-beta - 26th March 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.1](https://github.com/joshstevens19/rindexer/tree/release/0.15.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.1/rindexer_win32-amd64.zip)

#### Features

***

* feat: allow manifest config to define the rpc method used for native-transfer indexing

#### Bug fixes

***

* fix: pin the ubuntu build to version 22.04 due to GLIBC\_2.39 issues on later versions

### 0.15.0-beta - 25th March 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.15.0](https://github.com/joshstevens19/rindexer/tree/release/0.15.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.15.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: improve speed of indexing with a few optimisations
* feat: native transfer indexing

### 0.14.0-beta - 6th March 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.14.0](https://github.com/joshstevens19/rindexer/tree/release/0.14.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.14.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: add support for alias in streams

### 0.13.0-beta - 20th Feb 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.13.0](https://github.com/joshstevens19/rindexer/tree/release/0.13.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.13.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: add a `U64Nullable` type to handle zero values
* feat: support custom AWS endpoints
* feat: forward the event signature for all events streamed

### 0.12.0-beta - 1st Jan 2025

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.12.0](https://github.com/joshstevens19/rindexer/tree/release/0.12.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.12.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: Redis Streams

#### Bug fixes

***

* fix: two's complement issue on large u256 values
* fix: resolve building the project on a PR branch
* fix: resolve numeric type parsing issues on out-of-range rust decimals

### 0.11.3-beta - 28th December 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.11.3](https://github.com/joshstevens19/rindexer/tree/release/0.11.3)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.11.3/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: index throws on multiple relationships with the same input name

### 0.11.2-beta - 19th December 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.11.2](https://github.com/joshstevens19/rindexer/tree/release/0.11.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.11.2/rindexer_win32-amd64.zip)
#### Bug fixes

***

* fix: cors when accessing the rindexer graphql api from a different host

#### Breaking changes

***

* run `rindexerdown` and then `curl -L https://rindexer.xyz/install.sh | bash` to reinstall rindexer

### 0.11.1-beta - 10th December 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.11.1](https://github.com/joshstevens19/rindexer/tree/release/0.11.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.11.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve bad typings generated for decoding if there is only 1 parameter in a rust project

### 0.11.0-beta - 7th December 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.11.0](https://github.com/joshstevens19/rindexer/tree/release/0.11.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.11.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: handle graceful shutdowns

#### Bug fixes

***

* fix: resolve race condition of a dependency blocking indexing
* fix: add a new type for EthereumSqlTypeWrapper to handle VARCHAR strings
* fix: startup did not +1 onto the next block, sometimes causing duplicate logs
* fix: resolve issues running on windows

### 0.10.0-beta - 15th October 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.10.0](https://github.com/joshstevens19/rindexer/tree/release/0.10.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.10.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: expose a new insert\_bulk postgres function to make inserting bulk data easier
* feat: expose new ethereum sql type wrappers for bytes types
* feat: expose the postgres ToSql trait
* feat: support with\_transaction in the postgres client
* feat: get the block timestamp from the RPC call (it's an option as not all providers expose it)
* feat: allow you to override the environment file path

#### Bug fixes

***

* fix: dependency events not being applied to the correct contract
* fix: resolve defining environment variables in contract address fields in the yaml
* fix: resolve topic\_id packing issues

### 0.9.0-beta - 19th September 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.9.0](https://github.com/joshstevens19/rindexer/tree/release/0.9.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.9.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: allow some other attributes on the generated event typings

#### Bug fixes

***

* fix: handle signed integers throughout rindexer
* fix: generating global types would repeat the same code on a regenerate, causing issues

#### Breaking changes

***

* breaking: rindexer had a parsing error meaning names like `UniswapV3Pool` would parse to `uniswap_v_3_pool`. This caused other issues with mapping to object names, so it has now been fixed and the example above now parses to `uniswap_v3_pool`. If you have any running indexers with these buggy names in your db, you just need to rename them; the same goes for the `rindexer.internal` tables, which will have these odd names as well.

### 0.8.0-beta - 17th September 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.8.0](https://github.com/joshstevens19/rindexer/tree/release/0.8.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.8.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: info log if no new blocks are published in the last 20 seconds to avoid people thinking rindexer is stuck

#### Bug fixes

***

* fix: pascal case still has some edge cases on parsing
* fix: allow `#![allow(non_snake_case)]` in indexer code
* fix: still generate internal tables for rindexer even if creating new event tables is disabled

### 0.7.1-beta - 17th September 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.7.1](https://github.com/joshstevens19/rindexer/tree/release/0.7.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.7.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: throw an error if contract names are not unique
* fix: allow non camel case types in generated code
* fix: pascal case not parsing fully capitalized words correctly

### 0.7.0-beta - 16th September 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.7.0](https://github.com/joshstevens19/rindexer/tree/release/0.7.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.7.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: support multiple abis in a single contract
* feat: allow an array of filters in the same contract without repeating

#### Bug fixes

***

* fix: running a rust project should only start the indexer or graphql based on the args passed
* fix: resolve issue of paths in generated typings
* fix: a csv folder was created when running `rindexer codegen typings`
* fix: underscores in event names within a rust project map to the wrong typings
* fix: share a postgres instance across the rust project

### 0.6.2-beta - 24th August 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.6.2](https://github.com/joshstevens19/rindexer/tree/release/0.6.2)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.6.2/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: Use the prefix when generating abi name properties.
### 0.6.1-beta - 15th August 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.6.1](https://github.com/joshstevens19/rindexer/tree/release/0.6.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.6.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve issue with conflicting event names on graphql meaning it would not load
* fix: resolve filter table names mapping to graphql meaning it would not expose the graphql queries

### 0.6.0-beta - 8th August 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.6.0](https://github.com/joshstevens19/rindexer/tree/release/0.6.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.6.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: add a disable\_logs\_bloom\_checks field to the network section of the [YAML configuration file](https://rindexer.xyz/docs/start-building/yaml-config/networks#disable_logs_bloom_checks)

### 0.5.1-beta - 7th August 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.5.1](https://github.com/joshstevens19/rindexer/tree/release/0.5.1)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.5.1/rindexer_win32-amd64.zip)

#### Bug fixes

***

* fix: resolve unhandled solidity types in solidity\_type\_to\_ethereum\_sql\_type\_wrapper

### 0.5.0-beta - 6th August 2024

github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.5.0](https://github.com/joshstevens19/rindexer/tree/release/0.5.0)

* linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer_linux-amd64.tar.gz)
* mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer_darwin-arm64.tar.gz)
* mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer_darwin-amd64.tar.gz)
* windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.5.0/rindexer_win32-amd64.zip)

#### Features

***

* feat: support chatbots on telegram - [https://rindexer.xyz/docs/start-building/chatbots/telegram](https://rindexer.xyz/docs/start-building/chatbots/telegram)
* feat: support chatbots on discord - [https://rindexer.xyz/docs/start-building/chatbots/discord](https://rindexer.xyz/docs/start-building/chatbots/discord)
* feat: support chatbots on slack - [https://rindexer.xyz/docs/start-building/chatbots/slack](https://rindexer.xyz/docs/start-building/chatbots/slack)
* feat: support streams with kafka - [https://rindexer.xyz/docs/start-building/streams/kafka](https://rindexer.xyz/docs/start-building/streams/kafka)
* feat: support streams with rabbitmq - [https://rindexer.xyz/docs/start-building/streams/rabbitmq](https://rindexer.xyz/docs/start-building/streams/rabbitmq)
* feat: support streams with webhooks - [https://rindexer.xyz/docs/start-building/streams/webhooks](https://rindexer.xyz/docs/start-building/streams/webhooks)
* feat: support streams with sns/sqs - [https://rindexer.xyz/docs/start-building/streams/sns](https://rindexer.xyz/docs/start-building/streams/sns)
* feat: create .gitignore file for new projects
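The `disable_logs_bloom_checks` field introduced in 0.6.0 above lives in the `networks` section of the project's YAML configuration. A minimal sketch, assuming the usual `name`/`chain_id`/`rpc` network entry shape described in the linked docs — the network values here are illustrative:

```yaml
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.example-rpc.com
  # Skip logs-bloom filtering when scanning blocks for matching logs.
  disable_logs_bloom_checks: true
```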
### 0.4.0-beta - 30th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.4.0](https://github.com/joshstevens19/rindexer/tree/release/0.4.0) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.4.0/rindexer_win32-amd64.zip) #### Features *** * feat: create a docker image and github workflow for building it when pushed ### 0.3.1-beta - 30th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.3.1](https://github.com/joshstevens19/rindexer/tree/release/0.3.1) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer_darwin-amd64.tar.gz) * windows binary - 
[https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.3.1/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: throw an error if trying to include a non-event type in the `include_events` array * fix: postgres connection error issue seen on supabase * fix: refactor postgres new to always try ssl first then retry without ssl to be in line with best practices ### 0.3.0-beta - 26th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.3.0](https://github.com/joshstevens19/rindexer/tree/release/0.3.0) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.3.0/rindexer_win32-amd64.zip) #### Features *** * feat: support for phantom events - [https://rindexer.xyz/docs/start-building/phantom](https://rindexer.xyz/docs/start-building/phantom) #### Bug fixes *** * fix: resolve syntax error for postgres when events have no inputs * fix: better error message when etherscan is not supported for a network ### 0.2.0-beta - 21st July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.2.0](https://github.com/joshstevens19/rindexer/tree/release/0.2.0) * 
linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.2.0/rindexer_win32-amd64.zip) #### Features *** * feat: add max\_block\_range to networks - [https://github.com/joshstevens19/rindexer/issues/55](https://github.com/joshstevens19/rindexer/issues/55) * feat: allow you to add your own etherscan api key - [https://rindexer.xyz/docs/start-building/yaml-config/global#etherscan\_api\_key](https://rindexer.xyz/docs/start-building/yaml-config/global#etherscan_api_key) * feat: improve logs bloom log message #### Bug fixes *** * fix: resolve `substitute_env_variables` to use `${}` instead of `$<>` for env variables ### 0.1.4-beta - 20th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.1.4](https://github.com/joshstevens19/rindexer/tree/release/0.1.4) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - 
[https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.1.4/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: fix querying the implementation ABI for proxy contracts * fix: add request timeouts to adapt to different verifiers' rate limits * fix: make chain\_id u64 instead of u32 - [https://github.com/joshstevens19/rindexer/issues/53](https://github.com/joshstevens19/rindexer/issues/53) * fix: fix rust project not being able to run due to a borrow checker error * fix: fix typings generation to parse the object values correctly ### 0.1.3-beta - 19th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.1.3](https://github.com/joshstevens19/rindexer/tree/release/0.1.3) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer_darwin-amd64.tar.gz) * windows binary - 
[https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.1.3/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: Remove package specifier from codegen Cargo.toml ### 0.1.2-beta - 18th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.1.2](https://github.com/joshstevens19/rindexer/tree/release/0.1.2) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.1.2/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: allow postgres tls connections to be used (?sslmode=require) ### 0.1.1-beta - 16th July 2024 github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.1.1](https://github.com/joshstevens19/rindexer/tree/release/0.1.1) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer_darwin-arm64.tar.gz) * 
mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.1.1/rindexer_win32-amd64.zip) #### Bug fixes *** * fix: support all the int solidity types - [https://github.com/joshstevens19/rindexer/issues/45](https://github.com/joshstevens19/rindexer/issues/45) ### 0.1.0-beta - 15th July 2024 *** github branch - [https://github.com/joshstevens19/rindexer/tree/release/0.1.0](https://github.com/joshstevens19/rindexer/tree/release/0.1.0) * linux binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer\_linux-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer_linux-amd64.tar.gz) * mac apple silicon binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer\_darwin-arm64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer_darwin-arm64.tar.gz) * mac apple intel binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer\_darwin-amd64.tar.gz](https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer_darwin-amd64.tar.gz) * windows binary - [https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer\_win32-amd64.zip](https://github.com/joshstevens19/rindexer/releases/download/v0.1.0/rindexer_win32-amd64.zip) #### Features *** Release of rindexer ## Shoutout Big big thanks to [Avara](https://avara.xyz/) who allowed me to finish this off in work time. Also big thanks to [Foundry's](https://github.com/foundry-rs/foundry) release github workflow which we took inspiration from to build the cross platform rindexer binaries. 
Building the cross platform binaries was harder than building the whole project lol. ## Add These commands allow you to add elements to your YAML file through the CLI. ### Contract :::warning You must have networks set up in the YAML to be able to use this command. You can see how to do that [here](/docs/start-building/yaml-config/networks). ::: This allows you to download the contract's metadata alongside the ABI from a supported network using Etherscan APIs. This uses a shared Etherscan API key to try to download the ABIs, which means you will get rate limited if you use this too much or if many people use this at the same time. You can add your own API key [here](/docs/start-building/yaml-config/global#etherscan_api_key). You can see all the chains you can download ABIs from [here](https://docs.etherscan.io/contract-verification/supported-chains). ```bash rindexer add contract ``` It then asks you 2 questions: 1. "Enter Network Name" - This is the network name in your YAML file. ```yaml networks: - name: ethereum // [!code focus] chain_id: 1 rpc: https://mainnet.gateway.tenderly.co ``` This will be skipped if you only have one network set up. 2. "Enter Contract Address" - This is the contract address you want to add to your YAML project. It will then download the ABI, put it in the `abis` folder, and map it automatically in your YAML file. Some things to know: * If the contract is not verified on Etherscan it will not be able to download the ABI. * If the contract is a proxy it will try to download the ABI of the implementation contract. ## Codegen You can generate a few different types of code when using rindexer codegen. ### GraphQL You can generate .graphql prebuilt queries to get up and running in seconds. These will be generated in a `queries` folder in the root of where the rindexer yaml is. 
```bash rindexer codegen graphql ``` By default it will point to the `http://localhost:3001` graphql endpoint; you can change this by passing the endpoint flag: ```bash rindexer codegen graphql --endpoint=YOUR_GRAPHQL_API_URL ``` #### TypeScript [graphql-codegen](https://the-guild.dev/graphql/codegen) is the best tool on the market to generate TypeScript typings for your GraphQL queries, mutations, and subscriptions. Learn about the `codegen.ts` config [here](https://the-guild.dev/graphql/codegen/docs/config-reference/codegen-config). The graphql API URL is the `schema` in the config; you can set this to your graphql endpoint like so: ```ts import { CodegenConfig } from '@graphql-codegen/cli' const config: CodegenConfig = { // this is YOUR_GRAPHQL_API_URL // [!code focus] schema: 'http://localhost:3001/graphql', // [!code focus] ... } export default config ``` How you hook up the config depends on your tool of choice; below are some links to documentation: * React Apollo - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-apollo#with-react-hooks](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-apollo#with-react-hooks) * React Query - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-query](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-query) * Node app - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-urql](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-urql) #### .NET, Dart, Java, Flow Codegen for other languages can be found [here](https://the-guild.dev/graphql/codegen) ### Typings :::info This feature is only available for Rust projects. Read the in-depth guide [here](/docs/start-building/rust-project-deep-dive) ::: When creating a new rust project with rindexer it will create a typings folder for you; this has pretty advanced typings for all your contracts, events and network information. 
This is generated from the ABIs you provide in the YAML configuration file. This folder is not meant to be manually edited and should always be generated using codegen. You can regenerate the typings folder by running the following command: :::info rindexer tries to be as smart as possible when it comes to updating the typings based on the `rindexer.yaml`; it will resolve as much as it can without needing a regeneration, but like any codegen tool, if you change certain aspects it does need to be regenerated. If you change any of these properties in the `rindexer.yaml` file it will need to be regenerated: * [indexer name](/docs/start-building/yaml-config/top-level-fields#name) * anything in the [network](/docs/start-building/yaml-config/networks) section including adding and removing networks * enabling or disabling a new [storage provider](/docs/start-building/yaml-config/storage) * changing the [contract name](/docs/start-building/yaml-config/contracts#name) * changing from [address](/docs/start-building/yaml-config/contracts#address) contract indexing to [filter](/docs/start-building/yaml-config/contracts#filter) indexing or vice versa * changing the contract [ABI](/docs/start-building/yaml-config/contracts#abi) * anything in the [global](/docs/start-building/yaml-config/global) section Also, if you do regenerate, your indexer files may need to be updated to match the new typings; you can manually migrate them or generate them again using the [indexer codegen command](/docs/start-building/codegen#indexers) ::: ```bash rindexer codegen typings ``` ### Indexers :::info This feature is only available for Rust projects. Read the in-depth guide [here](/docs/start-building/rust-project-deep-dive) ::: When creating a new rust project with rindexer it will create an indexers folder for you; this is where you will write your custom logic for the indexer. 
This is where you will do all your indexing logic; you can do anything you want in here: http requests, on-chain lookups, custom logic, custom DBs, anything you can think of. rindexer gives you the foundations and also baked-in extendability. Rust enforces a strong type system, and all logs will be streamed to you, so you can just focus on the logic you want. By default, if you turn postgres storage on in the YAML configuration file, it will also create postgres tables for you, write SQL for you to use, and expose a postgres client. This is a great starting point for you to build on. The table creation can be skipped by using the [disable\_create\_tables](/docs/start-building/yaml-config/storage#disable_create_tables) option in the YAML configuration file. If you also enable the CSV storage it will also generate code in the handler to write to the CSV files. You can regenerate the indexers folder by running the following command; please note this will overwrite any custom logic you have written if you run it on an existing project. ```bash rindexer codegen indexers ``` ## Delete This allows you to delete data from the postgres database or csv files. This is useful if you want to start fresh and start indexing again, or if you updated an ABI and want to drop the tables and start over. :::warning Once confirmed, this cannot be undone. ::: ```bash rindexer delete ``` ### Example ```bash rindexer delete This will delete all data in the postgres database and csv files for the project at: /Users/jackedgson/Development/avara/rindexer/examples/rindexer_demo_cli This operation can not be reverted. Make sure you know what you are doing. Are you sure you wish to delete the database data (it can not be reverted)? [yes, no]: yes Successfully deleted all data from the postgres database Are you sure you wish to delete the csv data (it can not be reverted)? [yes, no]: yes Successfully deleted all csv files. 
``` ## Health Monitoring Rindexer includes a comprehensive health monitoring system that provides real-time insights into the status of your indexing infrastructure. This built-in monitoring helps you ensure your indexers are running smoothly and quickly identify issues when they occur. ### Overview The health monitoring system tracks the status of key components: * **Database connectivity** - PostgreSQL connection health * **Indexing status** - Whether the indexer is running and active task count * **Sync status** - Data synchronization health across storage backends * **Overall system health** - Aggregated status across all components ### Health Server The health monitoring server runs automatically alongside your rindexer instance on a separate port. By default, it runs on port `8080`, but this can be configured. #### Starting the Health Server The health server starts automatically when you run rindexer with indexing enabled. No additional configuration is required. ```bash # Health server starts automatically with indexing rindexer start indexer rindexer start all ``` #### Health Endpoints ##### GET /health Returns the complete health status of your rindexer instance. 
**Response Format:** ```json { "status": "healthy", "timestamp": "2024-01-15T10:30:00Z", "services": { "database": "healthy", "indexing": "healthy", "sync": "healthy" }, "indexing": { "active_tasks": 2, "is_running": true } } ``` **HTTP Status Codes:** * `200 OK` - System is healthy * `503 Service Unavailable` - System has issues ### Health Status Types The health endpoint returns different status types for each service: | Status | Description | | ---------------- | ------------------------------------------- | | `healthy` | Service is functioning normally | | `unhealthy` | Service has encountered an error | | `unknown` | Status cannot be determined | | `not_configured` | Service is not set up | | `disabled` | Service is intentionally disabled | | `no_data` | Service is working but no data is available | | `stopped` | Service is not running | ### Service Health Checks #### Database Health Check The database health check verifies PostgreSQL connectivity and functionality: * **`healthy`**: PostgreSQL is enabled and a simple `SELECT 1` query succeeds * **`unhealthy`**: PostgreSQL is enabled but the connection fails or query errors occur * **`not_configured`**: PostgreSQL is enabled but no database client is available * **`disabled`**: PostgreSQL is not enabled in the configuration **What it checks**: Basic database connectivity by executing `SELECT 1` against the PostgreSQL instance. #### Indexing Health Check The indexing health check monitors the indexer process state: * **`healthy`**: The indexer is currently running (system state flag is set) * **`stopped`**: The indexer is not running (system state flag is not set) **What it checks**: The global `IS_RUNNING` flag that tracks whether the indexer process is active. 
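For scripted monitoring, the JSON above can be evaluated directly rather than relying only on the HTTP status code. This is a minimal Python sketch: the field names mirror the response format shown above, and treating `no_data` sync as acceptable for fresh deployments is an assumption to adjust for your setup:

```python
import json

# Sketch of interpreting the /health response in a monitoring script.
# Field names follow the documented response format; accepting a
# "no_data" sync status is an assumption for new deployments.
ACCEPTABLE_SYNC = {"healthy", "no_data"}

def overall_ok(payload: dict) -> bool:
    services = payload.get("services", {})
    if services.get("database") != "healthy":
        return False
    if not payload.get("indexing", {}).get("is_running", False):
        return False  # a stopped indexer is treated as unhealthy
    return services.get("sync") in ACCEPTABLE_SYNC

payload = json.loads(
    '{"status": "healthy",'
    ' "services": {"database": "healthy", "indexing": "healthy", "sync": "no_data"},'
    ' "indexing": {"active_tasks": 2, "is_running": true}}'
)
print(overall_ok(payload))  # True: no_data sync is fine for a fresh deployment
```

In practice you would fetch the payload with an HTTP client from `http://localhost:8080/health` (or your configured `health_port`) before passing it to a check like this.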
#### Sync Health Check The sync health check verifies data synchronization status based on your storage configuration: **For PostgreSQL storage:** * **`healthy`**: Database has event tables (excluding system tables like `latest_block`, `*_last_known_*`, `*_last_run_*`) * **`no_data`**: No event tables exist yet (acceptable for new deployments) * **`unhealthy`**: Database query fails or connection issues * **`not_configured`**: No database client available **For CSV storage:** * **`healthy`**: CSV directory exists and contains `.csv` files * **`no_data`**: CSV directory doesn't exist or contains no `.csv` files * **`unhealthy`**: CSV directory exists but cannot be read * **`not_configured`**: CSV storage not configured **What it checks**: * **PostgreSQL**: Queries `information_schema.tables` to find user-created event tables * **CSV**: Checks if the CSV directory exists and contains CSV files #### Overall Health Status The overall health status is determined by combining all service checks: * **`healthy`**: All critical services are healthy, or sync shows `no_data` (acceptable for new deployments) * **`unhealthy`**: Any critical service is `unhealthy`, `not_configured`, or indexing is `stopped` **Critical services**: Database, Indexing, and Sync (when enabled) ### Health Server Lifecycle The health server's lifecycle depends on which services you start: #### `rindexer start indexer` (with end\_block set) * **Short-lived**: Health server starts with the indexer and **dies when indexing completes** * **Use case**: Historical data indexing that has a defined end point * **Health monitoring**: Only available during the indexing process #### `rindexer start indexer` (no end\_block set) * **Long-lived**: Health server starts with the indexer and **stays alive for live indexing** * **Use case**: Continuous live indexing that runs indefinitely * **Health monitoring**: Available continuously while the indexer is running #### `rindexer start graphql` * **No health server**: 
Health server is **not started** in GraphQL-only mode * **Use case**: Running only the GraphQL API without indexing * **Health monitoring**: Not available (health server requires indexing to be enabled) #### `rindexer start all` * **Long-lived**: Health server starts with the indexer and **follows the GraphQL server lifecycle** * **Use case**: Running both indexing and GraphQL API together * **Health monitoring**: Available as long as the GraphQL server is running ### Configuration #### Custom Health Port You can configure the health server port using the `health_port` setting in your `rindexer.yaml` file: ```yaml global: health_port: 8081 ``` ### Production Monitoring #### Load Balancer Health Checks Configure your load balancer to use the health endpoint for health checks: ``` Health Check URL: http://your-rindexer-instance:8080/health Expected Status: 200 OK ``` #### Monitoring Tools You can integrate with monitoring tools like Prometheus, Grafana, or DataDog to track health metrics and set up alerts based on HTTP status codes and response times. ### Troubleshooting #### Common Issues * **Health server not starting**: Check if port is in use, verify YAML configuration * **Database health failing**: Verify PostgreSQL connection and permissions * **Sync health issues**: Check storage configuration and file permissions #### Debugging Enable debug logging for detailed health information: ```bash RUST_LOG=debug rindexer start indexer ``` ### Best Practices * Set up continuous monitoring of the health endpoint * Configure appropriate alert thresholds * Keep health check logs for troubleshooting * Monitor multiple instances if running in a cluster ### API Reference #### Health Endpoint * **URL**: `GET /health` * **Response**: JSON with health status and service information * **Status Codes**: 200 (healthy), 503 (unhealthy) ## Hot Reload Rindexer supports hot-reloading your `rindexer.yaml` configuration without manually stopping and restarting the process. 
When enabled with the `--watch` flag, rindexer monitors your YAML file for changes, validates the new configuration, and automatically restarts with the updated settings. :::info Hot reload is only available for **no-code** projects. Rust projects will show a warning if `--watch` is used. ::: ### Quick Start Add the `--watch` flag (or `-w`) before the subcommand: :::code-group ```bash [indexer and graphql] rindexer start --watch all ``` ```bash [indexer] rindexer start --watch indexer ``` ```bash [graphql] rindexer start --watch graphql ``` ::: Now edit your `rindexer.yaml` β€” rindexer will automatically detect the change, validate the new config, and restart. ### How It Works When `--watch` is enabled, rindexer runs as two processes: 1. **Outer process** β€” A lightweight restart loop that spawns and monitors the indexer 2. **Inner process** β€” The actual indexer with a file watcher attached The inner process watches `rindexer.yaml` using OS-native file events (FSEvents on macOS, inotify on Linux). When a change is detected: 1. **Debounce** β€” Waits 500ms for rapid successive saves to settle 2. **Validate** β€” Parses the new YAML. If invalid, the change is rejected and the current config keeps running 3. **Diff** β€” Compares old and new manifests to classify the change 4. **Graceful shutdown** β€” Stops the GraphQL server, health server, and active indexing tasks 5. **Restart** β€” The process exits with code `75`, the outer loop catches it and spawns a fresh process #### Change Classification Not all changes are treated equally. 
Rindexer computes a diff to determine the appropriate action: | Change Type | Action | | ------------------------------------- | ------------------------------------------- | | Contract added, removed, or modified | Restart | | Network RPC URL changed | Restart | | Storage configuration changed | Restart | | Config tuning (buffer, concurrency) | Restart | | Global settings changed | Restart | | Invalid YAML | **Rejected** β€” current config keeps running | | Project name changed | **Rejected** β€” requires manual restart | | Project type changed (rust / no-code) | **Rejected** β€” requires manual restart | | No meaningful change | Skipped | :::warning Project name changes are rejected because the name affects database schema naming. Changing it while running could cause data inconsistency. Stop the indexer, make the change, and restart manually. ::: ### Error Handling If you save an invalid `rindexer.yaml`, rindexer will log an error and keep the current configuration running: ``` ERROR Hot-reload: new manifest is invalid, keeping current config: ... ``` Fix the YAML error and save again β€” rindexer will pick up the corrected file. ### Production Deployment The `--watch` flag includes a built-in restart loop that works out of the box for local development. For production, you can either use the built-in loop or rely on your process manager to handle restarts. #### Docker ```yaml services: rindexer: image: your-rindexer-image command: rindexer start --watch all restart: unless-stopped ``` The container will restart automatically when rindexer exits with code `75` on config change. It will stay stopped on clean shutdown (exit code `0`) or if you run `docker compose down`. 
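If you are not running under Docker or systemd, the same exit-code contract can be honored by a small supervisor loop of your own. This Python sketch simulates it: `fake_indexer` is a stand-in (an assumption for illustration) for actually spawning `rindexer start --watch all`, and here it exits with `75` twice before a clean `0`:

```python
# Sketch of a supervisor loop honoring the documented exit-code contract:
# 75 (EX_TEMPFAIL) means "config changed, restart me"; any other code
# ends the loop. `fake_indexer` is a stand-in for something like
# subprocess.run(["rindexer", "start", "--watch", "all"]).returncode.
EX_TEMPFAIL = 75

codes = iter([75, 75, 0])  # simulate two config-change restarts, then a clean exit

def fake_indexer() -> int:
    return next(codes)

restarts = 0
while True:
    code = fake_indexer()
    if code != EX_TEMPFAIL:
        break  # clean shutdown (0) or crash: defer to your process manager
    restarts += 1

print(f"restarts={restarts} exit={code}")  # restarts=2 exit=0
```

A real wrapper would replace `fake_indexer` with a subprocess call and propagate the final exit code to the parent process.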
#### systemd ```ini [Unit] Description=rindexer indexer After=network.target postgresql.service [Service] ExecStart=/usr/local/bin/rindexer start --watch all WorkingDirectory=/path/to/your/project Restart=on-failure RestartSec=1 Environment=DATABASE_URL=postgresql://user:pass@localhost/db [Install] WantedBy=multi-user.target ``` With `Restart=on-failure`, systemd treats exit code `75` as a failure and restarts the service. Exit code `0` (clean shutdown via Ctrl+C or SIGTERM) is treated as success and stops the service. #### Kubernetes ```yaml apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - name: rindexer command: ["rindexer", "start", "--watch", "all"] # Kubernetes restarts containers automatically on non-zero exit # Mount your rindexer.yaml via ConfigMap for easy updates volumeMounts: - name: config mountPath: /app/rindexer.yaml subPath: rindexer.yaml volumes: - name: config configMap: name: rindexer-config ``` Update the ConfigMap and rindexer will detect the change and restart: ```bash kubectl create configmap rindexer-config --from-file=rindexer.yaml -o yaml --dry-run=client | kubectl apply -f - ``` :::warning Make sure `drop_each_run` is set to `false` in production. With `drop_each_run: true`, every restart drops and recreates your database tables. ::: ### Limitations * **No-code projects only** β€” Rust projects require recompilation, which is outside the scope of hot reload * **Full process restart** β€” Each reload restarts the entire indexer process. This means historical indexing re-runs from the last checkpoint (no data is lost, but there is a brief pause in live indexing) * **Exit code 75** β€” The process exits with code `75` (EX\_TEMPFAIL) to signal a restart. 
Make sure your monitoring does not treat this as a crash. ## Historic indexing If you want to index only historic data between block ranges, just put the [start\_block](/docs/start-building/yaml-config/contracts#start_block) and [end\_block](/docs/start-building/yaml-config/contracts#end_block) in the YAML configuration file. This will index only the data between those blocks. :::info rindexer will save the last synced block for each contract in the database so it can pick up where it left off if stopped and started again. If you want to start fresh you can use the [delete](/docs/start-building/delete) command to drop all the data and start over. You can also use the [drop\_each\_run](/docs/start-building/yaml-config/storage#drop_each_run) option in the YAML configuration file to drop all the data for the indexer before starting. ::: ```yaml name: rETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: - name: RocketPoolETH // [!code focus] details: // [!code focus] - network: ethereum // [!code focus] address: "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus] start_block: 18600000 // [!code focus] end_block: 18718056 // [!code focus] abi: ./abis/RocketTokenRETH.abi.json include_events: - Transfer - Approval ``` ## Live indexing If you want to index live data you can just remove the [start\_block](/docs/start-building/yaml-config/contracts#start_block) and [end\_block](/docs/start-building/yaml-config/contracts#end_block) from the YAML configuration file. This will index from the latest block and then index all new blocks as they come in. :::info Important to know: this will NOT track the last synced block, and when you stop and start the indexer it will start from the latest block. 
::: ```yaml name: rETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: - name: RocketPoolETH // [!code focus] details: // [!code focus] - network: ethereum // [!code focus] address: "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus] abi: ./abis/RocketTokenRETH.abi.json include_events: - Transfer - Approval ``` ## Historic and live indexing If you want to index historic data and then live data you can put in the [start\_block](/docs/start-building/yaml-config/contracts#start_block) you wish to index the data from and then remove the [end\_block](/docs/start-building/yaml-config/contracts#end_block) from the YAML configuration file. This will index from the block you specified and then index all new blocks as they come in. :::info rindexer will save the last synced block for each contract in the database so it can pick up where it left off if stopped and started again. If you want to start fresh you can use the [delete](/docs/start-building/delete) command to drop all the data and start over. You can also use the [drop\_each\_run](/docs/start-building/yaml-config/storage#drop_each_run) option in the YAML configuration file to drop all the data for the indexer before starting. 
::: ```yaml name: rETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: - name: RocketPoolETH // [!code focus] details: // [!code focus] - network: ethereum // [!code focus] address: "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus] start_block: 18600000 // [!code focus] abi: ./abis/RocketTokenRETH.abi.json include_events: - Transfer - Approval ``` ## Prometheus Metrics Rindexer exposes Prometheus metrics for production observability. These metrics help you monitor indexing performance, RPC health, database operations, and stream delivery in real-time. ### Overview The metrics endpoint provides insight into: * **Indexing progress** - Events processed, blocks indexed, sync status * **RPC performance** - Request latencies, success/error rates, in-flight requests * **Database operations** - Query durations, operation counts, connection pool status * **Stream delivery** - Message counts, delivery latencies by stream type * **Chain state** - Reorg detection, block lag behind chain head ### Metrics Endpoint Metrics are served on the same port as the health server (default `8080`) at the `/metrics` path. ```bash # Metrics available when indexer is running curl http://localhost:8080/metrics ``` The endpoint returns metrics in [Prometheus text exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/), compatible with Prometheus, Grafana, Datadog, and other monitoring tools. 
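Beyond `curl`, the text exposition format is easy to consume programmatically. Below is a minimal sketch (the `parse_metrics` helper is hypothetical, not part of rindexer) that turns exposition lines into a dict keyed by metric name plus label set:

```python
# Minimal sketch: parse Prometheus text exposition output into a dict
# keyed by 'name{labels}'. Skips comment lines (# HELP / # TYPE) and blanks.
def parse_metrics(text: str) -> dict[str, float]:
    metrics: dict[str, float] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # the sample value is the last space-separated token on the line
        name_and_labels, value = line.rsplit(" ", 1)
        metrics[name_and_labels] = float(value)
    return metrics

sample = """
# HELP rindexer_blocks_behind Blocks behind chain head
rindexer_blocks_behind{contract="USDC",event="Transfer",network="ethereum"} 2425731
"""
lag = parse_metrics(sample)
```

In practice you would feed it the body of `GET /metrics` and, for example, page when `rindexer_blocks_behind` exceeds your acceptable threshold.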
### Configuration Configure the metrics port using `health_port` in your `rindexer.yaml`: ```yaml global: health_port: 9090 # Metrics available at http://localhost:9090/metrics ``` ### Available Metrics #### Indexing Metrics | Metric | Type | Labels | Description | | --------------------------------- | ------- | ------------------------------ | ------------------------------- | | `rindexer_events_processed_total` | Counter | `network`, `contract`, `event` | Total events processed | | `rindexer_blocks_indexed_total` | Counter | `network`, `contract`, `event` | Total blocks indexed | | `rindexer_last_synced_block` | Gauge | `network`, `contract`, `event` | Last synced block number | | `rindexer_latest_chain_block` | Gauge | `network` | Latest block on chain | | `rindexer_blocks_behind` | Gauge | `network`, `contract`, `event` | Blocks behind chain head | | `rindexer_active_indexing_tasks` | Gauge | - | Currently active indexing tasks | **Example output:** ``` rindexer_events_processed_total{contract="USDC",event="Transfer",network="ethereum"} 19971 rindexer_last_synced_block{contract="USDC",event="Transfer",network="ethereum"} 21901064 rindexer_blocks_behind{contract="USDC",event="Transfer",network="ethereum"} 2425731 ``` #### RPC Metrics | Metric | Type | Labels | Description | | --------------------------------------- | --------- | ----------------------------- | ---------------------------------- | | `rindexer_rpc_requests_total` | Counter | `network`, `method`, `status` | Total RPC requests (success/error) | | `rindexer_rpc_request_duration_seconds` | Histogram | `network`, `method` | RPC request latency | | `rindexer_rpc_requests_in_flight` | Gauge | `network` | Currently pending RPC requests | **Histogram buckets:** 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 2.5s, 5s, 10s **Example output:** ``` rindexer_rpc_requests_total{method="eth_getLogs",network="mainnet",status="success"} 156 
rindexer_rpc_requests_total{method="eth_getLogs",network="mainnet",status="error"} 2 rindexer_rpc_request_duration_seconds_sum{method="eth_getLogs",network="mainnet"} 45.23 rindexer_rpc_request_duration_seconds_count{method="eth_getLogs",network="mainnet"} 156 ``` #### Database Metrics | Metric | Type | Labels | Description | | ---------------------------------------- | --------- | --------------------- | ---------------------------------- | | `rindexer_db_operations_total` | Counter | `operation`, `status` | Total database operations | | `rindexer_db_operation_duration_seconds` | Histogram | `operation` | Database operation latency | | `rindexer_db_pool_connections` | Gauge | `database`, `state` | Connection pool size (active/idle) | **Operation types:** `query`, `insert`, `update`, `delete`, `batch_insert`, `batch_execute` **Histogram buckets:** 1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1s, 2.5s #### Stream Metrics | Metric | Type | Labels | Description | | ------------------------------------------ | --------- | ----------------------- | ------------------------ | | `rindexer_stream_messages_total` | Counter | `stream_type`, `status` | Total messages sent | | `rindexer_stream_message_duration_seconds` | Histogram | `stream_type` | Message delivery latency | **Stream types:** `sns`, `webhook`, `rabbitmq`, `kafka`, `redis`, `cloudflare_queues` #### Chain State Metrics | Metric | Type | Labels | Description | | -------------------------------- | ------- | --------- | ------------------------------ | | `rindexer_reorgs_detected_total` | Counter | `network` | Chain reorganizations detected | | `rindexer_reorg_depth` | Gauge | `network` | Depth of last detected reorg | #### Build Info | Metric | Type | Labels | Description | | --------------------- | ----- | --------- | ---------------------------- | | `rindexer_build_info` | Gauge | `version` | Build information (always 1) | ### Prometheus Integration #### Basic Prometheus Config Add rindexer to your 
`prometheus.yml`: ```yaml scrape_configs: - job_name: 'rindexer' static_configs: - targets: ['localhost:8080'] scrape_interval: 15s metrics_path: /metrics ``` #### Docker Compose Example ```yaml services: rindexer: image: your-rindexer-image ports: - "8080:8080" prometheus: image: prom/prometheus volumes: - ./prometheus.yml:/etc/prometheus/prometheus.yml ports: - "9090:9090" grafana: image: grafana/grafana ports: - "3000:3000" environment: - GF_SECURITY_ADMIN_PASSWORD=admin ``` ### Useful PromQL Queries #### Indexing Progress ```txt # Events per second (rate over 5 minutes) rate(rindexer_events_processed_total[5m]) # Blocks behind chain head rindexer_blocks_behind # Sync progress percentage (rindexer_last_synced_block / rindexer_latest_chain_block) * 100 ``` #### RPC Health ```txt # RPC error rate rate(rindexer_rpc_requests_total{status="error"}[5m]) / rate(rindexer_rpc_requests_total[5m]) # P99 RPC latency histogram_quantile(0.99, rate(rindexer_rpc_request_duration_seconds_bucket[5m])) # RPC requests per second by method rate(rindexer_rpc_requests_total[5m]) ``` #### Database Performance ```txt # Database operation latency (P95) histogram_quantile(0.95, rate(rindexer_db_operation_duration_seconds_bucket[5m])) # Database error rate rate(rindexer_db_operations_total{status="error"}[5m]) ``` #### Stream Delivery ```txt # Stream delivery rate by type rate(rindexer_stream_messages_total{status="success"}[5m]) # Stream error rate rate(rindexer_stream_messages_total{status="error"}[5m]) ``` ### Alerting Examples #### Prometheus Alerting Rules ```yaml groups: - name: rindexer rules: - alert: RindexerHighBlockLag expr: rindexer_blocks_behind > 1000 for: 5m labels: severity: warning annotations: summary: "Rindexer falling behind chain head" description: "{{ $labels.contract }} is {{ $value }} blocks behind" - alert: RindexerHighRPCErrorRate expr: rate(rindexer_rpc_requests_total{status="error"}[5m]) > 0.1 for: 2m labels: severity: critical annotations: summary: "High RPC 
error rate detected" - alert: RindexerReorgDetected expr: increase(rindexer_reorgs_detected_total[5m]) > 0 labels: severity: warning annotations: summary: "Chain reorg detected on {{ $labels.network }}" ``` ### Grafana Dashboard Import these panels for a basic rindexer dashboard: 1. **Sync Progress** - Gauge showing `rindexer_blocks_behind` 2. **Events/sec** - Graph of `rate(rindexer_events_processed_total[1m])` 3. **RPC Latency** - Heatmap of `rindexer_rpc_request_duration_seconds` 4. **Error Rates** - Graph of error rates for RPC, DB, and streams 5. **Active Tasks** - Stat panel for `rindexer_active_indexing_tasks` ### Best Practices * **Set appropriate scrape intervals** - 15-30 seconds is typical for indexer metrics * **Monitor block lag** - Alert when `rindexer_blocks_behind` exceeds acceptable thresholds * **Track RPC errors** - High error rates may indicate provider issues or rate limiting * **Watch reorg metrics** - Frequent reorgs may indicate network instability * **Use labels wisely** - Filter by `network`, `contract`, `event` for granular insights ## Phantom events Phantom events enable you to add custom events or modify existing events for any smart contract. This feature allows you to extract any data you need from smart contracts without the constraints of on-chain modifications. ### What are Phantom Events? Phantom events are gasless events logged in an off-chain execution environment that mirrors the mainnet state in real-time. They provide a solution for obtaining additional information from a contract without incurring extra gas costs for users. ### Why Use Phantom Events? Whether you don't control the contract or want to avoid additional gas costs, phantom events offer a flexible solution for diverse data needs and use cases. Powered by [Shadow](https://www.shadow.xyz/) and [dyRPC](https://ui.dyrpc.network/), they are part of the rindexer suite and designed to make these powerful features simple to use.
### Getting Started with Phantom Events rindexer abstracts away the complexity and offers first-party support for implementing phantom events. It utilizes Etherscan APIs to download source code and ABIs for the contracts you want to index. Note that the shared Etherscan API key may lead to rate limits if heavily used. To avoid this, we recommend adding your own API key [here](/docs/start-building/yaml-config/global#etherscan_api_key). Right, let's get started with phantom events. ### Providers :::warning To use phantom events you will need to have a provider; rindexer offers first-party support for integrating with them. ::: #### [Shadow](https://www.shadow.xyz/) Shadow enables you to modify a deployed contract's source code to add gasless custom event logs and view functions on a shadow fork that is instrumented to mirror mainnet state in realtime. ##### Networks Supported * Ethereum #### [dyRPC](https://ui.dyrpc.network/) dyRPC is a tool built on top of overlay which can be run on any Erigon node and also allows you to modify the contract's source code, adding gasless custom event logs and view functions. ##### Networks Supported * Ethereum ### Dependencies #### Installing Foundry :::info If you do not have `foundry` installed it will be installed for you when you run the `init` command, but we recommend you install it yourself. ::: foundry is required to compile the contracts. ```bash curl -L https://foundry.paradigm.xyz | bash ``` If you already have foundry installed you can run `foundryup` to update it. ### Init rindexer uses a CLI-first approach for everything and phantom events behave the same way. Phantom events are not enabled by default; you have to set them up for each project.
To enable phantom events for your rindexer project you can run the following command: ```bash rindexer phantom init ``` #### Required information * Shadow * API key (generate on the shadow portal) * Fork ID (generate on the shadow portal) * dyRPC * API key (generate on the dyRPC portal or use "new" to generate a new one) You will be asked to pick your provider and add your API key. It will save the API key in the `.env` file under `RINDEXER_PHANTOM_API_KEY` in your project directory. It will also add your phantom provider to the `rindexer.yaml` file. ### Clone As the `rindexer.yaml` file is defined by you, we use its contract names and network names so you can easily understand what you are cloning. :::info Only verified contracts on Etherscan can be cloned; if you wish to use an unverified contract it will still work, but you will have to create the foundry project manually in the `phantom` folder. ::: ```bash rindexer phantom clone --contract-name --network ``` Let's say we had a `rindexer.yaml` file like this: ```yaml name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true drop_each_run: true contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: '18600000' end_block: '18718056' abi: ./abis/RocketTokenRETH.abi.json include_events: - Transfer ``` To clone this contract you would run the following command: ```bash rindexer phantom clone --contract-name RocketPoolETH --network ethereum ``` This will create a folder called `phantom` in the root of your rindexer project, with a subfolder per network name, to make it easier to find the contracts you have cloned.
For example, in the case above it will create a folder called `phantom/ethereum/` and inside it a folder called `RocketPoolETH/` which will have your Solidity project files and the contract ABI. This folder will contain all the phantom contracts you have cloned. You can now go to the contracts folder and start making changes to the phantom contracts. ### Add your own event Above we cloned `RocketPoolETH` on `ethereum`; let's open up `RocketTokenRETH.sol` and add a phantom event in the transfer hook. ```solidity contract RocketTokenRETH is RocketBase, ERC20, RocketTokenRETHInterface { using SafeMath for uint; event EtherDeposited(address indexed from, uint256 amount, uint256 time); event TokensMinted(address indexed to, uint256 amount, uint256 ethAmount, uint256 time); event TokensBurned(address indexed from, uint256 amount, uint256 ethAmount, uint256 time); event PhantomTransferTime(address indexed from, uint256 time); // [!code focus] ... function _beforeTokenTransfer(address from, address, uint256) internal override { // emit your own event emit PhantomTransferTime(from, block.timestamp); // [!code focus] // Don't run check if this is a mint transaction if (from != address(0)) { // Check which block the user's last deposit was bytes32 key = keccak256(abi.encodePacked("user.deposit.block", from)); uint256 lastDepositBlock = getUint(key); if (lastDepositBlock > 0) { // Ensure enough blocks have passed uint256 depositDelay = getUint(keccak256(abi.encodePacked(keccak256("dao.protocol.setting.network"), "network.reth.deposit.delay"))); uint256 blocksPassed = block.number.sub(lastDepositBlock); require(blocksPassed > depositDelay, "Not enough time has passed since deposit"); // Clear the state as it's no longer necessary to check this until another deposit is made deleteUint(key); } } } ``` That is it. You can now compile and deploy your phantom contract, which we will go over in the next section.
### Editing existing events You can edit any event however you want. For example, let's say we wanted to change `TokensMinted` to include the new balance of the `to` address after minting. ```solidity contract RocketTokenRETH is RocketBase, ERC20, RocketTokenRETHInterface { using SafeMath for uint; event EtherDeposited(address indexed from, uint256 amount, uint256 time); event TokensMinted(uint256 newBalance, address indexed to, uint256 amount, uint256 ethAmount, uint256 time); // [!code focus] event TokensBurned(address indexed from, uint256 amount, uint256 ethAmount, uint256 time); event PhantomTransferTime(address indexed from, uint256 time); ... function mint(uint256 _ethAmount, address _to) override external onlyLatestContract("rocketDepositPool", msg.sender) { // Get rETH amount uint256 rethAmount = getRethValue(_ethAmount); // Check rETH amount require(rethAmount > 0, "Invalid token mint amount"); // Update balance & supply _mint(_to, rethAmount); // Emit tokens minted event emit TokensMinted(balanceOf(_to), _to, rethAmount, _ethAmount, block.timestamp); // [!code focus] } ``` That is it. You can now compile and deploy your phantom contract, which we will go over in the next section. :::info If editing different events on different networks (e.g. you are indexing `RocketPoolETH` on Ethereum as well as Base), your contract details should be separate for each network in the `rindexer.yaml` file, because when you deploy the phantom contract it will remap the new ABI, and if your events no longer match the types it will error. ::: ### Compile :::info rindexer uses `foundry` to clone and compile the contracts.
::: To compile the phantom contracts you can run the following command: ```bash rindexer phantom compile --contract-name --network ``` So using the same yaml example as above you would run the following command: ```bash rindexer phantom compile --contract-name RocketPoolETH --network ethereum ``` This will show you the same compile errors as `foundry` would show you if you have made any mistakes. ### Deploy Deploying your phantom contract is different to deploying a normal contract. rindexer will take care of uploading the new phantom contract to the provider and updating all the mappings in the `rindexer.yaml` file for you. ```bash rindexer phantom deploy --contract-name --network ``` So using the same yaml example as above you would run the following command: ```bash rindexer phantom deploy --contract-name RocketPoolETH --network ethereum ``` This will do a few things to your yaml file: 1. It will add the phantom network to the `rindexer.yaml` file; this is always named `phantom_${NETWORK_NAME}_${CONTRACT_NAME}` ```yaml name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: phantom_ethereum_RocketPoolETH // [!code focus] chain_id: 1 // [!code focus] rpc: PROVIDER_RPC // [!code focus] ... ``` 2. It will change the `contracts` section to point the contract details to the phantom network.
```yaml name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: phantom_ethereum_RocketPoolETH // [!code focus] chain_id: 1 // [!code focus] rpc: PROVIDER_RPC // [!code focus] storage: postgres: enabled: true drop_each_run: true contracts: - name: RocketPoolETH details: - network: phantom_ethereum_RocketPoolETH // [!code focus] address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: '18600000' end_block: '18718056' abi: ./abis/phantom_ethereum_RocketPoolETH.abi.json include_events: - Transfer ... ``` 3. It will write the new ABI to your `abis` folder, named the same as the phantom network name but with the `.abi.json` extension. ```yaml name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: phantom_ethereum_RocketPoolETH // [!code focus] chain_id: 1 // [!code focus] rpc: PROVIDER_RPC // [!code focus] storage: postgres: enabled: true drop_each_run: true contracts: - name: RocketPoolETH details: - network: phantom_ethereum_RocketPoolETH // [!code focus] address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: '18600000' end_block: '18718056' abi: ./abis/phantom_ethereum_RocketPoolETH.abi.json // [!code focus] include_events: - Transfer ... ``` Right, now let's include the `PhantomTransferTime` event in our yaml file's `include_events` array so we can index it.
```yaml name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: phantom_ethereum_RocketPoolETH chain_id: 1 rpc: PROVIDER_RPC storage: postgres: enabled: true drop_each_run: true contracts: - name: RocketPoolETH details: - network: phantom_ethereum_RocketPoolETH address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: '18600000' end_block: '18718056' abi: ./abis/phantom_ethereum_RocketPoolETH.abi.json include_events: - PhantomTransferTime // [!code focus] ... ``` ### Indexing Everything else is the same as before, so you can run `rindexer start all`: the database tables will all be created, the indexer will start indexing, and the GraphQL API will be available at [http://localhost:3001/graphql](http://localhost:3001/graphql). :::info Expect slower indexing as you will have to wait for the phantom provider to index the events. All phantom providers at the moment use block ranges over optimal ranges so phantom events will be slower than normal events. ::: That is it. You have now created phantom events with rindexer. ## Running ### No-code Project You can run the no-code project really easily with the CLI toolset. :::info rindexer starts your postgres docker compose file up for you automatically if the DATABASE\_URL cannot connect to the database and docker-compose.yml is present in the parent directory. You will need to make sure you have docker running on your machine before starting the project. If you do not have docker you can install it [here](https://docs.docker.com/get-docker/). You can also run docker manually by using `docker compose up -d`. ::: :::warn The GraphQL API can only be run when you have a postgres storage setup in your YAML.
::: :::code-group ```bash [indexer and graphql] rindexer start all ``` ```bash [indexer] rindexer start indexer ``` ```bash [graphql] rindexer start graphql ``` ::: You can change the GraphQL port by passing `--port [number]` to both the `all` and `graphql` commands above. #### Hot Reload Add the `--watch` flag to automatically restart when you edit `rindexer.yaml`: :::code-group ```bash [indexer and graphql] rindexer start --watch all ``` ```bash [indexer] rindexer start --watch indexer ``` ::: Rindexer will validate the new config before restarting. Invalid YAML is rejected and the current process keeps running. See the [Hot Reload](/docs/start-building/hot-reload) documentation for full details including production deployment. :::info If you change your contract ABIs or want to start fresh you can use the [delete](/docs/start-building/delete) command to drop all the data and start over. You can also use the [drop\_each\_run](/docs/start-building/yaml-config/storage#drop_each_run) option in the YAML configuration file to drop all the data for the indexer before starting. ::: ### Health Monitoring When you start rindexer with indexing enabled, a health monitoring server automatically starts on port `8080`. This provides real-time insights into your indexing infrastructure status.
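If you want a scripted probe against this server, the JSON body can be checked in a few lines. A sketch, assuming the response shape shown in the Quick Health Check example below (`is_healthy` is a hypothetical helper, not part of rindexer):

```python
import json

# Sketch: decide liveness from the /health response body, assuming the
# documented shape: a top-level "status" plus per-service statuses.
def is_healthy(body: str) -> bool:
    doc = json.loads(body)
    services = doc.get("services", {})
    return doc.get("status") == "healthy" and all(
        state == "healthy" for state in services.values()
    )

example = '{"status": "healthy", "services": {"database": "healthy", "indexing": "healthy", "sync": "healthy"}}'
```

A wrapper like this can back a Kubernetes liveness probe or a cron-based alert without depending on Prometheus.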
#### Quick Health Check Access the health endpoint at `http://localhost:8080/health` to get system status: ```json { "status": "healthy", "services": { "database": "healthy", "indexing": "healthy", "sync": "healthy" }, "indexing": { "active_tasks": 2, "is_running": true } } ``` #### Health Server Lifecycle * **`rindexer start indexer` (with end\_block)**: Short-lived - dies when historical indexing completes * **`rindexer start indexer` (no end\_block)**: Long-lived - stays alive for live indexing * **`rindexer start graphql`**: No health server - health monitoring not available * **`rindexer start all`**: Long-lived - follows GraphQL server lifecycle :::info For detailed health monitoring documentation, see the [Health Monitoring](/docs/start-building/health-monitoring) guide. ::: ### Rust Project If you want to run this with docker support for postgres, first run: ```bash docker compose up -d ``` Then to run the Rust project you can run the following command: :::info When you create a Rust rindexer project you are expected to change this logic to suit your needs. Just like create-react-app, it exposes the boilerplate code to get you started. If you change `main.rs`, some of the arguments like --indexer and --graphql may no longer work with these commands, but as you will have changed it yourself you will know how to run it. ::: :::code-group ```bash [everything] cargo run ``` ```bash [indexer only] cargo run -- --indexer ``` ```bash [graphql only] cargo run -- --graphql ``` ::: We also advise running your Rust projects in release mode in production: ```bash cargo run --release ``` You can also do other fancy production builds with allocators like jemalloc and other flags, but we will leave that to you to explore.
## Log block-timestamps ### What is the problem A log result in the JSON-RPC spec does not always expose the block timestamp, which means it can require another block lookup per log to get the block timestamp. This is not efficient and can cause a big bottleneck in indexing. :::info But we want the best DX, so until all node implementations catch up, we have the following solutions. ::: ## How we handle it ### RPC Support for Log Timestamps We want to solve this effectively and have worked with node implementations like Geth and Reth to include block timestamps in logs. Providers and L2s will slowly begin rolling this out in their nodes over the coming months and years, and soon this problem will no longer exist. ### Delta run-length encoded Delta run-length encoding is an effective way to support block-timestamps that are not necessarily sequential but generally follow a pattern. Most chains will have a roughly "fixed" block-time, and this can be used to encode the block-timestamps more efficiently via "runs" of the delta between times. This process requires more upfront work and more storage/memory, but can be a great way to save on network requests and IO time. We precompute chains and store the [highly compressed kB to MB scale binary files for hydration](https://github.com/joshstevens19/rindexer/tree/master/core/resources). This is a manual process and is designed to optimize backfill operations, but won't help for head-of-line indexing, which will still require a manual RPC call. ### Fixed timestamps chains These are the simplest of the networks: the most extreme case of delta run-length encoding, which can therefore be optimized even more. Rather than storing "runs", we consider the whole chain to be a single "run" and can simply calculate any timestamp for a block. Due to the lack of any strong guarantee, we can only do this up to a "known" block number where the fixed-timestamp consistency has been validated.
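To make the two encodings above concrete, here is an illustrative sketch of delta run-length encoding for block timestamps (this is the concept only, not rindexer's actual on-disk `.blockclock` format): each run stores a timestamp delta and how many consecutive blocks share it, and a fixed-timestamp chain is simply the degenerate case of a single run covering the whole chain.

```python
# Illustrative sketch of delta run-length encoding for block timestamps.
# Each run is [delta, count]: "count consecutive blocks whose timestamp
# advanced by delta seconds over the previous block".
def encode(timestamps: list[int]) -> tuple[int, list[list[int]]]:
    runs: list[list[int]] = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        delta = cur - prev
        if runs and runs[-1][0] == delta:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([delta, 1])  # start a new run
    return timestamps[0], runs

def decode(first: int, runs: list[list[int]]) -> list[int]:
    out = [first]
    for delta, count in runs:
        for _ in range(count):
            out.append(out[-1] + delta)
    return out

# A chain with a 12s block time that drifts to 13s for two blocks
# compresses to two runs instead of five individual timestamps.
ts = [100, 112, 124, 136, 149, 162]
first, runs = encode(ts)  # runs == [[12, 3], [13, 2]]
```

Because most chains keep a near-constant block time, runs tend to be very long, which is why the precomputed files stay in the kB to MB range.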
If at any time a chain breaks this pattern we must drop back to delta run-length encoding. ### Sampled & Batched Lookups If we don't have a precomputed or fixed-chain mapping, we fall back to optimized sampled and batched RPC calls per network. We aim to minimize network round-trip time with optimal batch-sizes and concurrent requests for large block-ranges. We also perform the lookup as soon as the logs are returned, such that if there is any bottlenecking in the handler, database write, or stream processing, we will take advantage of that dead time. **Loose time-ordering** We also optionally allow users to configure a sample rate; this can additionally speed up worst-case scenario network latency times by significantly reducing the data over the wire and RPC processing time. For example an RPC call for 2 blocks is fast, but if we have a sparse event over 5,000 blocks in a single log response, we may have to make tens or hundreds of concurrent and/or sequential calls to fetch them all. Sampling helps minimize this by fetching 50 blocks at spaced intervals in a single batched RPC request and interpolating the timestamps between those intervals. This can be a massive performance boost, at the cost of occasional slight inaccuracies in timestamps. This should be opted into based on your workload and requirements. ## Extending supported chains To begin encoding a new chain's block-timestamps, you can run the following command: ```sh cargo xtask encode-block-clock \ --network 43114 \ --rpc-url "https://avax-mainnet.g.alchemy.com/v2/API_KEY" \ --batch-size 2000 ``` This will encode and periodically flush data to the file `core/resources/blockclock/43114.blockclock` as per the above example. Simply replace the network id and RPC URL and run the command until it is complete. This will potentially consume a lot of CU from your provider so be aware of this. ## Config More advanced configuration options for fine-tuning memory usage, event throughput, and more.
Most of the time you will not need to adjust these values. #### Useful background It is useful to become familiar with how rindexer controls concurrency across events and networks. By the nature of blockchain indexing, we must index events separately per network. However, we also try to optimize throughput via `eth_getLogs` requests **per event**. This means we ultimately run an indexing process per "network-event". :::info This "network-event" is where concurrency is controlled. ::: ### buffer *Default: `4`* This parameter controls the "buffer" of events we will hold in memory, per "network-event". This is extremely useful for limiting the upper memory-bound during large scale backfill operations for high-frequency events (like ERC20 transfers). What happens if the handler does not release events as fast as they are queried? Well, a backlog of events we've indexed would build up in memory and ultimately the process would OOM and be killed. We avoid that by maintaining a bounded channel (buffer) of events. This way when the handler is ready it will pull the next event, and it will trigger a new indexing fetch to fill the freed slot. The default should be enough to balance memory use with high throughput; however, it can be tweaked to constrain memory by lowering the value, or to potentially increase throughput by raising it. :::info This concept is known as "back-pressuring". ::: ```yaml [rindexer.yaml] name: rIndexer description: My native transfers rindexer project repository: https://github.com/joshstevens19/rindexer config: buffer: 1 // [!code focus] ``` ### callback\_concurrency *Default: `2`* :::warn When "index\_event\_in\_order" is enabled for an event, it will override this setting with `1` to ensure FIFO ordering. ::: This setting controls the "network-event" handler callback rate. This allows us to have `n` concurrent handlers being called per "network-event".
This may or may not be desirable based on the use-case and the code in the handler callback function. A case where it may not be desirable is any kind of "per network-event global locking", where running 2 batches in parallel would simply result in one batch holding the other up.

Imagine setting this to some very high number, `999999`, representing unbounded concurrency. In this case there is essentially no "back-pressure". That would work where events are simply being discarded, kept in memory, or handled by some other hyper-efficient mechanism. But in reality, the most common case for indexing is persisting events to a database, where there are factors such as data structure locks, database connection pool limits, and resource constraints. This means we cannot reasonably benefit from increasing this number too high; on the contrary, we can suffer a decrease in throughput due to lock contention, and unwanted situations like connection pool exhaustion, deadlocks, and more.

You may benefit from increasing this above `2` for very simple workloads, but generally a value of `1` or `2` is optimal.

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
config:
  callback_concurrency: 2 // [!code focus]
```

### timestamp\_sample\_rate

Optionally configure a sample rate to improve the efficiency of large-block-range requests by sampling those blocks. If you do not want to sample, either omit the option or set it to `1.0`.

:::warning
Sampling is a tradeoff and cannot guarantee 100% accurate timestamps, but it is far more efficient whilst retaining high accuracy. Only enable sampling if you can accept small inaccuracies in timestamps.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
config:
  timestamp_sample_rate: 0.1 // [!code focus]
```

## Contracts

The list of contracts to index for this indexer.

:::info
You can have multiple contracts in an indexer.
:::

### name

This is the name of the contract to index. rindexer uses this name for the database tables it generates, as well as in the generated Rust code if you are using a Rust project.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH // [!code focus]
```

### details

The details for the contract, mapping it to the network and contract address.

#### network

The network name to listen for events on; this should match the network name in the networks section of the YAML.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum // [!code focus]
```

#### address

:::info
The address or addresses of the contract(s) to listen for events on. Only one of `address`, `filter` or `factory` can be provided for a given contract's details.
:::

The contract address to listen for events on.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus]
```

To listen to many contract addresses you can provide an array of addresses.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address:
    - "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus]
    - "0x2FD5c1659A82E87217DF254f3D4b71A22aE43eE1" // [!code focus]
```

#### filter

:::info
Only one of `address`, `filter` or `factory` can be provided for a given contract's details.
:::

If you wish to filter on events only, for example you want all Transfer events from all contracts, you can use the filter.

:::warning
You currently cannot mix and match `address` and `filter` within the same contract definition. This means if you want to index on a filter and also index on an address, you will need 2 separate contract definitions in the YAML file.
:::

##### event\_name

The event name to filter on; it must match the ABI event name.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: ERC20Transfer
  details:
  - network: ethereum
    filter: // [!code focus]
      event_name: Transfer // [!code focus]
```

##### Index more than 1 filter for the contract

You can pass in an array of event names to index on the filter.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: ERC20 // [!code focus]
  details:
  - network: ethereum
    filter: // [!code focus]
    - event_name: Transfer // [!code focus]
    - event_name: Approval // [!code focus]
```

#### factory

:::info
Only one of `address`, `filter` or `factory` can be provided for a given contract's details.
:::

Some contracts are deployed through a factory contract (e.g. Uniswap V3). If you wish to track events only from factory-deployed addresses, use the `factory` filter.

:::warning
The factory filter requires you to specify which events should be included when indexing, via the `include_events` property on the contract.
:::

##### name

The name of the factory contract to index.

##### address

The factory contract address to listen for events on. To listen to many factory contract addresses, you can provide an array of addresses.

##### abi

The ABI of the factory contract, pointing to the JSON file in the repository. It can be a relative path or a full path.

##### event\_name

The event name to filter on; it must match the ABI event name.

##### input\_name

The path to the factory-deployed contract address in the event inputs.
Supports deep property access in case of complex event types: `pool.address`.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: UniswapV3Pool
  details:
  - network: ethereum
    factory: // [!code focus]
      name: UniswapV3Factory // [!code focus]
      address: 0x1F98431c8aD98523631AE4a59f267346ea31F984 // [!code focus]
      abi: ./abis/UniswapV3Factory.abi.json // [!code focus]
      event_name: PoolCreated // [!code focus]
      input_name: "pool" // [!code focus]
```

If a factory deploys more than one contract in a single event, an array of inputs can be provided to track multiple factory-deployed addresses.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: UniswapV3Pool
  details:
  - network: ethereum
    factory: // [!code focus]
      name: UniswapV3Factory // [!code focus]
      address: 0x1F98431c8aD98523631AE4a59f267346ea31F984 // [!code focus]
      abi: ./abis/UniswapV3Factory.abi.json // [!code focus]
      event_name: PoolCreated // [!code focus]
      input_name: // [!code focus]
      - "token0" // [!code focus]
      - "token1" // [!code focus]
```

:::info
For factories that deploy a high volume of contracts, consider optimizing the event fetching logic to improve indexing performance. For detailed implementation guidance, refer to the [network configuration documentation](/docs/start-building/yaml-config/networks#get_logs_settings).
:::

#### indexed\_1, indexed\_2, indexed\_3

:::info
This is optional and can be used on both address and filter.
:::

Indexed means these values are stored in the log's topics rather than the data field when the event is emitted, and you can filter them on the JSON-RPC side so you only receive the events you want. In the EVM you can have up to 3 indexed fields to filter on.

`indexed_1`, `indexed_2` and `indexed_3` correspond to the order the fields are emitted in the event. So if an event has 3 indexed fields you can filter on all 3, any 2, or just 1 of them. Indexed filters are arrays, so you can match many values per field; the values within an array are `OR` filters, not `AND`.

Example ABI:

```json
{
  "anonymous": false,
  "inputs": [
    {
      "indexed": true, // [!code focus]
      "internalType": "address",
      "name": "owner",
      "type": "address"
    },
    {
      "indexed": true, // [!code focus]
      "internalType": "address",
      "name": "spender",
      "type": "address"
    },
    {
      "indexed": false, // [!code focus]
      "internalType": "uint256",
      "name": "value",
      "type": "uint256"
    }
  ],
  "name": "Approval", // [!code focus]
  "type": "event"
}
```

So this ABI says the inputs `owner` and `spender` are indexed and can be filtered on. `value` is not indexed, so you cannot filter on it.
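On the wire, indexed filters translate into the `topics` array of an `eth_getLogs` JSON-RPC request: position 0 is the event signature hash, and each subsequent position matches the corresponding indexed field, where an array at a position means "match any of these values". An illustrative request for the `Approval` ABI above, filtering `owner` to either of two addresses (indexed address values are left-padded to 32 bytes):

```json
{
  "jsonrpc": "2.0",
  "method": "eth_getLogs",
  "id": 1,
  "params": [
    {
      "address": "0xae78736cd615f374d3085123a210448e74fc6393",
      "topics": [
        "0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
        [
          "0x000000000000000000000000d87b8e0db0cf9cbf9963c035a6ad72d614e37fd5",
          "0x0000000000000000000000000338ce5020c447f7e668dc2ef778025ce398266b"
        ]
      ]
    }
  ]
}
```

This is why the arrays are `OR` filters: each array at a topic position matches any listed value, while different topic positions combine with `AND`.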
For example, if you wanted all rETH approvals where the owner is `0xd87b8e0db0cf9cbf9963c035a6ad72d614e37fd5` or `0x0338ce5020c447f7e668dc2ef778025ce398266b`, you could set the indexed filters like so:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    indexed_filters: // [!code focus]
    - event_name: Approval // [!code focus]
      indexed_1:
      - "0xd87b8e0db0cf9cbf9963c035a6ad72d614e37fd5" // [!code focus]
      - "0x0338ce5020c447f7e668dc2ef778025ce398266b" // [!code focus]
```

Another example using filters: if you wanted all approvals for any token where the owner is `0xd87b8e0db0cf9cbf9963c035a6ad72d614e37fd5` or `0x0338ce5020c447f7e668dc2ef778025ce398266b`, you could set the indexed filters like so:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    filter:
      event_name: Approval
    indexed_filters: // [!code focus]
    - event_name: Approval // [!code focus]
      indexed_1:
      - "0xd87b8e0db0cf9cbf9963c035a6ad72d614e37fd5" // [!code focus]
      - "0x0338ce5020c447f7e668dc2ef778025ce398266b" // [!code focus]
```

#### start\_block

The block to start indexing from; you can use the contract's deployment block if you wish to get everything.

:::info
This is optional, but most people will want to use it. If you do not provide a start block, rindexer will index from the current block and then live index as new blocks come in.
Important to know: this will NOT track the last synced block, so when you stop and restart the indexer it will start from the latest block again. You can read more about this [here](/docs/start-building/live-indexing-and-historic).
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000 // [!code focus]
```

#### end\_block

:::info
This is optional. If you do not provide an end block, it will index all the data and then live index as new blocks come in. You can read more about this [here](/docs/start-building/live-indexing-and-historic).
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056 // [!code focus]
```

#### Multiple Networks

You can have multiple networks for the same contract; this is useful if you have a contract that is deployed on multiple networks.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks: // [!code focus]
- name: ethereum // [!code focus]
  chain_id: 1 // [!code focus]
  rpc: https://mainnet.gateway.tenderly.co // [!code focus]
- name: base // [!code focus]
  chain_id: 8453 // [!code focus]
  rpc: https://base.gateway.tenderly.co // [!code focus]
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details: // [!code focus]
  - network: ethereum // [!code focus]
    address: "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus]
    start_block: 18600000 // [!code focus]
    end_block: 18718056 // [!code focus]
  - network: base // [!code focus]
    address: "0xba25348cd615f374d3085123a210448e74fa3333" // [!code focus]
    start_block: 18118056 // [!code focus]
    end_block: 18918056 // [!code focus]
```

### abi

The ABI of the contract, pointing to the JSON file in the repository. It can be a relative path or a full path.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json // [!code focus]
```

#### Many ABIs

If you need to use many ABIs for a single contract you can pass in an array; this is useful if you have a contract which has had several different implementation ABIs.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi:
  - ./abis/RocketTokenRETH.abi.json // [!code focus]
  - ./abis/RocketTokenRETH2.abi.json // [!code focus]
```

### include\_events

The events you wish to include for **raw event logging** - each event creates a row in its own table (e.g., `transfer`, `approval`).

:::info
This is optional. If you only want custom [tables](#tables) without raw event logging, you can omit this entirely. If neither `include_events` nor `tables` is specified, all events in the ABI will be indexed.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events: // [!code focus]
  - Transfer // [!code focus]
  - Approval // [!code focus]
```

### index\_event\_in\_order

rindexer was built to be as fast as it can be, so any blocking process holds indexing up; the more concurrency the better. Any events you wish to index in the order they were emitted by the contract can be put in this list. The more events you put here, the slower the indexer will be, as it has to wait for the previous events to be indexed before it can index the next ones.
:::info
This is optional; if you do not provide it, rindexer assumes speed is more important than order.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
  index_event_in_order: // [!code focus]
  - Transfer // [!code focus]
  - Approval // [!code focus]
```

### dependency\_events

:::warning
If you have defined custom dependency\_events and are using [relationships](/docs/start-building/yaml-config/storage#relationships), you will need to define the relationship in the `dependency_events` manually, as rindexer cannot merge relationships with custom dependency\_events.
:::

:::warning
Also note that cross-contract relationships will not be applied automatically; you will need to define them manually in the YAML. If you do not, rindexer will panic and let you know that you have to define the [dependency\_events](/docs/start-building/yaml-config/contracts#dependency_events).
:::

rindexer was built to be as fast as it can be, so any blocking process holds indexing up; the more concurrency the better. Any events which depend on each other can be put in the `dependency_events` list; they will be processed in the order they appear in the list.

* `events` = process these events
* `then` = after you have processed the `events` above, process these events

If you do not put an event in `dependency_events` it will be deemed a non-blocking event and will be processed as soon as it can be.
:::info
This is optional; if you do not provide it, rindexer assumes speed is more important than order.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
  dependency_events: // [!code focus]
    events:
    - Transfer // [!code focus]
    then:
      events:
      - Approval // [!code focus]
```

#### Cross Contract Dependency Events

You can also define blocking dependency events across contracts; this is useful if you have many contracts which emit data but depend on each other.

:::info
The WrappedRocketPoolETH example below does not exist on chain; it is just an example of how you can use dependency events.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: WrappedRocketPoolETH // [!code focus]
  details:
  - network: ethereum
    address: "0x2FD5c1659A82E87217DF254f3D4b71A22aE43eE8"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/WrappedRocketTokenRETH.abi.json
  include_events: // [!code focus]
  - Approval // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer // [!code focus]
  dependency_events: // [!code focus]
    events:
    - Transfer // [!code focus]
    then:
      events:
      - contract_name: WrappedRocketPoolETH // [!code focus]
        event_name: Approval // [!code focus]
```

So now `WrappedRocketPoolETH` > `Approval` will not be processed until `RocketPoolETH` > `Transfer` is processed.

### tables

:::tip[Recommended for No-Code Projects]
**Custom tables are the most powerful feature for no-code indexing.** Instead of just logging raw events, you can maintain derived state like token balances, NFT ownership, counters, and cross-chain aggregations - all without writing any Rust code.
:::

Custom tables let you define exactly what data you want to track and how events should update it. When using `tables`, you don't need `include_events` - rindexer will automatically subscribe to the events your tables reference.
**Quick example - Track ERC20 token balances:**

```yaml [rindexer.yaml]
contracts:
- name: USDC
  details:
  - network: ethereum
    address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"
    start_block: 18600000
  abi: ./abis/ERC20.json
  tables: // [!code focus]
  - name: balances // [!code focus]
    columns: // [!code focus]
    - name: holder // [!code focus]
    - name: balance // [!code focus]
      default: "0" // [!code focus]
    events: // [!code focus]
    - event: Transfer // [!code focus]
      operations: // [!code focus]
      - type: upsert // [!code focus]
        where: // [!code focus]
          holder: $to // [!code focus]
        if: "$to != 0x0000000000000000000000000000000000000000" // [!code focus]
        set: // [!code focus]
        - column: balance // [!code focus]
          action: add // [!code focus]
          value: $value // [!code focus]
      - type: upsert // [!code focus]
        where: // [!code focus]
          holder: $from // [!code focus]
        if: "$from != 0x0000000000000000000000000000000000000000" // [!code focus]
        set: // [!code focus]
        - column: balance // [!code focus]
          action: subtract // [!code focus]
          value: $value // [!code focus]
```

For complete documentation including column types, operations (`upsert`, `update`, `delete`), set actions (`add`, `subtract`, `max`, `min`), condition expressions (`if`), `global` tables, `cross_chain` aggregation, and more examples: [**Custom Tables Documentation β†’**](/docs/start-building/tables)

### reorg\_safe\_distance

Reorgs can happen on chain; this is when a block is removed from the chain and replaced with another block, which can corrupt the indexer's state. If you turn `reorg_safe_distance` on, rindexer will keep a safe distance from the live latest block to avoid any reorg issues. For live indexing, rindexer also supports reactive reorg detection and recovery via the `reorg_handling` network config, which works independently of or alongside `reorg_safe_distance`.

:::info
This is optional; if you do not provide it, rindexer will index the latest blocks instantly.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
  reorg_safe_distance: true // [!code focus]
```

### generate\_csv

If you wish to generate a CSV file of the indexed data you can turn this on. It will be ignored if you do not have CSV storage enabled. By default, if this is not supplied and CSV storage is enabled, a CSV file will be generated.

:::info
This is optional; if you do not provide it, a CSV file will be generated whenever CSV storage is enabled.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts: // [!code focus]
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
  generate_csv: true // [!code focus]
```

### streams

You can configure streams to stream the data to other services; this is useful if you want other services to consume the indexed data. You can read more about it [here](/docs/start-building/streams).

### chat

You can configure chat integrations to send messages. You can read more about it [here](/docs/start-building/chatbots).

## global

Global YAML configuration.

### etherscan\_api\_key

:::info
This is optional and will use a shared fallback key if not provided.
The shared key can be rate limited as many people may be using it. If you use `rindexer add` often, we advise providing your own key.
:::

We advise you to put the Etherscan API key in an environment variable.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global: // [!code focus]
  etherscan_api_key: ${ETHERSCAN_API_KEY} // [!code focus]
```

### contracts

:::info
If you are building a no-code project you can skip this section; it is for Rust projects only.
:::

The contracts section of the global YAML config allows you to define contracts which can be used in the indexers. You can define many contracts in a single YAML file.

#### name

The name of the contract; it should be unique within the YAML file.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global: // [!code focus]
  contracts: // [!code focus]
  - name: USDT // [!code focus]
```

#### details

The details of the contract.

##### address

The address of the contract.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global:
  contracts:
  - name: USDT
    details: // [!code focus]
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7 // [!code focus]
```

##### network

The network the contract is on; this should match the network name in the networks section of the YAML file.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global:
  contracts:
  - name: USDT
    details:
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7
      network: ethereum // [!code focus]
```

#### abi

The path to the ABI file for the contract. It can be a relative or full path.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global:
  contracts:
  - name: USDT
    details:
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7
      network: ethereum
    abi: ./abis/erc20.abi.json // [!code focus]
```

### Multiple Contracts

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum // [!code focus]
  chain_id: 1 // [!code focus]
  rpc: https://mainnet.gateway.tenderly.co // [!code focus]
- name: base // [!code focus]
  chain_id: 8453 // [!code focus]
  rpc: https://mainnet.base.org // [!code focus]
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global: // [!code focus]
  contracts: // [!code focus]
  - name: USDT // [!code focus]
    details: // [!code focus]
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7 // [!code focus]
      network: ethereum // [!code focus]
    - address: 0xfde4C96c8593536E31F229EA8f37b2ADa2699bb2 // [!code focus]
      network: base // [!code focus]
    abi: ./abis/erc20.abi.json // [!code focus]
```

## graphql

To define GraphQL settings you can use the `graphql` section of the YAML configuration file.

:::info
This is optional if you are happy with the default settings, but it is worth knowing what you can configure.
:::

### port

You can use the `--port` flag at run time to override the port for the GraphQL server, but this YAML config allows you to set a default port number. If not set, it defaults to port 3001.

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
graphql:
  port: 3001 // [!code focus]
```

### disable\_advanced\_filters

rindexer GraphQL supports [advanced filtering](/docs/accessing-data/graphql#filter), but these filters can easily be abused and cause performance issues. If you wish to disable advanced filtering you can set this to `true`. By default advanced filtering is enabled, i.e. this is set to `false`.

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
graphql:
  disable_advanced_filters: true // [!code focus]
```

### filter\_only\_on\_indexed\_columns

When your database holds a lot of data, querying can become slow; indexes help speed up queries and are critical for the performance of the GraphQL server. By default rindexer lets you filter on any column, even if it is not indexed, but this setting restricts GraphQL filtering to indexed columns only.
You can define your own indexes in the [storage](/docs/start-building/yaml-config/storage#indexes) section of the YAML configuration file.

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
graphql:
  filter_only_on_indexed_columns: true // [!code focus]
```

## Overview of the YAML Configuration File

The YAML configuration file is the heart of your rindexer project. It defines the project's name, description, repository, and the contracts that will be used to index the data. This file is used to set up the project and configure the indexing tasks that will be performed.

**YAML is case-sensitive, so make sure to use the correct case when defining the fields in the configuration file.**

:::info
YAML files can be mapped to environment variables to store sensitive information, such as RPC urls or other credentials. The syntax for this in the YAML is `${ENV_VARIABLE_NAME}`.
:::

### YAML structure

* [Top level fields](/docs/start-building/yaml-config/top-level-fields) - The top-level fields of the YAML configuration file.
* [Networks](/docs/start-building/yaml-config/networks) - The networks to listen for events on are defined in the YAML configuration file.
* [Storage](/docs/start-building/yaml-config/storage) - The storage configuration is defined in the YAML configuration file.
* [Contracts](/docs/start-building/yaml-config/contracts) - The contracts to index are defined in the YAML configuration file.
* [GraphQL](/docs/start-building/yaml-config/graphql) - The GraphQL configuration is defined in the YAML configuration file.
* [Global](/docs/start-building/yaml-config/global) - The global events to listen for are defined in the YAML configuration file.
* [Config](/docs/start-building/yaml-config/config) - The advanced configuration parameters for the indexer.

#### Environment Variables

YAML files can be mapped to environment variables to store sensitive information, such as RPC urls or other credentials. This also supports per-environment mappings, allowing you to store different values for different environments. The syntax for this in the YAML is `${ENV_VARIABLE_NAME}`. This can be used in ANY field in the YAML file.

Example:

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: ${RPC_URL} // [!code focus]
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

### Example YAML no-code configuration file

#### For single contract address

Filter events for a specific address.

##### Historic

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

##### Live

With no start or end block set, rindexer will index new blocks live as they are produced.
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

##### Live and historic

With no end block set, rindexer will index from the start block to the latest block, then index new blocks live as they are produced.

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
        start_block: 18600000
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

#### For many contract addresses

Filter events for many contract addresses.

##### Historic

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address:
          - "0xae78736cd615f374d3085123a210448e74fc6393" // [!code focus]
          - "0x2FD5c1659A82E87217DF254f3D4b71A22aE43eE1" // [!code focus]
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

#### For address or addresses with indexed filter

Filter events for a specific address or array of addresses, filtering on indexed fields. You can read more about indexed fields [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3).
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: RocketPoolETH
    details:
      - network: ethereum
        address: "0xae78736cd615f374d3085123a210448e74fc6393"
        indexed_filters: // [!code focus]
          - event_name: Transfer // [!code focus]
            indexed_1: // [!code focus]
              - 0xd87b8e0db0cf9cbf9963c035a6ad72d614e37fd5 // [!code focus]
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
      - Transfer
      - Approval
```

#### Filter for an event across all contracts

The historic, live, and historic-and-live patterns shown above apply to every example. You can read more about these terms [here](/docs/start-building/live-indexing-and-historic).

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: TransferEvents
    details:
      - network: ethereum
        filter:
          event_name: Transfer
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/ERC20.abi.json
```

#### Filter for an event across all contracts against indexed values

The historic, live, and historic-and-live patterns shown above apply to every example. You can read more about these terms [here](/docs/start-building/live-indexing-and-historic).
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: TransferEventForAddress
    details:
      - network: ethereum
        filter:
          event_name: Transfer
        indexed_filters:
          - event_name: Transfer
            indexed_1:
              - 0x4A1a2197f307222cD67A1762D9A352F64558d9Be
        start_block: 18600000
        end_block: 18718056
    abi: ./abis/ERC20.abi.json
```

#### Filter for address or addresses deployed by factory contract

Filter events that are emitted from a known factory-deployed contract. You can read more about factory filtering [here](/docs/start-building/yaml-config/contracts#factory).

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
  - name: UniswapV3Pool
    details:
      - network: ethereum
        factory:
          name: UniswapV3Factory
          address: 0x1F98431c8aD98523631AE4a59f267346ea31F984
          abi: ./abis/UniswapV3Factory.abi.json
          event_name: PoolCreated
          input_name: "pool"
```

## Native Transfers

A special opt-in configuration for indexing native token transfers, such as "ETH", in the form of "ERC20"-like transfer events. The event is defined as if you were indexing an ERC20 Transfer event.

:::warning
This is **experimental** functionality which has not yet been extensively tested in production.
:::

Supported configuration modes:

* [Simple](/docs/start-building/yaml-config/native-transfers#simple) - Simple opt-in (for csv and postgres)
* [Complex](/docs/start-building/yaml-config/native-transfers#complex) - Complex indexing configuration with stream providers

## Simple

The "simple" opt-in is done by including the top level yaml `native_transfers: true`.
This has a few special properties and is designed to kickstart simple persistence-based indexing of native transfers. By default, this means:

* All networks defined in `networks` will be enabled for native transfer indexing
* All enabled `storage` options will be used
* Native transfers will be indexed in `live` mode, from the latest block onwards

The event will be persisted to storage under the event name `NativeTransfer`.

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    # The rpc provider must support the `trace_block` rpc method in simple mode
    rpc: https://mainnet.gateway.tenderly.co // [!code focus]
storage:
  postgres:
    enabled: true
native_transfers: true // [!code focus]
contracts: []
```

## Complex

The complex configuration is designed for more powerful use cases. Specifically if your use case is one of the following:

1. You want historical `native transfer` indexing
2. You want to use one of the `stream` or `chat` providers
3. You want to conditionally filter or alias the event name
4. You want to only opt-in to specific networks for `native transfer` events

If you provide any `networks` in the `native_transfers` config it is equivalent to setting `native_transfers: true` and you will be opted in to native transfer indexing for that network.

### networks

The network name to listen for events on; this should match the network name in the networks section of the YAML.
```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum // [!code focus]
contracts: []
```

#### start\_block

The block to start indexing from; you can use the deployed block if you wish to index everything.

:::info
This is optional, but most people will want to use it. If you do not provide a start block it will index data from now on and live index as new blocks come in. Importantly, this mode will NOT track the last synced block, so when you stop and restart the indexer it will start again from the latest block. You can read more about this [here](/docs/start-building/live-indexing-and-historic).
:::

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 0 // [!code focus]
contracts: []
```

#### end\_block

:::info
This is optional; if you do not provide an end block it will index all the data and then live index as new blocks come in. You can read more about this [here](/docs/start-building/live-indexing-and-historic).
:::

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 18600000
      end_block: 18718056 // [!code focus]
contracts: []
```

#### method

:::info
This is optional; if you do not provide a method it will default to `eth_getBlockByNumber`, which is the most efficient, best supported, and simplest RPC method available.
:::

The method field is an advanced option and typically does not need to be defined. By default it will use `eth_getBlockByNumber`; only override this manually if your RPC provider does not have adequate support or you have some reason to prefer `trace_block` or `debug_traceBlockByNumber`.

Valid options are: `eth_getBlockByNumber`, `debug_traceBlockByNumber`, or `trace_block`.
:::code-group

```yaml [eth_getBlockByNumber]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 18600000
      end_block: 18718056
      method: eth_getBlockByNumber // [!code focus]
contracts: []
```

```yaml [debug_traceBlockByNumber]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 18600000
      end_block: 18718056
      method: debug_traceBlockByNumber // [!code focus]
contracts: []
```

```yaml [trace_block]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 18600000
      end_block: 18718056
      method: trace_block // [!code focus]
contracts: []
```

```yaml [default]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks: // [!code focus]
    - network: ethereum
      start_block: 18600000
      end_block: 18718056
contracts: []
```

:::

#### Multiple Networks

You can have multiple networks; this is useful if you need to track native balances across a variety of networks.
```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks: // [!code focus]
  - name: ethereum // [!code focus]
    chain_id: 1 // [!code focus]
    rpc: https://mainnet.gateway.tenderly.co // [!code focus]
  - name: base // [!code focus]
    chain_id: 8453 // [!code focus]
    rpc: https://base.gateway.tenderly.co // [!code focus]
storage:
  postgres:
    enabled: true
native_transfers: // [!code focus]
  networks: // [!code focus]
    - network: ethereum // [!code focus]
      start_block: 18600000
      end_block: 18718056
    - network: base // [!code focus]
      start_block: 18118056
      end_block: 18918056
```

### reorg\_safe\_distance

Reorgs can happen on chain: a block is removed from the chain and replaced with another block, which can corrupt the indexer's state. If you turn `reorg_safe_distance` on, rindexer will keep a safe distance from the latest live block to avoid any reorg issues. For reactive reorg handling during live indexing, see the `reorg_handling` network config, which works independently of or alongside `reorg_safe_distance`.

:::info
This is optional; if you do not provide it, rindexer will index the latest blocks as soon as they are made available by the provider.
:::

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
  - name: base
    chain_id: 8453
    rpc: https://base.gateway.tenderly.co
native_transfers: // [!code focus]
  networks:
    - network: ethereum
    - network: base
  reorg_safe_distance: true // [!code focus]
contracts: []
```

### generate\_csv

If you wish to generate a CSV file of the indexed data you can turn this on. It will be ignored if you do not have the CSV storage enabled. By default, if this is not supplied and CSV storage is enabled, a CSV file will be generated.
:::info
This is optional; if you do not provide it, a CSV file will be generated whenever CSV storage is enabled.
:::

```yaml [rindexer.yaml]
name: rIndexer
description: My native transfers rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers: // [!code focus]
  networks:
    - network: ethereum
  generate_csv: true // [!code focus]
contracts: []
```

### streams

The stream options for `native_transfers` are equivalent to contract event indexing, with one exception: all streams provided will have the `NativeTransfer` event enabled by default, so it does not need to be explicitly defined unless special logic (e.g. aliasing event names) is desired.

:::info
You can configure streams to stream the data to other services, this is useful if you want to use other services to index the data. You can read more about it [here](/docs/start-building/streams).
:::

### Simple stream definition

:::info
Notice we ***don't*** define `events` under the `topics` in this SNS stream example.
:::

That is because rindexer knows the single `NativeTransfer` event should be included by default.

```yaml [rindexer.yaml]
name: indexer
description: rindexer native transfers demo
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks:
    - network: ethereum
  reorg_safe_distance: true
  streams:
    sns:
      aws_config:
        region: us-east-1
        access_key: ${AWS_ACCESS_KEY_ID}
        secret_key: ${AWS_SECRET_ACCESS_KEY}
      topics: // [!code focus]
        - topic_arn: arn:aws:sns:us-east-1:000000000000:ethereum-transfers // [!code focus]
          networks: // [!code focus]
            - ethereum // [!code focus]
contracts: []
```

### Explicit `events` definition

In this case, we want to explicitly configure the stream processing for the event.

:::info
In order to add additional stream-event config, we **MUST** define the event with the name `NativeTransfer`.
:::

```yaml [rindexer.yaml]
name: indexer
description: rindexer native transfers demo
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks:
    - network: ethereum
  reorg_safe_distance: true
  streams:
    sns:
      aws_config:
        region: us-east-1
        access_key: ${AWS_ACCESS_KEY_ID}
        secret_key: ${AWS_SECRET_ACCESS_KEY}
      topics:
        - topic_arn: arn:aws:sns:us-east-1:000000000000:ethereum-transfers
          networks:
            - ethereum
          events: // [!code focus]
            - event_name: NativeTransfer // [!code focus]
              alias: ETHTransfer // [!code focus]
contracts: []
```

### chat

You can configure chat providers to send messages. You can read more about it [here](/docs/start-building/chatbots).

## networks

The networks YAML config describes the networks you wish to enable.

:::info
You can have multiple networks in a single YAML file.
:::

### Fields

#### name

The name of the network. It should be unique within the YAML, so you cannot have two networks with the same name in the same file.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum // [!code focus]
```

#### chain\_id

The chainId of the network.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1 // [!code focus]
```

#### rpc

The rpc url for the network.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co // [!code focus]
```

You can use [erpc](https://rindexer.xyz/docs/references/rpc-node-providers#rpc-proxy-and-caching) for load-balancing between multiple rpc endpoints (with failover, re-org aware caching, auto-batching, rate-limiters, auto-discovery of node providers, etc.)

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: http://erpc:4000/main/evm/1 // [!code focus]
```

We advise using environment variables for the rpc url to avoid checking in sensitive information.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: ${ETHEREUM_RPC} // [!code focus]
```

You can read more about environment variables in the [Environment Variables](/docs/start-building/yaml-config#environment-variables) section.

#### max\_block\_range

:::info
This field is optional and will slow down indexing if applied; rindexer is fastest when you use an RPC provider that can predict the next block ranges when fetching logs. You can read a bit more about RPC providers [here](/docs/references/rpc-node-providers#rpc-node-providers).
:::

Sets the max block range for the network, meaning that when rindexer fetches logs it will not request more than the max block range per request.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    max_block_range: 10000 // [!code focus]
```

#### block\_poll\_frequency

:::info
This field is optional and may slow down indexing if applied; this is an advanced setting to be used with caution.
:::

Sets the block poll frequency for the network, allowing a trade-off between RPC use and live indexing speed. The default setting aggressively polls for new blocks to ensure that we index as quickly as possible. This is not always wanted: you can instead configure the rpc-"optimized" mode, manually define the millisecond polling rate per network, or manually define a divisor of the block time.

:::code-group

```yaml [rapid (default)]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    block_poll_frequency: rapid // [!code focus]
    # This will rapid-poll, roughly every ~50ms.
```

```yaml [optimized]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    block_poll_frequency: optimized // [!code focus]
    # Reduce RPC call volume (at the cost of slightly slower indexing) whilst still aiming for a non-human-noticeable indexing lag.
```

```yaml [division]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    block_poll_frequency: "/3" // [!code focus]
    # At a 12s blocktime, this will poll around every 4s, i.e. `12s / 3`.
```

```yaml [millis]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    block_poll_frequency: 1000 // [!code focus]
    # Poll every 1000ms for the network
```

:::

#### compute\_units\_per\_second

:::info
This field is optional
:::

The compute units per second for the network.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    compute_units_per_second: 660 // [!code focus]
```

#### get\_logs\_settings

:::info
This field is optional. It is an advanced setting to be used with caution.
:::

Advanced configuration options that allow fine-grained control of event fetching logic.

##### address\_filtering

Specifies how events that require address filtering (ones that use either an address filter or a factory filter) are fetched from the network. Can be one of:

* with `max_address_per_get_logs_request` configuration *(default behaviour)* - events are fetched with an addresses filter; log fetching happens in batches of address chunks up to the specified value. Useful when events happen often but the number of watched addresses is not huge. The default value is 1000 addresses, which fits most RPC provider limits.
* `in-memory` - all matching events are fetched and then filtered in memory by address. Useful when events do not happen often but a huge number of addresses are being watched.
:::code-group

```yaml [with max_address_per_get_logs_request]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    get_logs_settings:
      address_filtering:
        max_address_per_get_logs_request: 100000
```

```yaml [in-memory]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    get_logs_settings:
      address_filtering: "in-memory"
```

:::

#### disable\_logs\_bloom\_checks

:::warning
This field is optional and should only be turned on if you know what you are doing. You should only enable this if you are using a chain which does not support logs blooms. Logs blooms allow rindexer to skip calling `eth_getLogs` on blocks which do not contain the events you care about, which is a huge performance gain for the indexer alongside a saving on the RPC bill. If you are using a chain which does not support logs blooms you can enable this to skip the bloom checks.
:::

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
    disable_logs_bloom_checks: true // [!code focus]
```

#### multicall3\_address

:::info
This field is optional. By default, rindexer uses the standard Multicall3 address which is deployed on 250+ chains.
:::

When using `$call()` in [custom tables](/docs/start-building/tables), rindexer automatically batches view calls using [Multicall3](https://www.multicall3.com/) for significantly improved performance. This can reduce indexing time by 5-10x when your tables use on-chain view calls.
The standard Multicall3 contract (`0xcA11bde05977b3631167028862bE2a173976CA11`) is deployed on most EVM chains. See the [full deployment list](https://www.multicall3.com/deployments). If your chain uses a different Multicall3 address, you can specify it:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: my_custom_chain
    chain_id: 12345
    rpc: https://rpc.mycustomchain.com
    multicall3_address: "0xYourCustomMulticall3Address" // [!code focus]
```

If Multicall3 is not available on a network, rindexer will automatically detect this and fall back to individual RPC calls.

#### reth

:::warning
Reth mode requires running a Reth archive node and is intended for advanced users. For more information on setting up Reth, visit [reth's official documentation](https://reth.rs/run/ethereum).
:::

Configure rindexer to use a local reth node for indexing. This provides a direct connection to Reth with minimal latency and native reorg handling.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co # Fallback RPC
    reth: // [!code focus]
      enabled: true // [!code focus]
      logging: true # Show Reth logs in stdout // [!code focus]
      cli_args: // [!code focus]
        - "--datadir /data/reth" // [!code focus]
        - "--http" // [!code focus]
        - "--full false" # Archive mode // [!code focus]
        - "--authrpc.jwtsecret /path/to/jwt.hex" // [!code focus]
```

##### enabled

Enable or disable the reth integration for this network.

##### logging

Show Reth logs in stdout (useful for debugging).

##### cli\_args

Array of Reth CLI arguments in "flag value" format.
Common arguments include:

* `--datadir`: Path to the reth data directory
* `--authrpc.jwtsecret`: Path to the JWT secret file for authenticated RPC
* `--authrpc.port`: The port for the auth RPC server (default: 8551)
* `--full`: Whether to run as a full node (use `false` for archive node)
* `--http`: Enable HTTP RPC server

### Multiple Networks

You can have as many networks as you want in the YAML file.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks: // [!code focus]
  - name: ethereum // [!code focus]
    chain_id: 1 // [!code focus]
    rpc: https://mainnet.gateway.tenderly.co // [!code focus]
  - name: base // [!code focus]
    chain_id: 8453 // [!code focus]
    rpc: https://mainnet.base.org // [!code focus]
```

## storage

The storage YAML config describes the storage providers you wish to enable.

:::info
Avoid storing sensitive information directly in YAML files; you can use environment variables instead. The syntax for this in the YAML is `${ENV_VARIABLE_NAME}`.
:::

### postgres

If you wish to store the data in a postgres database you can enable the postgres storage.

:::info
This is optional; if you do not wish to store the data in a postgres database you can leave this section out of your YAML.
:::

#### Internal tables

When rindexer runs with postgres it uses the database to manage some internal state, including the last seen block per network and contract, and cached records of the yaml so it can remove old indexes and foreign keys in the database. These tables live in a schema called `rindexer_internal` and should never be modified manually.

#### Own connection string

If you are deploying the indexer or want to point to an external database you can supply your own connection string; to do this, define it in the `.env` file.
```bash
DATABASE_URL=postgresql://[user[:password]@][host][:port][/dbname]
```

:::info
`sslmode=require` is supported as well, just include it in the connection string. If you are using AWS RDS, you will need to include the RDS certificates in your connection configuration. You can find the necessary certificates in the [AWS RDS SSL documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html).
:::

#### enabled

Whether postgres is enabled. If you do not wish to use postgres you can set this to false or remove the postgres section from storage completely.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true // [!code focus]
```

#### drop\_each\_run

rindexer keeps track of the last synced block for each contract and event, meaning that when you stop and restart the indexer it will resume from the last synced block. rindexer will also create tables and indexes for you again, which could clash if you are using rindexer to grab throwaway data and want to start over each time you run it. You can use `drop_each_run` to drop all the data for the indexer before starting, which ensures you start fresh.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    drop_each_run: true // [!code focus]
```

#### disable\_create\_tables

:::info
This is only relevant for rust projects; in no-code projects, rindexer has to create the tables when postgres is enabled.
:::

If you do not wish for rindexer to create the database tables for you automatically you can set this to true. By default it will create the tables for you.
When this is disabled it will not write the sql in the handlers for you either. This field is optional and can be ignored if you do not need it.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    disable_create_tables: true // [!code focus]
```

#### indexes

When your database ends up holding a lot of data, querying it can become slow; indexes can help speed up the queries and are critical for the performance of the GraphQL server. By default rindexer lets you filter on any column even if it is not indexed, but here you can define the common filtering you are going to use in your application. rindexer sees the ABIs as the source of truth and allows you to map against information you already know; rindexer will generate all the SQL and naming for you based on this.

:::info
When you start up rindexer it will drop any old and new indexes, resync the historic data, then apply them again before indexing the live data. Having indexes in place while you are writing data to the database can drastically slow down indexing speed and write speed to the database.
:::

##### global\_injected\_parameters

:::info
This is optional
:::

rindexer will inject common parameters into the event tables for you:

* `contract_address` - The contract address of the event
* `tx_hash` - The transaction hash of the event
* `block_number` - The block number of the event
* `block_hash` - The block hash of the event
* `network` - The network of the event
* `tx_index` - The transaction index of the event
* `log_index` - The log index of the event

If you start seeing your queries become slow when using any of these to filter, you can add them to the `global_injected_parameters` and rindexer will apply the indexes on all tables it generates.
For example, below I want to filter on the block number and network and my queries are slow, so I can add this index:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      global_injected_parameters: // [!code focus]
        - block_number // [!code focus]
        - network // [!code focus]
```

##### contracts

You can then define indexes for your contracts.

##### name

As you can have multiple contracts in your project, you have to map the name to the contract so rindexer can read the ABIs.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub // [!code focus]
```

##### injected\_parameters

:::info
This is optional
:::

This is the same as [the global injected parameters](/docs/start-building/yaml-config/storage#global_injected_parameters) but will only apply to the events of this contract.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          injected_parameters: // [!code focus]
            - block_number // [!code focus]
            - network // [!code focus]
```

##### events

You can define indexes for specific events in the contract. Events are tables, and you can build indexes with the values of the ABI; rindexer will transform them into the SQL you need. An event can have multiple indexes.
###### name

The name of the event to apply the indexes to.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          events: // [!code focus]
            - name: QuoteCreated // [!code focus]
```

###### injected\_parameters

:::info
This is optional
:::

This is the same as [the global injected parameters](/docs/start-building/yaml-config/storage#global_injected_parameters) but will only apply to the single event.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          events: // [!code focus]
            - name: QuoteCreated
              injected_parameters: // [!code focus]
                - tx_hash // [!code focus]
```

###### indexes

You can define your indexes here - this allows you to define many indexes for the same event.

We will use this ABI as an example as it has tuples as well as root inputs.
```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        { "internalType": "uint256", "name": "profileId", "type": "uint256" },
        { "internalType": "string", "name": "contentURI", "type": "string" },
        { "internalType": "uint256", "name": "pointedProfileId", "type": "uint256" },
        { "internalType": "uint256", "name": "pointedPubId", "type": "uint256" },
        { "internalType": "uint256[]", "name": "referrerProfileIds", "type": "uint256[]" },
        { "internalType": "uint256[]", "name": "referrerPubIds", "type": "uint256[]" },
        { "internalType": "bytes", "name": "referenceModuleData", "type": "bytes" },
        { "internalType": "address[]", "name": "actionModules", "type": "address[]" },
        { "internalType": "bytes[]", "name": "actionModulesInitDatas", "type": "bytes[]" },
        { "internalType": "address", "name": "referenceModule", "type": "address" },
        { "internalType": "bytes", "name": "referenceModuleInitData", "type": "bytes" }
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams",
      "type": "tuple"
    },
    { "indexed": true, "internalType": "uint256", "name": "pubId", "type": "uint256" },
    { "indexed": false, "internalType": "bytes", "name": "referenceModuleReturnData", "type": "bytes" },
    { "indexed": false, "internalType": "bytes[]", "name": "actionModulesInitReturnDatas", "type": "bytes[]" },
    { "indexed": false, "internalType": "bytes", "name": "referenceModuleInitReturnData", "type": "bytes" },
    { "indexed": false, "internalType": "address", "name": "transactionExecutor", "type": "address" },
    { "indexed": false, "internalType": "uint256", "name": "timestamp", "type": "uint256" }
  ],
  "name": "QuoteCreated",
  "type": "event"
}
```

###### event\_input\_names

You may want to index one field, or you may wish to use a composite index to filter or sort by multiple columns.
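The examples below show rindexer deriving snake_case SQL names from the camelCase ABI names (with `.` tuple access becoming `_`). This is not rindexer's actual implementation, just a rough sketch of that naming convention so you can predict the generated column and index names:

```python
import re


def to_snake(name: str) -> str:
    """camelCase/PascalCase ABI name -> snake_case SQL name; '.' tuple access becomes '_'."""
    name = name.replace(".", "_")
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name).lower()


def index_name(event: str, input_names: list[str]) -> str:
    """Sketch of how a generated index name is composed from the event and its input names."""
    parts = "_".join(to_snake(n) for n in input_names)
    return f"idx_{to_snake(event)}_{parts}"


print(to_snake("quoteParams.referenceModule"))  # quote_params_reference_module
print(index_name("QuoteCreated", ["transactionExecutor"]))  # idx_quote_created_transaction_executor
```

These outputs match the generated SQL shown in the sections that follow.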
###### single root field

Let's say I want to add an index for `transactionExecutor`. I look in the ABI for that field and see it is not in a tuple but sits directly on the root of the inputs, so I take the input name and apply it to the yaml file.

```json
{
  ...
  "inputs": [
    ...
    {
      "indexed": false,
      "internalType": "address",
      "name": "transactionExecutor", // [!code focus]
      "type": "address"
    },
    ...
  ],
  "name": "QuoteCreated", // [!code focus]
  "type": "event"
},
```

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          events: // [!code focus]
            - name: QuoteCreated
              indexes: // [!code focus]
                - event_input_names: // [!code focus]
                    - transactionExecutor // [!code focus]
```

This will create a SQL index like the below:

```sql
CREATE INDEX idx_quote_created_transaction_executor ON lens_indexer_lens_hub_quote_created (transaction_executor);
```

:::info
Do not worry if you do not understand this; all you need to care about is that you can now filter on `transaction_executor` faster.
:::

###### tuple field

If you want to add an index on a field which is within a tuple you can do this easily by just mapping the object location. Let's say I want to add an index on the `quoteParams` `referenceModule` field.

```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        ...
        {
          "internalType": "address",
          "name": "referenceModule", // [!code focus]
          "type": "address"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    ...
  ],
  "name": "QuoteCreated",
  "type": "event"
},
```

I would just map this in the yaml file:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          events: // [!code focus]
            - name: QuoteCreated
              indexes: // [!code focus]
                - event_input_names: // [!code focus]
                    - "quoteParams.referenceModule" // [!code focus]
```

This will create a SQL index like the below:

```sql
CREATE INDEX idx_quote_created_quote_params_reference_module ON lens_indexer_lens_hub_quote_created (quote_params_reference_module);
```

:::info
Do not worry if you do not understand this; all you need to care about is that you can now filter on `quote_params_reference_module` faster.
:::

###### multiple indexed fields

You may want to index multiple fields if you are filtering or ordering on many fields. Composite indexes are supported in the SQL database, and you can create one easily by just mapping the object locations.

:::info
Composite indexes are very powerful and can give very high performance on SQL queries if you are filtering on many fields.
:::

Let's say I want to add an index on the `quoteParams` `referenceModule` field alongside the `transactionExecutor`.

```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        ...
        {
          "internalType": "address",
          "name": "referenceModule", // [!code focus]
          "type": "address"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    {
      "indexed": false,
      "internalType": "address",
      "name": "transactionExecutor", // [!code focus]
      "type": "address"
    },
  ],
  "name": "QuoteCreated",
  "type": "event"
},
```

I would just map this in the yaml file:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    indexes: // [!code focus]
      contracts: // [!code focus]
        - name: LensHub
          events: // [!code focus]
            - name: QuoteCreated
              indexes: // [!code focus]
                - event_input_names: // [!code focus]
                    - transactionExecutor
                    - "quoteParams.referenceModule" // [!code focus]
```

This will create a SQL index like the below:

```sql
CREATE INDEX idx_quote_created_transaction_executor_quote_params_reference_module ON lens_indexer_lens_hub_quote_created (transaction_executor, quote_params_reference_module);
```

:::info
Do not worry if you do not understand this; all you need to care about is that you can now filter on `transaction_executor` and `quote_params_reference_module` faster.
:::

#### relationships

:::warning
If you have defined [dependency\_events](/docs/start-building/yaml-config/contracts#dependency_events) and are using relationships, you will need to make sure you define the relationship in the `dependency_events` manually, as rindexer can not merge the relationship with the dependency events when custom dependency\_events are defined. If you do not define it within the `dependency_events`, FK constraint errors will be thrown.
:::

:::warning
Also note any cross-contract relationships will not be applied automatically; you will need to define them manually in the YAML. If you do not, rindexer will panic and let you know that you have to define the [dependency\_events](/docs/start-building/yaml-config/contracts#dependency_events).
:::

You can define your relationships between events; this will add foreign keys to the database and also process the events in the correct order. Note rindexer always optimises for speed unless told otherwise: on historic data it will drop any foreign keys and index the events concurrently, then re-apply the relationships before indexing the live data. If you still want one event to only run once another has run, you can look into the [dependency events](/docs/start-building/yaml-config/contracts#dependency_events).

You can define many relationships in the same YAML file. We will use these ABIs as an example as they have tuples as well as root inputs.

:::code-group

```json [QuoteCreated ABI]
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        { "internalType": "uint256", "name": "profileId", "type": "uint256" },
        { "internalType": "string", "name": "contentURI", "type": "string" },
        { "internalType": "uint256", "name": "pointedProfileId", "type": "uint256" },
        { "internalType": "uint256", "name": "pointedPubId", "type": "uint256" },
        { "internalType": "uint256[]", "name": "referrerProfileIds", "type": "uint256[]" },
        { "internalType": "uint256[]", "name": "referrerPubIds", "type": "uint256[]" },
        { "internalType": "bytes", "name": "referenceModuleData", "type": "bytes" },
        { "internalType": "address[]", "name": "actionModules", "type": "address[]" },
        { "internalType": "bytes[]", "name": "actionModulesInitDatas", "type": "bytes[]" },
        { "internalType": "address", "name": "referenceModule", "type": "address" },
        { "internalType": "bytes", "name": "referenceModuleInitData", "type": "bytes" }
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams",
      "type": "tuple"
    },
    { "indexed": true, "internalType": "uint256", "name": "pubId", "type": "uint256" },
    { "indexed": false, "internalType": "bytes", "name": "referenceModuleReturnData", "type": "bytes" },
    { "indexed": false, "internalType": "bytes[]", "name": "actionModulesInitReturnDatas", "type": "bytes[]" },
    { "indexed": false, "internalType": "bytes", "name": "referenceModuleInitReturnData", "type": "bytes" },
    { "indexed": false, "internalType": "address", "name": "transactionExecutor", "type": "address" },
    { "indexed": false, "internalType": "uint256", "name": "timestamp", "type": "uint256" }
  ],
  "name": "QuoteCreated",
  "type": "event"
}
```

```json [ProfileMetadataSet ABI]
{
  "anonymous": false,
  "inputs": [
    { "indexed": true, "internalType": "uint256", "name": "profileId", "type": "uint256" },
    { "indexed": false, "internalType": "string", "name": "metadata", "type": "string" },
    { "indexed": false, "internalType": "address", "name": "transactionExecutor", "type": "address" },
    { "indexed": false, "internalType": "uint256", "name": "timestamp", "type": "uint256" }
  ],
  "name": "ProfileMetadataSet",
  "type": "event"
}
```

:::

##### contract\_name

As you can have multiple contracts in your project, you have to map the name to the contract so rindexer can read the ABIs.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
      - contract_name: LensHub // [!code focus]
```

##### event\_name

The name of the event to apply the relationship to.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
      - contract_name: LensHub
        event_name: QuoteCreated // [!code focus]
```

##### event\_input\_name

This can be a tuple object mapping or a single field, both of which we explained above. Let's say we want to link the `QuoteCreated` event's `quoteParams.profileId` to another event's profile id.
```json [QuoteCreated ABI]
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        {
          "internalType": "uint256",
          "name": "profileId", // [!code focus]
          "type": "uint256"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    ...
  ],
  "name": "QuoteCreated",
  "type": "event"
}
```

Let's add that field to the `event_input_name`:

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
      - contract_name: LensHub
        event_name: QuoteCreated
        event_input_name: "quoteParams.profileId" // [!code focus]
```

##### linked\_to

Now we have to map what this references.

##### contract\_name

Define the contract name to link to.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships:
      - contract_name: LensHub
        event_name: QuoteCreated
        event_input_name: "quoteParams.profileId"
        linked_to: // [!code focus]
          - contract_name: LensHub // [!code focus]
```

##### event\_name

Define the event name to link to.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships:
      - contract_name: LensHub
        event_name: QuoteCreated
        event_input_name: "quoteParams.profileId"
        linked_to: // [!code focus]
          - contract_name: LensHub // [!code focus]
            event_name: ProfileMetadataSet // [!code focus]
```

##### event\_input\_name

Map the event input name to link on; this MUST match the same ABI type as the `event_input_name` type above.

```json [ProfileMetadataSet ABI]
{
  "anonymous": false,
  "inputs": [
    {
      "indexed": true,
      "internalType": "uint256",
      "name": "profileId", // [!code focus]
      "type": "uint256"
    },
    { "indexed": false, "internalType": "string", "name": "metadata", "type": "string" },
    { "indexed": false, "internalType": "address", "name": "transactionExecutor", "type": "address" },
    { "indexed": false, "internalType": "uint256", "name": "timestamp", "type": "uint256" }
  ],
  "name": "ProfileMetadataSet",
  "type": "event"
}
```

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
      - contract_name: LensHub
        event_name: QuoteCreated
        event_input_name: "quoteParams.profileId"
        linked_to: // [!code focus]
          - contract_name: LensHub
            event_name: ProfileMetadataSet
            event_input_name: profileId // [!code focus]
```

That's it: we have now linked the `QuoteCreated` event's `quoteParams.profileId` to the `ProfileMetadataSet` event's `profileId`.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
      - contract_name: LensHub // [!code focus]
        event_name: QuoteCreated // [!code focus]
        event_input_name: "quoteParams.profileId" // [!code focus]
        linked_to: // [!code focus]
          - contract_name: LensHub // [!code focus]
            event_name: ProfileMetadataSet // [!code focus]
            event_input_name: profileId // [!code focus]
```

You can read more about how this changes the GraphQL ability to query the data [here](/docs/accessing-data/graphql#relationships).

### clickhouse

If you wish to store the data in a clickhouse database with the no-code project you can enable the clickhouse storage.

:::info
This is optional. If you do not wish to store the data in a clickhouse database, you can leave this section out of your YAML.
:::

#### Internal tables

When rindexer is running with clickhouse it uses the database to manage some internal state, including the last seen block per network and contract. These tables live in a schema called `rindexer_internal` and should never be modified manually.

#### Own connection string

If you are deploying the indexer or want to point to an external database you can supply your own connection string; to do this, define it in the `.env` file.

```bash
CLICKHOUSE_URL="http://[host]:[port]"
CLICKHOUSE_DB="default"
CLICKHOUSE_USER="default"
CLICKHOUSE_PASSWORD="default"
RINDEXER_CLICKHOUSE_BATCH_SIZE="1000"
```

`RINDEXER_CLICKHOUSE_BATCH_SIZE` controls the chunk size used when rindexer writes dynamic/no-code ClickHouse batches. The default is `1000`. For high-volume streams, increasing this value reduces the number of sequential ClickHouse `INSERT` requests.
The tradeoff is that each request becomes larger, so values should be increased carefully based on the workload and ClickHouse capacity.

#### enabled

Whether clickhouse is enabled or not. If you do not wish to use clickhouse you can set this to false or remove clickhouse from the storage completely.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  clickhouse:
    enabled: true // [!code focus]
```

#### drop\_each\_run

rindexer will keep track of the last synced block for each contract and event, meaning when you start and stop the indexer it will start from the last synced block. rindexer will also create tables for you again, which could clash if you are using rindexer to grab throwaway data and want to start over each time you run it. You can use `drop_each_run` to drop all the data for the indexer before starting, which will ensure you start fresh.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  clickhouse:
    enabled: true
    drop_each_run: true // [!code focus]
```

#### disable\_create\_tables

:::info
This is only relevant for rust projects, as no-code (if clickhouse is enabled) will have to create the tables.
:::

If you do not wish for rindexer to create the database tables for you automatically you can set this to true. By default it will create the tables for you. When this is disabled it will not write the sql in the handlers for you either. This field is optional and can be ignored if you do not need it. It will still create the rindexer internal tables for tracking the last known block.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  clickhouse:
    enabled: true
    disable_create_tables: true // [!code focus]
```

#### indexes

ClickHouse does not use indexes like other databases; instead, the order by clause on the storage engine is critical for performance. By default we use an order by clause of:

```sql
ORDER BY (network, block_number, tx_hash, log_index)
```

This allows efficient searches of per-network block ranges, and is a valid "uniqueness" constraint, meaning it allows us to ensure the data indexed does not contain duplicates. It also works with or without timestamps being enabled.

To assist with performance on common queries we automatically opt tables in to minmax indexes on `block_number` and `block_timestamp`, and add bloom filters for `tx_hash` and `network`. This will allow fast queries for any generic block pruning query or transaction lookup.

```sql
index idx_block_num (block_number) type minmax granularity 1
index idx_timestamp (block_timestamp) type minmax granularity 1
index idx_network (network) type bloom_filter granularity 1
index idx_tx_hash (tx_hash) type bloom_filter granularity 1
```

##### additional information

You could add custom indexes on fields like `from` or `to` as needed. However, indexing in OLAP databases is a complex topic; if you wish to hyper-optimise for some specific query patterns, such as a particular field like a wallet address, it is more appropriate to leverage the **Rust Project** and custom tables, where you can control the order by to index on your primary filter constraint first. An example of this is erc20 transfers where we want to search quickly on a wallet address.
In this case it is most beneficial to either create a [projection](https://clickhouse.com/docs/sql-reference/statements/alter/projection), or to denormalize the from and to inserts directly into a unified `wallet_address` table with a `direction` field. An example of this would be as follows, and would allow extremely optimised `wallet_address = ?` queries ordered by block timestamp descending:

```sql
create table if not exists erc20_transfer
(
    block_timestamp DateTime('UTC'),
    block_number UInt64,
    network_id UInt32,
    transaction_index UInt16,
    log_index UInt16,
    currency_address FixedString(20),
    wallet_address FixedString(20),
    counterparty_address FixedString(20),
    transaction_hash FixedString(32),
    amount UInt256,
    is_send Bool
)
engine = ReplacingMergeTree
order by (wallet_address, block_timestamp, transaction_hash, log_index);
```

### csv

If you wish to store the data in CSV files you can enable the csv storage.

:::info
This is optional. If you do not wish to store the data in CSV files, you can leave this section out of your YAML.
:::

#### Last synced block state

When indexing with csv and postgres is disabled, rindexer keeps the last seen block per network and contract in a txt file within the defined path the csv files will be written to; this is to ensure that if the indexer goes down it can pick up where it left off. You can see those txt files under the csv path: in each contract name's folder there is a folder called `last-synced-blocks`, and each event will have a txt file with the last seen block. If you are using csv and postgres is enabled, the last seen block will be stored in the database.

#### enabled

Whether csv is enabled or not. If you do not wish to use csv you can set this to false or remove csv from the storage completely.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  csv:
    enabled: true // [!code focus]
```

#### path

:::info
This field is optional
:::

The path to store the CSV files; it should be a directory path. If it does not exist it will be created in the project directory in a folder called `generated_csv`.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  csv:
    enabled: true
    path: ./generated_csv // [!code focus]
```

#### disable\_create\_headers

:::info
This is only relevant for rust projects, as no-code (if csv is enabled) will create the csv headers for you.
:::

If you do not wish for rindexer to create csv headers for you automatically you can set this to true. By default it will create the csv headers for you. When this is disabled it will not write the csv code in the handlers for you either. This field is optional and can be ignored if you do not need it.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage:
  csv:
    enabled: true
    path: ./generated_csv
    disable_create_headers: true // [!code focus]
```

### Multiple Storage Providers

You can have multiple storage providers in the YAML file.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
  - name: ethereum
    chain_id: 1
    rpc: https://mainnet.gateway.tenderly.co
storage: // [!code focus]
  postgres: // [!code focus]
    enabled: true // [!code focus]
  csv: // [!code focus]
    enabled: true // [!code focus]
```

## Top level fields

The top-level fields of the YAML configuration file.

### name

The name of the project

```yaml [rindexer.yaml]
name: rETHIndexer // [!code focus]
```

### description

:::info
This field is optional
:::

The description of the project

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project // [!code focus]
```

### repository

:::info
This field is optional
:::

The repository of the project

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer // [!code focus]
```

### environment\_path

By default rindexer will load the environment variables from the `.env` file in the root of the project. You can override this by providing the path to the environment file you wish to use.

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
environment_path: "../../.env" // [!code focus]
```

### project\_type

The rindexer project type

#### no-code

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code // [!code focus]
```

OR

#### rust

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust // [!code focus]
```

### config

More advanced opt-in configuration parameters. [See more details](/docs/start-building/yaml-config/config).
#### no-code

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
config:
  buffer: 2 // [!code focus]
  callback_concurrency: 4 // [!code focus]
```

### timestamps

Enable block timestamps for all events on all networks. Timestamps are `disabled` by default. This includes timestamps in all rindexer logs; any logs with timestamps already included are ignored, and we try to be as efficient as possible when fetching timestamps by first using fixed ranges, then precalculated values, and lastly falling back to RPC requests when required. Timestamps can also be opted into for specific events. [See more details](/docs/start-building/yaml-config/contracts).

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
timestamps: true
```

## Custom Tables

:::tip[The Recommended Way to Build No-Code Indexers]
Custom Tables let you build powerful indexers that maintain **derived state** - like token balances, NFT ownership, and protocol metrics - all through simple YAML configuration. No code required.
:::

### Key Features

#### Zero Code Required

Define your entire indexing logic in YAML - rindexer automatically generates the database schema, handles all SQL operations, and manages state updates. You never write a single line of Rust, TypeScript, or SQL.

#### Automatic Database Operations

* **Insert** - Add new rows when events occur
* **Upsert** - Insert or update based on a unique key (perfect for balances, ownership)
* **Update** - Modify existing rows with `add`, `subtract`, `multiply`, `divide`, or `replace` actions
* **Delete** - Remove rows when conditions are met (e.g., NFT transfers, position closures)

rindexer handles batching, transactions, and error recovery automatically.
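The four operation types share one declarative shape. As a purely illustrative sketch (the table, the `PositionChanged`/`PositionClosed` events, and the `$owner`/`$delta` variables are hypothetical, and the field layout simply mirrors the `upsert` used in the Quick Start below), an `update` and a `delete` might look like:

```yaml
tables:
  - name: positions
    columns:
      - name: owner
      - name: size
        default: "0"
    events:
      - event: PositionChanged
        operations:
          # update: modify an existing row with an arithmetic action
          - type: update
            where:
              owner: $owner
            set:
              - column: size
                action: add
                value: $delta
      - event: PositionClosed
        operations:
          # delete: remove the row once the position is closed
          - type: delete
            where:
              owner: $owner
```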
#### Powerful Expressions * **Computed Values** - Arithmetic like `$amount * 2`, `$price / $quantity`, `10 ^ $decimals` * **String Templates** - Concatenate with `"$from-$to"` or `"Pool: $token0/$token1"` * **Conditional Logic** - Filter with `if: "$value > 0 && $from != 0x000..."` * **Array Iteration** - Process batch events (ERC1155 `TransferBatch`) automatically * **Array Indexing** - Access `$ids[0]` or struct fields `$transfers[0].amount` #### Onchain Data Integration * **View Calls** - Fetch live data with `$call($contract, "balanceOf(address)", $holder)` * **Static View Calls** - Fetch immutable data with `$call_static($token, "symbol()")` - cached forever, no repeated RPC calls * **Tuple Returns** - Access by index `[0]` or field name `.fieldName` * **Cron Triggers** - Schedule periodic updates with `interval: 5m` or cron expressions #### Advanced Features * **Global Tables** - Single-row aggregates (total supply, TVL) * **Cross-Chain** - Aggregate data across multiple networks * **Transaction Metadata** - Access `$rindexer_block_number`, `$rindexer_tx_hash`, etc. * **Constants** - Reusable values with network-scoped overrides * **Schema Migration** - Auto-detect and apply column changes *** ### The Problem Building indexers traditionally requires significant engineering effort: **1. Manual Code for Every Operation** ```rust // You write handlers for every event async fn handle_transfer(event: Transfer, db: &Database) -> Result<(), Error> { // Fetch existing balance let balance = db.get_balance(&event.to).await?; // Calculate new balance let new_balance = balance + event.value; // Update database db.upsert_balance(&event.to, new_balance).await?; // Don't forget the sender... Ok(()) } ``` **2. Raw Event Logs Don't Give You State** ``` Transfer: Alice → Bob, 100 USDC → Row 1 Transfer: Bob → Carol, 50 USDC → Row 2 Transfer: Carol → Alice, 25 USDC → Row 3 ``` To get Alice's balance, you query millions of rows and aggregate them at read time. **3. 
Complex State Management** Tracking derived state (balances, ownership, positions) means writing upsert logic, handling edge cases, managing database transactions, and testing everything. ### The Solution Custom Tables let you declare **what** you want, and rindexer handles **how**: ```yaml tables: - name: balances columns: - name: holder - name: balance default: "0" events: - event: Transfer operations: - type: upsert where: holder: $to set: - column: balance action: add value: $value ``` **What rindexer does automatically:** * Creates the `balances` table with proper types and indexes * Generates efficient SQL upsert statements * Batches operations for performance * Handles database transactions and retries * Maintains current state as events stream in **Result:** One row per address. Instant lookups. No aggregation. No code. *** ### Quick Start Here's a complete example that tracks ERC20 token balances: ```yaml [rindexer.yaml] name: USDCIndexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: - name: USDC details: - network: ethereum address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" start_block: 18600000 abi: ./abis/ERC20.json tables: // [!code focus] - name: balances // [!code focus] columns: // [!code focus] - name: holder // [!code focus] - name: balance // [!code focus] default: "0" // [!code focus] events: // [!code focus] - event: Transfer // [!code focus] operations: // [!code focus] # Credit the recipient // [!code focus] - type: upsert // [!code focus] where: // [!code focus] holder: $to // [!code focus] if: "$to != 0x0000000000000000000000000000000000000000" // [!code focus] set: // [!code focus] - column: balance // [!code focus] action: add // [!code focus] value: $value // [!code focus] # Debit the sender // [!code focus] - type: upsert // [!code focus] where: // [!code focus] holder: $from // [!code focus] if: "$from != 
0x0000000000000000000000000000000000000000" // [!code focus] set: // [!code focus] - column: balance // [!code focus] action: subtract // [!code focus] value: $value // [!code focus] ``` **Result:** A `balances` table with one row per holder, instantly queryable. This example demonstrates the core concepts: * **columns** - Define what data you're storing * **events** - Map contract events to table operations * **where** - Identify which row to update (becomes the primary key) * **if** - Filter which events to process * **set** - Define how to update columns Let's dive deeper into each of these. *** ### Table Configuration :::tip[Multiple Tables Per Contract] You can define **multiple tables** under a single contract. Each table can listen to the same or different events. This is useful for tracking different views of the same data (e.g., balances + supply metrics from Transfer events). ::: #### name The table name. Will be created as `{indexer}_{contract}_{name}` in the database. ```yaml tables: - name: balances # Creates: usdcindexer_usdc_balances ``` *** #### global When `true`, creates a single row per network - perfect for counters and aggregate metrics. No `where` clause needed since the primary key is just `network`. ```yaml tables: - name: metrics global: true // [!code focus] columns: - name: transfer_count type: uint256 default: "0" events: - event: Transfer operations: - type: upsert set: - column: transfer_count action: increment # Adds 1 each time ``` *** #### cross\_chain When `true`, aggregates data across ALL networks. The `network` column is **not created**, so data from Ethereum, Arbitrum, Optimism, etc. all contribute to the same rows. ```yaml tables: - name: global_supply cross_chain: true // [!code focus] global: true columns: - name: total type: uint256 default: "0" ``` *** #### timestamp When `true`, adds the `rindexer_block_timestamp` column to the table. By default, this column is **not created** to optimize performance. 
```yaml tables: - name: balances timestamp: true // [!code focus] columns: - name: holder - name: balance default: "0" ``` **Why is this opt-in?** Some RPC nodes don't include block timestamps in event metadata, requiring an additional RPC call to fetch the block and extract its timestamp. This can significantly impact indexing performance, especially for high-volume indexers. :::info[Performance Optimization] When `timestamp: true` is set and your RPC node doesn't include timestamps in event metadata, rindexer will batch-fetch blocks to get timestamps. The optimization includes: * **Batch fetching** - All unique blocks in an event batch are fetched in a single RPC call * **Global caching** - Block timestamps are cached for the entire indexing run, so each block is only fetched once * **Deduplication** - Multiple events in the same block share one fetch This minimizes the performance impact, but if you don't need timestamps, leaving the option off avoids the overhead entirely. Only enable this if you actually need block timestamps in your queries. ::: :::warning[Free/Public RPC Nodes] Using `timestamp: true` with free or public RPC nodes (like public Infura, Alchemy free tier, or other rate-limited endpoints) will result in **very slow indexing**. Free nodes aggressively rate limit requests, and rindexer will automatically throttle to respect these limits - but this can make indexing take hours instead of minutes. **For production use with `timestamp: true`, use a paid RPC provider or run your own node.** ::: *** #### database Optional override for the database (ClickHouse) or schema (PostgreSQL) where this table is created. By default, custom tables are created in `{project}_{contract}` (e.g., `myindexer_usdc.balances`). When `database` is set, the table is created in the specified database instead. 
```yaml tables: - name: events database: indexer // [!code focus] columns: - name: id - name: amount ``` This is useful when multiple contracts should write to the **same table**. Without `database`, each contract gets its own isolated table. With `database`, all contracts sharing the same value write to the same `{database}.{table_name}`. ```yaml contracts: - name: ExchangeA tables: - name: trades database: shared # → shared.trades ... - name: ExchangeB tables: - name: trades database: shared # → shared.trades (same table!) ... ``` :::info[Raw event tables are not affected] The `database` override only applies to custom tables. Raw event tables (auto-generated from ABI) always use the default `{project}_{contract}` database for isolation. ::: *** #### columns Define the columns in your table. | Property | Required | Description | | ---------- | -------- | --------------------------------------------------------------- | | `name` | Yes | Column name | | `type` | No | Data type (auto-inferred from event ABI if not specified) | | `default` | No | Default value for new rows | | `nullable` | No | Whether column allows NULL values (default: `false` = NOT NULL) | :::tip[Nullable Columns] By default, all columns are `NOT NULL` for data integrity. If you need to allow NULL values (e.g., for optional fields or using `$null`), set `nullable: true`: ```yaml columns: - name: optional_field type: string nullable: true # Allows NULL values ``` ::: :::tip[Primary Keys] Primary keys are automatically derived from columns used in `where` clauses. Any column that appears in a `where` clause becomes part of the primary key. You don't need to explicitly mark columns as primary keys. **Important:** All operations in a table must use the same `where` columns. See [where](#where) for details. 
::: ##### Type Inference Rules Column types are **automatically inferred** in these cases - no `type:` needed: | Value Source | Example | Inferred Type | | --------------------- | ------------------------------------------- | ---------------------------------------- | | Event field | `$from`, `$value`, `$to` | From ABI (e.g., `address`, `uint256`) | | Nested event field | `$data.amount` | From ABI | | Transaction metadata | `$rindexer_block_number` | `uint64` | | Transaction metadata | `$rindexer_tx_hash`, `$rindexer_block_hash` | `string` | | Transaction metadata | `$rindexer_contract_address` | `address` | | Transaction metadata | `$rindexer_block_timestamp` | `timestamp` (requires `timestamp: true`) | | Default value `"0"` | `default: "0"` | `uint256` | | Default value boolean | `default: "true"` | `bool` | | Default value address | `default: "0x000..."` | `address` | You **must specify `type:`** in these cases: | Value Source | Example | Why | | ----------------------- | --------------------------------------------- | ---------------------- | | View calls | `$call($addr, "balanceOf(address)", $holder)` | Return type unknown | | Computed/arithmetic | `$amount * 2`, `$a + $b`, `10 ^ $decimals` | Result type ambiguous | | Arithmetic + view calls | `($amount * $call(...)) / (10 ^ $call(...))` | Complex expression | | String templates | `"$from-$to"` | Always produces string | | Literal values | `"global"`, `"1000"` | No type context | | No value reference | Column not used in `set` or `where` | Nothing to infer from | **Examples:** ```yaml columns: # ✅ Type inferred from event ABI - $to is address, $value is uint256 - name: holder # type: address (inferred from $to) - name: balance # type: uint256 (inferred from $value) default: "0" # ✅ Type inferred from metadata - name: last_block # type: uint64 (inferred from $rindexer_block_number) - name: tx_hash # type: string (inferred from $rindexer_tx_hash) # ⚠️ Must specify type - view call return type unknown - name: 
token_symbol type: string # Required! $call() can't infer type - name: token_decimals type: uint8 # Required! # ⚠️ Must specify type - arithmetic result - name: doubled_amount type: uint256 # Required! $value * 2 needs explicit type # ⚠️ Must specify type - string template - name: pair_id type: string # Required! "$token0-$token1" is a string # ⚠️ Must specify type - literal value - name: status type: string # Required! "active" is a literal ``` :::tip[When in Doubt, Specify the Type] If you're unsure whether a type will be inferred, just add `type:` explicitly. It never hurts and makes your YAML more readable. ::: ##### Supported Types | Type | Description | PostgreSQL | ClickHouse | | --------------------- | ----------------------- | ----------- | --------------- | | `address` | Ethereum address | CHAR(42) | FixedString(42) | | `string` | Text | TEXT | String | | `bool` | Boolean | BOOLEAN | Bool | | `uint8` - `uint64` | Unsigned integers | BIGINT | UInt64 | | `uint128` - `uint256` | Large unsigned integers | NUMERIC | UInt256 | | `int8` - `int64` | Signed integers | BIGINT | Int64 | | `int128` - `int256` | Large signed integers | NUMERIC | Int256 | | `bytes` | Dynamic bytes | BYTEA | String | | `bytes32` | Fixed 32 bytes | BYTEA | FixedString(66) | | `timestamp` | Date/time | TIMESTAMPTZ | DateTime | | `address[]` | Array of addresses | TEXT\[] | Array(String) | | `uint256[]` | Array of uint256 | TEXT\[] | Array(String) | | `bytes32[]` | Array of bytes32 | TEXT\[] | Array(String) | ##### Array Types Arrays from event parameters are supported and stored as database arrays: ```yaml columns: - name: participants type: address[] - name: amounts type: uint256[] ``` **What works:** * Storing entire arrays from events (e.g., `$addresses`, `$values`) * Querying arrays via GraphQL * Address arrays are stored efficiently * **Iterating over arrays** with `iterate` (see [Array Iteration](#array-iteration-batch-events)) * **Accessing individual elements** with `$array[0]` 
syntax (see [Array Indexing](#array-indexing)) **Limitations:** * **Cannot use arrays in `where` clauses** - Arrays can't be part of primary keys (use `iterate` to expand arrays into individual rows) :::tip[Array Features] Use `iterate` to process array elements in batch events like ERC1155 `TransferBatch`. Use `$array[0]` to access specific elements when you only need certain positions. See [Array Iteration](#array-iteration-batch-events) and [Array Indexing](#array-indexing) for details. ::: *** ### Events & Operations #### events Maps contract events to table operations. ```yaml events: - event: Transfer # Must match the ABI event name operations: - ... ``` *** #### operations Each operation defines what happens when an event is received. ##### type | Type | Description | Use Case | | -------- | --------------------------------------- | ------------------------------------- | | `upsert` | Insert new row or update existing | Most common - balances, ownership | | `insert` | Insert a new row (no conflict handling) | Time-series data, price history, logs | | `update` | Update existing row only (no insert) | Modify existing records | | `delete` | Remove the row | Clean up data | :::tip[Insert vs Upsert] Use `insert` for time-series or history data where you want a new row each time (no `where` clause needed). Use `upsert` when you want to update existing rows based on a key. ::: ##### where Identifies which row to affect. Maps column names to values. ```yaml where: holder: $to # Column "holder" = event field "to" token_id: $tokenId # Column "token_id" = event field "tokenId" ``` :::warning[All Operations Must Use the Same `where` Columns] Every operation in a table must use **identical `where` columns** because they define the table's primary key. A table can only have one primary key, so all operations must agree on what uniquely identifies a row. 
**Valid** - same columns, different values: ```yaml operations: - type: upsert where: holder: $to # ✓ Uses 'holder' set: [...] - type: upsert where: holder: $from # ✓ Uses 'holder' (same column, different value) set: [...] ``` **Invalid** - different columns: ```yaml operations: - type: upsert where: holder: $to token_id: $id # ✗ Uses 'holder' AND 'token_id' set: [...] - type: upsert where: holder: $from # ✗ Uses only 'holder' - inconsistent! set: [...] ``` If you need different key granularities, use **separate tables** instead. ::: :::info For `global: true` tables, omit the `where` clause - the primary key is just `network`. ::: ##### if Skip events that don't match the condition. Supports comparison and logical operators. ```yaml # Skip zero address if: "$to != 0x0000000000000000000000000000000000000000" # Multiple conditions if: "$value > 0 && $from != 0x0000000000000000000000000000000000000000" # Only update if new value is greater than existing if: "$value > @balance" ``` :::info You can also use `filter:` as an alias for `if:`, but `if:` is recommended for clarity. ::: ##### set Define what columns to update and how. 
```yaml set: - column: balance action: add value: $value ``` *** ### Set Actions | Action | Description | Example Result | | ----------- | ---------------------- | ------------------------ | | `set` | Replace value | `balance = 100` | | `add` | Add to existing | `balance = balance + 50` | | `subtract` | Subtract from existing | `balance = balance - 50` | | `max` | Keep the larger value | `high = max(high, 150)` | | `min` | Keep the smaller value | `low = min(low, 50)` | | `increment` | Add 1 | `count = count + 1` | | `decrement` | Subtract 1 | `count = count - 1` | *** ### Value References #### Event Fields Reference any field from the event using `$fieldName`: ```yaml value: $from # Sender address value: $to # Recipient address value: $value # Transfer amount value: $tokenId # NFT token ID ``` #### Tuples and Structs (Nested Fields) Many events contain tuple or struct fields with nested data. Access nested fields using dot notation: ```yaml value: $data.amount # Access 'amount' inside 'data' tuple value: $order.maker # Access 'maker' inside 'order' struct value: $info.token.address # Access deeply nested fields ``` **Example: Event with Tuple/Struct Parameter** For an event like: ```solidity struct OrderInfo { address maker; address taker; uint256 amount; } event OrderFilled(bytes32 indexed orderId, OrderInfo info); ``` Access the nested fields: ```yaml where: order_id: $orderId set: - column: maker action: set value: $info.maker - column: taker action: set value: $info.taker - column: amount action: set value: $info.amount ``` :::tip[Finding Field Names] The field names must match exactly what's in the ABI. Check your contract's ABI JSON file to see the exact parameter names. The ABI defines both the event signature and parameter names. 
::: #### Array Indexing Access specific elements from array fields using bracket notation: ```yaml value: $ids[0] # First element of 'ids' array value: $values[1] # Second element of 'values' array value: $data.tokens[0] # First element of nested 'tokens' array ``` This is useful when you only need specific elements from an array, such as the first token in a batch. ##### Post-Array Field Access For arrays of structs, you can access fields within each element: ```yaml value: $transfers[0].amount # 'amount' field of first transfer value: $orders[1].maker # 'maker' field of second order value: $swaps[0].tokenIn # 'tokenIn' field of first swap ``` :::tip[When to Use Post-Array Field Access] Use `$array[index].field` when: * You need a **specific element** from an array of structs * The array has a **fixed/known structure** (e.g., always 2 hops) * You want the **first or last element** of a route For **variable-length arrays**, use [`iterate`](#array-iteration-batch-events) instead to process all elements. ::: #### Array Iteration (Batch Events) For events with parallel arrays (like ERC1155 `TransferBatch`), use `iterate` to process each array element as a separate operation: ```yaml events: - event: TransferBatch iterate: # Iterate over parallel arrays - "$ids as token_id" # Bind each id to 'token_id' - "$values as amount" # Bind each value to 'amount' operations: - type: upsert where: holder: $to token_id: $token_id # Use the iterated value if: "$to != 0x0000000000000000000000000000000000000000" set: - column: balance action: add value: $amount # Use the iterated value ``` **How it works:** 1. `iterate` takes a list of array bindings in the format `"$arrayField as alias"` 2. All arrays must have the same length (they're processed in parallel) 3. For each index, the operations are executed with the aliased values bound 4. 
Use the aliases (`$token_id`, `$amount`) in `where`, `if`, and `set` clauses :::tip[Single Transfers] For `TransferSingle` events (which don't have arrays), you don't need `iterate` - just reference `$id` and `$value` directly. ::: #### Transaction Metadata Access transaction and block information: ```yaml value: $rindexer_block_number # Block number value: $rindexer_block_timestamp # Block timestamp (requires timestamp: true on table) value: $rindexer_tx_hash # Transaction hash value: $rindexer_block_hash # Block hash value: $rindexer_contract_address # Contract that emitted the event value: $rindexer_log_index # Log index in transaction value: $rindexer_tx_index # Transaction index in block ``` :::info[Block Timestamp] The `$rindexer_block_timestamp` reference only works if you have `timestamp: true` set on the table. Without it, the column won't exist. See [timestamp](#timestamp) for details. ::: #### View Calls (On-Chain Data) Call view functions on smart contracts to fetch additional data not available in events: ```yaml value: $call($rindexer_contract_address, "balanceOf(address)", $holder) value: $call($token, "decimals()") value: $call($token, "totalSupply()") value: $call(0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48, "allowance(address,address)", $owner, $spender) ``` **Syntax:** `$call(contract_address, "function_signature", arg1, arg2, ...)` * **contract\_address**: A literal address or `$field` reference from the event * **function\_signature**: The function signature in Solidity format (e.g., `"balanceOf(address)"`) * **args**: Arguments to pass to the function (can be `$field` references or literals) ##### Accessing Tuple/Struct Returns Many Solidity view functions return multiple values (tuples) or structs. rindexer provides two ways to access specific elements from these returns. 
##### Quick Reference | Approach | Syntax | When to Use | | -------------- | ----------------------------------------------- | ------------------------------- | | Position-based | `$call(...)[0]` | Quick, no setup needed | | Named fields | `$call(... returns (type name, ...)).fieldName` | Self-documenting, readable YAML | *** ##### Position-Based Access `[index]` Use `[index]` after the call to access tuple elements by their position (0-indexed): ```yaml # Uniswap V2 getReserves() returns (uint112, uint112, uint32) # Position: [0] = reserve0, [1] = reserve1, [2] = blockTimestampLast value: $call($pool, "getReserves()")[0] # Get reserve0 value: $call($pool, "getReserves()")[1] # Get reserve1 value: $call($pool, "getReserves()")[2] # Get blockTimestampLast ``` **Pros:** Simple, no extra typing **Cons:** Have to remember what each position means *** ##### Named Field Access `.fieldName` Add `returns (type name, type name, ...)` to your function signature to enable `.fieldName` access: ```yaml # Same getReserves() call, but with named access value: $call($pool, "getReserves() returns (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast)").reserve0 value: $call($pool, "getReserves() returns (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast)").reserve1 ``` **Pros:** Self-documenting, YAML is readable without looking up ABI **Cons:** More verbose *** :::tip[When to Use Each Approach] * **Position-based `[0]`** - Best for quick prototyping or when the function signature is well-known * **Named `.fieldName`** - Best for production configs where readability matters Both approaches work identically at runtime. Choose based on your preference for brevity vs. clarity. ::: :::tip[View Call Caching] View call results are cached by (network, contract, calldata, block\_number). Repeated calls with the same parameters at the same block are served from cache, reducing RPC load. 
::: :::info[Determinism] View calls are executed at the specific block number of the event, ensuring deterministic results during re-indexing. The same event will always produce the same view call result. ::: :::warning[Performance & Rate Limiting] View calls add significant RPC load. rindexer automatically limits concurrent view calls and adapts to rate limits with exponential backoff. **Best Practices:** * Use `$call_static()` for immutable data (symbol, decimals, name) - fetched once, cached forever * Reserve `$call()` for truly dynamic data (balances, allowances, prices) * **Use a paid/unthrottled RPC for heavy view call workloads** ::: :::danger[Free/Public RPC Nodes] Using `$call()` extensively with free or public RPC nodes will result in **extremely slow indexing**. Free nodes impose strict rate limits, and when these are hit, rindexer automatically throttles requests (adding delays up to 30 seconds between batches). What takes minutes with a paid node can take hours with a free node. For any indexer that makes heavy use of view calls, **you must use a paid RPC provider or run your own node**. 
::: *** #### Static View Calls (Immutable Data) For onchain data that never changes (like token `symbol()`, `decimals()`, `name()`), use `$call_static()` instead of `$call()`: ```yaml value: $call_static($token, "symbol()") value: $call_static($token, "decimals()") value: $call_static($token, "name()") ``` **How it differs from `$call()`:** | Feature | `$call()` | `$call_static()` | | ------------ | ---------------------------------- | ----------------------------------------- | | Block number | Event's block (historical) | Latest block | | Caching | Per-block (cleared between blocks) | Forever (persists across entire indexing) | | Use case | Dynamic data (balances, prices) | Immutable data (symbol, decimals, name) | | Archive node | Required for historical blocks | Not required | **Example: Token metadata with static calls** ```yaml tables: - name: liquidations columns: - name: token - name: symbol type: string - name: decimals type: uint8 - name: amount_usd type: uint256 events: - event: Liquidation operations: - type: insert set: - column: token value: $collateralAsset - column: symbol value: $call_static($collateralAsset, "symbol()") # Cached forever - column: decimals value: $call_static($collateralAsset, "decimals()") # Cached forever - column: amount_usd # Price is dynamic, decimals is static value: ($amount * $call($oracle, "getPrice()")) / (10 ^ $call_static($collateralAsset, "decimals()")) ``` **Benefits:** * **First call**: RPC request at latest block, result cached permanently * **All subsequent calls**: Instant cache hit, zero RPC overhead * **No archive node needed**: Uses latest block, not historical blocks * **Perfect for high-volume indexing**: Token metadata fetched once per unique address :::tip[When to use $call_static()] Use `$call_static()` for any onchain data that is set once and never changes: * Token metadata: `symbol()`, `decimals()`, `name()` * Contract configuration: `owner()`, `factory()`, `WETH()` * Immutable parameters: `fee()` (if 
constant), `version()` Use regular `$call()` for data that can change: * Balances: `balanceOf()` * Prices: `getPrice()`, `latestAnswer()` * State: `totalSupply()`, `getReserves()` ::: #### Literal Values Use fixed values: ```yaml value: "0" # Number as string value: "default" # String identifier value: 0x0000000000000000000000000000000000000000 # Address ``` #### Null Values Set a column to SQL NULL using `$null`: ```yaml value: $null # Explicit SQL NULL ``` :::warning[Requires Nullable Column] You can only use `$null` on columns defined with `nullable: true`. rindexer validates this at startup and will show an error if you try to use `$null` on a non-nullable column: ``` Cannot use '$null' for column 'my_column' in table 'my_table' because it is not nullable. Add 'nullable: true' to the column definition to allow NULL values. ``` **Correct usage:** ```yaml columns: - name: optional_data type: string nullable: true # Required for $null events: - event: SomeEvent operations: - type: upsert where: id: $id set: - column: optional_data action: set value: $null # Sets column to NULL ``` ::: #### Conditional Values Use `$if(condition, trueValue, falseValue)` to conditionally set a value based on an expression: ```yaml value: $if($amount > 0, $amount, $null) # Use amount if positive, else null value: $if($from == 0x0000000000000000000000000000000000000000, "mint", "transfer") value: $if($value >= 1000000, $value, $null) # Only store if value >= 1M ``` **Syntax:** `$if(condition, valueIfTrue, valueIfFalse)` * **condition**: A boolean expression using the same syntax as `if:` filters * **valueIfTrue**: Value to use when condition is true (can be `$field`, `$null`, literal, etc.) 
* **valueIfFalse**: Value to use when condition is false **Supported operators in conditions:** | Operator | Meaning | | -------- | ---------------- | | `==` | Equal | | `!=` | Not equal | | `>` | Greater than | | `>=` | Greater or equal | | `<` | Less than | | `<=` | Less or equal | | `&&` | Logical AND | | `\|\|` | Logical OR | **Examples:** ```yaml columns: - name: holder - name: transfer_type type: string - name: significant_amount type: uint256 nullable: true # Allows $null events: - event: Transfer operations: - type: upsert where: holder: $to set: # Classify transfer type based on addresses - column: transfer_type action: set value: $if($from == 0x0000000000000000000000000000000000000000, "mint", $if($to == 0x0000000000000000000000000000000000000000, "burn", "transfer")) # Only store amount if it's significant (>= 1000), else null - column: significant_amount action: set value: $if($value >= 1000, $value, $null) ``` :::tip[Nested $if()] You can nest `$if()` expressions for multiple conditions, as shown in the `transfer_type` example above. This is equivalent to if-else-if chains. ::: :::tip[$if() vs Multiple Operations] Use `$if()` when you want to set **different values** for the same column based on conditions. Use multiple operations with `if:` filters when you want to perform **different actions** (e.g., different tables, different columns). ::: #### Arithmetic Expressions Perform calculations using event fields: ```yaml value: $value * 2 # Multiply by constant value: $amount + $fee # Add two event fields value: $amount0 - $amount1 # Subtract fields value: $ratio / 100 # Divide by constant value: $amount * $price # Multiply two fields value: 10 ^ $decimals # Exponentiation (10 to the power of decimals) value: $base ^ 18 # Raise field to a power ``` **Supported operators:** `+`, `-`, `*`, `/`, `^` (exponentiation) :::tip[Operator Precedence] Operators follow standard mathematical precedence: 1. `^` (exponentiation) - highest 2. 
`*`, `/` (multiplication, division) 3. `+`, `-` (addition, subtraction) - lowest Use parentheses to control order: `($a + $b) * $c` ::: :::tip[When to Use Arithmetic] Arithmetic is useful for: * **USD value calculations**: `$amount * $price` * **Fee calculations**: `$amount - $fee` or `$gross * $feePercent / 10000` * **Combining amounts**: `$amount0 + $amount1` * **Scaling values**: `$value / 1000000` (e.g., converting from wei) * **Decimal normalization**: `$amount / (10 ^ $decimals)` ::: *** #### Arithmetic with View Calls You can combine arithmetic expressions with `$call()` and `$call_static()` to compute values that depend on onchain data. This is powerful for computing USD values, normalized amounts, and other derived metrics. ```yaml # Compute USD value: (amount * oracle_price) / 10^decimals # Use $call_static for decimals (immutable), $call for price (dynamic) value: ($amount * $call($constant(oracle), "getAssetPrice(address)", $token)) / (10 ^ $call_static($token, "decimals()")) # Normalize amount by fetching decimals on-chain value: $rawAmount / (10 ^ $call_static($tokenAddress, "decimals()")) # Compute with multiple view calls value: $call($pool, "getReserves()")[0] * $call($oracle, "getPrice()") ``` **How it works:** 1. All `$call()` expressions in the arithmetic are resolved first (fetched from the blockchain) 2. The returned values replace the `$call()` placeholders 3. 
The arithmetic expression is then evaluated with the resolved values ##### Real-World Example: Liquidation USD Value Here's a complete example from an Aave liquidation indexer that computes the USD value of liquidated collateral: ```yaml constants: oracle: ethereum: "0x54586bE62E3c3580375aE3723C145253060Ca0C2" arbitrum: "0xb56c2F0B653B2e0b10C9b928C8580Ac5Df02C7C7" contracts: - name: Pool abi: ./abis/AaveV3Pool.json details: - network: ethereum address: "0x87870Bca3F3fD6335C3F4ce8392D69350B4fA4E2" start_block: 24263944 tables: - name: liquidations columns: - name: borrower - name: collateral_asset - name: collateral_amount_raw - name: total_usd_value type: uint256 # Result has 8 decimal precision from oracle events: - event: LiquidationCall operations: - type: insert set: - column: borrower action: set value: $user - column: collateral_asset action: set value: $collateralAsset - column: collateral_amount_raw action: set value: $liquidatedCollateralAmount - column: total_usd_value action: set # Formula: (amount * price) / 10^decimals # - $liquidatedCollateralAmount: raw amount from event # - getAssetPrice(): returns price with 8 decimals # - decimals(): returns token decimals (e.g., 18 for WETH) # Result: USD value with 8 decimal precision value: ($liquidatedCollateralAmount * $call($constant(oracle), "getAssetPrice(address)", $collateralAsset)) / (10 ^ $call($collateralAsset, "decimals()")) ``` **Formula breakdown:** * `$liquidatedCollateralAmount` - raw collateral amount from the event (e.g., 1000000000000000000 for 1 WETH) * `$call($constant(oracle), "getAssetPrice(address)", $collateralAsset)` - USD price from Aave oracle (8 decimals, e.g., 200000000000 for $2000) * `$call($collateralAsset, "decimals()")` - token decimals (e.g., 18 for WETH) * `10 ^ decimals` - the divisor to normalize the amount **Result:** `(1e18 * 2000e8) / 10^18 = 2000e8` = $2000 with 8 decimal precision :::tip[When to Use Arithmetic + View Calls] This pattern is ideal for: * **USD value 
calculations** - Normalize token amounts and multiply by oracle prices * **Percentage calculations** - Fetch basis points or rates and compute percentages * **Cross-token metrics** - Compare values across tokens with different decimals * **Protocol-specific formulas** - Compute health factors, liquidation thresholds, etc. ::: :::warning[Performance Consideration] Each `$call()` in an arithmetic expression requires an RPC call per event. Use `$call_static()` for immutable values like `decimals()` - they're cached forever after the first call, eliminating repeated RPC requests for high-volume indexing. ::: #### String Templates Embed event fields into strings using `$fieldName` within any text: ```yaml value: "$from-$to" # Concatenate two addresses with dash value: "Pool: $token0/$token1" # Create pool identifier value: "Transfer from $from" # Prefix text with field value: "Block $rindexer_block_number: $rindexer_tx_hash" # Mix tx metadata with text ``` :::tip[When to Use String Templates] String templates are useful for: * **Composite keys**: `"$token0-$token1"` for pool identifiers * **Human-readable labels**: `"Transfer from $from to $to"` * **Unique identifiers**: `"$rindexer_contract_address:$tokenId"` * **Combining metadata**: `"Block $rindexer_block_number"` ::: #### Constants Define reusable values at the manifest level and reference them with `$constant(name)`. Constants are especially powerful for **network-scoped configurations** where you need different values per network (like oracle addresses, protocol contracts, or fee recipients). 
##### Defining Constants Add constants at the root level of your `rindexer.yaml`: ```yaml name: MyIndexer project_type: no-code constants: # Simple constant - same value for all networks fee_recipient: "0x1234567890123456789012345678901234567890" # Network-scoped constant - different value per network oracle: ethereum: "0x54586bE62E3c3580375aE3723C145253060Ca0C2" arbitrum: "0xbDdE4E4429c6Ef916d2633A2c80E0F6D0F893C44" optimism: "0x3C19d4C5E0D43d1f7a0f4c8E8d5f6b3a2b1c0d9e" base: "0x8B4d3e5F6A7c8D9E0F1a2B3c4D5e6F7a8B9c0D1e" networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: arbitrum chain_id: 42161 rpc: https://arbitrum.gateway.tenderly.co # ... more networks contracts: # ... ``` ##### Using Constants Reference constants with `$constant(name)` anywhere you'd use a value: ```yaml tables: - name: prices columns: - name: asset - name: price_usd type: int256 events: - event: CollateralDeposited operations: - type: upsert where: asset: $collateralAsset set: - column: price_usd action: set # $constant(oracle) resolves to the network-specific oracle address value: $call($constant(oracle), "getAssetPrice(address)", $collateralAsset) ``` **How it works:** * When the event is processed on Ethereum, `$constant(oracle)` resolves to `0x54586bE62E3c3580375aE3723C145253060Ca0C2` * When processed on Arbitrum, it resolves to `0xbDdE4E4429c6Ef916d2633A2c80E0F6D0F893C44` * Simple constants (like `fee_recipient`) resolve to the same value on all networks ##### Where Constants Can Be Used Constants work in: * **View call contract addresses**: `$call($constant(oracle), "getPrice()")` * **View call arguments**: `$call($contract, "allowance(address)", $constant(fee_recipient))` * **Direct values**: `value: $constant(default_amount)` * **Where clauses**: `where: { recipient: $constant(fee_recipient) }` :::tip[When to Use Constants] Constants are ideal for: * **Oracle addresses** that differ per network * **Protocol contract addresses** (e.g., Uniswap 
router, Aave pool) * **Fee recipients** or treasury addresses * **Configuration values** shared across multiple tables * **Eliminating duplicate contract definitions** - define one contract with network-scoped constants instead of duplicating for each network ::: :::warning[Constant Resolution] If a network-scoped constant doesn't have a value for a specific network, the operation will fail. Always ensure you define values for all networks you're indexing. ::: *** ### Condition Expressions Use the `if:` field to filter which events trigger operations. #### Comparison Operators | Operator | Meaning | Example | | -------- | ---------------- | -------------------- | | `==` | Equal | `$from == 0x0000...` | | `!=` | Not equal | `$to != 0x0000...` | | `>` | Greater than | `$value > 0` | | `>=` | Greater or equal | `$value >= 1000000` | | `<` | Less than | `$value < 1000000` | | `<=` | Less or equal | `$value <= 100` | :::tip[Nested Fields in Conditions] You can use dot notation for nested tuple/struct fields in conditions too: ```yaml if: "$order.amount > 0 && $order.maker != 0x0000000000000000000000000000000000000000" ``` ::: #### Logical Operators | Operator | Meaning | Example | | -------- | ------- | ------------------------------------------ | | `&&` | AND | `$value > 0 && $from != 0x0000...` | | `\|\|` | OR | `$from == 0x0000... \|\| $to == 0x0000...` | | `!` | NOT | `!($paused == true)` | #### NOT Operator The `!` operator negates an expression. Use it to invert the result of a condition or group of conditions: ```yaml # Skip if paused if: "!($paused == true)" # Skip if either frozen or paused if: "!($frozen == true || $paused == true)" # Only process if NOT a mint AND NOT a burn if: "!($from == 0x0000000000000000000000000000000000000000) && !($to == 0x0000000000000000000000000000000000000000)" ``` :::tip[Parentheses Required] When using `!`, wrap the expression in parentheses: `!($condition)` not `!$condition`. 
::: #### Event vs Table References | Syntax | Meaning | When to Use | | ---------- | ---------------------- | --------------------------- | | `$value` | Incoming event value | Compare event data | | `@balance` | Current database value | Compare with existing state | **Example: Only update if the new value exceeds the current balance** ```yaml if: "$value > @balance" ``` This is powerful for: * High water marks (only store if higher) * Conditional updates (only update if changed) * Preventing stale data overwrites :::info[Performance] Conditions with `@` table references are pushed to SQL (`WHERE EXCLUDED.value > table.balance`), so the database handles the comparison efficiently. ::: *** ### Cron Triggers (Scheduled Operations) In addition to event-driven operations, you can trigger table operations on a **schedule** using cron. This is perfect for: * **Periodic data fetching** - Poll on-chain state at regular intervals * **Price feeds** - Update prices from oracles every few seconds/minutes * **Snapshots** - Record state at fixed intervals * **Heartbeat data** - Maintain up-to-date records even when no events occur Tables can have `events`, `cron`, or **both** - giving you maximum flexibility. 
#### Basic Cron Configuration ```yaml tables: - name: eth_price columns: - name: id type: string - name: price type: int256 cron: // [!code focus] - interval: 5s # Run every 5 seconds // [!code focus] operations: // [!code focus] - type: upsert // [!code focus] where: // [!code focus] id: "eth-usd" // [!code focus] set: // [!code focus] - column: price // [!code focus] action: set // [!code focus] value: $call($contract, "latestAnswer()") // [!code focus] ``` #### Schedule Formats | Format | Example | Description | | --------------- | ----------------------------- | -------------------------------------- | | Simple interval | `5s`, `30s`, `5m`, `1h`, `1d` | Fixed time intervals | | Cron expression | `"*/5 * * * *"` | Standard cron syntax (every 5 minutes) | **Simple Intervals:** * `s` = seconds (e.g., `30s` = every 30 seconds) * `m` = minutes (e.g., `5m` = every 5 minutes) * `h` = hours (e.g., `1h` = every hour) * `d` = days (e.g., `1d` = every day) **Cron Expressions** follow standard cron syntax: ``` β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ minute (0-59) β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ hour (0-23) β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of month (1-31) β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ month (1-12) β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ day of week (0-6, Sunday=0) β”‚ β”‚ β”‚ β”‚ β”‚ * * * * * ``` Use `interval` **or** `schedule`, not both: ```yaml cron: # Option 1: Simple interval - interval: 5m operations: [...] # Option 2: Cron expression - schedule: "0 * * * *" # Every hour at minute 0 operations: [...] 
``` #### Available Variables in Cron Operations Since cron operations don't have event context, only these variables are available: | Variable | Description | | ------------------------ | ----------------------------------------------- | | `$call(...)` | View function calls (same syntax as events) | | `$contract` | Contract address from contract details | | `$rindexer_block_number` | Latest block number at execution time | | `$rindexer_timestamp` | Current timestamp | | Literals | String/number values (e.g., `"eth-usd"`, `100`) | :::warning[No Event Fields in Cron] Event fields like `$from`, `$to`, `$value`, etc. are **NOT available** in cron operations. Cron runs on a schedule, not in response to events, so there's no event data to reference. ::: #### Combining Events and Cron Tables can have both event triggers and cron triggers. This is useful when you want to: * Update on events (immediate reaction) * Also update periodically (ensure freshness) ```yaml tables: - name: token_state columns: - name: holder - name: balance type: uint256 default: "0" - name: last_checked_balance type: uint256 default: "0" # Update balance on Transfer events // [!code focus] events: // [!code focus] - event: Transfer // [!code focus] operations: // [!code focus] - type: upsert // [!code focus] where: // [!code focus] holder: $to // [!code focus] set: // [!code focus] - column: balance // [!code focus] action: add // [!code focus] value: $value // [!code focus] # Also periodically verify balance via view call // [!code focus] cron: // [!code focus] - interval: 1h // [!code focus] operations: // [!code focus] - type: upsert // [!code focus] where: // [!code focus] holder: "0xKnownWhaleAddress" // [!code focus] set: // [!code focus] - column: last_checked_balance // [!code focus] action: set // [!code focus] value: $call($contract, "balanceOf(address)", "0xKnownWhaleAddress") // [!code focus] ``` :::tip[Insert vs Upsert for Cron] * Use `global: true` with `upsert` for single-row tables 
(current state) * Use `insert` without `global` for time-series/history tables (new row each time) * Insert tables automatically get an auto-incrementing `rindexer_id` as primary key ::: #### Historical Cron Sync Just like event indexing supports replaying historical blocks, cron triggers can also sync historical data. This is useful for: * **Building historical price snapshots** - Replay oracle prices at past blocks * **Backfilling time-series data** - Generate data points at regular block intervals from the past * **Reconstructing historical state** - Capture on-chain state at specific historical moments :::warning[Paid RPC Node Strongly Recommended] Historical cron sync is **very RPC-intensive** - it makes one or more view calls **per block** in the range. For example, syncing 100,000 blocks with `block_interval: 1` means 100,000+ RPC calls. **Free public nodes will be extremely slow** due to rate limiting. rindexer includes adaptive rate limiting that scales down for free nodes, but expect significantly reduced throughput. For production historical cron sync, use a **paid RPC provider** (Alchemy, QuickNode, Infura, etc.) to achieve \~300+ blocks/second instead of \~20 blocks/second on free nodes. 
::: ##### Configuration Add `start_block`, `end_block`, and optionally `block_interval` to your cron configuration: ```yaml tables: - name: eth_price_history columns: - name: price type: int256 cron: - schedule: "*/5 * * * *" # Live schedule (used after historical sync) start_block: 18000000 # Start historical sync from this block // [!code focus] end_block: 19000000 # Stop at this block (optional) // [!code focus] block_interval: 100 # Execute every 100 blocks (optional) // [!code focus] operations: - type: insert set: - column: price action: set value: $call($contract, "latestAnswer()") ``` ##### Fields | Field | Required | Description | | ---------------- | -------------------- | ------------------------------------------------------------------------ | | `start_block` | Yes (for historical) | Block number to begin historical sync from | | `end_block` | No | Block number to stop at. If omitted, syncs to latest then continues live | | `block_interval` | No | How many blocks between each execution. Default: `1` (every block) | ##### Behavior 1. **Historical sync first**: If `start_block` is specified, rindexer replays the cron operations from `start_block` forward, executing at each `block_interval` step. 2. **Database state tracking**: Progress is saved to the database, so if you restart, it resumes from where it left off (just like event indexing). 3. **After historical sync completes**: * If `end_block` is specified β†’ The cron **stops completely** (no live mode) * If `end_block` is omitted β†’ Switches to **live mode** using the `interval` or `schedule` ##### Example: Backfill Oracle Prices Every 100 Blocks ```yaml cron: - interval: 15s # Live: poll every 15 seconds start_block: 24184625 # Historical: start from this block block_interval: 100 # Historical: snapshot every 100 blocks network: ethereum operations: - type: insert set: - column: price action: set value: $call($contract, "latestAnswer()") ``` This will: 1. 
Insert a price snapshot at blocks 24184625, 24184725, 24184825, ... up to the latest block 2. Then switch to live mode, inserting every 15 seconds Full example: ```yaml name: Chainlink description: Demonstrates cron historical sync - replaying cron operations at past blocks repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: 'RPC' storage: postgres: enabled: true drop_each_run: true contracts: - name: ETHUSDFeed details: - network: ethereum address: "0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419" abi: ./abis/ChainlinkAggregator.json tables: - name: eth_price_historical_only columns: - name: price type: int256 cron: - interval: 15s # Live mode: poll every 15 seconds start_block: 24184625 block_interval: 100 # Execute every 100 blocks network: ethereum operations: - type: insert set: - column: price action: set value: $call($contract, "latestAnswer()") ``` ##### Example: Historical-Only Sync (No Live Mode) For one-time backfills that shouldn't continue running: ```yaml cron: - start_block: 24184625 # Start block end_block: 24284625 # Stop at this block (required for historical-only) // [!code focus] block_interval: 100 # Every 100 blocks network: ethereum operations: - type: insert set: - column: price action: set value: $call($contract, "latestAnswer()") ``` When `end_block` is specified, the cron task stops after reaching that block and does **not** continue with live polling. :::warning[No Schedule Required for Historical-Only] If you specify `start_block` with `end_block` (historical-only mode), you don't need `interval` or `schedule`. The cron will run through the block range and then stop. 
::: ##### Performance: Adaptive Rate Limiting rindexer automatically adapts to your RPC node's capabilities: | Node Type | Expected Speed | Behavior | | ------------------- | ---------------- | -------------------------------------------------------- | | **Paid/Enterprise** | \~300 blocks/sec | Scales up to 100 concurrent requests, 1000-block batches | | **Free Public** | \~20 blocks/sec | Automatically scales down when rate limited | The system starts conservatively and scales up aggressively when the RPC responds quickly. If rate limiting is detected (slow responses), it scales back down and waits before retrying. :::tip[block_interval for Large Ranges] For very large historical ranges, use `block_interval` to reduce RPC calls: * `block_interval: 1` = Every block (most data, slowest) * `block_interval: 100` = Every 100 blocks (good balance) * `block_interval: 1000` = Every 1000 blocks (fastest, less granular) ::: :::tip[When to Use Cron vs Events] | Use Case | Recommendation | | -------------------------------- | ----------------------------------------------------------------------- | | Token balances, ownership | **Events** - Transfer events capture all changes | | Price feeds from oracles | **Cron** - Poll at regular intervals | | Protocol state snapshots | **Cron** - Periodic recording | | Real-time metrics | **Both** - Events for immediate updates, cron for periodic verification | | Data that changes without events | **Cron** - Only way to capture non-event state changes | ::: *** ### Auto-Injected Columns Every custom table automatically includes these columns - you don't need to define them: | Column | Type | Description | | --------------------------- | -------------------- | ------------------------------------------------------------ | | `network` | VARCHAR | Network name (omitted if `cross_chain: true`) | | `rindexer_sequence_id` | NUMERIC NOT NULL | Unique ID for deterministic ordering | | `rindexer_block_number` | BIGINT NOT NULL | Block number of 
the event | | `rindexer_block_timestamp` | TIMESTAMPTZ NOT NULL | Block timestamp of the event (**only if `timestamp: true`**) | | `rindexer_tx_hash` | CHAR(66) NOT NULL | Transaction hash of the event | | `rindexer_block_hash` | CHAR(66) NOT NULL | Block hash of the event | | `rindexer_contract_address` | CHAR(42) NOT NULL | Contract that emitted the event | These let you track when and where each event originated. :::info[Optional Block Timestamp] The `rindexer_block_timestamp` column is only created when you set `timestamp: true` on the table. See [timestamp](#timestamp) for details on when to enable this and its performance implications. ::: :::tip[Why the `rindexer_` prefix?] All auto-injected columns are prefixed with `rindexer_` to avoid conflicts with your own column names. You can safely define columns like `tx_hash`, `contract_address`, etc. without any clashes. ::: *** ### Real-World Examples #### NFT Ownership (ERC721) Track who owns each NFT: ```yaml tables: - name: ownership columns: - name: token_id - name: owner events: - event: Transfer operations: - type: upsert where: token_id: $tokenId set: - column: owner action: set value: $to ``` **Result:** An `ownership` table where you can instantly look up who owns any NFT. *** #### ERC20 Allowances (Approvals) Track how much each spender is approved to spend on behalf of each owner: ```yaml tables: - name: allowances columns: - name: owner - name: spender - name: amount default: "0" events: - event: Approval operations: - type: upsert where: owner: $owner spender: $spender set: - column: amount action: set # Approvals replace, not add value: $value ``` **Result:** An `allowances` table with one row per (owner, spender) pair. Query any approval instantly. :::tip[Approvals Replace, Not Add] Unlike balances where transfers add/subtract, approvals **replace** the previous value. Use `action: set` instead of `action: add`. 
::: *** #### ERC1155 Multi-Token Balances (Compound Primary Keys) ERC1155 tokens require tracking balances per **(holder, token\_id)** combination - a compound primary key: ```yaml tables: - name: balances columns: - name: holder - name: token_id - name: balance default: "0" events: # Handle single transfers - event: TransferSingle operations: # Credit recipient - type: upsert where: holder: $to token_id: $id # Compound key: (holder, token_id) if: "$to != 0x0000000000000000000000000000000000000000" set: - column: balance action: add value: $value # Debit sender - type: upsert where: holder: $from token_id: $id if: "$from != 0x0000000000000000000000000000000000000000" set: - column: balance action: subtract value: $value # Handle batch transfers using iterate - event: TransferBatch iterate: - "$ids as token_id" - "$values as amount" operations: # Credit recipient for each token - type: upsert where: holder: $to token_id: $token_id if: "$to != 0x0000000000000000000000000000000000000000" set: - column: balance action: add value: $amount # Debit sender for each token - type: upsert where: holder: $from token_id: $token_id if: "$from != 0x0000000000000000000000000000000000000000" set: - column: balance action: subtract value: $amount ``` **Result:** A `balances` table with one row per (holder, token\_id) pair. :::tip[Compound Primary Keys] Any columns in the `where` clause become part of the primary key. 
Use multiple columns for: * **ERC1155**: (holder, token\_id) * **LP Positions**: (user, pool\_address) * **Staking by Pool**: (staker, pool\_id) * **Votes by Proposal**: (voter, proposal\_id) ::: *** #### Token Supply Tracking (Mints & Burns) Track total supply, minted, and burned amounts with a global table: ```yaml tables: - name: supply global: true # One row per network columns: - name: total_supply type: uint256 default: "0" - name: total_minted type: uint256 default: "0" - name: total_burned type: uint256 default: "0" events: - event: Transfer operations: # Mint (from zero address) - type: upsert if: "$from == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: add value: $value - column: total_minted action: add value: $value # Burn (to zero address) - type: upsert if: "$to == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: subtract value: $value - column: total_burned action: add value: $value ``` **Result:** A single row per network with live supply metrics. 
*** #### Cross-Chain Aggregation Track total balance across Ethereum, Arbitrum, Optimism, and more: ```yaml name: CrossChainUSDC project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co - name: arbitrum chain_id: 42161 rpc: https://arbitrum.gateway.tenderly.co - name: optimism chain_id: 10 rpc: https://optimism.gateway.tenderly.co contracts: - name: USDC details: - network: ethereum address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" start_block: 18600000 - network: arbitrum address: "0xaf88d065e77c8cC2239327C5EDb3A432268e5831" start_block: 150000000 - network: optimism address: "0x0b2c639c533813f4aa9d7837caf62653d097ff85" start_block: 112000000 abi: ./abis/ERC20.json tables: - name: total_balances cross_chain: true # Aggregate across ALL networks columns: - name: holder - name: balance default: "0" events: - event: Transfer operations: - type: upsert where: holder: $to if: "$to != 0x0000000000000000000000000000000000000000" set: - column: balance action: add value: $value - type: upsert where: holder: $from if: "$from != 0x0000000000000000000000000000000000000000" set: - column: balance action: subtract value: $value ``` **Result:** One row per holder with their **total** balance across all chains. 
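Because this table lives in ordinary Postgres storage, the aggregated result can also be inspected directly with SQL. A minimal sketch - the schema-qualified table name below is a placeholder, since rindexer derives the real name from your project and table names, so check your database for the exact path:

```sql
-- Top 10 holders by combined balance across all indexed chains.
-- "crosschainusdc.total_balances" is a placeholder name; look up the
-- schema rindexer actually created in your database.
SELECT holder, balance
FROM crosschainusdc.total_balances
ORDER BY balance DESC
LIMIT 10;
```

Note that because `cross_chain: true` omits the auto-injected `network` column, each holder appears exactly once, already summed across networks.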
*** #### DEX Pool State (Uniswap V2/V3 Style) Track pool reserves, liquidity, and trading metrics: ```yaml tables: # Pool state - reserves and liquidity - name: pool_state global: true # One row per pool per network columns: - name: reserve0 type: uint256 default: "0" - name: reserve1 type: uint256 default: "0" - name: total_supply type: uint256 default: "0" events: - event: Sync operations: - type: upsert set: - column: reserve0 action: set value: $reserve0 - column: reserve1 action: set value: $reserve1 - event: Transfer # LP token mints/burns operations: # Mint (from zero address) - type: upsert if: "$from == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: add value: $value # Burn (to zero address) - type: upsert if: "$to == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: subtract value: $value # Trading metrics - name: trading_metrics global: true columns: - name: swap_count type: uint64 default: "0" - name: volume0 type: uint256 default: "0" - name: volume1 type: uint256 default: "0" events: - event: Swap operations: - type: upsert set: - column: swap_count action: increment - column: volume0 action: add value: $amount0In - column: volume1 action: add value: $amount1In ``` **Result:** Complete pool state with reserves, supply, and volume metrics. :::tip[Factory Indexing for Multiple Pools] To index all pools from a DEX factory (Uniswap, SushiSwap, etc.), see [Factory Indexing](#factory-indexing-with-tables). 
::: *** #### Governance Votes Track votes per proposal with compound primary keys: ```yaml tables: - name: votes columns: - name: proposal_id - name: voter - name: support # 0 = against, 1 = for, 2 = abstain - name: voting_power default: "0" events: - event: VoteCast operations: - type: upsert where: proposal_id: $proposalId voter: $voter # Compound key: (proposal_id, voter) set: - column: support action: set value: $support - column: voting_power action: set value: $votes - name: proposal_totals columns: - name: proposal_id - name: for_votes default: "0" - name: against_votes default: "0" - name: abstain_votes default: "0" events: - event: VoteCast operations: - type: upsert where: proposal_id: $proposalId if: "$support == 1" set: - column: for_votes action: add value: $votes - type: upsert where: proposal_id: $proposalId if: "$support == 0" set: - column: against_votes action: add value: $votes - type: upsert where: proposal_id: $proposalId if: "$support == 2" set: - column: abstain_votes action: add value: $votes ``` **Result:** Two tables - individual votes by (proposal, voter) and aggregated totals per proposal. 
*** #### Price High/Low Tracker Track the highest and lowest prices using `max` and `min` actions: ```yaml tables: - name: price_extremes global: true columns: - name: highest_price type: uint256 default: "0" - name: lowest_price type: uint256 default: "115792089237316195423570985008687907853269984665640564039457584007913129639935" # uint256 max events: - event: PriceUpdate operations: - type: upsert set: - column: highest_price action: max value: $price - column: lowest_price action: min value: $price ``` *** #### Chainlink Price Oracle (Cron) Track ETH/USD price from Chainlink with periodic updates: ```yaml contracts: - name: ETHUSDFeed details: - network: ethereum address: "0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419" abi: ./abis/ChainlinkAggregator.json tables: # Global table - single row updated via upsert - name: eth_price global: true columns: - name: price type: int256 - name: decimals type: uint8 cron: - interval: 5s network: ethereum operations: - type: upsert set: - column: price action: set value: $call($contract, "latestAnswer()") - column: decimals action: set value: $call($contract, "decimals()") # History table - new row inserted each time - name: eth_price_history columns: - name: price type: int256 cron: - interval: 5s network: ethereum operations: - type: insert # Insert creates new rows - no where clause set: - column: price action: set value: $call($contract, "latestAnswer()") ``` *** #### Registry with Delete (Whitelist/Blacklist) Track active entries in a registry where items can be added and removed: ```yaml tables: - name: verified_tokens columns: - name: token_address - name: name - name: symbol events: - event: TokenAdded operations: - type: upsert where: token_address: $token set: - column: name action: set value: $name - column: symbol action: set value: $symbol - event: TokenRemoved operations: - type: delete # Remove from registry entirely where: token_address: $token ``` **Result:** Only currently verified tokens exist in the table. 
Removed tokens are deleted, not marked inactive. :::tip[When to Use Delete] Use `delete` when you want rows **completely removed** from the database: * **Registries/Whitelists**: Token lists, verified contracts, approved operators * **Active positions only**: Remove closed positions instead of marking them closed * **Membership lists**: DAO members, stakers, liquidity providers If you need historical data, use `upsert` with a status column instead. ::: *** #### Factory Indexing with Tables Many protocols deploy contracts dynamically - Uniswap creates pools, Aave deploys markets, lending protocols spin up vaults. **Factory indexing** discovers these contracts automatically, and **Tables** can aggregate their data. ```yaml name: Uniswap project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: # The factory contract - discovers pool addresses - name: Factory details: - network: ethereum address: "0x1F98431c8aD98523631AE4a59f267346ea31F984" start_block: 21000000 abi: ./abis/UniswapV3Factory.json include_events: - PoolCreated # Factory-indexed pools with custom tables - name: Pool details: - network: ethereum start_block: 21000000 factory: name: Factory address: "0x1F98431c8aD98523631AE4a59f267346ea31F984" abi: ./abis/UniswapV3Factory.json event_name: PoolCreated input_name: "pool" # Field containing the new pool address abi: ./abis/UniswapV3Pool.json tables: # Aggregate metrics per pool - name: pool_metrics columns: - name: pool_address - name: swap_count type: uint64 default: "0" - name: total_volume_token0 type: int256 default: "0" - name: total_volume_token1 type: int256 default: "0" events: - event: Swap operations: - type: upsert where: pool_address: $rindexer_contract_address # The pool that emitted the event set: - column: swap_count action: increment - column: total_volume_token0 action: add value: $amount0 - column: total_volume_token1 action: add value: $amount1 ``` **How 
it works:** 1. The factory contract (`Factory`) is indexed first 2. When `PoolCreated` events are found, rindexer automatically starts indexing those pool addresses 3. `Swap` events from all discovered pools update the custom tables 4. `$rindexer_contract_address` references the specific pool that emitted each event **Result:** Aggregated metrics for every Uniswap V3 pool, discovered automatically. :::info[Factory Indexing Details] For full factory indexing configuration options (multiple input fields, token indexing, etc.), see the [Factory Indexing documentation](/docs/start-building/yaml-config/contracts#factory-indexing). ::: *** #### Factory Cron Operations Cron operations also work with factory-indexed contracts. This is powerful for periodically polling state from all discovered contracts - like getting liquidity from all Uniswap pools or prices from all oracle instances. ```yaml name: Uniswap project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: # Factory-indexed pools with cron to poll liquidity - name: Pool details: - network: ethereum start_block: 24152749 end_block: 24153452 factory: name: Factory address: "0x1F98431c8aD98523631AE4a59f267346ea31F984" abi: ./abis/UniswapV3Factory.json event_name: PoolCreated input_name: "pool" abi: ./abis/UniswapV3Pool.json # Factory contracts require at least one event for the dependency system // [!code focus] # (even for cron-only tables). This event doesn't need to be used. 
// [!code focus] include_events: // [!code focus] - Swap // [!code focus] tables: # Track liquidity for each discovered pool using cron - name: pool_liquidity columns: - name: pool_address type: address - name: liquidity type: uint128 - name: tick type: int32 cron: - interval: 30s start_block: 24152749 end_block: 24153452 network: ethereum operations: - type: upsert where: pool_address: $contract # $contract iterates over ALL discovered pools // [!code focus] set: - column: liquidity action: set value: $call($contract, "liquidity()") - column: tick action: set value: $call($contract, "slot0()")[1] # Access tuple element by index // [!code focus] ``` **How factory cron works:** 1. Factory event indexing discovers pool addresses first (from `PoolCreated` events) 2. Cron scheduler waits until addresses are discovered before starting 3. On each cron tick, `$contract` iterates over **all** discovered addresses 4. View calls are made to each pool, and results are stored per pool 5. **Birth block optimization**: Pools are only queried for blocks >= their creation block 6. New pools discovered later are automatically included on subsequent ticks **Accessing tuple return values:** Many Solidity functions return multiple values. Use `[index]` to access specific elements: ```yaml # slot0() returns (sqrtPriceX96, tick, observationIndex, ...) value: $call($contract, "slot0()")[0] # Get sqrtPriceX96 (position 0) value: $call($contract, "slot0()")[1] # Get tick (position 1) # getReserves() returns (reserve0, reserve1, blockTimestampLast) value: $call($contract, "getReserves()")[0] # Get reserve0 value: $call($contract, "getReserves()")[1] # Get reserve1 ``` :::warning[Factory Contracts Require include_events] Even for cron-only tables, factory contracts must specify at least one event in `include_events`. This is required by the dependency system that ensures factory events are processed before child contract operations. 
The event doesn't need to be used in your tables - it just needs to exist. ::: :::warning[RPC Intensive] Factory cron with many discovered addresses (e.g., 1000+ pools) makes many RPC calls per tick. Use a paid RPC node and consider using `block_interval` to reduce frequency for historical sync. ::: *** ### Putting It All Together A complete indexer with multiple tables: ```yaml name: DeFiDashboard project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co storage: postgres: enabled: true contracts: - name: Token details: - network: ethereum address: "0x..." start_block: 18600000 abi: ./abis/ERC20.json tables: # Table 1: Individual balances - name: balances columns: - name: holder - name: balance default: "0" events: - event: Transfer operations: - type: upsert where: holder: $to if: "$to != 0x0000000000000000000000000000000000000000" set: - column: balance action: add value: $value - type: upsert where: holder: $from if: "$from != 0x0000000000000000000000000000000000000000" set: - column: balance action: subtract value: $value # Table 2: Global metrics - name: metrics global: true columns: - name: total_supply type: uint256 default: "0" - name: transfer_count type: uint256 default: "0" events: - event: Transfer operations: # Track mints - type: upsert if: "$from == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: add value: $value # Track burns - type: upsert if: "$to == 0x0000000000000000000000000000000000000000" set: - column: total_supply action: subtract value: $value # Count all transfers - type: upsert set: - column: transfer_count action: increment ``` *** ### Querying Your Tables with GraphQL Once you define custom tables, rindexer **automatically generates a full GraphQL API** to query them. No extra configuration needed - just enable GraphQL and your tables are instantly queryable. 
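The generated endpoint is plain GraphQL over HTTP, so any HTTP client can query it once the server is running. The sketch below builds a request body for the `allBalances` query shown later in this section; the helper name and the exact selection set are illustrative:

```javascript
// Build a GraphQL request body for the generated `allBalances` query.
// The query name follows rindexer's table-name convention; this helper
// itself is illustrative, not part of rindexer.
function buildBalancesQuery(first) {
  return JSON.stringify({
    query: `query { allBalances(first: ${first}, orderBy: BALANCE_DESC) { nodes { holder balance } } }`,
  });
}

// With the indexer running (default endpoint shown later in this section):
// const res = await fetch("http://localhost:3001/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildBalancesQuery(100),
// }).then((r) => r.json());
```

Any GraphQL client library works the same way; there is nothing rindexer-specific about the transport.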
#### Enable GraphQL ```yaml [rindexer.yaml] storage: postgres: enabled: true graphql: enabled: true ``` #### Start the GraphQL Server ```bash rindexer start all # Starts both indexer and GraphQL server ``` GraphQL will be available at `http://localhost:3001/graphql` with a playground at `http://localhost:3001/playground`. #### Example Queries For a `balances` table, rindexer automatically generates queries like: ```graphql # Get all balances query { allBalances(first: 100, orderBy: BALANCE_DESC) { nodes { holder balance network lastUpdatedBlock lastUpdatedAt } pageInfo { hasNextPage endCursor } } } # Get a specific holder's balance query { allBalances(condition: { holder: "0x..." }) { nodes { holder balance network } } } # Filter by network query { allBalances(condition: { network: "ethereum" }, first: 50) { nodes { holder balance } } } ``` #### What Gets Generated For each custom table, you get: | Query | Description | | ----------------- | ------------------------------------------------------- | | `all{TableName}` | Query all rows with filtering, pagination, and ordering | | `{tableName}ById` | Get a specific row by primary key | All your columns become queryable fields, including the auto-injected metadata columns (`network`, `lastUpdatedBlock`, `lastUpdatedAt`, `txHash`, etc.). :::tip[Full API Documentation] For complete GraphQL API details including filtering, pagination, and ordering options, see the [GraphQL API documentation](/docs/accessing-data/graphql). ::: *** ### Schema Migration When you modify your custom tables in YAML (add columns, remove columns, change primary keys), rindexer **automatically detects and handles schema changes** when you run `rindexer start`. :::tip[PostgreSQL & ClickHouse Support] Schema migration works with both PostgreSQL and ClickHouse. For ClickHouse, "primary key" changes refer to `ORDER BY` clause changes (which serve the same purpose in ClickHouse's ReplacingMergeTree engine). 
::: #### How It Works On startup, rindexer compares your YAML table definitions against the actual database schema and: | Change Type | Behavior | | ----------------------- | -------------------------------------------------------- | | **New column added** | Auto-applies the change (adds column with default value) | | **Column removed** | Prompts you to confirm deletion | | **Primary key changed** | Prompts you to confirm (may fail if duplicates exist) | | **Column type changed** | Warns you - requires manual migration | #### Adding New Columns New columns are automatically added with their default value from YAML: ```yaml tables: - name: balances columns: - name: holder - name: balance default: "0" - name: last_activity # NEW: will be auto-added default: "0" ``` When you run `rindexer start`, the column is added: ``` [rindexer] Schema changes detected: βœ“ Adding column 'last_activity' (NUMERIC) DEFAULT 0 to table 'my_indexer_usdc.balances' β†’ Column added successfully ``` Existing rows will have the default value (or `NULL` if no default specified). #### Removing Columns If you remove a column from your YAML, rindexer will prompt before deleting: ``` [rindexer] Schema changes detected: ? Column 'old_field' exists in database but not in YAML for table 'my_indexer_usdc.balances' Delete this column? This will permanently remove data [y/N]: ``` * Press `y` to delete the column and its data * Press `n` (or Enter) to keep the column - rindexer will ignore it during indexing #### Changing Primary Keys If you modify the `where` clause (which determines the primary key), rindexer will prompt: ``` [rindexer] Schema changes detected: ? Primary key change detected for table 'my_indexer_usdc.balances': Current: (network, holder) New: (network, holder, token_id) Change primary key? This may fail if data has duplicates [y/N]: ``` :::warning[Duplicate Data] Primary key changes may fail if your existing data has duplicate values for the new key columns. 
You may need to clean up data manually before the change can be applied. ::: #### Type Changes (Manual Migration Required) If you change a column's type, rindexer will warn you but cannot automatically migrate: ``` [rindexer] Schema changes detected: ! Column type change detected for 'amount' in table 'my_indexer_usdc.balances': Current: bigint New: numeric Type changes require manual migration. Please backup your data and handle this manually. ``` For type changes, you'll need to: 1. Backup your data 2. Drop and recreate the table, or 3. Manually ALTER the column type with appropriate casting #### CI/CD Automation with `--yes` For automated deployments where interactive prompts aren't possible, use the `--yes` flag: ```bash rindexer start all --yes ``` With `--yes`: * New columns are still auto-added (same as normal) * Column deletions are auto-confirmed * Primary key changes are auto-confirmed * Type change warnings are still shown (no auto-fix) :::warning[Use with Caution] The `--yes` flag will automatically delete columns and change primary keys. Make sure your CI/CD pipeline has proper safeguards and backups before using this flag. ::: #### Best Practices 1. **Test schema changes locally first** - Run `rindexer start` locally to see what changes will be applied 2. **Backup before primary key changes** - PK changes can fail and may require manual cleanup 3. **Avoid type changes when possible** - If you need a different type, consider adding a new column instead 4. **Use defaults for new columns** - Adding `default: "0"` ensures existing rows have valid values *** ### Tables vs Raw Event Logging Custom Tables work **independently** from raw event logging (`include_events`): | Config | What Happens | | ---------------------------------- | ------------------------------------------------------------------ | | `tables` only | Only custom tables are created and populated. No raw event tables. 
| | `include_events` only | Only raw event tables are created (traditional logging). | | Both `tables` and `include_events` | Both custom tables AND raw event tables are created. | :::tip[Recommended: Tables Only] For most use cases, just use `tables` without `include_events`. This gives you exactly the data you need without wasting storage on raw event logs you won't use. ::: **Example: Tables only (no raw event storage)** ```yaml contracts: - name: USDC abi: ./abis/ERC20.json details: - network: ethereum address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" start_block: 18600000 # No include_events = no raw event table tables: - name: balances # ... table definition ``` **Example: Both tables AND raw events** ```yaml contracts: - name: USDC abi: ./abis/ERC20.json details: - network: ethereum address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" start_block: 18600000 include_events: - Transfer # Creates raw transfer table tables: - name: balances # ... table definition ``` *** ### Validation Errors rindexer validates your configuration at startup and provides clear error messages: | Error | Meaning | | -------------------------------------- | -------------------------------------------------------------------- | | `Event 'X' not found in ABI` | The event name in `tables.events` doesn't exist in the contract ABI | | `Field '$X' not found in event ABI` | The event field you referenced (e.g., `$foo`) doesn't exist | | `Column 'X' not found in table fields` | The column name in `set` or `where` doesn't match any defined column | | `Invalid condition expression` | Syntax error in your `if:` condition | All errors include the table name, event name, and contract name to help you locate the issue. *** ### For Rust Project Users :::warning[Advanced Users] If you're using a **Rust project**, you're an advanced user who writes custom indexing logic. Custom Tables are designed for **no-code projects**. 
In Rust, you have full control to build your own database schemas and update logic. ::: *** ### Next Steps * [YAML Config Reference](/docs/start-building/yaml-config) - Full configuration options * [Running Your Indexer](/docs/start-building/running) - Start indexing * [GraphQL API](/docs/accessing-data/graphql) - Query your data ## Cloudflare Queues :::info rindexer streams can be used without any other storage providers. They can also be used alongside storage providers. ::: rindexer allows you to stream blockchain events to [Cloudflare Queues](https://developers.cloudflare.com/queues/), enabling real-time processing of blockchain data in your Cloudflare Workers. The integration provides guaranteed message delivery, global distribution, and seamless interoperability with your Cloudflare-based backend infrastructure. This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. ### Configuration with rindexer Cloudflare Queues configuration requires your Cloudflare API token, account ID, and queue definitions.
### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} // [!code focus] account_id: ${CLOUDFLARE_ACCOUNT_ID} // [!code focus] queues: // [!code focus] - queue_id: blockchain-transfers // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} // [!code focus] account_id: ${CLOUDFLARE_ACCOUNT_ID} // [!code focus] queues: // [!code focus] - queue_id: native-transfers // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: NativeTransfer // [!code focus] ``` ::: ### Message Format The message sent to your Cloudflare Queue is already decoded and parsed into a JSON object with a `message_id` field added for tracking. 
* `message_id` - Unique identifier for this message from rindexer * `event_name` - The name of the event * `event_signature_hash` - The event signature hash (keccak256 hash of the event signature) * `body` > `event_data` - Array of decoded event data with transaction information * `network` - The network the event was emitted on For example a transfer event would look like: ```json { "message_id": "rindexer_stream__blockchain-transfers-transfer-chunk-0", "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "body": { "event_data": [ { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "tx_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } } ] }, "network": "ethereum" } ``` ### Cloudflare Worker Consumer Here's an example of a Cloudflare Worker that consumes messages from your queues: ```javascript export default { async queue(batch, env, ctx) { for (const message of batch.messages) { try { for (const eventInfo of message.body.event_data) { console.log(eventInfo); // Result: // { // "from": "0xf081470f5c6fbccf48cc4e5b82dd926409dcdd67", // "to": "0x58bd88f0c826bdc2d8adaf66abb66bb99d961a3d", // "transaction_information": { // "address": "0xae78736cd615f374d3085123a210448e74fc6393", // "block_hash": "0x58d7fd9aab0a4023f812dd0919d70f63ffa9a92ee26d83d34b04fbb000b74e9b", // "block_timestamp": "0x6567a397", // "log_index": "0x1b0", // "network": "ethereum", // "transaction_hash": "0x7cef018bc6090ac7d5a73d6ffe1975c6d52353cad315de9b7c771db6648c7a44", // "block_number": 18679752, // "chain_id": 1, // "transaction_index": 148 // }, // 
"value": "45336822342319436" // } } message.ack(); } catch (error) { console.error('Error processing message:', error); message.retry(); } } } }; ``` ### YAML Config #### api\_token Your Cloudflare API token with permissions to access Queues API. :::info We strongly recommend using environment variables for your API token. ::: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} // [!code focus] ``` #### account\_id Your Cloudflare Account ID where your queues are created. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} // [!code focus] ``` #### queues This is an array allowing you to configure multiple queues with different settings. ##### queue\_id The ID of the Cloudflare Queue to send messages to. This queue must already exist in your Cloudflare account. You can find the queue ID in your Cloudflare dashboard or via the Queues API. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers // [!code focus] ``` #### networks This is an array of networks you want to stream to this queue. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: // [!code focus] - ethereum // [!code focus] ``` #### events This is an array of events you want to stream to this queue. ##### event\_name This is the name of the event you want to stream to this queue; it must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you can assign to the event published to this queue. It is paired with the event name and gives consumers unique discriminator keys in the event of naming conflicts.
E.g. `Transfer` (ERC20) and `Transfer` (ERC721). ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` #### conditions This accepts an array of conditions to apply to the event data before it is sent to the queue. :::info This is optional; if you do not provide any conditions, all data will be streamed. ::: You may want to filter the stream based on event data. If a field is not `indexed` on the Solidity event, it cannot be filtered at the log level; the `conditions` filter solves this by letting you filter on any field defined in your ABI. rindexer supports a special syntax for defining filters on your ABI fields: 1. `>` - greater than (numbers only) 2. `<` - less than (numbers only) 3. `=` - equals 4. `>=` - greater than or equal to (numbers only) 5. `<=` - less than or equal to (numbers only) 6. `||` - or 7. `&&` - and For example, say you only want transfer events with a `value` of at least `2000000000000000000` wei (2 rETH): ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use `=` to filter on other fields like the `from` or `to` address. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce3982663" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered at the RPC request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If a field lives inside a tuple, use dot notation to reference it. For example, say we only want events where `profileId` in the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] cloudflare_queues: // [!code focus] api_token: ${CLOUDFLARE_API_TOKEN} account_id: ${CLOUDFLARE_ACCOUNT_ID} queues: // [!code focus] - queue_id: blockchain-transfers networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` ## Streams :::info rindexer streams can be used without any other storage providers. They can also be used alongside storage providers. ::: rindexer supports streaming indexed data anywhere you want. This lets you build your own data processing solutions in any language and deliver the data to any destination, covering a huge range of use cases. Streams also support advanced filtering and conditions, which let you filter the data before it is streamed. This can all be done with no code, configured entirely in the YAML configuration file. :::info Rust projects are not yet exposed to the stream clients, but this could easily be added in the future. ::: Note that all streams are independent of each other and can be used together - if you want to use `kafka`, `webhooks`, `rabbitmq`, `sns`, `redis`, and `cloudflare_queues` at the same time, you can. 
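For example, the same contract can fan out to two providers at once by listing both under one `streams:` block. The sketch below combines `cloudflare_queues` and `kafka` (both covered on their own pages); the contract details are abbreviated and the queue/topic names are placeholders:

```yaml
contracts:
  - name: RocketPoolETH
    # ...details, abi, include_events omitted...
    streams:
      cloudflare_queues:
        api_token: ${CLOUDFLARE_API_TOKEN}
        account_id: ${CLOUDFLARE_ACCOUNT_ID}
        queues:
          - queue_id: blockchain-transfers
            networks:
              - ethereum
            events:
              - event_name: Transfer
      kafka:
        brokers:
          - ${KAFKA_BROKER_URL_1}
        acks: all
        security_protocol: PLAINTEXT
        topics:
          - topic: blockchain-transfers
            networks:
              - ethereum
            events:
              - event_name: Transfer
```

Each provider receives the same decoded events independently, so one provider failing or filtering differently does not affect the others.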
Supported stream providers: * [Webhooks](/docs/start-building/streams/webhooks) - Fire webhooks to your own APIs * [Kafka](/docs/start-building/streams/kafka) - Find out more about [Apache Kafka](https://kafka.apache.org/) * [RabbitMQ](/docs/start-building/streams/rabbitmq) - Find out more about [RabbitMQ](https://www.rabbitmq.com/) * [SNS/SQS](/docs/start-building/streams/sns) - Find out more about [Simple Notification Service](https://aws.amazon.com/sns/) and [Simple Queue Service](https://aws.amazon.com/sqs/) * [Redis Streams](/docs/start-building/streams/redis) - Find out more about [Redis Streams](https://redis.io/docs/latest/develop/data-types/streams/) * [Cloudflare Queues](/docs/start-building/streams/cloudflare-queues) - Find out more about [Cloudflare Queues](https://developers.cloudflare.com/queues/) ## Kafka :::warning Kafka streams do not work on Windows with the CLI installation - rindexer will panic if you try to use them on Windows. If you are on Windows and want to use Kafka streams, use the Docker image. ::: :::info **Feature Gate:** If you're including the rindexer crate in your Rust project, Kafka support is gated behind the `kafka` feature flag. Add it to your `Cargo.toml`: ```toml [dependencies] rindexer = { version = "*", features = ["kafka"] } ``` **Docker Images:** Kafka support is enabled by default in all official rindexer Docker images - no additional configuration needed. ::: :::info rindexer streams can be used without any other storage providers. They can also be used alongside storage providers. ::: rindexer allows you to stream any data to [Kafka](https://kafka.apache.org/). This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. Find out more about [Kafka](https://kafka.apache.org/). The rindexer Kafka integration supports both SSL and non-SSL connections. 
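Once messages arrive on a topic, consuming them is ordinary Kafka work in any language. The sketch below handles the decoded payload (its shape is documented under the Response section further down); the helper name and the `kafkajs` wiring in the comments are illustrative assumptions, not part of rindexer:

```javascript
// Summarize a rindexer Kafka message. The payload shape (event_name,
// event_data, network, transaction_information) is what rindexer publishes;
// this helper itself is just an illustration.
function summarizeTransfer(rawValue) {
  const msg = JSON.parse(rawValue);
  const { from, to, value } = msg.event_data;
  return `${msg.event_name} on ${msg.network}: ${from} -> ${to} (${value} wei)`;
}

// Example wiring with the `kafkajs` package (assumed consumer library):
// const { Kafka } = require("kafkajs");
// const consumer = new Kafka({ brokers: ["localhost:9092"] }).consumer({ groupId: "rindexer" });
// await consumer.connect();
// await consumer.subscribe({ topic: "test-topic" });
// await consumer.run({
//   eachMessage: async ({ message }) => console.log(summarizeTransfer(message.value.toString())),
// });
```

Because the payload is already decoded JSON, no ABI handling is needed on the consumer side.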
### Configuration with rindexer `kafka` property accepts an array of `topics` allowing you to split up the streams any way you wish. ### Example Kafka has to be configured to use SASL\_SSL or PLAINTEXT. You can read more about it [here](https://kafka.apache.org/documentation/#security_sasl). :::code-group ```yaml [none-ssl] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: // [!code focus] - ${KAFKA_BROKER_URL_1} // [!code focus] - ${KAFKA_BROKER_URL_2} // [!code focus] acks: all // [!code focus] security_protocol: PLAINTEXT // [!code focus] topics: // [!code focus] - topic: test-topic // [!code focus] # key is optional // [!code focus] key: my-routing-key // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [ssl] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: // [!code focus] - ${KAFKA_BROKER_URL_1} // [!code focus] - ${KAFKA_BROKER_URL_2} // [!code focus] acks: all // [!code focus] security_protocol: SASL_SSL // [!code focus] 
sasl_mechanisms: PLAIN // [!code focus] sasl_username: $ // [!code focus] sasl_password: $ // [!code focus] topics: - topic: test-topic # key is optional // [!code focus] key: my-routing-key networks: - ethereum events: - event_name: Transfer ``` ```yaml [native transfers (ssl)] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] kafka: // [!code focus] brokers: // [!code focus] - ${KAFKA_BROKER_URL_1} // [!code focus] - ${KAFKA_BROKER_URL_2} // [!code focus] acks: all // [!code focus] security_protocol: SASL_SSL // [!code focus] sasl_mechanisms: PLAIN // [!code focus] sasl_username: $ // [!code focus] sasl_password: $ // [!code focus] topics: - topic: test-topic # key is optional // [!code focus] key: my-routing-key networks: - ethereum events: - event_name: NativeTransfer // [!code focus] ``` ::: ### Response :::info Note SNS/SQS may wrap the message body into their own object so the below is just what we send to the stream. ::: The response sent to you is already decoded and parsed into a JSON object. 
* `event_name` - The name of the event * `event_signature_hash` - The event signature hash, e.g. the keccak256 hash of `Transfer(address,address,uint256)`; this is `topics[0]` in the logs * `event_data` - The event data, which has all the event fields decoded plus the transaction information under `transaction_information` * `network` - The network the event was emitted on For example, a transfer event would look like: ```json { "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "event_data": { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "transaction_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } }, "network": "ethereum" } ``` ### brokers Define the Kafka brokers you wish to connect to; you can pass in multiple brokers, and a single broker will of course work as well. :::info We advise setting brokers in your environment variables. ::: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: // [!code focus] - ${KAFKA_BROKER_URL_1} // [!code focus] - ${KAFKA_BROKER_URL_2} // [!code focus] ``` ### acks * `acks=0` - When acks=0, producers consider messages as "written successfully" the moment the message was sent without waiting for the broker to accept it at all. 
* `acks=1` - When acks=1 , producers consider messages as "written successfully" when the message was acknowledged by only the leader. * `acks=all` - When acks=all, producers consider messages as "written successfully" when the message is accepted by all in-sync replicas (ISR). ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} # all or 0 or 1 acks: all // [!code focus] security_protocol: SASL_SSL // [!code focus] ``` ### security\_protocol This is either `PLAINTEXT` or `SASL_SSL`. You can read more about it [here](https://kafka.apache.org/documentation/#security_sasl). ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL // [!code focus] ``` ### sasl\_mechanisms :::info This is optional, if you are using SASL\_SSL you will need to provide this. ::: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN // [!code focus] ``` ### sasl\_username :::info This is optional, if you are using SASL\_SSL you will need to provide this.
We advise you to put this in your environment variables. ::: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ // [!code focus] ``` ### sasl\_password :::info This is optional, if you are using SASL\_SSL you will need to provide this.
We advise you to put this in your environment variables. ::: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ // [!code focus] ``` ### topics This is an array of topics you want to stream to this kafka. #### topic This is the topic name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic // [!code focus] ``` #### key :::info This is optional ::: You can route your messages to a specific partition in the topic, this is useful if you have multiple consumers on the same topic. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key // [!code focus] networks: - ethereum events: - event_name: Transfer ``` ### networks This is an array of networks you want to stream to this kafka. 
```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: // [!code focus] - ethereum // [!code focus] events: - event_name: Transfer ``` ### events This is an array of events you want to stream to this kafka. #### event\_name This is the name of the event you want to stream to this kafka, must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you wish to assign to the event you want to stream to this Kafka topic. It is paired with the event name and allows consumers to have unique discriminator keys in the event of naming conflicts. E.g Transfer (ERC20) and Transfer (ERC721). ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` #### conditions This accepts an array of conditions you want to apply to the event data before streaming to this kafka. :::info This is optional; if you do not provide any conditions, all data will be streamed. ::: You may want to filter the stream based on the event data; if a field is not indexed in the Solidity event, you cannot filter on it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer supports a special syntax which allows you to define filters on your ABI fields. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equals (for numbers only) 5. `<=` - lower than or equals (for numbers only) 6. `||` - or 7. `&&` - and Let's look at an example: say we only want transfer events with a value higher than `2000000000000000000` RETH wei ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field, you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use the `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use the `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce3982663" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If you have a tuple and want to filter on a value inside it, use object notation. For example, let's say we only want events where `profileId` from the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] kafka: // [!code focus] brokers: - ${KAFKA_BROKER_URL_1} - ${KAFKA_BROKER_URL_2} acks: all security_protocol: SASL_SSL sasl_mechanisms: PLAIN sasl_username: $ sasl_password: $ topics: - topic: test-topic key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` ## RabbitMQ :::info rindexer streams can be used without any other storage providers. It can also be used with storage providers. ::: rindexer allows you to configure [RabbitMQ](https://www.rabbitmq.com/) to stream any data to. This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. Find out more about [RabbitMQ](https://www.rabbitmq.com/). rindexer's RabbitMQ integration supports `direct`, `topic` and `fanout` exchanges. You can read more about what they do differently [here](https://medium.com/trendyol-tech/rabbitmq-exchange-types-d7e1f51ec825). ### Configuration with rindexer The `rabbitmq` property accepts an array of `exchanges`, allowing you to split up the streams any way you wish. 
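Whichever stream target you pick, consumers receive the same decoded JSON payload documented in each Response section. As an illustration only — this helper is not part of rindexer, and the field names come straight from the sample payload in these docs — a consumer-side decoder in Python might look like:

```python
import json

# Sketch of a consumer-side decoder for the JSON payload rindexer
# publishes to streams. The payload shape (event_name, event_signature_hash,
# event_data with nested transaction_information, network) is taken from
# the Response sections of these docs; the function itself is hypothetical.
def decode_rindexer_event(body: bytes) -> dict:
    payload = json.loads(body)
    tx = payload["event_data"]["transaction_information"]
    return {
        "event": payload["event_name"],
        "network": payload["network"],
        "block_number": int(tx["block_number"]),
        "log_index": int(tx["log_index"]),
        # The decoded event fields are everything except transaction_information
        "fields": {
            k: v
            for k, v in payload["event_data"].items()
            if k != "transaction_information"
        },
    }

# Sample payload from the Response section (trimmed)
sample = json.dumps({
    "event_name": "Transfer",
    "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
    "event_data": {
        "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662",
        "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662",
        "value": "1000000000000000000",
        "transaction_information": {
            "block_number": "18718011",
            "log_index": "0",
            "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
        },
    },
    "network": "ethereum",
}).encode()

decoded = decode_rindexer_event(sample)
print(decoded["event"], decoded["block_number"])  # Transfer 18718011
```

Note that numeric fields such as `block_number` and `value` arrive as strings, so cast them on your side as needed.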
### Example :::code-group ```yaml [direct] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} // [!code focus] exchanges: // [!code focus] - exchange: transfer // [!code focus] # expected one of `direct`, `topic` or `fanout` // [!code focus] exchange_type: direct // [!code focus] routing_key: my-routing-key // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [topic] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} // [!code focus] exchanges: // [!code focus] - exchange: transfer // [!code focus] # expected one of `direct`, `topic` or `fanout` // [!code focus] exchange_type: topic // [!code focus] routing_key: my-routing-key // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] ``` ```yaml [fanout] 
name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} // [!code focus] exchanges: // [!code focus] - exchange: transfer // [!code focus] # expected one of `direct`, `topic` or `fanout` // [!code focus] exchange_type: fanout // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] ``` ```yaml [native transfers (fanout)] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} // [!code focus] exchanges: // [!code focus] - exchange: transfer // [!code focus] # expected one of `direct`, `topic` or `fanout` // [!code focus] exchange_type: fanout // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: NativeTransfer // [!code focus] ``` ::: ### Response :::info Note some stream targets may wrap the message body in their own envelope, so the below is just what rindexer sends to the stream. ::: The response sent to you is already decoded and parsed into a JSON object. 
* `event_name` - The name of the event * `event_signature_hash` - The event signature hash, for example the keccak256 hash of "Transfer(address,address,uint256)"; this is topics\[0] in the logs * `event_data` - The event data, which has all the event fields decoded, plus the transaction information under `transaction_information` * `network` - The network the event was emitted on For example a transfer event would look like: ```json { "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "event_data": { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "transaction_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } }, "network": "ethereum" } ``` ### url This is the RabbitMQ connection URL; we advise putting it in an environment variable. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} // [!code focus] ``` ### exchanges This is an array of exchanges you want to stream to this rabbitmq. #### exchange This is the exchange name. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer // [!code focus] ``` #### exchange\_type This is the exchange type; you can read more about them [here](https://medium.com/trendyol-tech/rabbitmq-exchange-types-d7e1f51ec825). rindexer supports `direct`, `topic` and `fanout` exchanges. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct // [!code focus] ``` #### routing\_key This is the routing key for the exchange. :::info This is optional for `fanout` exchanges and required for `direct` and `topic` exchanges. ::: ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key // [!code focus] ``` ### networks This is an array of networks you want to stream to this rabbitmq. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: // [!code focus] - ethereum // [!code focus] ``` ### events This is an array of events you want to stream to this rabbitmq. #### event\_name This is the name of the event you want to stream to this rabbitmq, must match the ABI event name. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you wish to assign to the event you want to stream to this RabbitMQ exchange or queue. It is paired with the event name and allows consumers to have unique discriminator keys in the event of naming conflicts. E.g. Transfer (ERC20) and Transfer (ERC721). ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` #### conditions This accepts an array of conditions you want to apply to the event data before streaming to this rabbitmq. :::info This is optional, if you do not provide any conditions all data will be streamed. ::: You may want to filter the stream based on the event data; if a field is not indexed in the Solidity event, you cannot filter on it over the logs. 
The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer supports a special syntax which allows you to define filters on your ABI fields. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equals (for numbers only) 5. `<=` - lower than or equals (for numbers only) 6. `||` - or 7. `&&` - and Let's look at an example: say we only want transfer events with a value higher than `2000000000000000000` RETH wei ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use the `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use the `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce3982663" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). 
::: If you have a tuple and want to filter on a value inside it, use object notation. For example, let's say we only want events where `profileId` from the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] rabbitmq: // [!code focus] # we advise to put this in a environment variables // [!code focus] url: ${RABBITMQ_URL} exchanges: // [!code focus] - exchange: transfer # expected one of `direct`, `topic` or `fanout` exchange_type: direct routing_key: my-routing-key networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` ## Redis Streams :::info rindexer streams can be used without any other storage providers. It can also be used with storage providers. ::: rindexer allows you to configure [Redis Streams](https://redis.io/docs/latest/develop/data-types/streams/) to stream any data to. This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. Find out more about [Redis Streams](https://redis.io/docs/latest/develop/data-types/streams/). ### Configuration with rindexer The `streams` property under `redis` accepts an array, allowing you to split up the streams any way you wish. 
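The `conditions` syntax documented for each stream target works the same everywhere: numeric comparisons, `||`/`&&` combinators, and dotted paths into tuples such as `quoteParams.profileId`. As a non-authoritative illustration of those semantics — rindexer's actual parser is written in Rust and may differ on edge cases such as mixed `||`/`&&` in one expression — a simplified Python model might look like:

```python
import operator

# Comparison symbols from the docs; longer symbols must be checked first.
OPS = {
    ">=": operator.ge, "<=": operator.le,
    ">": operator.gt, "<": operator.lt, "=": operator.eq,
}

def _lookup(data: dict, path: str):
    # Dotted paths walk into tuples, e.g. "quoteParams.profileId"
    for part in path.split("."):
        data = data[part]
    return data

def _clause(clause: str, field: str, value) -> bool:
    clause = clause.strip()
    # Clauses may repeat the field name, e.g. "value <=4000000000000000000"
    if clause.startswith(field):
        clause = clause[len(field):].strip()
    for sym, fn in OPS.items():
        if clause.startswith(sym):
            # Numeric comparison on the decoded string value
            return fn(int(value), int(clause[len(sym):].strip()))
    # Bare values (e.g. addresses) mean case-insensitive equality
    return str(value).lower() == clause.lower()

def matches(conditions: dict, event_data: dict) -> bool:
    # All condition entries must pass; within one entry, "||" means any
    # clause passes and "&&" means every clause passes.
    for field, expr in conditions.items():
        value = _lookup(event_data, field)
        clauses = expr.split("||") if "||" in expr else expr.split("&&")
        check = any if "||" in expr else all
        if not check(_clause(c, field, value) for c in clauses):
            return False
    return True
```

For example, `matches({"value": ">=2000000000000000000 && value <=4000000000000000000"}, {"value": "3000000000000000000"})` passes while a value of `"1000000000000000000"` does not. This is only a model for reasoning about the documented filters, not something you need to run: rindexer applies the conditions before publishing.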
### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} // [!code focus] streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} // [!code focus] streams: // [!code focus] - stream_name: "ethereum_transfer_stream" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: NativeTransfer // [!code focus] alias: Transfer ``` ::: ### Response :::info Redis streams may wrap the message body into their own object, so the below is just what we send to the stream. ::: The response sent to you is already decoded and delivered as a JSON-stringified object. 
* `event_name` - The name of the event * `event_signature_hash` - The event signature hash, for example the keccak256 hash of "Transfer(address,address,uint256)"; this is topics\[0] in the logs * `event_data` - The event data, which has all the event fields decoded, plus the transaction information under `transaction_information` * `network` - The network the event was emitted on For example a transfer event would look like: ```json { "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "event_data": { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "transaction_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } }, "network": "ethereum" } ``` ### connection\_uri This is the Redis connection URI; we advise putting it in an environment variable. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] # we advise to put this in a environment variables // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} // [!code focus] ``` ### streams This is where you configure each of the Redis Streams you want to push to. #### stream\_name The name of the stream that you are streaming events to. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" // [!code focus] ``` #### events This is an array of events you want to stream to this Redis Stream. ##### event\_name This is the name of the event you want to stream to this Redis Stream; it must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you wish to assign to the event you want to stream to this Redis Stream. It is paired with the event name and allows consumers to have unique discriminator keys in the event of naming conflicts, e.g. Transfer (ERC20) and Transfer (ERC721). ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` ##### conditions This accepts an array of conditions you want to apply to the event data before streaming to this Redis Stream. :::info This is optional; if you do not provide any conditions, all data will be streamed. ::: You may want to filter the stream based on the event data; if a field is not indexed in the Solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equal (for numbers only) 5. `<=` - lower than or equal (for numbers only) 6. `||` - or 7. `&&` - and Let's look at an example: say we only want transfer events with a value higher than `2000000000000000000` RETH wei. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field, you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use the `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use the `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce398266u" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note: we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If you want to filter on a value inside a tuple, use object notation. For example, say we only want events where `profileId` in the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] redis: // [!code focus] connection_uri: ${REDIS_CONNECTION_URI} streams: // [!code focus] - stream_name: "ethereum_rocketpool_transfer_stream" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` ## SNS / SQS :::info rindexer streams can be used on their own or alongside storage providers. ::: rindexer allows you to configure AWS SNS and AWS SQS to stream data to. This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. Find out more about [Simple Notification Service](https://aws.amazon.com/sns/) and [Simple Queue Service](https://aws.amazon.com/sqs/). ### Configuration with rindexer The `sns` `topics` property accepts an array, allowing you to split up the streams any way you wish.
### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 // [!code focus] access_key: ${AWS_ACCESS_KEY_ID} // [!code focus] secret_key: ${AWS_SECRET_ACCESS_KEY} // [!code focus] # session_token is optional // [!code focus] session_token: ${AWS_SESSION_TOKEN} // [!code focus] # endpoint_url is optional // [!code focus] endpoint_url: ${ENDPOINT_URL} // [!code focus] topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 // [!code focus] access_key: ${AWS_ACCESS_KEY_ID} // [!code focus] secret_key: ${AWS_SECRET_ACCESS_KEY} // [!code focus] # session_token is optional // [!code focus] session_token: ${AWS_SESSION_TOKEN} // [!code focus] # endpoint_url is optional // [!code focus] endpoint_url: ${ENDPOINT_URL} // [!code focus] topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:ethereum-transfers" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] ``` ::: ### Response :::info Note SNS/SQS may wrap the message body in their own envelope, so the below is just what rindexer sends to the stream. ::: The response sent to you is already decoded and parsed into a JSON object. * `event_name` - The name of the event * `event_signature_hash` - The event signature hash, for example the keccak256 hash of "Transfer(address,address,uint256)"; this is topics\[0] in the logs * `event_data` - The event data, which has all the event fields decoded plus the transaction information under `transaction_information` * `network` - The network the event was emitted on For example, a transfer event would look like: ```json { "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "event_data": { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "transaction_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } }, "network": "ethereum" } ``` ### aws\_config This is the AWS configuration for the SNS client. #### region The AWS region to connect to. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 // [!code focus] ``` #### access\_key :::info We advise you to put this in an environment variable. ::: The AWS access key to connect to. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} // [!code focus] ``` #### secret\_key :::info We advise you to put this in an environment variable. ::: The AWS secret key to connect to. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} // [!code focus] ``` #### session\_token :::info This is optional ::: :::info We advise you to put this in an environment variable. ::: The AWS session token to connect to. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} session_token: ${AWS_SESSION_TOKEN} // [!code focus] ``` #### endpoint\_url :::info This is optional ::: :::info We advise you to put this in an environment variable. ::: The AWS endpoint to connect to. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: // [!code focus] region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} session_token: ${AWS_SESSION_TOKEN} endpoint_url: ${ENDPOINT_URL} // [!code focus] ``` ### topics This is an array of SNS topics you want to stream to. #### topic\_arn This is your SNS topic ARN. Both first-in-first-out (FIFO) and standard topics are supported. You can read about the differences [here](https://aws.amazon.com/sns/features/). :::code-group ```yaml [standard] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" // [!code focus] ``` ```yaml [fifo] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test.fifo" // [!code focus] ``` ::: ##### networks This is an array of networks you want to stream to this SNS topic. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: // [!code focus] - ethereum // [!code focus] ``` #### events This is an array of events you want to stream to this SNS topic. ##### event\_name This is the name of the event you want to stream to this SNS topic; it must match the ABI event name. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you wish to assign to the event you want to stream to this SNS topic. It is paired with the event name and allows consumers to have unique discriminator keys in the event of naming conflicts. E.g Transfer (ERC20) and Transfer (ERC721). ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` ##### conditions This accepts an array of conditions you want to apply to the event data before streaming to this SNS topic. :::info This is optional, if you do not provide any conditions all data will be streamed. 
::: You may want to filter the stream based on the event data; if a field is not indexed in the Solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equal (for numbers only) 5. `<=` - lower than or equal (for numbers only) 6. `||` - or 7. `&&` - and Let's look at an example: say we only want transfer events with a value higher than `2000000000000000000` RETH wei. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use the `||` or `&&` to combine conditions. 
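To illustrate the semantics of this condition syntax (this is not rindexer's implementation — `matches` and its helpers are hypothetical names, and the sketch only covers the operators documented above, including dotted tuple paths), an evaluator could look like:

```python
import operator

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "=": operator.eq}

def get_path(event_data, path):
    """Resolve a possibly dotted field path such as 'quoteParams.profileId'."""
    value = event_data
    for part in path.split("."):
        value = value[part]
    return value

def clause_matches(field, clause, value):
    clause = clause.strip()
    leaf = field.split(".")[-1]
    # conditions may repeat the field name ("value <=4000..."); strip it
    if clause.startswith(leaf):
        clause = clause[len(leaf):].strip()
    for sym in (">=", "<=", ">", "<", "="):
        if clause.startswith(sym):
            rhs = clause[len(sym):].strip()
            if rhs.isdigit():
                return OPS[sym](int(value), int(rhs))
            return OPS[sym](str(value).lower(), rhs.lower())
    # no operator prefix: plain equality, e.g. matching an address
    return str(value).lower() == clause.lower()

def matches(event_data, conditions):
    """True if event_data satisfies every {field: expression} condition."""
    for field, expr in conditions.items():
        value = get_path(event_data, field)
        clauses = expr.split("&&") if "&&" in expr else expr.split("||")
        combine = all if "&&" in expr else any
        if not combine(clause_matches(field, c, value) for c in clauses):
            return False
    return True

event = {"from": "0x0338ce5020c447f7e668dc2ef778025ce3982662",
         "value": "2500000000000000000"}
print(matches(event, {"value": ">=2000000000000000000 && value <=4000000000000000000"}))  # True
```

Note how `&&` requires all clauses to hold while `||` requires any, mirroring the combined-condition examples in this section.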
```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use the `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce398266u" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note we advise you to filer any `indexed` fields in the contract details in the `rindexer.yaml` file. As these can be filtered out on the request level and not filtered out in rindexer itself. 
You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If you want to filter on a value inside a tuple, use object notation. For example, say we only want events where `profileId` in the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] sns: // [!code focus] aws_config: region: us-east-1 access_key: ${AWS_ACCESS_KEY_ID} secret_key: ${AWS_SECRET_ACCESS_KEY} # session_token is optional session_token: ${AWS_SESSION_TOKEN} # endpoint_url is optional endpoint_url: ${ENDPOINT_URL} topics: // [!code focus] - topic_arn: "arn:aws:sns:us-east-1:664643779377:test" networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` ## Webhooks :::info rindexer streams can be used on their own or alongside storage providers. ::: rindexer allows you to configure webhooks that fire, based on your conditions, to another API. This goes under the [contracts](/docs/start-building/yaml-config/contracts) or [native\_transfers](/docs/start-building/yaml-config/native-transfers) section of the YAML configuration file. ### Configuration with rindexer The `webhooks` property accepts an array, allowing you to split up the webhooks any way you wish.
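On the receiving side, your API should check the `x-rindexer-shared-secret` header (described under `shared_secret` below) before trusting the payload. A minimal sketch — `handle_webhook` is a hypothetical helper you would wire into whatever HTTP framework receives the request:

```python
import hmac
import json

def handle_webhook(headers: dict, body: str, expected_secret: str) -> dict:
    """Verify the shared secret header, then parse the JSON body."""
    sent = headers.get("x-rindexer-shared-secret", "")
    # constant-time comparison avoids leaking the secret through timing
    if not hmac.compare_digest(sent, expected_secret):
        raise PermissionError("shared secret mismatch - request is not from rindexer")
    return json.loads(body)

payload = json.dumps({
    "event_name": "Transfer",
    "network": "ethereum",
    "event_data": {"value": "1000000000000000000"},
})
event = handle_webhook({"x-rindexer-shared-secret": "s3cret"}, payload, "s3cret")
print(event["event_name"])  # Transfer
```

Keep the secret itself in an environment variable on both sides rather than in source control.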
### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL // [!code focus] shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL // [!code focus] shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} // [!code focus] networks: // [!code focus] - ethereum // [!code focus] events: // [!code focus] - event_name: NativeTransfer // [!code focus] ``` ::: ### Response The response sent to you is already decoded and parsed into a JSON object. 
* `event_name` - The name of the event * `event_signature_hash` - The event signature hash, for example the keccak256 hash of "Transfer(address,address,uint256)"; this is topics\[0] in the logs * `event_data` - The event data, which has all the event fields decoded plus the transaction information under `transaction_information` * `network` - The network the event was emitted on For example, a transfer event would look like: ```json { "event_name": "Transfer", "event_signature_hash": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef", "event_data": { "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "to": "0x0338ce5020c447f7e668dc2ef778025ce3982662", "value": "1000000000000000000", "transaction_information": { "address": "0xae78736cd615f374d3085123a210448e74fc6393", "block_hash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9", "block_number": "18718011", "log_index": "0", "network": "ethereum", "transaction_hash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4", "transaction_index": "0" } }, "network": "ethereum" } ``` ### endpoint This is your webhook URL. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL // [!code focus] ``` ### shared\_secret This is the shared secret you want to use to authenticate the webhook so you know it came from rindexer. This is always injected in the header as `x-rindexer-shared-secret`. :::info We advise you to put this in an environment variable. ::: ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} // [!code focus] ``` ### networks This is an array of networks you want to stream to this webhook. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: // [!code focus] - ethereum // [!code focus] ``` ### events This is an array of events you want to stream to this webhook. #### event\_name This is the name of the event you want to stream to this webhook; it must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] ``` ##### alias This is an optional `alias` you wish to assign to the event you want published to this webhook. It is paired with the event name and allows consumers to have unique discriminator keys in the event of naming conflicts, e.g. Transfer (ERC20) and Transfer (ERC721). ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] alias: RocketPoolTransfer // [!code focus] ``` #### conditions This accepts an array of conditions you want to apply to the event data before calling the webhook. :::info This is optional; if you do not provide any conditions, all data will be streamed. ::: You may want to filter the stream based on the event data; if a field is not indexed in the Solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equal (for numbers only) 5. `<=` - lower than or equal (for numbers only) 6. `||` - or 7. `&&` - and Let's look at an example: say we only want transfer events with a value higher than `2000000000000000000` RETH wei. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: - ethereum events: // [!code focus] - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field, you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use the `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use the `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer streams: // [!code focus] webhooks: // [!code focus] - endpoint: YOUR_WEBHOOK_URL shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET} networks: - ethereum events: // [!code focus] - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce398266u" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note we advise you to filer any `indexed` fields in the contract details in the `rindexer.yaml` file. As these can be filtered out on the request level and not filtered out in rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If you have a tuple and you want to get that value you just use the object notation. For example lets say we want to only get the events for `profileId` from the `quoteParams` tuple which equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ... 
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
    abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  streams: // [!code focus]
    webhooks: // [!code focus]
    - endpoint: YOUR_WEBHOOK_URL
      shared_secret: ${RINDEXER_WEBHOOK_SHARED_SECRET}
      networks:
      - ethereum
      events: // [!code focus]
      - event_name: Transfer
        conditions: // [!code focus]
        - "quoteParams.profileId": "=1" // [!code focus]
```

## Ethers to Alloy Migration Guide

rindexer released a breaking change for all Rust projects, internally migrating from ethers to alloy. This means the exposed types and primitives now come from alloy rather than ethers-rs; some methods have also been deprecated, which requires breaking changes within any rindexer Rust project.

The simple steps to take are as follows:

1. Add `alloy` to your `Cargo.toml`
2. Run `rindexer codegen typings` to get the latest codegen state
3. Change all parameter names and types
4. Change advanced methods

You can see more details on each section below.

### 1. Add `alloy` to your `Cargo.toml`

First you should add `alloy` to your `Cargo.toml` and delete `ethers`. You can now remove any references to the `ethers` crate and replace them with the newer `alloy` crate. It should be pegged to the same version used by `rindexer` internally to ensure type compatibility.

Replace in `Cargo.toml`:

```diff
- ethers = "2.0.14"
+ alloy = { version = "1.1.3", features = ["full"] }
```

Then any references to ethers types in your own code can be replaced with the alloy types. You can read more in the [migration docs](https://alloy.rs/migrating-from-ethers/reference/).

```diff
- use ethers::prelude::U256;
+ use alloy::primitives::U256;
```

### 2. Run `rindexer codegen typings` to get the latest codegen state

Once you have added the `alloy` crate and changed any of your project-specific code, you can run the rindexer codegen with `rindexer codegen typings`. This will regenerate the bindings with alloy code and types.

:::warning
If for some reason there are errors in your codegen file, please try deleting it before re-running. Otherwise raise an issue on github: [https://github.com/joshstevens19/rindexer/issues](https://github.com/joshstevens19/rindexer/issues).
:::

This should lead to working, error-free code in the codegen directory. At this point you should expect errors in most of your custom indexing files, if you have specified them. This is mainly due to the reasons below (parameter name changes and type changes).

### 3. Change all parameter names and types

You must now go through and deal with the errors in your specific handler functions. These will almost exclusively be related to casing changes, i.e. `token_id` -> `tokenId`, or to a type rename, i.e. `H256` -> `B256`. This should be relatively self-explanatory; you can read more details below, but it is just a simple manual process.

**Solidity contract parameters**

The casing of Solidity contract parameters is now transparent, meaning the prior `snake_case` conversion will no longer be applied and you will instead access an event parameter by its source name. This will typically be `camelCase` or occasionally `UPPERCASE`; the Rust compiler will help with this and the conversion should be relatively simple, albeit a manual process.

```diff
- EthereumSqlTypeWrapper::AddressBytes(t.event_data.on_behalf_of),
+ EthereumSqlTypeWrapper::AddressBytes(t.event_data.onBehalfOf),
```

**Solidity types**

The core `EthereumSqlTypeWrapper` has been ported to use the new alloy primitives internally, so if you are manually passing any `ethers` types such as a manually derived `Address` (i.e. not as an automatically exposed type from rindexer) then you will need to ensure this is migrated to the new types.

Most simple types can be found in the alloy migration guide: [https://alloy.rs/migrating-from-ethers/reference/](https://alloy.rs/migrating-from-ethers/reference/)

The most common types affected by the rename are the `Hash` types, renamed to `Byte` types:

* `H128` -> `B128`
* `H256` -> `B256`
* `H512` -> `B512`

The above includes all Vec and Byte representation variants, e.g. `VecB512`, etc.

The following types have been retained but should be considered deprecated and may be removed:

* `H160` -> Use `Address` types instead (including all Vec and Byte representation variants)

:::warning
A simple find and replace should handle most of these cases.
:::

### 4. Change advanced methods

:::warning
Most projects will by default use the underlying type; this is for when manual type manipulation is needed.
:::

Replace any methods that are erroring that previously were not. The most common of these will be the `as_u[BITS]` family. These methods are no longer exposed directly on all Solidity primitive `uint` types. For example, the `as_u32()` method which existed on a `U256` is no longer directly accessible. Instead, if you wish to downcast, you can use the `TryInto` trait on most `uint` types to achieve the same outcome and handle errors gracefully in the event of an overflow (where before a panic would occur implicitly).

```diff
- EthereumSqlTypeWrapper::U32(t.tx_information.log_index.as_u32()),
+ EthereumSqlTypeWrapper::U32(t.tx_information.log_index.try_into().expect("log_index should fit in u32")),
```

You may also opt to match the underlying type:

```diff
- EthereumSqlTypeWrapper::U32(t.tx_information.log_index.as_u32()),
+ EthereumSqlTypeWrapper::U256(t.tx_information.log_index),
```

### 5. Await `EventCallbackRegistry` `register` calls

The `register` function is now `async`, requiring `.await` to be called for the `Future` to complete execution. This is a breaking change that requires modification of existing custom handler code:

```diff
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                ...
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
-   .register(manifest_path, registry);
+   .register(manifest_path, registry).await; // [!code focus]
}
```

## Rust Project Deep Dive

As explained in the [rust](/docs/start-building/project-types/rust-project) project type, the Rust project is a project that is meant to be changed and extended. The template gives you a starting point, and you may choose to run the indexer differently, use your own custom logic, make HTTP requests, do on-chain lookups or anything else you can think of. rindexer is also a framework that can be used to build your own custom indexer, not just a no-code indexer.

## Indexers

When creating a new Rust project, rindexer will create you an `indexers` folder; this is where you will write your custom logic for the indexer. You can do anything you want in here: HTTP requests, on-chain lookups, custom logic, custom DBs, anything you can think of. rindexer gives you the foundations with baked-in extendability. Rust enforces a strong type system, and all logs will be streamed to you, so you can just focus on the logic you want.

By default, if you turn postgres storage on in the YAML configuration file, it will also create you postgres tables, write SQL for you to use and expose you a postgres client. This is a great starting point for you to build on.
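Throughout the generated project, names from the YAML configuration (contract names, network names) are converted to snake case for file and function names. As a rough illustration only — this is a sketch, not rindexer's actual implementation, and edge cases may differ:

```rust
// Hypothetical snake-case conversion, roughly matching how a name like
// `RocketPoolETH` maps to `rocket_pool_eth` in generated file/function names.
fn to_snake_case(name: &str) -> String {
    let chars: Vec<char> = name.chars().collect();
    let mut out = String::new();
    for (i, c) in chars.iter().enumerate() {
        if c.is_uppercase() {
            // Start a new word when the previous char is lowercase, or when
            // this uppercase ends an acronym (the next char is lowercase).
            let prev_lower = i > 0 && chars[i - 1].is_lowercase();
            let next_lower = i > 0 && chars.get(i + 1).map_or(false, |n| n.is_lowercase());
            if prev_lower || next_lower {
                out.push('_');
            }
            out.extend(c.to_lowercase());
        } else {
            out.push(*c);
        }
    }
    out
}

fn main() {
    assert_eq!(to_snake_case("RocketPoolETH"), "rocket_pool_eth");
    assert_eq!(to_snake_case("USDT"), "usdt");
}
```

This is why, in the sections below, a contract named `RocketPoolETH` produces a `rocket_pool_eth.rs` file and accessors like `rocket_pool_eth_contract`.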
The tables creation can be skipped by using the [disable\_create\_tables](/docs/start-building/yaml-config/storage#disable_create_tables) in the YAML configuration file. If you also enable the CSV storage it will also generate code in the handler to write to those CSV files.

You can regenerate the indexers folder by running the following command:

:::warning
This will overwrite any custom logic you have written in the indexers folder, so be careful.
:::

```bash
rindexer codegen indexer
```

To help understand the interfaces and the ways rindexer handlers can be extended, we will look at an example. Take this YAML file - all Transfer events for rETH on Ethereum between block 18900000 and 19000000 will be indexed.

```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
```

This would generate a `rocket_pool_eth.rs` file in the indexers folder containing a handler function. Note that the file name is the contract name in snake case, with `_filter` appended if you are using filters. If you are indexing multiple events on a contract, all the handlers will be generated in the same single file.

### `Handlers`

As you can see with this example, out of the box it will generate all your indexer handlers - in this case for the Transfer event. If you have postgres storage enabled it will have the bulk insert code written for you. The boilerplate code is runnable out of the box.
```rs
use super::super::super::typings::rust::events::rocket_pool_eth::{
    no_extensions, ApprovalEvent, RocketPoolETHEventType, TransferEvent,
};
use rindexer::{
    event::callback_registry::EventCallbackRegistry, rindexer_error, rindexer_info,
    EthereumSqlTypeWrapper, PgType, RindexerColorize,
};

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                if results.is_empty() {
                    return Ok(());
                }

                let mut bulk_data: Vec<Vec<EthereumSqlTypeWrapper>> = vec![];
                for result in results.iter() {
                    let data = vec![
                        EthereumSqlTypeWrapper::Address(result.tx_information.address),
                        EthereumSqlTypeWrapper::Address(result.event_data.from),
                        EthereumSqlTypeWrapper::Address(result.event_data.to),
                        EthereumSqlTypeWrapper::U256(result.event_data.value),
                        EthereumSqlTypeWrapper::B256(result.tx_information.transaction_hash),
                        EthereumSqlTypeWrapper::U64(result.tx_information.block_number),
                        EthereumSqlTypeWrapper::B256(result.tx_information.block_hash),
                        EthereumSqlTypeWrapper::String(result.tx_information.network.to_string()),
                    ];
                    bulk_data.push(data);
                }

                if bulk_data.is_empty() {
                    return Ok(());
                }

                let result = context
                    .database
                    .insert_bulk(
                        "rust_rocket_pool_eth.transfer",
                        &[
                            "contract_address".to_string(),
                            "from".to_string(),
                            "to".to_string(),
                            "value".to_string(),
                            "tx_hash".to_string(),
                            "block_number".to_string(),
                            "block_hash".to_string(),
                            "network".to_string(),
                        ],
                        &bulk_data,
                    )
                    .await;

                if let Err(e) = result {
                    rindexer_error!(
                        "RocketPoolETHEventType::Transfer inserting bulk data: {:?}",
                        e
                    );
                    return Err(e.to_string());
                }

                rindexer_info!(
                    "RocketPoolETH::Transfer - {} - {} events",
                    "INDEXED".green(),
                    results.len(),
                );

                Ok(())
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}

pub async fn rocket_pool_eth_handlers(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    transfer_handler(manifest_path, registry).await;
}
```

### Event::Handler

rindexer hides all the complex Rust types and abstracts everything for you, so you can just build the logic within the handler itself. As you see below, you simply write the logic; `results` holds all the decoded event data and `context` holds the database client and any extensions you pass to it. The naming convention for the handler is `{AbiEventName}Event::handler`, so in this case `TransferEvent::handler`.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler( // [!code focus]
            |results, context| async move { // [!code focus]
                // logic here // [!code focus]
                return Ok(());
            }, // [!code focus]
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

### Why an async move?

rindexer has abstracted all the complex types for you so you can focus on the logic you want to write. That said, Rust needs to know the ownership of every value captured by the closure and when references can be dropped, which is why the handler closure uses `async move`.

### Results

This holds the decoded event log information and the transaction information for the events.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            // results = Vec<TransferResult> // [!code focus]
            |results, context| async move { // [!code focus]
                // logic here
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

```rs
#[derive(Debug, Clone)]
pub struct TransferResult {
    pub event_data: TransferData,
    pub tx_information: TxInformation,
}
```

The `event_data` points to the ABI type generated using the `alloy` `sol!` macro.
```rs
sol!(
    #[sol(rpc, all_derives)]
    TransferFilter,
    r#"
    [{
        "anonymous": false,
        "inputs": [
            { "indexed": true, "name": "from", "type": "address" },
            { "indexed": true, "name": "to", "type": "address" },
            { "indexed": false, "name": "value", "type": "uint256" }
        ],
        "name": "Transfer",
        "type": "event"
    }]
    "#
);
```

Which will expand to a struct like:

```rs
#[allow(
    non_camel_case_types,
    non_snake_case,
    clippy::pub_underscore_fields,
    clippy::style
)]
pub struct Transfer {
    #[allow(missing_docs)]
    pub from: ::alloy::sol_types::private::Address,
    #[allow(missing_docs)]
    pub to: ::alloy::sol_types::private::Address,
    #[allow(missing_docs)]
    pub value: ::alloy::sol_types::private::primitives::aliases::U256,
}
```

The `tx_information` holds the transaction-related information for the event:

```rs
#[derive(Debug, Clone)]
pub struct TxInformation {
    pub chain_id: u64,
    pub network: String,
    pub address: Address,
    pub block_hash: BlockHash,
    pub block_number: U64,
    pub block_timestamp: Option<U256>,
    pub transaction_hash: TxHash,
    pub log_index: U256,
    pub transaction_index: U64,
}
```

As you can see, the `network` is always passed in the `tx_information` struct; this is so you can index multiple networks within the same handler if you wish.

### Context

The `context` is a struct passed to the handler which exposes thread-safe services for ease of use within the handler.
```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            // context = Arc<EventContext<NoExtensions>> // [!code focus]
            |results, context| async move { // [!code focus]
                // logic here
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

```rs
pub struct EventContext<TExtensions>
where
    TExtensions: Send + Sync,
{
    pub database: Arc<PostgresClient>,
    pub csv: Arc<AsyncCsvAppender>,
    pub extensions: Arc<TExtensions>,
}
```

Note that if you have postgres storage off in the YAML configuration file, the `database` will not be present in this struct and you will not be able to use it. The same goes for `csv` if you have CSV storage off in the YAML.

### Event Callback Result

The callback has to return a `Result<(), String>` so it can be handled by rindexer; by default rindexer will keep retrying the event with exponential backoff if it fails.

#### Success

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                // logic here
                return Ok(()); // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

#### Error

Error takes a string which is then logged in the rindexer console to help debugging traces.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                // logic here
                return Err("this is an error".to_string()); // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

### Extensions

You can also pass your own custom thread-safe extensions to the context; this is a way to pass in custom logic. For example, if you wanted to use a different database or call something outside the indexer with an HTTP request, this is the place to pass it in from.
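As a minimal sketch of the idea (the `Metrics` type here is hypothetical, not part of rindexer): any `Send + Sync` value can be handed to the handler in place of `no_extensions()` and then read back through `context.extensions`.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical extension: a thread-safe counter shared across callback
// invocations. Interior mutability is used because the context only hands
// out shared references to the extensions.
struct Metrics {
    events_seen: AtomicU64,
}

impl Metrics {
    fn new() -> Self {
        Metrics { events_seen: AtomicU64::new(0) }
    }

    // Record a batch of events and return the running total.
    fn record(&self, batch: u64) -> u64 {
        self.events_seen.fetch_add(batch, Ordering::Relaxed) + batch
    }
}

fn main() {
    let metrics = Metrics::new();
    assert_eq!(metrics.record(3), 3);
    assert_eq!(metrics.record(2), 5);
}
```

In a handler you would pass `Metrics::new()` where the examples show `no_extensions()`, and call something like `context.extensions.record(results.len() as u64)` inside the callback.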
Example below uses the [reqwest](https://docs.rs/reqwest/latest/reqwest/) Rust library to make an HTTP request.

```rs
use reqwest::Client; // [!code focus]

struct HttpClient { // [!code focus]
    client: Client, // [!code focus]
} // [!code focus]

impl HttpClient { // [!code focus]
    fn new() -> Self { // [!code focus]
        HttpClient { // [!code focus]
            client: Client::new(), // [!code focus]
        } // [!code focus]
    } // [!code focus]

    async fn get(&self, url: &str) -> Result<String, reqwest::Error> { // [!code focus]
        let response = self.client.get(url).send().await?.text().await?; // [!code focus]
        Ok(response) // [!code focus]
    }
}

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                let response = context.extensions.get("https://example.com").await; // [!code focus]
                match response { // [!code focus]
                    Ok(response) => { // [!code focus]
                        println!("{}", response); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    } // [!code focus]
                } // [!code focus]
            },
            HttpClient::new(), // [!code focus]
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

### Network providers

You get access to all the thread-safe JSON-RPC providers for every network you have defined in the YAML configuration file, which allows you to do on-chain lookups at indexing time. These are exposed in the `typings` folder. The naming for the provider function is the network name defined in your YAML configuration file in snake case, with `get_` prefixed and `_provider` appended.
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum // [!code focus]
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
```

For example, with network `ethereum` the provider function would be `get_ethereum_provider`.

```rs
use crate::rindexer_lib::typings::networks::get_ethereum_provider; // [!code focus]

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                let provider = get_ethereum_provider(); // [!code focus]
                let chain_id = provider.get_chain_id().await; // [!code focus]
                match chain_id { // [!code focus]
                    Ok(result) => { // [!code focus]
                        println!("Chain id: {:?}", result); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error getting chain id: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    }
                } // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

:::info
You can also pass the provider into the extensions if you wish to use it in the handler via the context struct.
:::

### External Contract calls

You can also make contract calls within the handler; this is useful if you want to get the current state of a contract. You get access to the contract for the event you are indexing, but you can also use the [global](/docs/start-building/yaml-config/global) YAML to define other contracts you want to use.
```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
  - Approval
global: // [!code focus]
  contracts: // [!code focus]
  - name: USDT // [!code focus]
    details: // [!code focus]
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7 // [!code focus]
      network: ethereum // [!code focus]
      abi: ./abis/erc20.abi.json // [!code focus]
```

#### Global contract calls

It is as easy as importing the contract and calling the function you want. The naming convention for the contract is the contract name defined in your YAML configuration file in snake case with `_contract` appended.

```yaml
...
global: // [!code focus]
  contracts:
  - name: USDT // [!code focus]
    details:
    - address: 0xdac17f958d2ee523a2206206994597c13d831ec7
      network: ethereum
      abi: ./abis/erc20.abi.json
```

```rs
use crate::rindexer_lib::typings::global_contracts::usdt_contract; // [!code focus]

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                let usdt = usdt_contract(); // [!code focus]
                let name = usdt.name().await; // [!code focus]
                match name { // [!code focus]
                    Ok(result) => { // [!code focus]
                        println!("USDT name: {:?}", result); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error getting USDT name: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    }
                } // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

##### Multiple addresses

If you have defined [multiple addresses](/docs/start-building/yaml-config/contracts#address) for a contract in the YAML configuration file, you have to pass the address into the contract function.
```rs
use crate::rindexer_lib::typings::global_contracts::usdt_contract; // [!code focus]

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                let address: Address = "0xdac17f958d2ee523a2206206994597c13d831ec7" // [!code focus]
                    .parse() // [!code focus]
                    .expect("Invalid address"); // [!code focus]
                let usdt = usdt_contract(address); // [!code focus]
                let name = usdt.name().await; // [!code focus]
                match name { // [!code focus]
                    Ok(result) => { // [!code focus]
                        println!("USDT name: {:?}", result); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error getting USDT name: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    }
                } // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

#### Contract calls

Each event to index is defined in a contract within the YAML configuration file, and you can also make calls to this contract within the handler. The naming convention for the contract is the contract name defined in your YAML configuration file in snake case with `_contract` appended.
```yaml
contracts:
- name: RocketPoolETH // [!code focus]
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer // [!code focus]
  - Approval
```

```rs
use crate::rindexer_lib::typings::rust::events::rocket_pool_eth::rocket_pool_eth_contract; // [!code focus]

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                // have to pass in the network name here // [!code focus]
                let rocket_pool_eth = rocket_pool_eth_contract("ethereum"); // [!code focus]
                let name = rocket_pool_eth.name().await; // [!code focus]
                match name { // [!code focus]
                    Ok(result) => { // [!code focus]
                        println!("rETH name: {:?}", result); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error getting rETH name: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    }
                } // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

##### Multiple addresses

If you have defined [multiple addresses](/docs/start-building/yaml-config/contracts#address) or you have a [filter](/docs/start-building/yaml-config/contracts#filter) for a contract in the YAML configuration file, you will have to pass the address into the contract function.
```rs
use crate::rindexer_lib::typings::rust::events::rocket_pool_eth::rocket_pool_eth_contract; // [!code focus]

async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                let address: Address = "0xdac17f958d2ee523a2206206994597c13d831ec7" // [!code focus]
                    .parse() // [!code focus]
                    .expect("Invalid address"); // [!code focus]
                // have to pass in the network name here // [!code focus]
                let rocket_pool_eth = rocket_pool_eth_contract("ethereum", address); // [!code focus]
                let name = rocket_pool_eth.name().await; // [!code focus]
                match name { // [!code focus]
                    Ok(result) => { // [!code focus]
                        println!("rETH name: {:?}", result); // [!code focus]
                        return Ok(()); // [!code focus]
                    }
                    Err(e) => { // [!code focus]
                        println!("Error getting rETH name: {:?}", e); // [!code focus]
                        return Err(e.to_string()); // [!code focus]
                    }
                } // [!code focus]
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

### Postgres

By default, if you enable postgres storage in the YAML configuration file, rindexer will generate you a connected postgres client. This uses the [tokio-postgres](https://docs.rs/tokio-postgres/latest/tokio_postgres/) library and connection pools by default. This is a great starting point for you to build on.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                // database client here // [!code focus]
                let _db = &context.database; // [!code focus]
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

#### Disable Postgres Create Tables

The tables creation can be skipped by using the [disable\_create\_tables](/docs/start-building/yaml-config/storage#disable_create_tables) in the YAML configuration file. This will generate you a blank handler with no logic inside.
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    disable_create_tables: true // [!code focus]
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
```

You can query data from the database and write data to it; here are the postgres methods exposed:

* `context.database.new` - creates a new client.
* `context.database.batch_execute` - executes multiple queries at once.
* `context.database.execute` - executes a single query.
* `context.database.prepare` - prepares a query to be executed multiple times.
* `context.database.transaction` - starts a transaction.
* `context.database.query` - queries data from the database.
* `context.database.query_one` - queries a single row from the database.
* `context.database.query_one_or_none` - queries a single row from the database, returning None if no rows are found.
* `context.database.insert_bulk` - inserts multiple rows into the database efficiently.
* `context.database.copy_in` - inserts multiple rows into the database using the COPY command.

#### EthereumSqlTypeWrapper

Ethereum types are not 1-to-1 with postgres types, so rindexer has a wrapper to help you with this: an enum called `EthereumSqlTypeWrapper` which has all the types you need to pass into the postgres write functions.
```rs
#[derive(Debug, Clone)]
pub enum EthereumSqlTypeWrapper {
    // Boolean
    Bool(bool),
    VecBool(Vec<bool>),

    // 8-bit integers
    U8(u8),
    I8(i8),
    VecU8(Vec<u8>),
    VecI8(Vec<i8>),

    // 16-bit integers
    U16(u16),
    I16(i16),
    VecU16(Vec<u16>),
    VecI16(Vec<i16>),

    // 32-bit integers
    U32(u32),
    I32(i32),
    VecU32(Vec<u32>),
    VecI32(Vec<i32>),

    // 64-bit integers
    U64(U64),
    U64Nullable(U64),
    U64BigInt(U64),
    I64(i64),
    VecU64(Vec<U64>),
    VecI64(Vec<i64>),

    // 128-bit integers
    U128(u128),
    I128(i128),
    VecU128(Vec<u128>),
    VecI128(Vec<i128>),

    // 256-bit integers
    U256(U256),
    U256Numeric(U256),
    U256NumericNullable(Option<U256>),
    U256Nullable(U256),
    U256Bytes(U256),
    U256BytesNullable(U256),
    I256(I256),
    I256Numeric(I256),
    I256Nullable(I256),
    I256Bytes(I256),
    I256BytesNullable(I256),
    VecU256(Vec<U256>),
    VecU256Bytes(Vec<U256>),
    VecU256Numeric(Vec<U256>),
    VecI256(Vec<I256>),
    VecI256Bytes(Vec<I256>),

    // 512-bit integers
    U512(U512),
    VecU512(Vec<U512>),

    // Hashes
    B128(B128),
    H160(B160), // DEPRECATED - Use Address instead
    B256(B256),
    B256Bytes(B256),
    B512(B512),
    VecB128(Vec<B128>),
    VecB256(Vec<B256>),
    VecB256Bytes(Vec<B256>),
    VecB512(Vec<B512>),
    // Deprecated Hash. Move to use Address
    VecH160(Vec<B160>),

    // Address
    Address(Address),
    AddressNullable(Address),
    AddressBytes(Address),
    AddressBytesNullable(Address),
    VecAddress(Vec<Address>),
    VecAddressBytes(Vec<Address>),

    // Strings and Bytes
    String(String),
    StringVarchar(String),
    StringChar(String),
    StringNullable(String),
    StringVarcharNullable(String),
    StringCharNullable(String),
    VecString(Vec<String>),
    VecStringVarchar(Vec<String>),
    VecStringChar(Vec<String>),
    Bytes(Bytes),
    BytesNullable(Bytes),
    VecBytes(Vec<Bytes>),

    // UUID
    Uuid(Uuid),

    // DateTime
    DateTime(DateTime<Utc>),
    DateTimeNullable(Option<DateTime<Utc>>),

    // JSON
    JSONB(Value),
}

// to use it you just pass the value in the enum
// example EthereumSqlTypeWrapper::Address(result.tx_information.address)
```

### CSV

CSV storage is disabled by default in the YAML configuration file; if you turn it on it will expose a CSV client with these methods:

* `context.csv.append_header` - appends a header to the CSV file.
* `context.csv.append` - appends a row to the CSV file.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                // csv client here // [!code focus]
                let _csv = &context.csv; // [!code focus]
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await;
}
```

#### Disable CSV Create Headers

If you turn on CSV storage then by default rindexer will create headers for you automatically, in line with the ABI event data. The CSV header creation can be skipped by using the [disable\_create\_headers](/docs/start-building/yaml-config/storage#disable_create_headers) in the YAML configuration file.
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: rust
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  csv:
    enabled: true
    disable_create_headers: true // [!code focus]
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18900000
    end_block: 19000000
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
    - Transfer
```

### register

rindexer needs to know which handlers are required for indexing, so you need to register them with the `EventCallbackRegistry`. Passing the `&mut EventCallbackRegistry` around is taken care of for you by the rindexer framework; you just need to call the `register` function.

```rs
async fn transfer_handler(
    manifest_path: &PathBuf,
    registry: &mut EventCallbackRegistry,
) {
    RocketPoolETHEventType::Transfer(
        TransferEvent::handler(
            |results, context| async move {
                ...
                return Ok(());
            },
            no_extensions(),
        )
        .await,
    )
    .register(manifest_path, registry).await; // [!code focus]
}
```

The `main.rs` calls the `register_all_handlers` function, which lives in `all_handlers.rs` and registers all the handlers. This code is all generated for you and you do not need to worry about it.

```rs
use std::path::PathBuf;

use super::rust::rocket_pool_eth::rocket_pool_eth_handlers;
use rindexer::event::callback_registry::EventCallbackRegistry;

pub async fn register_all_handlers(manifest_path: &PathBuf) -> EventCallbackRegistry {
    let mut registry = EventCallbackRegistry::new();
    rocket_pool_eth_handlers(manifest_path, &mut registry).await;
    registry
}
```

### main.rs

The Rust project will generate a `main.rs` for you which can be run out of the box. This is just boilerplate code to get you started; you can customise it as you wish, and should if you are building a custom indexer.
```rs
use std::env;

use self::rindexer_lib::indexers::all_handlers::register_all_handlers;
use rindexer::{
    event::callback_registry::TraceCallbackRegistry, start_rindexer, GraphqlOverrideSettings,
    IndexingDetails, StartDetails,
};

mod rindexer_lib;

#[tokio::main]
async fn main() {
    let args: Vec<String> = env::args().collect();

    let mut enable_graphql = false;
    let mut enable_indexer = false;
    let mut port: Option<u16> = None;

    for arg in args.iter() {
        match arg.as_str() {
            "--graphql" => enable_graphql = true,
            "--indexer" => enable_indexer = true,
            _ if arg.starts_with("--port=") || arg.starts_with("--p") => {
                if let Some(value) = arg.split('=').nth(1) {
                    let overridden_port = value.parse::<u16>();
                    match overridden_port {
                        Ok(overridden_port) => port = Some(overridden_port),
                        Err(_) => {
                            println!("Invalid port number");
                            return;
                        }
                    }
                }
            }
            _ => {}
        }
    }

    let path = env::current_dir();
    match path {
        Ok(path) => {
            let manifest_path = path.join("rindexer.yaml");
            let result = start_rindexer(StartDetails {
                manifest_path: &manifest_path,
                indexing_details: if enable_indexer {
                    Some(IndexingDetails {
                        registry: register_all_handlers(&manifest_path).await,
                        trace_registry: TraceCallbackRegistry::new(),
                        event_stream: None,
                    })
                } else {
                    None
                },
                graphql_details: GraphqlOverrideSettings {
                    enabled: enable_graphql,
                    override_port: port,
                },
            })
            .await;

            match result {
                Ok(_) => {}
                Err(e) => {
                    println!("Error starting rindexer: {:?}", e);
                }
            }
        }
        Err(e) => {
            println!("Error getting current directory: {:?}", e);
        }
    }
}
```

#### Running

If you want to run this with docker support for postgres, first run:

```bash
docker compose up -d
```

Then to run the boilerplate code generated for you, you can run the following command:

:::info
As you are creating a Rust rindexer project, you will want to change this logic to suit your needs. Just like create-react-app exposes the boilerplate code to get you started, this is the same idea.
:::

:::code-group

```bash [everything]
cargo run
```

```bash [indexer only]
cargo run -- --indexer
```

```bash [graphql only]
cargo run -- --graphql
```

:::

### Managing changes when generating typings

When you start changing your YAML configuration file and regenerating your typings, the indexer functions may break or need editing to match the new typings. You can regenerate the indexers folder, but this will overwrite any changes you made. Luckily the Rust compiler is very good at telling you what you need to change.

### Subscribing to indexer events

More advanced use cases may require reacting to some internal rindexer events, which can be done through the `RindexerEventStream` attached to `IndexingDetails`.

```rs
use self::rindexer_lib::indexers::all_handlers::register_all_handlers;
use rindexer::{
    event::callback_registry::TraceCallbackRegistry, start_rindexer, GraphqlOverrideSettings,
    IndexingDetails, RindexerEvent, RindexerEventStream, StartDetails,
};
use std::env;
use tokio::sync::broadcast;
use tokio::task;

mod rindexer_lib;

fn handle_rindexer_events(mut receiver: broadcast::Receiver<RindexerEvent>) {
    task::spawn(async move {
        loop {
            match receiver.recv().await {
                Ok(event) => {
                    println!("Received {:?}", event);
                }
                Err(e) => {
                    println!("Error {:?}", e);
                }
            }
        }
    });
}

#[tokio::main]
async fn main() {
    let indexer_event_stream = RindexerEventStream::new();
    handle_rindexer_events(indexer_event_stream.subscribe());

    let path = env::current_dir();
    match path {
        Ok(path) => {
            let manifest_path = path.join("rindexer.yaml");
            let result = start_rindexer(StartDetails {
                manifest_path: &manifest_path,
                indexing_details: Some(IndexingDetails {
                    registry: register_all_handlers(&manifest_path).await,
                    trace_registry: TraceCallbackRegistry::new(),
                    event_stream: Some(indexer_event_stream),
                }),
                graphql_details: GraphqlOverrideSettings {
                    enabled: false,
                    override_port: None,
                },
            })
            .await;

            match result {
                Ok(_) => {}
                Err(e) => {
                    println!("Error starting rindexer: {:?}", e);
                }
            }
        }
        Err(e) => {
            println!("Error getting current directory: {:?}", e);
        }
    }
}
```

#### Available events

* `HistoricalIndexingCompleted` - emitted when historical indexing completes (this happens on every restart of the indexer once it catches up with the latest block)
* `BlockIndexingCompleted { chain_id, block_number }` - emitted when all events on a given chain have been indexed up to at least `block_number`.

## Typings

When creating a new Rust project with rindexer, it will create a typings folder for you with detailed typings for all your contracts, events and network information. This is generated from the ABIs you provide in the YAML configuration file. This folder is not meant to be manually edited and should always be generated using codegen.

You can regenerate the typings folder by running the following command:

:::info
rindexer tries to be as smart as possible when it comes to updating the typings based on the `rindexer.yaml`; it resolves as much as it can without needing a regeneration, but like any codegen tool, if you change certain aspects it does need to be regenerated.
if you change any of these properties in the `rindexer.yaml` file it will need to be regenerated:

* [indexer name](/docs/start-building/yaml-config/top-level-fields#name)
* anything in the [network](/docs/start-building/yaml-config/networks) section, including adding and removing networks
* enabling or disabling a [storage provider](/docs/start-building/yaml-config/storage)
* changing the [contract name](/docs/start-building/yaml-config/contracts#name)
* changing from [address](/docs/start-building/yaml-config/contracts#address) contract indexing to [filter](/docs/start-building/yaml-config/contracts#filter) indexing or vice versa
* changing the contract [ABI](/docs/start-building/yaml-config/contracts#abi)
* anything in the [global](/docs/start-building/yaml-config/global) section

Also, if you do regenerate, your indexer files may need to be updated to match the new typings; you can migrate them manually or generate them again using the [indexer codegen command](/docs/start-building/codegen#indexers).
:::

```bash
rindexer codegen typings
```

## Project Types

rindexer has two types of projects you can create:

* [No-code Project](/docs/start-building/project-types/no-code-project) - A no-code project is where you can start indexing chain events without writing any code. This is what a lot of people will use to get started with rindexer.
* [Rust Project](/docs/start-building/project-types/rust-project) - A Rust project is where you can create custom indexing systems using Rust. This is for more advanced users who want to build more complex indexing systems.

## No-code Project

The No-code Project type in rindexer is designed for users who wish to quickly set up and deploy indexing solutions without delving into the complexities of coding. This project type leverages a YAML-based configuration that guides you through specifying what data to index and how to index it.
It's an ideal solution for those who need to implement standard indexing tasks, or for developers who prefer to focus more on application logic than on the intricacies of the indexing process.

:::tip[Custom Tables - Build Powerful Indexers Without Code]
With **Custom Tables**, you can build sophisticated indexers that maintain derived state - like token balances, NFT ownership, and protocol metrics - all without writing any code. Track ERC20 balances, count user actions, aggregate cross-chain data, and more.

[Learn about Custom Tables β†’](/docs/start-building/tables)
:::

**Features:**

* **Easy Configuration**: Set up your indexing with a simple YAML file. Define what chain events to listen for and how they should be processed with just a few lines of configuration.
* **Custom Tables**: Maintain derived state like token balances, NFT ownership, counters, and cross-chain aggregations with upsert, update, and delete operations - no code required.
* **Quick Deployment**: With the no-code setup, your project can be up and running in minutes. This rapid deployment allows you to see results quickly and make adjustments as needed without digging into code.
* **Focus on Use Cases**: This project type is perfect for use cases that don't require specialised processing of the data. It allows you to focus on the application's functionality rather than on the backend logistics of data handling.

**Ideal For:**

* Token balance tracking (ERC20, ERC721, ERC1155).
* NFT ownership indexing.
* Protocol analytics (TVL, volume, user counts).
* Cross-chain aggregations.
* Data reporting and dashboards.
* Fast prototyping and MVP developments.
* Hackathons and quick proof-of-concept projects.

The no-code project functionality will keep growing and evolving to provide more features and capabilities to users who prefer a configuration-driven approach to indexing. If you have any suggestions for no-code indexing ideas, please create a GitHub issue and we will look into it.
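To make the configuration-driven approach concrete, here is a minimal no-code manifest; it reuses the rETH example found elsewhere in these docs, and the RPC endpoint and block range are illustrative:

```yaml
name: rETHIndexer
description: My first rindexer project
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
    - Transfer
```

With just this file, `rindexer start all` indexes the Transfer events into postgres and serves them over GraphQL, with no Rust code involved.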
## Rust Project

Requires you to have Rust installed on your machine. You can install Rust by following the instructions [here](https://www.rust-lang.org/tools/install).

The Rust Project type is for developers looking to build sophisticated and highly customised indexing systems. This approach utilises the full power of Rust, offering you the tools to write safe, efficient, and highly concurrent code. By choosing the Rust Project, you can extend the basic capabilities of rindexer with custom logic, complex data transformations, and optimisations tailored to your specific needs.

**Features:**

* **High Performance**: Leverage Rust's renowned performance and efficiency to handle large-scale data processing and high-throughput applications.
* **Customizable Indexing Logic**: Implement custom indexing logic and data transformations that go beyond the default configurations offered in the no-code projects.
* **Advanced Data Handling**: Integrate advanced data handling capabilities, such as contract code lookups, API lookups, complex queries, and data aggregation, to suit your specific application needs.

**Ideal For:**

* Advanced projects requiring custom indexing solutions.
* Custom indexing solutions that go beyond the capabilities of the no-code projects.
* Advanced projects that need to aggregate and transform data in complex ways.

Each project type addresses different user needs and expertise levels, providing flexibility in how you choose to implement and scale your indexing solutions with rindexer. Whether you prefer a straightforward, configuration-driven approach or a custom, code-intensive implementation, rindexer supports your development journey.

## Create New Project

rindexer provides two modes for creating new projects, each optimized for different use cases and infrastructure setups.

### Choose Your Mode

#### Standard Mode (Most common setup)

The default way to create and run rindexer projects.
This mode connects to blockchain nodes via RPC endpoints and is suitable for every use case. [Create a Standard Project β†’](/docs/start-building/create-new-project/standard) #### Reth Mode (Advanced) An advanced mode that integrates directly with Reth nodes using Execution Extensions (ExEx). **Requirements:** * Running Reth archive node * Advanced knowledge of Ethereum infrastructure [Create a Reth Project β†’](/docs/start-building/create-new-project/reth-mode) ## Create New Project - Reth Mode :::warning Reth mode requires running a Reth archive node and is intended for advanced users. If you're just getting started, you should be using [Standard Mode](/docs/start-building/create-new-project/standard) instead. ::: :::info Make sure you have the CLI installed before starting a new project. You can find the installation instructions [here](/docs/introduction/installation). ::: ### Prerequisites Before creating a Reth mode project, ensure you have: 1. **Reth Archive Node**: A fully synced Reth archive node running locally or on your infrastructure 2. **Hardware Requirements**: Sufficient disk space and memory for running both Reth and rindexer :::tip For detailed instructions on setting up a Reth archive node, see the official [Running Reth on Ethereum](https://reth.rs/run/ethereum) guide. ::: ### 1. Create a New Reth Project The `--reth` flag enables Reth mode when creating a new project. You can also pass additional Reth configuration arguments after `--`. :::code-group ```bash [no-code] rindexer new no-code --reth ``` ```bash [rust] rindexer new rust --reth ``` ```bash [with custom args] rindexer new no-code --reth -- --datadir /custom/path --http true ``` ::: :::info Starting rust projects with Reth mode will add the `reth` feature to your project, which automatically installs the dependencies for Reth. The user does not need to install Reth separately. 
::: #### Example New with Reth ```bash rindexer new no-code --reth Initializing new rindexer project with Reth support... Project Name: RocketPoolETHIndexer Project Description (skip by pressing Enter): High-performance rETH indexer using Reth Repository (skip by pressing Enter): https://github.com/joshstevens19/rindexer What Storages To Enable? (graphql can only be supported if postgres is enabled) [postgres, csv, both, none]: postgres Postgres Docker Support Out The Box? [yes, no]: yes Reth Configuration: Data Directory (default: ~/.reth): /data/reth Chain (default: mainnet) [mainnet, sepolia, holesky]: mainnet Enable HTTP RPC? [yes, no]: yes Auth RPC Port (default: 8551): 8551 rindexer no-code project created with Reth support with a rETH transfer events YAML template. ``` ### 2. Reth Configuration in YAML When you create a project with `--reth`, the generated `rindexer.yaml` includes Reth configuration: ```yaml name: RocketPoolETHIndexer description: High-performance rETH indexer using Reth repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co # Fallback RPC reth: enabled: true logging: true # Show Reth logs in stdout cli_args: - "--datadir /data/reth" - "--http" - "--full false" # Archive mode storage: postgres: enabled: true contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: 18600000 end_block: 18718056 abi: ./abis/RocketTokenRETH.abi.json include_events: - Transfer - Approval ``` #### Key Reth Configuration Options * **enabled**: Enable/disable Reth integration * **logging**: Show Reth logs in stdout (useful for debugging) * **cli\_args**: Array of Reth CLI arguments in "flag value" format :::warning The JWT secret for authenticated RPC is not included in the YAML for security reasons. 
You'll need to add it manually or via environment variables:

```yaml
cli_args:
  - "--authrpc.jwtsecret /path/to/jwt.hex"
```

:::

#### Common Reth CLI Arguments

| Argument | Description | Example |
| --------------------- | ------------------------------------ | -------------------------------------- |
| `--datadir` | Reth data directory | `--datadir /data/reth` |
| `--authrpc.jwtsecret` | Path to JWT secret | `--authrpc.jwtsecret /secrets/jwt.hex` |
| `--authrpc.addr` | Auth RPC address | `--authrpc.addr 127.0.0.1` |
| `--authrpc.port` | Auth RPC port | `--authrpc.port 8551` |
| `--full` | Run as full node (false for archive) | `--full false` |
| `--chain` | Network to sync | `--chain mainnet` |
| `--http` | Enable HTTP RPC | `--http` |
| `--metrics` | Enable metrics | `--metrics 127.0.0.1:9001` |

### 3. Environment Variables

All Reth configuration can be overridden using environment variables:

```bash
# .env file
RETH_DATA_DIR=/data/reth
RETH_JWT_SECRET=/secrets/jwt.hex
RETH_AUTH_PORT=8551
```

```yaml
# rindexer.yaml using environment variables
networks:
- name: ethereum
  chain_id: 1
  rpc: ${FALLBACK_RPC_URL}
  reth:
    enabled: true
    cli_args:
      - "--datadir ${RETH_DATA_DIR}"
      - "--authrpc.jwtsecret ${RETH_JWT_SECRET}"
      - "--authrpc.port ${RETH_AUTH_PORT}"
```

### 4. Start the Project

Starting a Reth mode project will:

1. Start your Reth node and connect to it
2. Set up ExEx (Execution Extensions) for reorg-aware streaming
3. Begin indexing with minimal latency

:::info
Ensure your Reth node is fully synced before starting the indexer.
:::

:::code-group

```bash [all services]
rindexer start all
```

```bash [indexer only]
rindexer start indexer
```

```bash [graphql only]
rindexer start graphql
```

:::

### 5. Performance Benefits

Reth mode provides several advantages over standard RPC-based indexing:

#### Native Reorg Handling

* ExEx notifications include reorg information
* Automatic rollback and reprocessing through rindexer's reorg handling
* No missed events during reorgs

#### Minimal Latency

* Direct connection to Reth node
* No network overhead
* Access to pending transactions

#### Better Performance

* Efficient state access
* Reduced RPC calls

### 6. Monitoring and Debugging

#### Enable Reth Logging

Set `logging: true` in your Reth configuration to see detailed logs:

```yaml
reth:
  enabled: true
  logging: true # Enable Reth logs
```

### 7. Troubleshooting

#### Common Issues

**Cannot connect to Reth node**

* Ensure Reth is running and fully synced
* Verify the JWT secret is correct
* Try running the Reth node separately to check it works; there might be a problem with the arguments

**Performance issues**

* Monitor Reth resource usage
* Ensure sufficient disk I/O

#### Getting Help

For Reth-specific issues:

* [Reth Documentation](https://reth.rs)
* [Reth GitHub](https://github.com/paradigmxyz/reth)
* [rindexer GitHub Issues](https://github.com/joshstevens19/rindexer/issues)

### Next Steps

* Learn about [Reth Execution Extensions](/docs/advanced/using-reth-exex)
* Configure [advanced indexing options](/docs/start-building/yaml-config)

## Create New Project - Standard Mode

:::info
Make sure you have the CLI installed before starting a new project. You can find the installation instructions [here](/docs/introduction/installation).
:::

We advise anyone using rindexer to install docker, which makes running locally with postgres storage a lot easier. If you have not got docker you can install it [here](https://docs.docker.com/get-docker/).

### 1. Create a new project

This will walk you through setting up your project by asking you a series of questions in the terminal.
:::code-group

```bash [no-code]
rindexer new no-code
```

```bash [rust]
rindexer new rust
```

:::

#### Example New

```bash
Initializing new rindexer project...

Project Name: RocketPoolETHIndexer
Project Description (skip by pressing Enter): My first rindexer project
Repository (skip by pressing Enter): https://github.com/joshstevens19/rindexer
What Storages To Enable? (graphql can only be supported if postgres is enabled) [postgres, csv, both, none]: postgres
Postgres Docker Support Out The Box? [yes, no]: yes

rindexer no-code project created with a rETH transfer events YAML template.
```

If any of the steps are unclear, you can find more information in the [New Project Appendix](#new-project-appendix).

Once completed, a new boilerplate project will be created in the current directory. You can navigate to the project directory and start building your project. The boilerplate project is configured to index rETH transfer and approval events from ethereum mainnet between a specific block range.

### 2. Add Environment Variables

If you are not using `postgres` you can move straight to [starting your project](#3-config-your-rindexeryaml-file).

If you selected `yes` to the `Postgres Docker Support Out The Box?` question, a `.env` file has been generated for you with the required environment variables. You can move straight to [starting your project](#3-config-your-rindexeryaml-file).

***

Open up the generated `.env` file and fill in the required environment variables.

#### DATABASE\_URL

:::warning
Can skip if:

* on question "What Storages To Enable?" you selected csv or none
* on question "Postgres Docker Support Out The Box?" you selected yes
:::

For ease of running locally we suggest you enable docker support on the rindexer project. If you did not enable docker support with postgres storage, you will need to provide your postgres database details in the `.env` file which has been generated for you.
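For reference, `DATABASE_URL` is a standard postgres connection string. A sketch of what it might look like locally (the user, password, host, port and database name below are illustrative placeholders, not values rindexer requires):

```bash
# .env - example values only, point these at your own postgres instance
DATABASE_URL=postgresql://postgres:password@localhost:5432/rindexer
```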
`sslmode=require` is supported as well; just include it in the connection string.

#### POSTGRES\_PASSWORD

:::warning
Can skip if:

* on question "What Storages To Enable?" you selected csv or none
* on question "Postgres Docker Support Out The Box?" you selected no
:::

This is injected into the `.env` for you if you selected `yes` to the `Postgres Docker Support Out The Box?` question. It is used by docker to create a postgres database for you locally. You do not need this if you have your own postgres database or on deployed environments; it is purely for local development.

```bash
POSTGRES_PASSWORD=password
```

#### Other Environment Variables

Every part of the `rindexer.yaml` file can be overridden by an environment variable. The syntax for this in the `rindexer.yaml` is `${ENV_VARIABLE_NAME}`, for example `${POLYGON_RPC_URL}`. This can be used in ANY field in the YAML file.

Read more about the environment variables in the [yaml configuration documentation](/docs/start-building/yaml-config#environment-variables).

### 3. Config your `rindexer.yaml` file

Generating a rindexer project will generate a `rindexer.yaml` file for you. This is where you will configure your project. You can read all about the rindexer.yaml settings in the [yaml configuration documentation](/docs/start-building/yaml-config).

You can also use the [rindexer add](/docs/start-building/add) command to add contracts to your project and pull in ABIs for you.

:::tip[Want to track balances, counters, or aggregations?]
Instead of just logging raw events, you can use **Custom Tables** to maintain derived state. For example, track token balances that automatically update with each Transfer event, count user actions, or aggregate data across multiple chains.

[Learn about Custom Tables β†’](/docs/start-building/tables)
:::

It will generate a boilerplate project for you which is configured to index rETH transfer and approval events from ethereum mainnet between a specific block range.
```yaml
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: 18600000
    end_block: 18718056
    abi: ./abis/RocketTokenRETH.abi.json
    include_events:
    - Transfer
    - Approval
```

### 4. Start the project

:::info
rindexer starts your postgres docker compose file up for you automatically if the DATABASE\_URL can not connect to the database and a docker-compose.yml is present in the parent directory. You will need to make sure you have docker running on your machine before starting the project. If you have not got docker you can install it [here](https://docs.docker.com/get-docker/). You can also run docker manually by using `docker compose up -d`.
:::

:::info
graphql can only run if you have postgres storage enabled
:::

:::code-group

```bash [graphql and indexer]
rindexer start all
```

```bash [indexer only]
rindexer start indexer
```

```bash [graphql only]
rindexer start graphql
```

:::

:::warning
The boilerplate template uses a free node which may get rate limited. We recommend using a paid node for production.
:::

### 5. Query the GraphQL API

:::info
It is worth noting that the graphql API is here for convenience as it works out of the box with the postgres storage. If you're building an advanced indexer you may want to build your own API on top of the data you have indexed; in that case it is fine to just use rindexer as the indexing tool and build your own API on top of it.
:::

:::info
Each page request will have a max query limit of 1000 per page to avoid memory and database issues.
:::

GraphQL will be available at [http://localhost:3001/graphql](http://localhost:3001/graphql) and the playground at [http://localhost:3001/playground](http://localhost:3001/playground). You can read more about the rindexer GraphQL API in the [API documentation](/docs/accessing-data/graphql).

:::code-group

```graphql [request]
query AllTransfers($orderBy: [TransfersOrderBy!] = [BLOCK_NUMBER_DESC], $first: Int = 5) {
  allTransfers(orderBy: $orderBy, first: $first) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```json [response]
{
  "data": {
    "allTransfers": {
      "nodes": [
        {
          "blockHash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9",
          "blockNumber": "18718011",
          "contractAddress": "0xae78736cd615f374d3085123a210448e74fc6393",
          "from": "0xfac5ddb4e3eb6941a458544bfe2588ee566bd4ff",
          "network": "ethereum",
          "nodeId": "WyJ0cmFuc2ZlcnMiLDU4Nzld",
          "to": "0x2201d2400d30bfd8172104b4ad046d019ca4e7bd",
          "txHash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
          "value": "263518859"
        },
        {
          "blockHash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9",
          "blockNumber": "18718011",
          "contractAddress": "0xae78736cd615f374d3085123a210448e74fc6393",
          "from": "0xe4f719c11fc5ab883e32068df99962985645e860",
          "network": "ethereum",
          "nodeId": "WyJ0cmFuc2ZlcnMiLDU4ODBd",
          "to": "0xc5c2dd38d29960e7bb015e77be44aefbb08f192b",
          "txHash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
          "value": "19152486233394367"
        },
        {
          "blockHash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9",
          "blockNumber": "18718011",
          "contractAddress": "0xae78736cd615f374d3085123a210448e74fc6393",
          "from": "0xc5c2dd38d29960e7bb015e77be44aefbb08f192b",
          "network": "ethereum",
          "nodeId": "WyJ0cmFuc2ZlcnMiLDU4ODFd",
          "to": "0x882a41fd4c5d09d01900db378903c5c00cc31d64",
          "txHash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
          "value": "19159007520480803"
        },
        {
          "blockHash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9",
          "blockNumber": "18718011",
          "contractAddress": "0xae78736cd615f374d3085123a210448e74fc6393",
          "from": "0x882a41fd4c5d09d01900db378903c5c00cc31d64",
          "network": "ethereum",
          "nodeId": "WyJ0cmFuc2ZlcnMiLDU4ODJd",
          "to": "0x2201d2400d30bfd8172104b4ad046d019ca4e7bd",
          "txHash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
          "value": "19159007520480803"
        },
        {
          "blockHash": "0x8461da7a1d4b47190a01fa6eae219be40aacffab0dd64af7259b2d404572c3d9",
          "blockNumber": "18718011",
          "contractAddress": "0xae78736cd615f374d3085123a210448e74fc6393",
          "from": "0x882a41fd4c5d09d01900db378903c5c00cc31d64",
          "network": "ethereum",
          "nodeId": "WyJ0cmFuc2ZlcnMiLDU4ODNd",
          "to": "0xc5c2dd38d29960e7bb015e77be44aefbb08f192b",
          "txHash": "0x145c6705ffbf461e85d08b4a7f5850d6b52a7364d93a057722ca1194034f3ba4",
          "value": "0"
        }
      ],
      "pageInfo": {
        "endCursor": "WyJibG9ja19udW1iZXJfZGVzYyIsWzE4NzE4MDExLDU4ODNdXQ==",
        "hasNextPage": true,
        "hasPreviousPage": false,
        "startCursor": "WyJibG9ja19udW1iZXJfZGVzYyIsWzE4NzE4MDExLDU4NzldXQ=="
      }
    }
  }
}
```

:::

#### Generate graphql queries

You can generate .graphql prebuilt queries to get up and running in seconds. These will be generated in a `queries` folder.

```bash
rindexer codegen graphql
```

##### TypeScript

[graphql-codegen](https://the-guild.dev/graphql/codegen) is the best tool on the market to generate TypeScript typings for your GraphQL queries, mutations, and subscriptions.
Learn about the `codegen.ts` config [here](https://the-guild.dev/graphql/codegen/docs/config-reference/codegen-config). The graphql API url is the `schema` in the config; you can set this to your graphql endpoint like so:

```ts
import { CodegenConfig } from '@graphql-codegen/cli'

const config: CodegenConfig = {
  // this is YOUR_GRAPHQL_API_URL // [!code focus]
  schema: 'http://localhost:3001/graphql', // [!code focus]
  ...
}

export default config
```

Then hook the config up with your tool of choice; below are some links to documentation:

* React Apollo - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-apollo#with-react-hooks](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-apollo#with-react-hooks)
* React Query - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-query](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-react-query)
* Node app - [https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-urql](https://the-guild.dev/graphql/codegen/plugins/typescript/typescript-urql)

##### .NET, Dart, Java, Flow

codegen for other languages can be found [here](https://the-guild.dev/graphql/codegen)

### New Project Appendix

If you are not sure what to select, this section will explain each step in more detail.

#### What Storages To Enable?

* Postgres - This will use a postgres database to store the data.
* Csv - This will store the data in a csv file on the machine.
* Both - This will store the data in both a postgres database and a csv file.
* None - This will not store the data anywhere.

#### Postgres Docker

* Yes - This will use docker to spin up a postgres database for you, great for local development.
* No - This will not use docker and you will need to provide your postgres database details in the `.env` file.

## Discord

Discord is one of the most popular chat platforms, and is great for building bots and notifications when things happen on chain.
:::info Due to rate limits out of rindexer control ChatBots will only send messages with max block range of 10 blocks. Most people who will use rindexer Chatbots will only want to start sending messages of the live data anyway. The ChatBots are really only meant to be ran sending live data not historic data. ::: ### Setup a bot on discord 1. go to [https://discordapp.com/developers/applications/](https://discordapp.com/developers/applications/) 2. If you already have a bot created, click it in the list. If you don’t have any discord bots, click the β€œNew Application” button. 3. Give Your Bot a Name (you can then after add a description and icon for it) 4. Your next step is to go over the menu on the left side of the screen and click β€œBot”. It’s the icon that looks like a little puzzle piece. 5. Click the β€œAdd Bot” button and press "Yes, do it!" 6. You see a section called β€œToken” you need to generate your bot token and save it somewhere safe for later. 7. In order to add your bot to your Discord Server, you’ll need to navigate back to the β€œOAuth2” tab. 8. Once there, scroll down to the β€œOauth2 URL Generator” section. In the β€œScopes” section, you’ll want to select the β€œbot” checkbox. 9. You’ll notice that a URL appeared as soon as you clicked β€œbot” β€” this will be your URL for adding your bot to a server. 10. Scroll down some more to the β€œBot Permissions” section. This is where you choose what permissions to give your bot, and what it can and can’t do. 11. You want to do tick "Send messages" as rindexer does not read any messages from the server. 12. After you’ve selected your permissions, scroll up a little bit and look at the URL that was generated and copy and go to that url in your browser. 13. Here you’ll want to select the server you’re adding your bot to and press β€œContinue” 14. It then confirm permissions make sure you have ticked "Send Messages" and press "Authorize" 15. 
You are now done. You will need the bot token to set up the discord bot with rindexer.

### Configure rindexer

The `discord` property accepts an array allowing you to split up the channels any way you wish.

### Example

:::code-group

```yaml [contract events]
name: RocketPoolETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN} // [!code focus]
      channel_id: 123456789012345678 // [!code focus]
      networks: // [!code focus]
      - ethereum // [!code focus]
      messages:
      - event_name: Transfer // [!code focus]
        # filter_expression is optional // [!code focus]
        filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce3982663' && value >= 10 && value <= 2000000000000000000" // [!code focus]
        template_inline: "*New RETH Transfer Event* // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus]
          " // [!code focus]
```

```yaml [native transfers]
name: ETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks:
  - network: ethereum
    chat: // [!code focus]
      discord: // [!code focus]
      - bot_token: ${DISCORD_BOT_TOKEN} // [!code focus]
        channel_id: 123456789012345678 // [!code focus]
        networks: // [!code focus]
        - ethereum // [!code focus]
        messages:
        - event_name: NativeTransfer // [!code focus]
          template_inline: "*New ETH Transfer Event* // [!code focus]
            from: {{from}} // [!code focus]
            to: {{to}} // [!code focus]
            amount: {{format_value(value, 18)}} // [!code focus]
            Token address: {{transaction_information.address}} // [!code focus]
            [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus]
            " // [!code focus]
```

:::

### bot\_token

This is your discord bot token, which you generated in the Discord Developer Portal when setting up your bot.

:::info
We advise you to put this in an environment variable.
:::

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN} // [!code focus]
```

### channel\_id

You have to add your bot to a channel to use it; this is the channel ID you wish the bot to send messages to.

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678 // [!code focus]
```

### networks

This is an array of networks for which you want to send messages to this discord channel.

```yaml [rindexer.yaml]
...
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] discord: // [!code focus] - bot_token: ${DISCORD_BOT_TOKEN} channel_id: 123456789012345678 networks: // [!code focus] - ethereum // [!code focus] ``` ### messages This is an array of messages you want to send to this discord channel. It is an array as you can define many different messages to send to this channel with different conditions. #### event\_name This is the name of the event you want to send a message for, must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] discord: // [!code focus] - bot_token: ${DISCORD_BOT_TOKEN} channel_id: 123456789012345678 networks: - ethereum messages: - event_name: Transfer // [!code focus] ``` #### filter\_expression This accepts a filter expression to filter the events before sending a message to this discord channel :::info This is optional, if you do not provide any filter expression all the events will be sent to this discord channel. ::: Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions. 
##### Supported types and operations:

| Type | Description | Operators | Notes |
| --- | --- | --- | --- |
| Numeric (uint/int variants) | Integer values (e.g., `42`, `-100`) or decimal values (e.g., `3.14`, `-0.5`) | `>`, `<`, `=`, `>=`, `<=` | Numbers must have digits before and after a decimal point if one is present (e.g., `.5` or `5.` are not valid standalone numbers) |
| Address | Ethereum addresses (e.g., `0x1234567890abcdef1234567890abcdef12345678`) | `=`, `!=` | Comparisons (e.g., `from == '0xABC...'`) are typically case-insensitive regarding the hex characters of the address value itself |
| String | Text values. Can be single-quoted (e.g., `'hello'`) or, on the right-hand side of a comparison, unquoted (e.g., `active`) | `=`, `!=` | Quoted strings support `\'` to escape a single quote and `\\` to escape a backslash. All string comparison operations (e.g., `name == 'Alice'`, `description contains 'error'`) are performed case-insensitively during evaluation |
| Boolean | True or false values | `=`, `!=` | Represented as `true` or `false`. These keywords are parsed case-insensitively (e.g., `TRUE`, `False` are also valid in expressions) |
| Hex String Literal | A string literal starting with `0x` or `0X` followed by hexadecimal characters (0-9, a-f, A-F) | `=`, `!=` | Treated as a string for comparison purposes (e.g., `input_data starts_with '0xa9059cbb'`). Comparison is case-sensitive for the hex characters after `0x` |
| Array | Ordered list of items | `==`, `!=`, `[index]` | See "Array Type Operations" below |

##### Logical Operators

* `&&` - All conditions must be true
* `||` - At least one condition must be true
* `()` - Parentheses for grouping
* `&&` has higher precedence than `||` (i.e., `&&` operations are evaluated before `||` operations if not grouped by parentheses)

##### Array Type Operations

For array types, you can use the following operations:

* `array_param == '["raw_json_array_string"]'` - string comparison of the array's entire JSON string representation against the provided string
* `array_param != '["raw_json_array_string"]'` - the negation of the above
* `array_param[0]` - indexed access. The index must be a non-negative integer.

##### Whitespace

Flexible whitespace is generally allowed around operators, parentheses, and keywords for readability. However, whitespace within quoted string literals is significant and preserved.

##### Examples

* `value > 1000` - Numeric comparison, checks if `value` is greater than 1000.
* `from = '0x1234567890abcdef1234567890abcdef12345678'` - Address comparison, checks if `from` matches the specified address.
* `name != 'Alice'` - String comparison, checks if `name` is not equal to 'Alice'.
* `active = true` - Boolean comparison, checks if `active` is true.
* `value >= 1000 && value <= 2000` - Numeric range check, checks if `value` is between 1000 and 2000 inclusive.
* `from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef'` - Address comparison with logical OR, checks if `from` matches either of the two addresses.
* `value > 1000 && (from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef')` - Combined numeric and address checks with logical AND and OR, checks if `value` is greater than 1000 and `from` matches either of the two addresses.
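To make the precedence rule concrete, here is a small TypeScript sketch (illustrative only, not rindexer's parser). An expression in which `&&` binds tighter than `||` can be held as OR-groups of AND-conditions, and `evaluate` and the `expr` value below are hypothetical names for this illustration:

```typescript
// Illustrative only: models how `&&` binds tighter than `||`.
// `a || b && c` is held as OR-groups of AND-conditions: [[a], [b, c]].
type Condition = (event: Record<string, unknown>) => boolean;

function evaluate(orGroups: Condition[][], event: Record<string, unknown>): boolean {
  // True if any OR-group has all of its AND-conditions true.
  return orGroups.some(group => group.every(cond => cond(event)));
}

// `value > 1000 && (from = A || from = B)` distributes into two groups:
const A = "0x1234567890abcdef1234567890abcdef12345678";
const B = "0xabcdefabcdefabcdefabcdefabcdefabcdefabcd";
const expr: Condition[][] = [
  [e => Number(e.value) > 1000, e => String(e.from).toLowerCase() === A],
  [e => Number(e.value) > 1000, e => String(e.from).toLowerCase() === B],
];

console.log(evaluate(expr, { value: 2000, from: A })); // true
console.log(evaluate(expr, { value: 500, from: A }));  // false
```

Note how address comparison is done case-insensitively (via `toLowerCase`), matching the Address semantics in the table above.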
#### conditions

This accepts an array of conditions you want to apply to the event data before sending a message to this discord channel.

:::warning
Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`.
:::

:::info
This is optional; if you do not provide any conditions, all the events will be sent to this discord channel.
:::

You may want to filter the message based on the event data. If the event data does not have an index on the Solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data.

rindexer has enabled a special syntax which allows you to define, on your ABI fields, what you want to filter on.

1. `>` - higher than (for numbers only)
2. `<` - lower than (for numbers only)
3. `=` - equals
4. `>=` - higher than or equals (for numbers only)
5. `<=` - lower than or equals (for numbers only)
6. `||` - or
7. `&&` - and

So let's look at an example: let's say I only want to get transfer events which are higher than `2000000000000000000` RETH wei.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        conditions: // [!code focus]
        - "value": ">=2000000000000000000" // [!code focus]
```

We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file.
```json
{
  "anonymous": false,
  "inputs": [
    {
      "indexed": true,
      "internalType": "address",
      "name": "from",
      "type": "address"
    },
    {
      "indexed": true,
      "internalType": "address",
      "name": "to",
      "type": "address"
    },
    {
      "indexed": false,
      "internalType": "uint256",
      "name": "value", // [!code focus]
      "type": "uint256"
    }
  ],
  "name": "Transfer",
  "type": "event"
}
```

You can use `||` or `&&` to combine conditions.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus]
```

You can use `=` to filter on other aspects like the `from` or `to` address.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce3982663" // [!code focus]
        - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus]
```

:::info
Note: we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered out at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3).
:::

If you have a tuple and you want to use a value inside it, you just use object notation. For example, let's say we want to only get the events where `profileId` from the `quoteParams` tuple equals `1`:

```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        {
          "internalType": "uint256",
          "name": "profileId", // [!code focus]
          "type": "uint256"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    ...
  ],
  "name": "QuoteCreated", // [!code focus]
  "type": "event"
}
```

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "quoteParams.profileId": "=1" // [!code focus]
```

#### template\_inline

You can then write your own template inline; this is the template you want to send to the channel. You have to use the ABI input names in object notation. For example, if I wanted to put the value in the template, I just have to write `{{value}}` in the template and it will be replaced with the value of the event itself.

The template supports:

* bold text = \*bold text\*
* italic text = \_italic text\_
* inline url = \[inline URL]\(YOUR\_URL)
* inline fixed-width code = \`inline fixed-width code\`
* pre-formatted fixed-width code block = \`\`\`pre-formatted fixed-width code block\`\`\`
* pre-formatted fixed-width known code block = \`\`\`rust pre-formatted fixed-width known code block\`\`\`
* breaks = just line break in the template

##### transaction\_information

You can also use the `transaction_information` object to get common information about the transaction; this is the transaction information for the event.
```rs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TxInformation {
    pub network: String,
    // This will convert to a hex string in the template
    pub address: Address,
    // This will convert to a hex string in the template
    pub block_hash: BlockHash,
    // This will convert to a string decimal in the template
    pub block_number: U64,
    // This will convert to a hex string in the template
    pub transaction_hash: TxHash,
    // This will convert to a string decimal in the template
    pub log_index: U256,
    // This will convert to a string decimal in the template
    pub transaction_index: U64,
}
```

:::info
To avoid confusion: `address` in `transaction_information` is the address of the contract the event was emitted from.
:::

##### format\_value

You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals.

Let's put it all together:

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    discord: // [!code focus]
    - bot_token: ${DISCORD_BOT_TOKEN}
      channel_id: 123456789012345678
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        template_inline: "*New RETH Transfer Event* // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus]
          " // [!code focus]
```

## Chatbots

:::info
rindexer Chatbots can be used without any other storage providers. It can also be used with storage providers.
:::

rindexer has first-class support for Chatbots, which means you can use your favourite chat platform to send messages when events happen on chain.
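All chatbot providers share the same message-template helpers (`format_value` and `transaction_information`). As a rough illustration of the decimal scaling that `format_value(value, 18)` performs, here is a TypeScript sketch of standard wei-to-decimal formatting (this is not rindexer's actual implementation, and `formatValue` is a hypothetical name for the illustration):

```typescript
// Illustrative sketch of decimal scaling like `format_value(value, 18)`;
// not rindexer's actual implementation.
function formatValue(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  // Pad the remainder to `decimals` digits, then trim trailing zeros.
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac.length > 0 ? `${whole}.${frac}` : whole.toString();
}

// 2 RETH expressed in wei renders as "2"; 1.5 RETH as "1.5".
console.log(formatValue(2000000000000000000n, 18)); // "2"
console.log(formatValue(1500000000000000000n, 18)); // "1.5"
```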
:::info
Due to rate limits outside of rindexer's control, ChatBots will only send messages with a max block range of 10 blocks. Most people who use rindexer ChatBots will only want to send messages for the live data anyway. The ChatBots are really only meant to be run on live data, not historic data.
:::

:::info
Rust projects do not get exposed to the stream clients yet, but this can easily be exposed in the future.
:::

Supported Chatbot providers:

* [Telegram](/docs/start-building/chatbots/telegram) - Send messages to your Telegram chats
* [Discord](/docs/start-building/chatbots/discord) - Send messages to your Discord chats
* [Slack](/docs/start-building/chatbots/slack) - Send messages to your Slack channels
* [Twilio](/docs/start-building/chatbots/twilio) - Send SMS notifications to phone numbers
* [PagerDuty](/docs/start-building/chatbots/pagerduty) - Trigger PagerDuty incidents from on-chain events
* [OpsGenie](/docs/start-building/chatbots/opsgenie) - Create OpsGenie alerts from on-chain events

Want any other chat provider to be supported? [Create an issue](https://github.com/joshstevens19/rindexer/issues/new) and we will look into it.

## OpsGenie

OpsGenie is an alert and incident management platform by Atlassian. rindexer can create OpsGenie alerts when on-chain events occur.

:::info
Due to rate limits outside of rindexer's control, ChatBots will only send messages with a max block range of 10 blocks. Most people who use rindexer ChatBots will only want to send messages for the live data anyway. The ChatBots are really only meant to be run on live data, not historic data.
:::

### Setup an OpsGenie integration

1. Log into your OpsGenie account at [opsgenie.com](https://www.opsgenie.com/).
2. Navigate to **Settings** > **Integration list**.
3. Search for **API** and click **Add**.
4. Copy the **API Key** from the integration settings. This is what you will use to create alerts.
5. Make sure the integration is enabled and saved.
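With the API key from step 4 you can sanity-check the integration before wiring it into rindexer. OpsGenie's Alert API accepts a `POST` with a `GenieKey` authorization header; the sketch below is illustrative (not part of rindexer), and `buildAlertRequest` is a hypothetical helper name. Note that EU-hosted accounts use `api.eu.opsgenie.com` instead:

```typescript
// Build a request for OpsGenie's Alert API v2 (POST /v2/alerts with a
// `GenieKey` authorization header). Splitting the builder from the fetch
// makes the payload easy to inspect. Illustrative only, not rindexer code.
function buildAlertRequest(apiKey: string, message: string, priority: string) {
  return {
    url: "https://api.opsgenie.com/v2/alerts",
    init: {
      method: "POST",
      headers: {
        Authorization: `GenieKey ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ message, priority }),
    },
  };
}

// A P5 (informational) test alert will not page anyone:
const { url, init } = buildAlertRequest("YOUR_OPSGENIE_API_KEY", "rindexer test alert", "P5");
// await fetch(url, init); // a 202 Accepted response means the alert was queued
```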
### Configure rindexer `opsgenie` property accepts an array allowing you to split up the alerts any way you wish. ### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] opsgenie: // [!code focus] - api_key: ${OPSGENIE_API_KEY} // [!code focus] priority: P1 // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: Transfer // [!code focus] # filter_expression is optional // [!code focus] filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce398266u' && value >= 10 && value <= 2000000000000000000" // [!code focus] template_inline: "New RETH Transfer Event // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus] " // [!code focus] ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum chat: // [!code focus] opsgenie: // [!code focus] - api_key: ${OPSGENIE_API_KEY} // [!code focus] priority: P1 // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: NativeTransfer // [!code focus] template_inline: "New 
ETH Transfer Event // [!code focus]
from: {{from}} // [!code focus]
to: {{to}} // [!code focus]
amount: {{format_value(value, 18)}} // [!code focus]
Token address: {{transaction_information.address}} // [!code focus]
etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus]
" // [!code focus]
```

:::

### api\_key

This is your OpsGenie API integration key. You can find it in your integration settings.

:::info
We advise you to put this in an environment variable.
:::

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    opsgenie: // [!code focus]
    - api_key: ${OPSGENIE_API_KEY} // [!code focus]
```

### priority

The priority level for the OpsGenie alert. Must be one of: `P1`, `P2`, `P3`, `P4`, `P5`. Defaults to `P1`.

* `P1` - Critical
* `P2` - High
* `P3` - Moderate
* `P4` - Low
* `P5` - Informational

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    opsgenie: // [!code focus]
    - api_key: ${OPSGENIE_API_KEY}
      priority: P1 // [!code focus]
```

### networks

This is an array of networks you want to create OpsGenie alerts for.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    opsgenie: // [!code focus]
    - api_key: ${OPSGENIE_API_KEY}
      priority: P1
      networks: // [!code focus]
      - ethereum // [!code focus]
```

### messages

This is an array of messages you want to create as OpsGenie alerts.
It is an array as you can define many different messages with different conditions.

#### event\_name

This is the name of the event you want to send a message for; it must match the ABI event name.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    opsgenie: // [!code focus]
    - api_key: ${OPSGENIE_API_KEY}
      priority: P1
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
```

#### filter\_expression

This accepts a filter expression to filter the events before creating an OpsGenie alert.

:::info
This is optional; if you do not provide any filter expression, all the events will create OpsGenie alerts.
:::

Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions.

#### conditions

This accepts an array of conditions you want to apply to the event data before creating an OpsGenie alert.

:::warning
Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`.
:::

:::info
This is optional; if you do not provide any conditions, all the events will create OpsGenie alerts.
:::

#### template\_inline

You can then write your own template inline; this is the template used as the OpsGenie alert message. You have to use the ABI input names in object notation. For example, if I wanted to put the value in the template, I just have to write `{{value}}` in the template and it will be replaced with the value of the event itself.

:::info
OpsGenie alert messages are plain text only.
:::

##### transaction\_information

You can also use the `transaction_information` object to get common information about the transaction; this is the transaction information for the event.
```rs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TxInformation {
    pub network: String,
    pub address: Address,
    pub block_hash: BlockHash,
    pub block_number: U64,
    pub transaction_hash: TxHash,
    pub log_index: U256,
    pub transaction_index: U64,
}
```

:::info
To avoid confusion: `address` in `transaction_information` is the address of the contract the event was emitted from.
:::

##### format\_value

You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals.

Let's put it all together:

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    opsgenie: // [!code focus]
    - api_key: ${OPSGENIE_API_KEY}
      priority: P1
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        template_inline: "New RETH Transfer Event // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus]
          " // [!code focus]
```

## PagerDuty

PagerDuty is an incident management platform that helps teams detect and respond to issues in real time. rindexer can trigger PagerDuty incidents when on-chain events occur.

:::info
Due to rate limits outside of rindexer's control, ChatBots will only send messages with a max block range of 10 blocks. Most people who use rindexer ChatBots will only want to send messages for the live data anyway. The ChatBots are really only meant to be run on live data, not historic data.
:::

### Setup a PagerDuty integration

1. Log into your PagerDuty account and navigate to **Services** > **Service Directory**.
2.
Select an existing service or create a new one. 3. Go to the **Integrations** tab and click **Add an integration**. 4. Select **Events API v2** and click **Add**. 5. Copy the **Integration Key** (also called Routing Key). This is what you will use to send events. ### Configure rindexer `pagerduty` property accepts an array allowing you to split up the alerts any way you wish. ### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} // [!code focus] severity: critical // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: Transfer // [!code focus] # filter_expression is optional // [!code focus] filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce398266u' && value >= 10 && value <= 2000000000000000000" // [!code focus] template_inline: "New RETH Transfer Event // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus] " // [!code focus] ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - 
network: ethereum chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} // [!code focus] severity: critical // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: NativeTransfer // [!code focus] template_inline: "New ETH Transfer Event // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] Token address: {{transaction_information.address}} // [!code focus] etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus] " // [!code focus] ``` ::: ### routing\_key This is your PagerDuty Events API v2 integration key (also called routing key). You can find it in your service's Integrations tab. :::info We advise you to put this in a environment variables. ::: ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} // [!code focus] ``` ### severity The severity level for the PagerDuty event. Must be one of: `info`, `warning`, `error`, `critical`. Defaults to `critical`. ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} severity: critical // [!code focus] ``` ### networks This is an array of networks you want to trigger PagerDuty events for. ```yaml [rindexer.yaml] ... 
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} severity: critical networks: // [!code focus] - ethereum // [!code focus] ``` ### messages This is an array of messages you want to trigger as PagerDuty events. It is an array as you can define many different messages with different conditions. #### event\_name This is the name of the event you want to send a message for, must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} severity: critical networks: - ethereum messages: - event_name: Transfer // [!code focus] ``` #### filter\_expression This accepts a filter expression to filter the events before triggering a PagerDuty event. :::info This is optional, if you do not provide any filter expression all the events will trigger PagerDuty events. ::: Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions. #### conditions This accepts an array of conditions you want to apply to the event data before triggering a PagerDuty event. :::warning Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`. ::: :::info This is optional, if you do not provide any conditions all the events will trigger PagerDuty events. ::: #### template\_inline You can then write your own template inline, this is the template used as the PagerDuty event summary. 
You have to use the ABI input names in object notation. For example, if you want to put the value in the template, you just write `{{value}}` and it will be replaced with the value from the event itself. :::info PagerDuty event summaries are plain text only. ::: ##### transaction\_information You can also use the `transaction_information` object to get common information about the transaction the event came from. ```rs #[derive(Debug, Serialize, Deserialize, Clone)] pub struct TxInformation { pub network: String, pub address: Address, pub block_hash: BlockHash, pub block_number: U64, pub transaction_hash: TxHash, pub log_index: U256, pub transaction_index: U64, } ``` :::info To avoid confusion, `address` in `transaction_information` is the address of the contract the event was emitted from. ::: ##### format\_value You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals. Let's put it all together: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] pagerduty: // [!code focus] - routing_key: ${PAGERDUTY_ROUTING_KEY} severity: critical networks: - ethereum messages: - event_name: Transfer // [!code focus] template_inline: "New RETH Transfer Event // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus] " // [!code focus] ``` ## Slack Slack is one of the most popular chat platforms, and is great for building bots and notifications when things happen on chain.
:::info Due to rate limits outside of rindexer's control, chatbots will only send messages for a maximum block range of 10 blocks. Most people who use rindexer chatbots will only want to send messages for live data anyway. The chatbots are really only meant to be run on live data, not historic data. ::: ### Setup a bot on slack 1. Go to api.slack.com, log into your workspace and click on Create an app 2. Click on From scratch and then give it a name and select your workspace. We will call ours RethTransferEvents. 3. Click on the Bots box under the Add features and functionality header 4. Click on Review scopes to add 5. Scroll down to the Bot token scopes header and add `chat:write`. These are the permissions the bot needs to write messages 6. Finally, scroll all the way up and click on Install to workspace, and Allow on the following screen. This should now show a screen with the Bot User OAuth Token visible. Take note of this token, since it's the one we will be using to send messages. 7. Now add the bot to the channel you want to send messages to. Channels are written as `#` followed by the channel name; you need to include the `#` in the channel name. ### Configure rindexer The `slack` property accepts an array, allowing you to split up the channels any way you wish.
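Before wiring the token into rindexer, it can help to verify the bot token and channel directly against Slack's `chat.postMessage` Web API (the same `chat:write` permission the bot was granted above). This sketch is not part of rindexer; the channel name and environment variable are just the example values used on this page:

```python
import json
import os
import urllib.request

SLACK_API_URL = "https://slack.com/api/chat.postMessage"

def build_request(bot_token: str, channel: str, text: str) -> urllib.request.Request:
    # chat.postMessage takes a JSON body plus a bearer-token header.
    payload = json.dumps({"channel": channel, "text": text}).encode("utf-8")
    return urllib.request.Request(
        SLACK_API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {bot_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )

if __name__ == "__main__":
    req = build_request(
        os.environ["SLACK_BOT_TOKEN"], "#RethTransferEvents", "rindexer test message"
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Slack replies {"ok": false, "error": "invalid_auth" / "channel_not_found" / ...}
    # when the token or channel is wrong, so this surfaces config mistakes early.
    print("ok" if body.get("ok") else f"failed: {body.get('error')}")
```

If the test message does not appear, remember the bot must first be invited to the channel (step 7 above).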
### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} // [!code focus] channel: "#RethTransferEvents" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: Transfer // [!code focus] # filter_expression is optional // [!code focus] filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce398266a' && value >= 10 && value <= 2000000000000000000" // [!code focus] template_inline: "*New RETH Transfer Event* // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] // [!code focus] " // [!code focus] ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} // [!code focus] channel: "#EthTransferEvents" // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: NativeTransfer // [!code focus] template_inline: "*New ETH Transfer Event* // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] Token
address: {{transaction_information.address}} // [!code focus] [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus] " // [!code focus] ``` ::: ### bot\_token This is your slack bot token, the Bot User OAuth Token from the setup steps above. :::info We advise you to put this in an environment variable. ::: ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} // [!code focus] ``` ### channel This is the channel you want to send messages to. :::info The `#` must be included in the channel name. ::: ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" // [!code focus] ``` ### networks This is an array of networks for which you want to send messages to this slack channel. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: // [!code focus] - ethereum // [!code focus] ``` ### messages This is an array of messages you want to send to this slack channel. It is an array, as you can define many different messages to send to this channel with different conditions. #### event\_name This is the name of the event you want to send a message for; it must match the ABI event name.
```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer // [!code focus] ``` #### filter\_expression This accepts a filter expression to filter the events before sending a message to this slack channel. :::info This is optional; if you do not provide a filter expression, all events will be sent to this slack channel. ::: Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions. ##### Supported types and operations:
| Type | Description | Operators | Notes |
| --- | --- | --- | --- |
| Numeric (uint/int variants) | Integer values (e.g., `42`, `-100`) or decimal values (e.g., `3.14`, `-0.5`) | `>`, `<`, `=`, `>=`, `<=` | Numeric comparisons. Numbers must have digits before and after a decimal point if one is present (e.g., `.5` or `5.` are not valid standalone numbers) |
| Address | Ethereum addresses (e.g., `0x1234567890abcdef1234567890abcdef12345678`) | `=`, `!=` | Comparisons (e.g., `from == '0xABC...'`) are typically case-insensitive regarding the hex characters of the address value itself |
| String | Text values. Can be single-quoted (e.g., `'hello'`) or, on the right-hand side of a comparison, unquoted (e.g., `active`) | `=`, `!=` | Quoted strings support `\'` to escape a single quote and `\\` to escape a backslash. All string comparison operations (e.g., `name == 'Alice'`, `description contains 'error'`) are performed case-insensitively during evaluation. |
| Boolean | True or false values | `=`, `!=` | Represented as `true` or `false`. These keywords are parsed case-insensitively (e.g., `TRUE`, `False` are also valid in expressions). |
| Hex String Literal | A string literal starting with `0x` or `0X` followed by hexadecimal characters (0-9, a-f, A-F). | `=`, `!=` | Treated as a string for comparison purposes (e.g., `input_data starts_with '0xa9059cbb'`). Comparison is case-sensitive for the hex characters after `0x`. |
| Array | Ordered list of items | `==`, `!=`, `[index]` | See "Array Type Operations" below |
##### Logical Operators * `&&` - All conditions must be true * `||` - At least one condition must be true * `()` - Parentheses for grouping * `&&` has higher precedence than `||` (i.e., `&&` operations are evaluated before `||` operations if not grouped by parentheses) ##### Array Type Operations For array types, you can use the following operations: * `array_param == '["raw_json_array_string"]'` string comparison of the array's entire JSON string representation against the provided string * `array_param != '["raw_json_array_string"]'` the negation of the above * `array_param[0]` indexed access. The index must be a non-negative integer. ##### Whitespace Flexible whitespace is generally allowed around operators, parentheses, and keywords for readability. However, whitespace within quoted string literals is significant and preserved. ##### Examples * `value > 1000` - Numeric comparison, checks if `value` is greater than 1000. * `from = '0x1234567890abcdef1234567890abcdef12345678'` - Address comparison, checks if `from` matches the specified address.
* `name != 'Alice'` - String comparison, checks if `name` is not equal to 'Alice'. * `active = true` - Boolean comparison, checks if `active` is true. * `value >= 1000 && value <= 2000` - Numeric range check, checks if `value` is between 1000 and 2000 inclusive. * `from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef'` - Address comparison with logical OR, checks if `from` matches either of the two addresses. * `value > 1000 && (from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef')` - Combined numeric and address checks with logical AND and OR, checks if `value` is greater than 1000 and `from` matches either of the two addresses. #### conditions This accepts an array of conditions you want to apply to the event data before sending a message to this slack channel. :::warning Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`. ::: :::info This is optional; if you do not provide any conditions, all events will be sent to this slack channel. ::: You may want to filter the message based on the event data; if a field is not indexed on the solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equals (for numbers only) 5. `<=` - lower than or equals (for numbers only) 6. `||` - or 7. `&&` - and So let's look at an example: say we only want to get Transfer events which are higher than `2000000000000000000` RETH wei ```yaml [rindexer.yaml] ...
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file. ```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ...
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce398266a" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note: we advise you to filter on any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered out at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3). ::: If you have a tuple and you want to use a value from it, just use object notation. For example, let's say we only want the events where `profileId` from the `quoteParams` tuple equals `1`: ```json { "anonymous": false, "inputs": [ { "components": [ { "internalType": "uint256", "name": "profileId", // [!code focus] "type": "uint256" }, ... ], "indexed": false, "internalType": "struct Types.QuoteParams", "name": "quoteParams", // [!code focus] "type": "tuple" }, ... ], "name": "QuoteCreated", // [!code focus] "type": "event" } ``` ```yaml [rindexer.yaml] ...
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer conditions: // [!code focus] - "quoteParams.profileId": "=1" // [!code focus] ``` #### template\_inline You can then write your own template inline; this is the template you want to send to the channel. You have to use the ABI input names in object notation. For example, if you want to put the value in the template, you just write `{{value}}` and it will be replaced with the value from the event itself. The template supports: * bold text = \*bold text\* * italic text = \_italic text\_ * strikethrough text = \~strikethrough text\~ * block quote = > block quote * inline url = \<YOUR\_URL|inline URL> * inline fixed-width code = \`inline fixed-width code\` * pre-formatted fixed-width code block = \`\`\`pre-formatted fixed-width code block\`\`\` * pre-formatted fixed-width known code block = \`\`\`rust pre-formatted fixed-width known code block\`\`\` * breaks = just a line break in the template ##### transaction\_information You can also use the `transaction_information` object to get common information about the transaction the event came from.
```rs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TxInformation {
    pub network: String,
    // This will convert to a hex string in the template
    pub address: Address,
    // This will convert to a hex string in the template
    pub block_hash: BlockHash,
    // This will convert to a string decimal in the template
    pub block_number: U64,
    // This will convert to a hex string in the template
    pub transaction_hash: TxHash,
    // This will convert to a string decimal in the template
    pub log_index: U256,
    // This will convert to a string decimal in the template
    pub transaction_index: U64,
}
```
:::info To avoid confusion, `address` in `transaction_information` is the address of the contract the event was emitted from. ::: ##### format\_value You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals. Let's put it all together: ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] slack: // [!code focus] - bot_token: ${SLACK_BOT_TOKEN} channel: "#RethTransferEvents" networks: - ethereum messages: - event_name: Transfer // [!code focus] template_inline: "*New RETH Transfer Event* // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] // [!code focus] " // [!code focus] ``` ## Telegram Telegram is one of the most popular chat platforms, and is great for building bots and notifications when things happen on chain. :::info Due to rate limits outside of rindexer's control, chatbots will only send messages for a maximum block range of 10 blocks. Most people who use rindexer chatbots will only want to send messages for live data anyway.
The chatbots are really only meant to be run on live data, not historic data. ::: ### Setup a bot on telegram You have to use telegram itself to set up a bot: 1. Search for BotFather on Telegram. 2. Type /start to get started. 3. Type /newbot to get a bot. 4. Enter your Bot name and a unique Username, which must end with `bot`. 5. Then, you will get your Bot token. (Keep this safe; you will need it shortly.) ### Configure rindexer The `telegram` property accepts an array, allowing you to split up the chats any way you wish. ### Example :::code-group ```yaml [contract events] name: RocketPoolETHIndexer description: My first rindexer project repository: https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} // [!code focus] chat_id: -4223616270 // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: Transfer // [!code focus] # filter_expression is optional // [!code focus] filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce398266a' && value >= 10 && value <= 2000000000000000000" // [!code focus] template_inline: "*New RETH Transfer Event* // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] RETH contract: {{transaction_information.address}} // [!code focus] [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus] " // [!code focus] ``` ```yaml [native transfers] name: ETHIndexer description: My first rindexer project repository:
https://github.com/joshstevens19/rindexer project_type: no-code networks: - name: ethereum chain_id: 1 rpc: https://mainnet.gateway.tenderly.co native_transfers: networks: - network: ethereum chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} // [!code focus] chat_id: -4223616270 // [!code focus] networks: // [!code focus] - ethereum // [!code focus] messages: - event_name: NativeTransfer // [!code focus] template_inline: "*New ETH Transfer Event* // [!code focus] from: {{from}} // [!code focus] to: {{to}} // [!code focus] amount: {{format_value(value, 18)}} // [!code focus] Token address: {{transaction_information.address}} // [!code focus] [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus] " // [!code focus] ``` ::: ### bot\_token This is your telegram bot token, which you generate using @BotFather. :::info We advise you to put this in an environment variable. ::: ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} // [!code focus] ``` ### chat\_id You have to add your bot to chats to use it; this is the chat ID you wish the bot to send messages to. ```yaml ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 // [!code focus] ``` ### networks This is an array of networks for which you want to send messages to this telegram chat. ```yaml [rindexer.yaml] ...
contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 networks: // [!code focus] - ethereum // [!code focus] ``` ### messages This is an array of messages you want to send to this telegram chat. It is an array, as you can define many different messages to send to this chat with different conditions. #### event\_name This is the name of the event you want to send a message for; it must match the ABI event name. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 networks: - ethereum messages: - event_name: Transfer // [!code focus] ``` #### filter\_expression This accepts a filter expression to filter the events before sending a message to this telegram chat. :::info This is optional; if you do not provide a filter expression, all events will be sent to this telegram chat. ::: Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions.
##### Supported types and operations:
| Type | Description | Operators | Notes |
| --- | --- | --- | --- |
| Numeric (uint/int variants) | Integer values (e.g., `42`, `-100`) or decimal values (e.g., `3.14`, `-0.5`) | `>`, `<`, `=`, `>=`, `<=` | Numeric comparisons. Numbers must have digits before and after a decimal point if one is present (e.g., `.5` or `5.` are not valid standalone numbers) |
| Address | Ethereum addresses (e.g., `0x1234567890abcdef1234567890abcdef12345678`) | `=`, `!=` | Comparisons (e.g., `from == '0xABC...'`) are typically case-insensitive regarding the hex characters of the address value itself |
| String | Text values. Can be single-quoted (e.g., `'hello'`) or, on the right-hand side of a comparison, unquoted (e.g., `active`) | `=`, `!=` | Quoted strings support `\'` to escape a single quote and `\\` to escape a backslash. All string comparison operations (e.g., `name == 'Alice'`, `description contains 'error'`) are performed case-insensitively during evaluation. |
| Boolean | True or false values | `=`, `!=` | Represented as `true` or `false`. These keywords are parsed case-insensitively (e.g., `TRUE`, `False` are also valid in expressions). |
| Hex String Literal | A string literal starting with `0x` or `0X` followed by hexadecimal characters (0-9, a-f, A-F). | `=`, `!=` | Treated as a string for comparison purposes (e.g., `input_data starts_with '0xa9059cbb'`). Comparison is case-sensitive for the hex characters after `0x`. |
| Array | Ordered list of items | `==`, `!=`, `[index]` | See "Array Type Operations" below |
##### Logical Operators * `&&` - All conditions must be true * `||` - At least one condition must be true * `()` - Parentheses for grouping * `&&` has higher precedence than `||` (i.e., `&&` operations are evaluated before `||` operations if not grouped by parentheses) ##### Array Type Operations For array types, you can use the following operations: * `array_param == '["raw_json_array_string"]'` string comparison of the array's entire JSON string representation against the provided string * `array_param != '["raw_json_array_string"]'` the negation of the above * `array_param[0]` indexed access. The index must be a non-negative integer. ##### Whitespace Flexible whitespace is generally allowed around operators, parentheses, and keywords for readability. However, whitespace within quoted string literals is significant and preserved. ##### Examples * `value > 1000` - Numeric comparison, checks if `value` is greater than 1000. * `from = '0x1234567890abcdef1234567890abcdef12345678'` - Address comparison, checks if `from` matches the specified address. * `name != 'Alice'` - String comparison, checks if `name` is not equal to 'Alice'. * `active = true` - Boolean comparison, checks if `active` is true. * `value >= 1000 && value <= 2000` - Numeric range check, checks if `value` is between 1000 and 2000 inclusive. * `from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef'` - Address comparison with logical OR, checks if `from` matches either of the two addresses. * `value > 1000 && (from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdef')` - Combined numeric and address checks with logical AND and OR, checks if `value` is greater than 1000 and `from` matches either of the two addresses.
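The precedence rule above is easiest to see with a worked evaluation. This Python sketch is illustrative only: the event values are made up, and Python's `and`/`or` are used because they share the same relative precedence as `&&`/`||`:

```python
# Made-up sample data for a Transfer-style event.
event = {
    "value": 2500,
    "from": "0xabcdefabcdefabcdefabcdefabcdefabcdefabcd",
}
allowed = "0xabcdefabcdefabcdefabcdefabcdefabcdefabcd"

# `from = allowed || value >= 1000 && value <= 2000`
# `&&` binds tighter than `||`, so it is evaluated as:
# `from = allowed || (value >= 1000 && value <= 2000)`
documented = event["from"].lower() == allowed or (
    1000 <= event["value"] <= 2000
)

# If `||` bound tighter, it would instead mean:
# `(from = allowed || value >= 1000) && value <= 2000`
alternative = (
    event["from"].lower() == allowed or event["value"] >= 1000
) and event["value"] <= 2000

print(documented, alternative)  # True False
```

With these values the expression matches only under the documented grouping, which is why adding explicit parentheses is worthwhile whenever you mix `&&` and `||`.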
#### conditions This accepts an array of conditions you want to apply to the event data before sending a message to this telegram chat. :::warning Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`. ::: :::info This is optional; if you do not provide any conditions, all events will be sent to this telegram chat. ::: You may want to filter the message based on the event data; if a field is not indexed on the solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data. rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on. 1. `>` - higher than (for numbers only) 2. `<` - lower than (for numbers only) 3. `=` - equals 4. `>=` - higher than or equals (for numbers only) 5. `<=` - lower than or equals (for numbers only) 6. `||` - or 7. `&&` - and So let's look at an example: say we only want to get Transfer events which are higher than `2000000000000000000` RETH wei ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 networks: - ethereum messages: - event_name: Transfer // [!code focus] conditions: // [!code focus] - "value": ">=2000000000000000000" // [!code focus] ``` We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file.
```json { "anonymous":false, "inputs":[ { "indexed":true, "internalType":"address", "name":"from", "type":"address" }, { "indexed":true, "internalType":"address", "name":"to", "type":"address" }, { "indexed":false, "internalType":"uint256", "name":"value", // [!code focus] "type":"uint256" } ], "name":"Transfer", "type":"event" } ``` You can use `||` or `&&` to combine conditions. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 networks: - ethereum messages: - event_name: Transfer conditions: // [!code focus] - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus] ``` You can use `=` to filter on other aspects like the `from` or `to` address. ```yaml [rindexer.yaml] ... contracts: - name: RocketPoolETH details: - network: ethereum address: "0xae78736cd615f374d3085123a210448e74fc6393" start_block: "18600000" end_block: "18600181" abi: "./abis/RocketTokenRETH.abi.json" include_events: - Transfer chat: // [!code focus] telegram: // [!code focus] - bot_token: ${TELEGRAM_BOT_TOKEN} chat_id: -4223616270 networks: - ethereum messages: - event_name: Transfer conditions: // [!code focus] - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce398266a" // [!code focus] - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus] ``` :::info Note: we advise you to filter on any `indexed` fields in the contract details in the `rindexer.yaml` file, as these can be filtered out at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3).
:::

If you have a tuple and you want to filter on a value inside it, use object notation. For example, let's say we only want events where `profileId` in the `quoteParams` tuple equals `1`:

```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        {
          "internalType": "uint256",
          "name": "profileId", // [!code focus]
          "type": "uint256"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    ...
  ],
  "name": "QuoteCreated", // [!code focus]
  "type": "event"
}
```

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    telegram: // [!code focus]
    - bot_token: ${TELEGRAM_BOT_TOKEN}
      chat_id: -4223616270
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "quoteParams.profileId": "=1" // [!code focus]
```

#### template\_inline

You can write your own inline template; this is the message that will be sent to the chat. Use the ABI input names in object notation. For example, to include the value in the template, write `{{value}}` and it will be replaced with the value from the event itself.

The template supports:

* bold text = \*bold text\*
* italic text = \_italic text\_
* inline url = \[inline URL]\(YOUR\_URL)
* inline fixed-width code = \`inline fixed-width code\`
* pre-formatted fixed-width code block = \`\`\`pre-formatted fixed-width code block\`\`\`
* pre-formatted fixed-width known code block = \`\`\`rust pre-formatted fixed-width known code block\`\`\`
* breaks = just line break in the template

##### transaction\_information

You can also use the `transaction_information` object to get common information about the transaction the event was emitted in.
```rs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TxInformation {
    pub network: String,
    // This will convert to a hex string in the template
    pub address: Address,
    // This will convert to a hex string in the template
    pub block_hash: BlockHash,
    // This will convert to a string decimal in the template
    pub block_number: U64,
    // This will convert to a hex string in the template
    pub transaction_hash: TxHash,
    // This will convert to a string decimal in the template
    pub log_index: U256,
    // This will convert to a string decimal in the template
    pub transaction_index: U64,
}
```

:::info
To avoid confusion, `address` in `transaction_information` is the address of the contract the event was emitted from.
:::

##### format\_value

You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals.

Let's put it all together:

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    telegram: // [!code focus]
    - bot_token: ${TELEGRAM_BOT_TOKEN}
      chat_id: -4223616270
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        template_inline: "*New RETH Transfer Event* // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          [etherscan](https://etherscan.io/tx/{{transaction_information.transaction_hash}}) // [!code focus]
          " // [!code focus]
```

## Twilio

Twilio SMS allows you to send SMS notifications to phone numbers when events happen on chain.

:::info
Due to rate limits outside of rindexer's control, ChatBots will only send messages with a max block range of 10 blocks.
Most people using rindexer ChatBots will only want to send messages for live data anyway; the ChatBots are really only meant to be run on live data, not historic data.
:::

### Setup a Twilio account

1. Sign up for a Twilio account at [twilio.com](https://www.twilio.com/).
2. Navigate to the Twilio Console to find your **Account SID** and **Auth Token**.
3. Get a Twilio phone number from the Console under Phone Numbers > Manage > Buy a number.
4. Keep your Account SID, Auth Token, and Twilio phone number safe; you will need them shortly.

### Configure rindexer

The `twilio` property accepts an array, allowing you to split up the SMS notifications any way you wish.

### Example

:::code-group

```yaml [contract events]
name: RocketPoolETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID} // [!code focus]
      auth_token: ${TWILIO_AUTH_TOKEN} // [!code focus]
      from_number: ${TWILIO_FROM_NUMBER} // [!code focus]
      to_number: ${TWILIO_TO_NUMBER} // [!code focus]
      networks: // [!code focus]
      - ethereum // [!code focus]
      messages:
      - event_name: Transfer // [!code focus]
        # filter_expression is optional // [!code focus]
        filter_expression: "from = '0x0338ce5020c447f7e668dc2ef778025ce3982662' || from = '0x0338ce5020c447f7e668dc2ef778025ce3982663' && value >= 10 && value <= 2000000000000000000" // [!code focus]
        template_inline: "New RETH Transfer Event // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus]
          " // [!code focus]
```

```yaml [native transfers]
name: ETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
native_transfers:
  networks:
  - network: ethereum
    chat: // [!code focus]
      twilio: // [!code focus]
      - account_sid: ${TWILIO_ACCOUNT_SID} // [!code focus]
        auth_token: ${TWILIO_AUTH_TOKEN} // [!code focus]
        from_number: ${TWILIO_FROM_NUMBER} // [!code focus]
        to_number: ${TWILIO_TO_NUMBER} // [!code focus]
        networks: // [!code focus]
        - ethereum // [!code focus]
        messages:
        - event_name: NativeTransfer // [!code focus]
          template_inline: "New ETH Transfer Event // [!code focus]
            from: {{from}} // [!code focus]
            to: {{to}} // [!code focus]
            amount: {{format_value(value, 18)}} // [!code focus]
            Token address: {{transaction_information.address}} // [!code focus]
            etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus]
            " // [!code focus]
```

:::

### account\_sid

This is your Twilio Account SID which you can find in your Twilio Console dashboard.

:::info
We advise you to put this in an environment variable.
:::

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID} // [!code focus]
```

### auth\_token

This is your Twilio Auth Token which you can find in your Twilio Console dashboard.

:::info
We advise you to put this in an environment variable.
:::

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN} // [!code focus]
```

### from\_number

This is the Twilio phone number you purchased to send SMS messages from. It must be in E.164 format (e.g., +1234567890).

:::info
We advise you to put this in an environment variable.
:::

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER} // [!code focus]
```

### to\_number

This is the phone number you want to send SMS messages to. It must be in E.164 format (e.g., +1234567890).

```yaml
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER} // [!code focus]
```

### networks

This is an array of networks you want to send SMS messages for.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks: // [!code focus]
      - ethereum // [!code focus]
```

### messages

This is an array of messages you want to send as SMS. It is an array because you can define many different messages with different conditions.

#### event\_name

This is the name of the event you want to send a message for; it must match the ABI event name.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
```

#### filter\_expression

This accepts a filter expression to filter the events before sending an SMS message.

:::info
This is optional; if you do not provide a filter expression, all the events will be sent as SMS messages.
:::

Filter expressions allow for condition checking of the event data and support logical operators to combine multiple conditions.
##### Supported types and operations:

| Type | Description | Operators | Notes |
| --- | --- | --- | --- |
| Numeric (uint/int variants) | Integer values (e.g., `42`, `-100`) or decimal values (e.g., `3.14`, `-0.5`) | `>`, `<`, `=`, `>=`, `<=` | Numeric comparisons. Numbers must have digits before and after a decimal point if one is present (e.g., `.5` or `5.` are not valid standalone numbers). |
| Address | Ethereum addresses (e.g., `0x1234567890abcdef1234567890abcdef12345678`) | `=`, `!=` | Comparisons (e.g., `from == '0xABC...'`) are typically case-insensitive regarding the hex characters of the address value itself. |
| String | Text values. Can be single-quoted (e.g., `'hello'`) or, on the right-hand side of a comparison, unquoted (e.g., `active`) | `=`, `!=` | Quoted strings support `\'` to escape a single quote and `\\` to escape a backslash. All string comparison operations (e.g., `name == 'Alice'`, `description contains 'error'`) are performed case-insensitively during evaluation. |
| Boolean | True or false values | `=`, `!=` | Represented as `true` or `false`. These keywords are parsed case-insensitively (e.g., `TRUE`, `False` are also valid in expressions). |
| Hex String Literal | A string literal starting with `0x` or `0X` followed by hexadecimal characters (0-9, a-f, A-F) | `=`, `!=` | Treated as a string for comparison purposes (e.g., `input_data starts_with '0xa9059cbb'`). Comparison is case-sensitive for the hex characters after `0x`. |
| Array | Ordered list of items | `==`, `!=`, `[index]` | See "Array Type Operations" below. |

##### Logical Operators

* `&&` - All conditions must be true
* `||` - At least one condition must be true
* `()` - Parentheses for grouping
* `&&` has higher precedence than `||` (i.e., `&&` operations are evaluated before `||` operations if not grouped by parentheses)

##### Array Type Operations

For array types, you can use the following operations:

* `array_param == '["raw_json_array_string"]'` - string comparison of the array's entire JSON string representation against the provided string
* `array_param != '["raw_json_array_string"]'` - the negation of the above
* `array_param[0]` - indexed access. The index must be a non-negative integer.

##### Whitespace

Flexible whitespace is generally allowed around operators, parentheses, and keywords for readability. However, whitespace within quoted string literals is significant and preserved.

##### Examples

* `value > 1000` - Numeric comparison, checks if `value` is greater than 1000.
* `from = '0x1234567890abcdef1234567890abcdef12345678'` - Address comparison, checks if `from` matches the specified address.
* `name != 'Alice'` - String comparison, checks if `name` is not equal to 'Alice'.
* `active = true` - Boolean comparison, checks if `active` is true.
* `value >= 1000 && value <= 2000` - Numeric range check, checks if `value` is between 1000 and 2000 inclusive.
* `from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdefabcd'` - Address comparison with logical OR, checks if `from` matches either of the two addresses.
* `value > 1000 && (from = '0x1234567890abcdef1234567890abcdef12345678' || from = '0xabcdefabcdefabcdefabcdefabcdefabcdefabcd')` - Combined numeric and address checks with logical AND and OR, checks if `value` is greater than 1000 and `from` matches either of the two addresses.
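The precedence rule above matters in practice: because `&&` binds tighter than `||`, an ungrouped expression such as `from = '0xAAA...' || from = '0xBBB...' && value >= 1000` matches every event from the first address regardless of value, while parentheses apply the range check to both addresses. A minimal sketch of the equivalent boolean grouping (plain Rust booleans standing in for the evaluated conditions; this is an illustration, not rindexer's actual expression parser):

```rust
// Evaluate `a || b && c` the way the filter language does: && binds tighter,
// so this parses as `a || (b && c)`.
fn ungrouped(from_a: bool, from_b: bool, in_range: bool) -> bool {
    from_a || from_b && in_range
}

// Evaluate `(a || b) && c`, forcing the grouping with parentheses so the
// range check guards both address conditions.
fn grouped(from_a: bool, from_b: bool, in_range: bool) -> bool {
    (from_a || from_b) && in_range
}

fn main() {
    // Event from the first address, but the value check fails.
    let (from_a, from_b, in_range) = (true, false, false);
    // Ungrouped still matches: `in_range` only guards the second address.
    assert!(ungrouped(from_a, from_b, in_range));
    // Grouped does not match: the range check applies to both branches.
    assert!(!grouped(from_a, from_b, in_range));
    println!(
        "ungrouped = {}, grouped = {}",
        ungrouped(from_a, from_b, in_range),
        grouped(from_a, from_b, in_range)
    );
}
```

If you intend "either address, and always within the range", write the parentheses explicitly in your `filter_expression`.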
#### conditions

This accepts an array of conditions you want to apply to the event data before sending an SMS message.

:::warning
Conditions are supported for backwards compatibility. It is recommended to use `filter_expression` instead of `conditions`.
:::

:::info
This is optional; if you do not provide any conditions, all the events will be sent as SMS messages.
:::

You may want to filter the message based on the event data. If a field is not indexed on the solidity event, you cannot filter it over the logs. The `conditions` filter is here to help you with this: based on your ABI, you can filter on the event data.

rindexer has enabled a special syntax which allows you to define on your ABI fields what you want to filter on.

1. `>` - higher than (for numbers only)
2. `<` - lower than (for numbers only)
3. `=` - equals
4. `>=` - higher than or equals (for numbers only)
5. `<=` - lower than or equals (for numbers only)
6. `||` - or
7. `&&` - and

Let's look at an example. Say you only want Transfer events which are higher than `2000000000000000000` RETH wei:

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        conditions: // [!code focus]
        - "value": ">=2000000000000000000" // [!code focus]
```

We use the ABI input name `value` to filter on the value field; you can find these names in the ABI file.
```json
{
  "anonymous": false,
  "inputs": [
    {
      "indexed": true,
      "internalType": "address",
      "name": "from",
      "type": "address"
    },
    {
      "indexed": true,
      "internalType": "address",
      "name": "to",
      "type": "address"
    },
    {
      "indexed": false,
      "internalType": "uint256",
      "name": "value", // [!code focus]
      "type": "uint256"
    }
  ],
  "name": "Transfer",
  "type": "event"
}
```

You can use `||` or `&&` to combine conditions.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "value": ">=2000000000000000000 && value <=4000000000000000000" // [!code focus]
```

You can use `=` to filter on other aspects like the `from` or `to` address.

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "from": "0x0338ce5020c447f7e668dc2ef778025ce3982662 || 0x0338ce5020c447f7e668dc2ef778025ce3982663" // [!code focus]
        - "value": ">=2000000000000000000 || value <=4000000000000000000" // [!code focus]
```

:::info
Note we advise you to filter any `indexed` fields in the contract details in the `rindexer.yaml` file.
These can be filtered out at the request level rather than inside rindexer itself. You can read more about it [here](/docs/start-building/yaml-config/contracts#indexed_1-indexed_2-indexed_3).
:::

If you have a tuple and you want to filter on a value inside it, use object notation. For example, let's say we only want events where `profileId` in the `quoteParams` tuple equals `1`:

```json
{
  "anonymous": false,
  "inputs": [
    {
      "components": [
        {
          "internalType": "uint256",
          "name": "profileId", // [!code focus]
          "type": "uint256"
        },
        ...
      ],
      "indexed": false,
      "internalType": "struct Types.QuoteParams",
      "name": "quoteParams", // [!code focus]
      "type": "tuple"
    },
    ...
  ],
  "name": "QuoteCreated", // [!code focus]
  "type": "event"
}
```

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer
        conditions: // [!code focus]
        - "quoteParams.profileId": "=1" // [!code focus]
```

#### template\_inline

You can write your own inline template; this is the message that will be sent as SMS. Use the ABI input names in object notation. For example, to include the value in the template, write `{{value}}` and it will be replaced with the value from the event itself.

:::info
SMS messages are plain text only. Markdown formatting like bold, italic, or links will not render in SMS.
:::

##### transaction\_information

You can also use the `transaction_information` object to get common information about the transaction the event was emitted in.
```rs
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TxInformation {
    pub network: String,
    // This will convert to a hex string in the template
    pub address: Address,
    // This will convert to a hex string in the template
    pub block_hash: BlockHash,
    // This will convert to a string decimal in the template
    pub block_number: U64,
    // This will convert to a hex string in the template
    pub transaction_hash: TxHash,
    // This will convert to a string decimal in the template
    pub log_index: U256,
    // This will convert to a string decimal in the template
    pub transaction_index: U64,
}
```

:::info
To avoid confusion, `address` in `transaction_information` is the address of the contract the event was emitted from.
:::

##### format\_value

You can use the `format_value` function to format the value of the event to a decimal value with the specified decimals.

Let's put it all together:

```yaml [rindexer.yaml]
...
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: "0xae78736cd615f374d3085123a210448e74fc6393"
    start_block: "18600000"
    end_block: "18600181"
  abi: "./abis/RocketTokenRETH.abi.json"
  include_events:
  - Transfer
  chat: // [!code focus]
    twilio: // [!code focus]
    - account_sid: ${TWILIO_ACCOUNT_SID}
      auth_token: ${TWILIO_AUTH_TOKEN}
      from_number: ${TWILIO_FROM_NUMBER}
      to_number: ${TWILIO_TO_NUMBER}
      networks:
      - ethereum
      messages:
      - event_name: Transfer // [!code focus]
        template_inline: "New RETH Transfer Event // [!code focus]
          from: {{from}} // [!code focus]
          to: {{to}} // [!code focus]
          amount: {{format_value(value, 18)}} // [!code focus]
          RETH contract: {{transaction_information.address}} // [!code focus]
          etherscan: https://etherscan.io/tx/{{transaction_information.transaction_hash}} // [!code focus]
          " // [!code focus]
```

## rindexer CLI

rindexer is a CLI-first tool, allowing you to do everything you need to do with rindexer from the command line.
```bash
Usage: rindexer [COMMAND]

Commands:
  new      Creates a new rindexer no-code project or rust project
  start    Start various services like indexers, GraphQL APIs or both together
  add      Add elements such as contracts to the rindexer.yaml file
  codegen  Generates rust code based on rindexer.yaml or graphql queries
  delete   Delete data from the postgres database or csv files
  phantom  Use phantom events to add your own events to contracts
  help     Print this message or the help of the given subcommand(s)

Options:
  -h, --help     Print help
  -V, --version  Print version
```

### new

Creates a new rindexer no-code project or rust project. This will walk you through setting up your project by asking you a series of questions in the terminal.

```bash
Usage: rindexer new [OPTIONS] [COMMAND]

Commands:
  no-code  Creates a new no-code project
  rust     Creates a new rust project
  help     Print this message or the help of the given subcommand(s)

Options:
  -p, --path  optional - The path to create the project in, default will be where the command is run
  -h, --help  Print help (see a summary with '-h')
```

#### Subcommand Options

Both `no-code` and `rust` subcommands support:

```bash
Options:
  -r, --reth  optional - Enable Reth support for high-performance indexing

  [-- ...]    Additional arguments to pass to reth when --reth is enabled
              These should be provided after -- e.g. -- --datadir /path --http true

Examples:
  # Standard project
  rindexer new no-code

  # Reth-enabled project
  rindexer new no-code --reth

  # Reth project with custom arguments
  rindexer new rust --reth -- --datadir /custom/path --http true
```

### start

Start various services like indexers, GraphQL APIs or both together. This will start the services based on the rindexer.yaml file. A health monitoring server is automatically started alongside these services.
```bash
`rindexer start indexer` or `rindexer start graphql` or `rindexer start all`

Usage: rindexer start [OPTIONS] [COMMAND]

Commands:
  indexer  Starts the indexing service based on the rindexer.yaml file
  graphql  Starts the GraphQL server based on the rindexer.yaml file
  all      Starts the indexers and the GraphQL together based on the rindexer.yaml file
  help     Print this message or the help of the given subcommand(s)

Options:
  -p, --path   optional - The path to run the command in, default will be where the command is run
  -y, --yes    Auto-confirm all schema migration prompts. Useful for CI/CD pipelines.
  -w, --watch  Watch rindexer.yaml for changes and hot-reload (no-code projects only).
  -h, --help   Print help (see a summary with '-h')
```

:::info
The `--watch` flag must come **before** the subcommand: `rindexer start --watch all`, not `rindexer start all --watch`. See the [Hot Reload](/docs/start-building/hot-reload) documentation for details.
:::

:::info
The health monitoring server runs on port 8080 by default. You can configure it in your `rindexer.yaml` file using the `health_port` setting. See the [Running](/docs/start-building/running#health-monitoring) documentation for more details.
:::

### add

These commands allow you to add elements to your YAML file through the CLI.

```bash
Usage: rindexer add [OPTIONS] [COMMAND]

Commands:
  contract  Add a contract from a network to the rindexer.yaml file. It will download the ABI and add it to the abis folder and map it in the yaml file.
  help      Print this message or the help of the given subcommand(s)

Options:
  -p, --path  optional - The path to run the command in, default will be where the command is run
  -h, --help  Print help (see a summary with '-h')
```

### codegen

Generates rust code based on rindexer.yaml or graphql queries. This will generate the code based on the command you run.
```bash
Example: `rindexer codegen typings` or `rindexer codegen handlers` or `rindexer codegen graphql --endpoint=graphql_api` or `rindexer codegen rust-all`

Usage: rindexer codegen [OPTIONS] [COMMAND]

Commands:
  typings  Generates the rindexer rust typings based on the rindexer.yaml file
  indexer  Generates the rindexer rust indexers handlers based on the rindexer.yaml file
  graphql  Generates the GraphQL queries from a GraphQL schema
  all      Generates both typings and indexers handlers based on the rindexer.yaml file
  help     Print this message or the help of the given subcommand(s)

Options:
  -p, --path  optional - The path to run the command in, default will be where the command is run
  -h, --help  Print help (see a summary with '-h')
```

### delete

This can be used to delete data from the postgres database or csv files. It will ask you questions in the terminal to determine what you want to delete.

```bash
Usage: rindexer delete
```

### phantom

```bash
Example: `rindexer phantom init` or `rindexer phantom clone --contract-name --network ` or `rindexer phantom compile --contract-name --network ` or `rindexer phantom deploy --contract-name --network `

Usage: rindexer phantom [OPTIONS] [COMMAND]

Commands:
  init     Sets up phantom events on rindexer
  clone    Clone the contract with the network you wish to add phantom events to
  compile  Compiles the phantom contract
  deploy   Deploy the modified phantom contract
  help     Print this message or the help of the given subcommand(s)

Options:
  -p, --path  optional - The path to create the project in, default will be where the command is run
  -h, --help  Print help (see a summary with '-h')
```

## RPC node providers

An RPC provider's speed has a direct link to how fast you can index data; providers that try to return data as fast as possible are the best providers to have. The fastest providers are the ones which return the to and from block ranges even if you supply an out-of-range block request.
This means you can extract that data from the error message and use it to get the biggest depth of logs out of a single request. Slower providers only give you a max block range, which means you have to crawl through the blocks to get the logs even if no data is in them; this is a lot slower.

### Tenderly

[Tenderly](https://tenderly.co/) has some very fast nodes, and in internal testing with free nodes they blew everyone out of the water for returning the most event logs in the fastest time.

Check out their free nodes here - [https://docs.tenderly.co/supported-networks-and-languages](https://docs.tenderly.co/supported-networks-and-languages)

:::info
Keep in mind that the public RPCs are rate limited. For production use, it is recommended to use a dedicated RPC node.
:::

### Alchemy

[Alchemy](https://www.alchemy.com/) is another great provider with a generous free tier.

### Other top providers

Infura and thirdweb are also good providers.

### All other networks

You can use [chainlist](https://chainlist.org/) to find all the providers which support your network.

### Local nodes

rindexer should work with any local nodes you run, including [Anvil](https://book.getfoundry.sh/anvil/) and [Hardhat](https://hardhat.org/).

### RPC Proxy and Caching

[eRPC](https://github.com/erpc/erpc) is a fault-tolerant EVM RPC proxy and reorg-aware permanent caching solution. It is built with read-heavy use cases in mind, such as data indexing and high-load frontend usage.

#### Setup

1. Create your [`erpc.yaml`](https://docs.erpc.cloud/config/example) configuration file:

```yaml filename="erpc.yaml"
logLevel: debug
projects:
  - id: main
    upstreams:
      # You don't need to define architecture (e.g. evm) or chain id (e.g. 137)
      # as they will be detected automatically by eRPC.
      - endpoint: https://eth-mainnet.blastapi.io/xxxx
      - endpoint: https://polygon-mainnet.blastapi.io/xxxx
      - endpoint: evm+alchemy://xxxx-my-alchemy-api-key-xxxx
```

See [a complete config example](https://docs.erpc.cloud/config/example) for inspiration.

2. Use the Docker image:

```bash
docker run -v $(pwd)/erpc.yaml:/root/erpc.yaml -p 4000:4000 -p 4001:4001 ghcr.io/erpc/erpc:latest
```

or add the below config to rindexer's [docker-compose.yaml](https://github.com/joshstevens19/rindexer/blob/master/docker-compose.yml) as a service and run `docker-compose up -d`:

```yaml [docker-compose.yml]
services:
  ...
  erpc: // [!code focus]
    image: ghcr.io/erpc/erpc:latest // [!code focus]
    platform: linux/amd64 // [!code focus]
    volumes: // [!code focus]
      - ${PROJECT_PATH}/erpc.yaml:/root/erpc.yaml // [!code focus]
    ports: // [!code focus]
      - 4000:4000 // [!code focus]
      - 4001:4001 // [!code focus]
    restart: always // [!code focus]
```

3. Set the erpc url in the [rindexer network config](https://rindexer.xyz/docs/start-building/yaml-config/networks#rpc):

```yaml [rindexer.yaml]
name: rETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: http://erpc:4000/main/evm/1 // [!code focus]
```

and you are set to go. The RPC requests will now be redirected toward eRPC, and it will handle caching, failover, auto-batching, rate-limiting, auto-discovery of node providers, etc. behind the scenes.

## Installation

The rindexer installation process is simple and can be done in a few steps.

:::info
rindexer uses docker to spin up postgres databases for you when it runs locally, so it's recommended to install [docker](https://www.docker.com/products/docker-desktop/) if you don't have it installed already.
:::

### rindexer CLI

rindexer operates as a CLI toolset to make it easy to create new rindexer projects or run existing ones.
#### Installing

:::warning
If you're on Windows, you will need to install and use Git BASH or WSL as your terminal, since rindexer installation does not support Powershell or Cmd.
:::

:::code-group

```bash [latest]
curl -L https://rindexer.xyz/install.sh | bash
```

```bash [exact version]
curl -L https://rindexer.xyz/install.sh | bash -s -- --version
```

:::

Once installed you can run the following command to check the installation was successful:

```bash
rindexer --help
```

```bash
Blazing fast EVM indexing tool built in rust

Usage: rindexer [COMMAND]

Commands:
  new      Creates a new rindexer no-code project or rust project
  start    Start various services like indexers, GraphQL APIs or both together
  add      Add elements such as contracts to the rindexer.yaml file
  codegen  Generates rust code based on rindexer.yaml or graphql queries
  delete   Delete data from the postgres database or csv files
  phantom  Use phantom events to add your own events to contracts
  help     Print this message or the help of the given subcommand(s)

Options:
  -h, --help     Print help
  -V, --version  Print version
```

You can also get help on any of the commands; for example, to get help on the new command you can run:

```bash
rindexer new --help
```

To upgrade to the latest version of rindexer you can run the following command:

```bash
rindexerup
```

To uninstall rindexer you can run the following command:

```bash
rindexerdown
```

### Docker pre-built image

There is a pre-built docker image which can be used to run `rindexer` inside your dockerized infra:

* Docker image: [`ghcr.io/joshstevens19/rindexer`](https://github.com/users/joshstevens19/packages/container/package/rindexer)

#### Create new project

To create a new `no-code` project in your current directory, you can run the following:

```bash
docker run -it -v $PWD:/app/project_path ghcr.io/joshstevens19/rindexer new -p /app/project_path no-code
```

#### Use with existing project

To use it with an existing project and a running postgres instance you can simply
invoke: ```bash export PROJECT_PATH=/path/to/your/project export DATABASE_URL="postgresql://user:pass@postgres/db" docker-compose up -d ``` This will start all local indexing, and if you have enabled the GraphQL endpoint it will be exposed at: [http://localhost:3001](http://localhost:3001) **If you are using CSV you do not need to install Docker; it is only recommended with Postgres or if you're deploying rindexer in cloud environments.** ### Rust - optional If you are only doing no-code projects you do not need Rust installed, but if you are doing Rust projects you will need to install it. You can install Rust by following the instructions [here](https://www.rust-lang.org/tools/install). That is it, let's now walk through how you can start using rindexer. ## Other indexing tools rindexer is not a tool to take market share from other indexing tools; it is a tool to provide more options for developers to index data on EVM chains. rindexer allows you to index data with no learning curve, and with no code if you pick that option. The current indexing tools are mainly JavaScript-based and require code to be written to use them; rindexer is a Rust-based indexing tool with no-code features built in. Diversity is very important in the industry and rindexer is here to provide more options for developers to index data on EVM chains. rindexer is not a company or a business; it is an open-source project and is here to help the industry move forward. ### TheGraph - Pay per query rindexer is not trying to replace `TheGraph` and does not see itself as a competitor to `TheGraph`. `TheGraph`'s vision is inspirational; the future of data should be decentralised, provable indexing, and we are not trying to replace that or take away from all the amazing things `TheGraph` has done for our industry. `TheGraph` has now sunset its hosted service, staying true to its ethos of decentralisation, which you have to respect. rindexer was created to make indexing easier and faster.
In the future I see a world where rindexer and `TheGraph` can work together to provide the best indexing experience for developers: you should be able to resync decentralised, verified data from `TheGraph` within rindexer using no code. If you want decentralised, provable indexing, `TheGraph` is the tool you should be using, not rindexer. ### Shadow - Paid :::note rindexer has first-party support for phantom events powered by Shadow; you can read more about it [here](/docs/start-building/phantom). ::: `Shadow` allows you to add custom events and view functions to smart contracts and is a very powerful indexing service. The `Shadow` team are doing great work and I have a lot of respect for them and the work they are doing. ### dyRPC - Paid :::note rindexer has first-party support for phantom events powered by dyRPC; you can read more about it [here](/docs/start-building/phantom). ::: `dyRPC` is a tool built on top of overlay which can be run on any Erigon node, and it also allows you to modify a contract's source code, adding gasless custom events and view functions. The `dyRPC` team are awesome. ### Cryo - Free `Cryo` is a great way to extract data from all EVM chains and is awesome for data analysis and research. It is also powered by a CLI tool, allowing you to get this data from the command line alone. Really great tool with an incredible team behind it. ### TrueBlocks - Free TrueBlocks.io is a blockchain data indexing and querying tool designed to provide highly detailed and decentralized access to Ethereum blockchain data. It aims to enable users to efficiently extract and interact with blockchain data for various applications such as analytics, auditing, and reporting. Awesome team and amazing tool. ### Ponder - Free `Ponder` is a great indexing tool which is very feature rich. I really respect the work that has gone into `Ponder` and I think it is a great tool for indexing data on EVM chains.
rindexer took some inspiration from the `Ponder` codebase and we only have love for the `Ponder` team. `Ponder` is built in JavaScript and you can build your own custom indexers with it. ### Goldsky - Paid `Goldsky` is a great indexing tool which has no-code elements; they also offer bespoke services to build custom indexing solutions. Great team and great product. ### Subsquid - Free features and Paid features `Subsquid` is a great indexing tool which supports EVM-based indexing as well as non-EVM-based indexing. It is built around the Subsquid network, which is a decentralised query engine. They have a great team and a great product. Accessing all the historical data is free with Subsquid, but the SQD cloud hosting is paid. ### GhostGraph - Free features and Paid features `GhostGraph` is a first-of-its-kind indexing solution that lets you write your index transformations in Solidity. `GhostGraph` makes building fast indexers (subgraphs) for smart contracts easy. It is currently in beta. ### Envio - Free features and Paid features `Envio` (now referred to as HyperIndex) is a multi-chain indexer focused on performance and flexibility. It connects to an RPC node or HyperSync, an optimized Rust-built data node that massively improves indexing speeds. ## What is rindexer? :::info Note rindexer is brand new and actively under development, things will change and bugs will exist - if you find any bugs or have any feature requests please open an issue on [github](https://github.com/joshstevens19/rindexer/issues). ::: rindexer is an open-source, powerful, high-speed indexing toolset developed in Rust, designed for compatibility with any EVM chain. This tool allows you to index chain events using a simple YAML file, requiring no additional coding. For more advanced needs, rindexer provides the foundations and advanced capabilities to build whatever you want.
It's highly extendable, enabling you to construct indexing pipelines with ease and focus exclusively on the logic. Out of the box, rindexer also gives you a GraphQL API to instantly query the data you have indexed. ## What can I use rindexer for? * Hackathons: spin up a quick indexer to index events for your dApp with an API without any code needed * Data reporting * Building advanced indexers * Building a custom indexer for your project * Fast prototyping and MVP developments * Quick proof-of-concept projects * Enterprise standard indexing solutions for projects * Much more... ## What networks do you support? rindexer supports any EVM chain out of the box. If you have a custom chain, you can easily add support for it by adding the chain's RPC URL to the YAML configuration file and defining the chain ID. No code changes are required. ## Why rindexer? ### Why do we need rindexer in general? Indexing data on EVM chains is crucial for developers creating dApps or doing general data reporting. Building the necessary indexing infrastructure, however, presents significant challenges. It is complex, time-consuming, and can divert focus from the task at hand, and in many cases it can even stop the task from moving forward. As applications become more complex with more advanced features, the need for robust, easily extendable indexing solutions grows. Some great indexing tools already exist, mainly in JavaScript, so adding a Rust-based indexing tool creates more options, which is important for the industry. ### The Problems Traditional indexing solutions come with steep learning curves and require substantial initial development to meet a dApp's specific needs. This requirement can delay overall application development and add complexity, especially when integrating with different chain environments. Additionally, many tools do not offer the flexibility needed to quickly adapt to changing project requirements.
Effective indexing tools should be easy to use, requiring no code for basic data reporting or indexing needs, while also being highly customizable for more advanced requirements. We are building more and more advanced applications that require more advanced indexing tools, and rindexer is designed to meet these needs as well. The use of Rust in EVM chain tools like Foundry and Reth has demonstrated significant performance improvements on existing toolsets, which rindexer also leverages. ### Developer Experience rindexer significantly enhances the developer experience by simplifying the setup and management of indexing tasks. Its straightforward YAML-based configuration allows anyone to begin indexing events without writing any code, enabling them to concentrate more on their application's logic or profiling the data itself. For those seeking to build more advanced indexing, rindexer provides a framework that abstracts away the complexity of fetching chain data, allowing developers to focus on their project's logic and not the low-level chain indexing specifics. ### Performance Rust was chosen for developing rindexer due to its unmatched performance and efficiency. Its capacity for handling intensive computation and its memory safety, without needing a garbage collector, allow rindexer to manage high-throughput data with minimal latency. This makes rindexer a very fast indexing solution, essential for applications that require real-time data analysis and for developers who value speed and efficiency. "Speed is everything in software." ## AWS ### Prerequisites Ensure that you have the following installed and configured: * **[AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)**: Configured with necessary permissions. * **[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)**: Installed and configured. * **[Helm](https://helm.sh/docs/intro/install/)**: Installed. * **[eksctl](https://eksctl.io/installation/)**: Installed.
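Before creating the cluster, it can save time to confirm that all four prerequisite tools are actually on your PATH. A small sketch (tool names as listed above):

```bash
# Verify the prerequisite CLIs for the AWS deployment are installed.
# Prints one line per tool: "ok" if found on PATH, "missing" otherwise.
for tool in aws kubectl helm eksctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

If anything prints `missing`, install it from the links above before continuing.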
### 1. Create an EKS Cluster This command creates a new EKS cluster with a managed node group. Adjust the `--region`, `--node-type`, and node count options as needed. ```bash eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 1 --nodes-min 1 --nodes-max 2 --managed ``` Output: ```bash 2024-08-20 18:21:15 [ℹ] eksctl version 0.189.0-dev+c9afc4260.2024-08-19T12:43:03Z 2024-08-20 18:21:15 [ℹ] using region us-west-2 2024-08-20 18:21:16 [ℹ] setting availability zones to [us-west-2c us-west-2d us-west-2b] 2024-08-20 18:21:16 [ℹ] subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19 2024-08-20 18:21:16 [ℹ] subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19 2024-08-20 18:21:16 [ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19 2024-08-20 18:21:16 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.30] 2024-08-20 18:21:16 [ℹ] using Kubernetes version 1.30 2024-08-20 18:21:16 [ℹ] creating EKS cluster "my-cluster" in "us-west-2" region with managed nodes 2024-08-20 18:21:16 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup 2024-08-20 18:21:16 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=my-cluster' 2024-08-20 18:21:16 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-cluster" in "us-west-2" 2024-08-20 18:21:16 [ℹ] CloudWatch logging will not be enabled for cluster "my-cluster" in "us-west-2" 2024-08-20 18:21:16 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g.
all)} --region=us-west-2 --cluster=my-cluster' 2024-08-20 18:21:16 [ℹ] default addons coredns, vpc-cni, kube-proxy were not specified, will install them as EKS addons 2024-08-20 18:21:16 [ℹ] 2 sequential tasks: { create cluster control plane "my-cluster", 2 sequential sub-tasks: { 2 sequential sub-tasks: { 1 task: { create addons }, wait for control plane to become ready, }, create managed nodegroup "standard-workers", } } 2024-08-20 18:21:16 [ℹ] building cluster stack "eksctl-my-cluster-cluster" 2024-08-20 18:21:18 [ℹ] deploying stack "eksctl-my-cluster-cluster" 2024-08-20 18:21:48 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-cluster" ... 2024-08-20 18:30:29 [ℹ] creating addon 2024-08-20 18:30:29 [ℹ] successfully created addon 2024-08-20 18:30:30 [!] recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon` 2024-08-20 18:30:30 [ℹ] creating addon 2024-08-20 18:30:31 [ℹ] successfully created addon 2024-08-20 18:30:32 [ℹ] creating addon 2024-08-20 18:30:32 [ℹ] successfully created addon 2024-08-20 18:32:35 [ℹ] building managed nodegroup stack "eksctl-my-cluster-nodegroup-standard-workers" 2024-08-20 18:32:37 [ℹ] deploying stack "eksctl-my-cluster-nodegroup-standard-workers" 2024-08-20 18:32:37 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-standard-workers" ...
2024-08-20 18:37:39 [✔] saved kubeconfig as "/Users/rindexer/.kube/config" 2024-08-20 18:37:39 [ℹ] no tasks 2024-08-20 18:37:39 [✔] all EKS cluster resources for "my-cluster" have been created 2024-08-20 18:37:39 [✔] created 0 nodegroup(s) in cluster "my-cluster" 2024-08-20 18:37:40 [ℹ] nodegroup "standard-workers" has 1 node(s) 2024-08-20 18:37:40 [ℹ] node "ip-192-168-22-89.us-west-2.compute.internal" is ready 2024-08-20 18:37:40 [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers" 2024-08-20 18:37:40 [ℹ] nodegroup "standard-workers" has 1 node(s) 2024-08-20 18:37:40 [ℹ] node "ip-192-168-22-89.us-west-2.compute.internal" is ready 2024-08-20 18:37:40 [✔] created 1 managed nodegroup(s) in cluster "my-cluster" 2024-08-20 18:37:41 [ℹ] kubectl command should work with "/Users/rindexer/.kube/config", try 'kubectl get nodes' 2024-08-20 18:37:41 [✔] EKS cluster "my-cluster" in "us-west-2" region is ready ``` ```bash eksctl get cluster --name my-cluster --region us-west-2 ``` Output: ```bash NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS PROVIDER my-cluster 1.30 ACTIVE 2024-08-20T16:21:42Z vpc-090d3761130933be4 subnet-00f479ddeb9bc51f7,subnet-0123eaaf4d9fb037a,subnet-09256a39c7e39ad7c,subnet-0df075e1795076648,subnet-0ed78cc4efed47b11,subnet-0f64d1e62abe83d4d sg-0939a7fb80a664be9 EKS ``` `eksctl` automatically configures your `kubeconfig` file. To check your nodes: ```bash kubectl get nodes ``` Output: ```bash NAME STATUS ROLES AGE VERSION ip-192-168-22-89.us-west-2.compute.internal Ready 6m33s v1.30.2-eks-1552ad0 ``` ### 2. Deploy the Helm Chart #### 2.1. Download the rindexer repository ```bash git clone https://github.com/joshstevens19/rindexer.git ``` #### 2.2.
Configure the `values.yaml` File Customize the `values.yaml` for your deployment: ```yaml replicaCount: 2 image: repository: ghcr.io/joshstevens19/rindexer tag: "latest" pullPolicy: IfNotPresent service: type: ClusterIP port: 3001 ingress: enabled: false postgresql: enabled: false ``` :::info If you are using AWS RDS for your PostgreSQL database with `sslmode=require`, you will need to include the RDS certificates in your connection configuration. You can find the necessary certificates in the [AWS RDS SSL documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html). ::: #### 2.3. Install the Helm Chart ```bash helm install rindexer ./helm/rindexer -f helm/rindexer/values.yaml ``` Output: ```bash NAME: rindexer LAST DEPLOYED: Tue Aug 20 18:43:58 2024 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: 1. Get the application URL by running these commands: export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=rindexer,app.kubernetes.io/instance=rindexer" -o jsonpath="{.items[0].metadata.name}") export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") echo "Visit http://127.0.0.1:8080 to use your application" kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT ``` #### 2.4. Verify the Deployment ```bash kubectl get pods ``` Output: ```bash NAME READY STATUS RESTARTS AGE rindexer-rindexer-94dd58475-p8g5d 1/1 Running 0 17s ``` ### 3. Monitor and Manage the Deployment #### 3.1. Health Monitoring Rindexer includes a built-in health monitoring server that provides comprehensive system status information. 
The health server runs on port `8080` by default and provides real-time insights into: * **Database connectivity** - PostgreSQL connection status * **Indexing status** - Whether the indexer is running and how many tasks are active * **Sync status** - Data synchronization health between different storage backends * **Overall system health** - Aggregated status across all components ##### 3.1.1. Health Server Lifecycle The health server's lifecycle depends on which services you start: * **`rindexer start indexer` (with end\_block set)**: Short-lived - dies when historical indexing completes * **`rindexer start indexer` (no end\_block set)**: Long-lived - stays alive for continuous live indexing * **`rindexer start graphql`**: No health server - health monitoring not available * **`rindexer start all`**: Long-lived - follows the GraphQL server lifecycle ##### 3.1.2. Accessing Health Endpoints The health server is automatically started when you run rindexer with indexing enabled. It provides the following endpoint: * `GET /health` - Complete health status with detailed service information Example health response: ```json { "status": "healthy", "timestamp": "2024-01-15T10:30:00Z", "services": { "database": "healthy", "indexing": "healthy", "sync": "healthy" }, "indexing": { "active_tasks": 2, "is_running": true } } ``` ##### 3.1.3. Health Status Types The health endpoint returns different status types: * `healthy` - Service is functioning normally * `unhealthy` - Service has encountered an error * `unknown` - Status cannot be determined * `not_configured` - Service is not set up * `disabled` - Service is intentionally disabled * `no_data` - Service is working but no data is available * `stopped` - Service is not running ##### 3.1.4. 
Service Health Checks **Database Health Check:** * **`healthy`**: PostgreSQL is enabled and a simple `SELECT 1` query succeeds * **`unhealthy`**: PostgreSQL is enabled but the connection fails or query errors occur * **`not_configured`**: PostgreSQL is enabled but no database client is available * **`disabled`**: PostgreSQL is not enabled in the configuration **Indexing Health Check:** * **`healthy`**: The indexer is currently running (system state flag is set) * **`stopped`**: The indexer is not running (system state flag is not set) **Sync Health Check:** * **PostgreSQL storage**: Checks for event tables (excluding system tables) * **CSV storage**: Checks if CSV directory exists and contains CSV files * **`healthy`**: Data synchronization is working properly * **`no_data`**: No data tables/files exist yet (acceptable for new deployments) * **`unhealthy`**: Sync process has errors * **`disabled`**: Sync is not configured ##### 3.1.5. Monitoring in Production For production deployments, you can: 1. **Set up monitoring alerts** based on HTTP status codes: * `200 OK` - System is healthy * `503 Service Unavailable` - System has issues 2. **Configure load balancer health checks** to point to `/health` 3. **Use monitoring tools** like Prometheus, Grafana, or DataDog to track health metrics 4. **Set up automated alerts** when the health status changes to `unhealthy` ##### 3.1.6. Custom Health Port You can configure the health server port in your `rindexer.yaml` file: ```yaml global: health_port: 8081 ``` #### 3.2. 
View Logs ```bash kubectl logs -l app.kubernetes.io/name=rindexer ``` Output: ```bash 20 August - 16:44:17.710908 INFO RocketPoolETH::Transfer - network ethereum - 100.00% progress 20 August - 16:44:17.779423 INFO RocketPoolETH::Transfer - No events found between blocks 18999946 - 19000000 20 August - 16:44:17.779458 INFO RocketPoolETH::Transfer - COMPLETED - Finished indexing historic events 20 August - 16:44:18.825983 INFO RocketPoolETH::Approval - INDEXED - 4884 events - blocks: 18900000 - 19000000 - network: ethereum 20 August - 16:44:18.827845 INFO RocketPoolETH::Approval - network ethereum - 100.00% progress 20 August - 16:44:18.906260 INFO RocketPoolETH::Approval - No events found between blocks 18999896 - 19000000 20 August - 16:44:18.906299 INFO RocketPoolETH::Approval - COMPLETED - Finished indexing historic events 20 August - 16:44:18.906347 INFO Historical indexing complete - time taken: 2.599786906s 20 August - 16:44:18.906407 INFO Applying indexes if any back to the database as historic resync is complete 20 August - 16:44:18.906414 INFO rindexer resync is complete ``` #### 3.3. Upgrade the Helm Chart ```bash helm upgrade rindexer ./helm/rindexer -f helm/rindexer/values.yaml ``` ### 4. Clean Up #### 4.1. Uninstall the Helm Chart ```bash helm uninstall rindexer ``` Output: ```bash release "rindexer" uninstalled ``` #### 4.2.
Delete the EKS cluster ```bash eksctl delete cluster --name my-cluster --region us-west-2 ``` Output: ```bash 2024-08-20 18:49:04 [ℹ] deleting EKS cluster "my-cluster" 2024-08-20 18:49:05 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "my-cluster" 2024-08-20 18:49:05 [ℹ] starting parallel draining, max in-flight of 1 2024-08-20 18:49:05 [✖] failed to acquire semaphore while waiting for all routines to finish: context canceled 2024-08-20 18:49:07 [ℹ] deleted 0 Fargate profile(s) 2024-08-20 18:49:09 [✔] kubeconfig has been updated 2024-08-20 18:49:09 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress 2024-08-20 18:49:12 [ℹ] 2 sequential tasks: { delete nodegroup "standard-workers", delete cluster control plane "my-cluster" [async] } 2024-08-20 18:49:12 [ℹ] will delete stack "eksctl-my-cluster-nodegroup-standard-workers" 2024-08-20 18:49:12 [ℹ] waiting for stack "eksctl-my-cluster-nodegroup-standard-workers" to get deleted 2024-08-20 18:49:13 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-standard-workers" .... 2024-08-20 18:58:09 [ℹ] will delete stack "eksctl-my-cluster-cluster" 2024-08-20 18:58:10 [✔] all cluster resources were deleted ``` This guide provides the necessary steps to deploy the `rindexer` Helm chart on AWS EKS using `eksctl`. ## GCP ### Prerequisites Ensure that you have the following installed and configured: * **[Google Cloud SDK](https://cloud.google.com/sdk/docs/install)**: Installed and configured with necessary permissions. * **[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)**: Installed and configured. * **[Helm](https://helm.sh/docs/intro/install/)**: Installed. ### 1. Create a GKE Cluster This command creates a new GKE cluster. Adjust the `--zone`, `--machine-type`, and node count options as needed.
```bash gcloud container clusters create my-cluster --zone us-west1-a --machine-type n1-standard-1 --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=3 ``` Output: ```bash Creating cluster my-cluster in us-west1-a... Cluster is being created. Created [https://container.googleapis.com/v1/projects/my-project/zones/us-west1-a/clusters/my-cluster]. To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1-a/my-cluster?project=my-project kubeconfig entry generated for my-cluster. NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS my-cluster us-west1-a v1.30.2-gke.100 35.233.164.24 n1-standard-1 v1.30.2-gke.100 1 RUNNING ``` `gcloud` automatically configures your `kubeconfig` file. To check your nodes: ```bash kubectl get nodes ``` Output: ```bash NAME STATUS ROLES AGE VERSION gke-my-cluster-default-pool-1a2b3c4d-e123 Ready 6m33s v1.30.2-gke.100 ``` ### 2. Deploy the Helm Chart #### 2.1. Download the rindexer repository ```bash git clone https://github.com/joshstevens19/rindexer.git ``` #### 2.2. Configure the `values.yaml` File Customize the `values.yaml` for your deployment: ```yaml replicaCount: 2 image: repository: ghcr.io/joshstevens19/rindexer tag: "latest" pullPolicy: IfNotPresent service: type: ClusterIP port: 3001 ingress: enabled: false postgresql: enabled: false ``` #### 2.3. Install the Helm Chart ```bash helm install rindexer ./helm/rindexer -f helm/rindexer/values.yaml ``` Output: ```bash NAME: rindexer LAST DEPLOYED: Tue Aug 21 18:23:34 2024 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: 1. 
Get the application URL by running these commands: export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=rindexer,app.kubernetes.io/instance=rindexer" -o jsonpath="{.items[0].metadata.name}") export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") echo "Visit http://127.0.0.1:8080 to use your application" kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT ``` #### 2.4. Verify the Deployment ```bash kubectl get pods ``` Output: ```bash NAME READY STATUS RESTARTS AGE rindexer-rindexer-35bb35619-t9r2l 1/1 Running 1 (7s ago) 17s ``` ### 3. Monitor and Manage the Deployment #### 3.1. View Logs ```bash kubectl logs -l app.kubernetes.io/name=rindexer ``` Output: ```bash 21 August - 17:32:17.710908 INFO RocketPoolETH::Transfer - network ethereum - 100.00% progress 21 August - 17:32:17.779423 INFO RocketPoolETH::Transfer - No events found between blocks 18999946 - 19000000 21 August - 17:32:17.779458 INFO RocketPoolETH::Transfer - COMPLETED - Finished indexing historic events 21 August - 17:32:18.825983 INFO RocketPoolETH::Approval - INDEXED - 4884 events - blocks: 18900000 - 19000000 - network: ethereum 21 August - 17:32:18.827845 INFO RocketPoolETH::Approval - network ethereum - 100.00% progress 21 August - 17:32:18.906260 INFO RocketPoolETH::Approval - No events found between blocks 18999896 - 19000000 21 August - 17:32:18.906299 INFO RocketPoolETH::Approval - COMPLETED - Finished indexing historic events 21 August - 17:32:18.906347 INFO Historical indexing complete - time taken: 2.599786906s 21 August - 17:32:18.906407 INFO Applying indexes if any back to the database as historic resync is complete 21 August - 17:32:18.906414 INFO rindexer resync is complete ``` #### 3.2. Upgrade the Helm Chart ```bash helm upgrade rindexer ./helm/rindexer -f helm/rindexer/values.yaml ``` ### 4. Clean Up #### 4.1.
Uninstall the Helm Chart ```bash helm uninstall rindexer ``` Output: ```bash release "rindexer" uninstalled ``` #### 4.2. Delete the GKE cluster ```bash gcloud container clusters delete my-cluster --zone us-west1-a ``` Output: ```bash The following clusters will be deleted. - [my-cluster] in [us-west1-a] Do you want to continue (Y/n)? Y Deleting cluster my-cluster...done. Deleted [https://container.googleapis.com/v1/projects/my-project/zones/us-west1-a/clusters/my-cluster]. ``` This guide provides the necessary steps to deploy the rindexer Helm chart on Google Kubernetes Engine (GKE) using gcloud and kubectl. ### Health Monitoring Rindexer includes a built-in health monitoring server that provides comprehensive system status information. The health server runs on port `8080` by default and provides real-time insights into: * **Database connectivity** - PostgreSQL connection status * **Indexing status** - Whether the indexer is running and how many tasks are active * **Sync status** - Data synchronization health between different storage backends * **Overall system health** - Aggregated status across all components #### Health Server Lifecycle The health server's lifecycle depends on which services you start: * **`rindexer start indexer` (with end\_block set)**: Short-lived - dies when historical indexing completes * **`rindexer start indexer` (no end\_block set)**: Long-lived - stays alive for continuous live indexing * **`rindexer start graphql`**: No health server - health monitoring not available * **`rindexer start all`**: Long-lived - follows the GraphQL server lifecycle #### Accessing Health Endpoints The health server is automatically started when you run rindexer with indexing enabled.
It provides the following endpoint: * `GET /health` - Complete health status with detailed service information Example health response: ```json { "status": "healthy", "timestamp": "2024-01-15T10:30:00Z", "services": { "database": "healthy", "indexing": "healthy", "sync": "healthy" }, "indexing": { "active_tasks": 2, "is_running": true } } ``` #### Health Status Types The health endpoint returns different status types: * `healthy` - Service is functioning normally * `unhealthy` - Service has encountered an error * `unknown` - Status cannot be determined * `not_configured` - Service is not set up * `disabled` - Service is intentionally disabled * `no_data` - Service is working but no data is available * `stopped` - Service is not running #### Monitoring in Production For production deployments on GCP, you can: 1. **Set up monitoring alerts** based on HTTP status codes: * `200 OK` - System is healthy * `503 Service Unavailable` - System has issues 2. **Use Google Cloud Monitoring** to track health metrics 3. **Set up automated alerts** when the health status changes to `unhealthy` 4. **Configure load balancer health checks** to point to `/health` 5. **Access health endpoints** through your GCP load balancer: ``` https://your-load-balancer-ip:8080/health ``` #### Custom Health Port You can configure the health server port in your `rindexer.yaml` file: ```yaml global: health_port: 8081 ``` ## Railway ### One-click Deploy Example [![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/Rqrlcf?referralCode=eD4laT) ### Deploy an example project [https://github.com/joshstevens19/rindexer/tree/master/providers/railway](https://github.com/joshstevens19/rindexer/tree/master/providers/railway) 1. Clone the relevant directory ```bash # this will clone the railway directory mkdir rindexer-railway && cd rindexer-railway git clone \ --depth=1 \ --no-checkout \ --filter=tree:0 \ https://github.com/joshstevens19/rindexer . 
git sparse-checkout set --no-cone providers/railway . git checkout && cp -r providers/railway/* . && rm -rf providers ``` 2. Initialize a new Railway project Install [Railway CLI](https://docs.railway.com/guides/cli) if not already installed. ```bash railway login ``` ```bash railway init --name rindexer-example ``` 3. Create a service and link it to the project ```bash railway up --detach railway link ? Select a project > rindexer-example ? Select an environment > production ? Select a service > rindexer-example ``` 4. Create a Postgres database ```bash railway add --database postgres ``` 5. Configure environment variables ```bash railway open ``` * Open the service "Variables" tab: * Select "Add Variable Reference" and add a reference for `DATABASE_URL` and append ?sslmode=disable to the end of the value. The result should look like `${{Postgres.DATABASE_URL}}?sslmode=disable`. * Select "Add Variable Reference" and add a reference for `POSTGRES_PASSWORD`. * Select "New Variable" with name `PORT` and value `3001` (This is the default value for the rindexer service, update this variable accordingly if the value is changed in the rindexer Dockerfile). * Hit "Deploy" or press Shift+Enter. 6. Create a domain to access GraphQL Playground ```bash railway domain ``` 7. Redeploy the service ```bash railway up ``` ### Health Monitoring Rindexer includes a built-in health monitoring server that provides comprehensive system status information. 
The health server runs on port `8080` by default and provides real-time insight into:

* **Database connectivity** - PostgreSQL connection status
* **Indexing status** - Whether the indexer is running and how many tasks are active
* **Sync status** - Data synchronization health between different storage backends
* **Overall system health** - Aggregated status across all components

#### Health Server Lifecycle

The health server's lifecycle depends on which services you start:

* **`rindexer start indexer` (with `end_block` set)**: Short-lived - dies when historical indexing completes
* **`rindexer start indexer` (no `end_block` set)**: Long-lived - stays alive for continuous live indexing
* **`rindexer start graphql`**: No health server - health monitoring not available
* **`rindexer start all`**: Long-lived - follows the GraphQL server lifecycle

#### Accessing Health Endpoints

The health server is automatically started when you run rindexer with indexing enabled. It provides the following endpoint:

* `GET /health` - Complete health status with detailed service information

Example health response:

```json
{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00Z",
  "services": {
    "database": "healthy",
    "indexing": "healthy",
    "sync": "healthy"
  },
  "indexing": {
    "active_tasks": 2,
    "is_running": true
  }
}
```

#### Health Status Types

The health endpoint returns different status types:

* `healthy` - Service is functioning normally
* `unhealthy` - Service has encountered an error
* `unknown` - Status cannot be determined
* `not_configured` - Service is not set up
* `disabled` - Service is intentionally disabled
* `no_data` - Service is working but no data is available
* `stopped` - Service is not running

#### Monitoring in Production

For production deployments on Railway, you can:

1. **Set up monitoring alerts** based on HTTP status codes:
   * `200 OK` - System is healthy
   * `503 Service Unavailable` - System has issues
2. **Use Railway's built-in monitoring** to track health metrics
3. **Set up automated alerts** when the health status changes to `unhealthy`
4. **Access health endpoints** through your Railway domain:

   ```
   https://your-app.railway.app/health
   ```

#### Custom Health Port

You can configure the health server port in your `rindexer.yaml` file:

```yaml
global:
  health_port: 8081
```

## Using Reth Execution Extensions (ExEx)

Reth Execution Extensions (ExEx) is a powerful framework introduced by Reth for building high-performance off-chain infrastructure as post-execution hooks. rindexer leverages ExEx to provide superior indexing performance and native reorg handling.

### What is ExEx?

ExEx provides a reorg-aware stream called `ExExNotification` which includes:

* Blocks with full transaction data
* Receipts with logs and state changes
* Native reorg notifications
* Trie updates for state verification

This allows rindexer to:

* Process blocks at native speed without RPC overhead
* Handle reorganizations automatically
* Maintain consistency during chain splits
* Access pending transactions and state

### Architecture

When running in Reth mode, rindexer operates as an execution extension within the Reth node:

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Reth Node  │────▢│ rindexer ExEx│────▢│ PostgreSQL  β”‚
β”‚             β”‚     β”‚              β”‚     β”‚             β”‚
β”‚             │◀────│   Indexing   β”‚     β”‚   Storage   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  ExEx Process        Write
  Notifications       Events              Data
```

### Chain State Notifications

rindexer processes three types of chain state notifications:

#### 1. Committed

Emitted when new blocks are added to the canonical chain:

```rust
Committed {
    from_block: 19000000,
    to_block: 19000100,
    tip_hash: 0x123...
}
```

#### 2. Reorged

Emitted during reorganizations:

```rust
Reorged {
    // Blocks to revert
    revert_from_block: 19000098,
    revert_to_block: 19000100,
    // New canonical blocks
    new_from_block: 19000098,
    new_to_block: 19000101,
    new_tip_hash: 0x456...
}
```

#### 3. Reverted

Emitted when blocks are reverted (chain rollback):

```rust
Reverted {
    from_block: 19000099,
    to_block: 19000100
}
```

### Configuration

#### Basic Configuration

```yaml [rindexer.yaml]
name: HighPerformanceIndexer

networks:
- name: ethereum
  chain_id: 1
  rpc: https://eth.llamarpc.com # Fallback RPC
  reth:
    enabled: true
    logging: true # Enable Reth logs
    cli_args:
      - "--datadir /data/reth"
      - "--authrpc.jwtsecret /secrets/jwt.hex"
      - "--authrpc.port 8551"
      - "--chain mainnet"
```

#### Advanced Configuration

```yaml [rindexer.yaml]
networks:
- name: ethereum
  chain_id: 1
  reth:
    enabled: true
    logging: false # Disable for production
    cli_args:
      # Core settings
      - "--datadir /nvme/reth" # Fast NVMe storage
      - "--authrpc.jwtsecret /secrets/jwt.hex"
      - "--authrpc.addr 127.0.0.1"
      - "--authrpc.port 8551"
      # Archive node (required)
      - "--full false"
      # Performance tuning
      - "--db.log-level error"
      - "--max-outbound-peers 100"
      - "--max-inbound-peers 50"
      # Metrics
      - "--metrics 127.0.0.1:9001"
      # HTTP RPC (optional)
      - "--http"
      - "--http.addr 0.0.0.0"
      - "--http.port 8545"
      - "--http.api eth,net,web3,debug,trace"
```

### Performance Considerations

#### Hardware Requirements

For optimal ExEx performance:

* **CPU**: 8+ cores recommended
* **RAM**: 32GB minimum, 64GB recommended
* **Storage**: NVMe SSD with 2TB+ for mainnet archive
* **Network**: Stable connection for peer synchronization

### Best Practices

1. **Use Archive Node**: Run Reth in archive mode for ExEx.
2. **Monitor Resources**: Set up alerts for disk, CPU, and memory.

### Migration from Standard Mode

To migrate an existing project to ExEx:

1. **Sync Reth Node**: Ensure a fully synced archive node
2. **Update Config**: Add a `reth` section to networks
3. **Reindex**: Consider a full reindex for consistency

### Further Resources

* [Reth ExEx Documentation](https://reth.rs/developers/exex.html)
* [Running Reth on Ethereum](https://reth.rs/run/ethereum)
* [rindexer Reth Mode Guide](/docs/start-building/create-new-project/reth-mode)

## Direct SQL

### DBeaver - recommended

If you want to access the data directly from the database you can use a tool like DBeaver to connect to it. You can download it [here](https://dbeaver.io/download/) and it is supported on all platforms.

If you want to connect to your Docker Postgres instance you can create a new connection in DBeaver and use the following settings:

* host: localhost
* port: 5440
* database: postgres
* username: postgres
* password: rindexer

Press "Test Connection" and you should be able to connect to the database. You can then go to `postgres` -> `schemas` and you will see the indexer schemas and tables. From there you can run SQL queries inside DBeaver to get the data you need.

### PSQL

If you prefer the command line you can use psql to connect to the database. These instructions run you through how to install psql - [https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/](https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/)

#### Connect

```bash
psql 'postgresql://username:password@localhost:5432/your_database'
```

#### Listing all tables across all schemas

rindexer uses schemas to break up the tables, and by default psql only shows `public`, so you need to run the following command to see all tables across all schemas:

```bash
\dt *.*
```

You can also run the following command to see all tables in a specific schema:

```bash
\dt schema_name.*
```

#### Query data

You can now just run SQL queries to get the data you need.
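Beyond the simple `SELECT` shown below, you can aggregate directly in SQL. As a sketch, a top-senders query over the example rETH transfer table (the schema name follows the example project used in these docs; the column names are assumptions derived from the event fields - adjust to your own schema):

```sql
-- Top 10 senders by total value transferred.
-- "from" is a reserved word in SQL, so it must be double-quoted.
SELECT "from", count(*) AS transfers, sum(value) AS total_value
FROM my_project_rocket_pool_eth.transfer
GROUP BY "from"
ORDER BY total_value DESC
LIMIT 10;
```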
```sql
SELECT * FROM my_project_rocket_pool_eth.transfer;
```

#### Exit

To exit the `psql` terminal run

```bash
exit
```

## GraphQL

GraphQL is a query language for your API, and a server-side runtime for executing queries using a type system you define for your data. You can learn all about GraphQL [here](https://graphql.org).

### Hot Tip

As GraphQL is a type system, building queries can be a bit tricky if you are not familiar with it. The beauty of this is that the [http://localhost:3001/playground](http://localhost:3001/playground) supplied for you allows you to build up all your queries and also understand every single filter and ordering you can apply.

### Querying the data

The GraphQL server exposes a playground for you which you can get to on [http://localhost:3001/playground](http://localhost:3001/playground). This uses the Apollo Server sandbox, which is a great tool for testing and building up your queries - [https://studio.apollographql.com/sandbox/explorer](https://studio.apollographql.com/sandbox/explorer).

Note in these examples we will put the raw parameters in the GraphQL query, but you can pass parameters in using the `$` syntax, allowing code to define the parameters.

:::code-group

```graphql [hardcoded parameter]
query AllTransfers {
  allTransfers(first: 20) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```graphql [parameter passed in]
query AllTransfers($first: Int!) {
  allTransfers(first: $first) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

:::

#### Query naming conventions

Let's say we had two events, `Approval` and `Transfer`, from the ERC20 standard; the ABI would look like the below:

```json
{
  "anonymous": false,
  "inputs": [
    { "indexed": true, "name": "owner", "type": "address" },
    { "indexed": true, "name": "spender", "type": "address" },
    { "indexed": false, "name": "value", "type": "uint256" }
  ],
  "name": "Approval",
  "type": "event"
},
{
  "anonymous": false,
  "inputs": [
    { "indexed": true, "name": "from", "type": "address" },
    { "indexed": true, "name": "to", "type": "address" },
    { "indexed": false, "name": "value", "type": "uint256" }
  ],
  "name": "Transfer",
  "type": "event"
}
```

With rindexer GraphQL you could generate the following queries to get the transfer data you need:

:::code-group

```graphql [list of transfers]
query AllTransfers {
  allTransfers {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```graphql [single transfer]
query Transfer($nodeId: ID!) {
  transfer(nodeId: $nodeId) {
    nodeId
    rindexerId
    contractAddress
    from
    to
    value
    txHash
    blockNumber
    blockHash
    network
  }
}
```

:::

The format of the query names is:

* list items = `all{event_name}s` = `All` + `Transfer` + `s` = `AllTransfers`
* single item = `{event_name}` (lowercase) = `transfer`

For single-item queries you can use the `nodeId`, which is always returned as a field in the list results alongside the singular item query.

##### Conflicting event naming

:::warning
Important to read if you have 2 events with matching names across contracts.
:::

If you have two events with exactly the same name across contracts, this is a naming conflict for GraphQL, so rindexer will render them as `{contract_name}{event_name}` in pascal case; for example `Transfer` would turn into `{contract_name}Transfer`.

To make it super clear, let's say I had a YAML like this:

```yaml
name: RocketPoolETHIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: ethereum
  chain_id: 1
  rpc: https://mainnet.gateway.tenderly.co
storage:
  postgres:
    enabled: true
contracts:
- name: RocketPoolETH
  details:
  - network: ethereum
    address: 0xae78736cd615f374d3085123a210448e74fc6393
    start_block: '18600000'
    end_block: '18718056'
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
- name: RocketPoolETHFork
  details:
  - network: ethereum
    address: 0xba78736cb615f374d3035123a210448e74fc6392
    start_block: '18600000'
    end_block: '18718056'
  abi: ./abis/RocketTokenRETH.abi.json
  include_events:
  - Transfer
```

My query names for `allTransfers` would be:

* `AllRocketPoolETHTransfers`
* `AllRocketPoolETHForkTransfers`

#### Ordering

:::info
All filtering options and ordering can both be used together.
:::

You can order the results by any field you wish. You can also order by multiple fields: the first item in the array is the ordering applied first, then the next is applied after, and so on.

:::warning
It is advised to have indexes on any fields you wish to filter on in your database to make the queries faster. You can define your own indexes in the [storage](/docs/start-building/yaml-config/storage#indexes) section of the YAML configuration file.
:::

This example will get the first 20 transfers ordered by the block number ascending.
```graphql
query AllTransfers {
  allTransfers(first: 20, orderBy: [BLOCK_NUMBER_ASC]) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

#### Filtering

:::info
All filtering options and ordering can both be used together.
:::

You can do condition filters as well as advanced filters on all the events indexed.

:::warning
It is advised to have indexes on any fields you wish to filter on in your database to make the queries faster. You can define your own indexes in the [storage](/docs/start-building/yaml-config/storage#indexes) section of the YAML configuration file.
:::

##### Condition

You can filter on every event property you want using the `condition` input fields. In the example below I'm filtering all transfers on the block number, which has to be passed as a string as it is a BigFloat.

```graphql
query AllTransfers {
  allTransfers(first: 20, condition: { blockNumber: "18600181" }) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

You can mix the filtering in every direction with any field, so you can filter `blockNumber` with `from`, `to` with `value`, or even `network` with `contractAddress` and `txHash` - anything you wish.
```graphql
query AllTransfers {
  allTransfers(first: 20, condition: {
    blockNumber: "18600181",
    value: "2000000000000000000",
    from: "0x0338ce5020c447f7e668dc2ef778025ce398266b"
  }) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

##### Filter

:::info
Advanced filtering is enabled by default, but these filters can easily be abused and cause performance issues. If you wish to disable it you can set `disable_advanced_filters` to true in the [graphql](/docs/start-building/yaml-config/graphql#disable_advanced_filters) section of the YAML configuration file.
:::

For more advanced filtering you can use the `filter` input field. For example, if we wanted to get all transfer events over 1 rETH (1000000000000000000 wei) and after block number 18600181, we can use the following query:

```graphql
query AllTransfers {
  allTransfers(first: 20, condition: {
    value: "1000000000000000000",
  }, filter: {
    blockNumber: { greaterThan: "18600181" }
  }) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

#### Result limits

You can define how many items to return using the `first` and `last` properties. You cannot return more than 1000 in a single query, but you can use `offset` to get to the items you wish. We advise always setting a limit on the amount of items you wish to return.
* first will return the first inserted x items
* last will return the last inserted x items
* offset will return the first/last x items after the offset

:::code-group

```graphql [first]
query AllTransfers {
  allTransfers(first: 20) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```graphql [last]
query AllTransfers {
  allTransfers(last: 20) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```graphql [offset]
query AllTransfers {
  allTransfers(first: 20, offset: 20) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

:::

#### Page info

The page info will give you the following information:

* endCursor: The cursor to continue from
* hasNextPage: If there is a next page
* hasPreviousPage: If there is a previous page
* startCursor: The cursor to start from

##### Cursor based pagination

Cursor-based pagination is a common approach to pagination that avoids some of the pitfalls of "classic" page-based pagination. The idea is to encode the current state of the query into a "cursor" that can be passed back to the server to get the next page of results.

You can page through the data using `before` and `after` cursors; you can get the cursors from the `pageInfo` object.
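Cursors are opaque, base64-encoded strings and should be treated as such, but purely for illustration you can peek inside the example cursor used in these docs (the encoded payload is an internal format and may change between versions):

```shell
# Decode an example pagination cursor. Do not construct cursors by hand;
# always use the values returned in pageInfo.
echo 'WyJibG9ja19udW1iZXJfYXNjIixbMTg2MDAxODEsMV1d' | base64 -d
# -> ["block_number_asc",[18600181,1]]
```

As the decoded payload shows, the cursor captures the active ordering and the position within it, which is why changing `orderBy` invalidates cursors from a previous query.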
* `before` will get the items before the cursor - this is how you go back in the data, say from page 2 to page 1
* `after` will get the items after the cursor - this is how you go forward in the data, say from page 1 to page 2

:::code-group

```graphql [next results]
query AllTransfers {
  allTransfers(
    first: 1,
    orderBy: [BLOCK_NUMBER_ASC],
    after: "WyJibG9ja19udW1iZXJfYXNjIixbMTg2MDAxODEsMV1d"
  ) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

```graphql [previous results]
query AllTransfers {
  allTransfers(
    first: 1,
    orderBy: [BLOCK_NUMBER_ASC],
    before: "WyJibG9ja19udW1iZXJfYXNjIixbMTg2MDAxODEsMV1d"
  ) {
    nodes {
      blockHash
      blockNumber
      contractAddress
      from
      network
      nodeId
      to
      txHash
      value
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}
```

:::

#### Relationships

When you define [relationships](/docs/start-building/yaml-config/storage#relationships) between events, rindexer will automatically create relationships between the events in the database and expose them on the GraphQL interface. This means you can query the relationships within a single query, avoiding multiple round trips to get the data you need.

Let's walk through an example. Imagine we were playing around with the `lens` data and we want to get the profile metadata back when we get quotes created. We can create a relationship between the `QuoteCreated` `quoteParams.profileId` and the `ProfileMetadataSet` `profileId` events; note you should read about the [relationships config](/docs/start-building/yaml-config/storage#relationships) first.
Your `rindexer.yaml` would look like:

```yaml
name: LensIndexer
description: My first rindexer project
repository: https://github.com/joshstevens19/rindexer
project_type: no-code
networks:
- name: polygon
  chain_id: 137
  rpc: https://polygon.gateway.tenderly.co
storage:
  postgres:
    enabled: true
    relationships: // [!code focus]
    - contract_name: LensHub // [!code focus]
      event_name: QuoteCreated // [!code focus]
      event_input_name: "quoteParams.profileId" // [!code focus]
      linked_to: // [!code focus]
      - contract_name: LensHub // [!code focus]
        event_name: ProfileMetadataSet // [!code focus]
        event_input_name: profileId // [!code focus]
contracts:
- name: LensHub // [!code focus]
  details:
  - network: polygon
    address: 0xDb46d1Dc155634FbC732f92E853b10B288AD5a1d
    start_block: 59034400
    end_block: 59034400
  abi: ./abis/lens-hub-events-abi.json
  include_events: // [!code focus]
  - QuoteCreated // [!code focus]
  - ProfileMetadataSet // [!code focus]
```

So in this example the `allQuoteCreateds` and `quoteCreated` queries will allow you to get the `ProfileMetadataSet` event in the same query. This is a basic example, but you can see how you can query the relationships within the same query.

```graphql
query AllQuoteCreateds {
  allQuoteCreateds {
    nodes {
      nodeId
      quoteParamsContentUri
      quoteParamsPointedProfileId
      quoteParamsPointedPubId
      by: profileMetadataSetByQuoteParamsProfileId {
        profileId
        metadata
        transactionExecutor
        timestamp
        txHash
        blockNumber
        blockHash
        network
      }
      timestamp
      txHash
    }
  }
}
```

:::info
GraphQL supports aliases to make your queries read even nicer; you can read more about them [here](https://graphql.org/learn/queries/#aliases). People may not like the event input names and can easily alias them to something more readable.
:::
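As a sketch, aliasing the generated field names from the `QuoteCreated` example above into friendlier ones (aliases are standard GraphQL and are resolved entirely by the server per-query; no configuration is needed):

```graphql
query AllQuoteCreateds {
  quotes: allQuoteCreateds(first: 5) {
    nodes {
      contentUri: quoteParamsContentUri
      pointedProfileId: quoteParamsPointedProfileId
      txHash
    }
  }
}
```

The response keys will be `quotes`, `contentUri`, and `pointedProfileId` instead of the generated names, which keeps consuming code readable.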