Requires fsc-evm v4.0.0+
Fsc-evm for New Projects
Overview
The fsc-evm package is at the heart of all EVM dbt model repos. New EVM blockchain projects deploy through a structured, phased approach using automated make commands that handle RPC node compatibility testing, infrastructure setup, and model deployment. Each project begins with the `main_package`, which provides foundational models (`fact_blocks`, `fact_transactions`, `fact_event_logs`, `fact_traces`) and essential automation, including GitHub Actions workflow management, Snowflake task scheduling, and streamline data ingestion pipelines.
The framework's custom variable management system extends dbt's native functionality, allowing project-specific configuration through dedicated macro files while maintaining backwards compatibility. Additional specialized packages (`decoder_package`, `curated_package`, `balances_package`, `scores_package`) can be enabled as needed to provide smart contract decoding, protocol analytics, balance tracking, and activity scoring capabilities.
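As an illustration of the pattern, a project might supply overrides through a dedicated macro file that returns a dictionary of configuration values. This is a minimal sketch only; the macro and variable names below are hypothetical placeholders, not fsc-evm's actual interface:

```sql
{% macro custom_project_vars() %}
    {# Hypothetical project-side override file; the real macro and
       variable names come from fsc-evm's variable management system #}
    {% do return({
        'GLOBAL_PROJECT_NAME': 'mychain',
        'MAIN_SL_BLOCKS_PER_HOUR': 1800
    }) %}
{% endmacro %}
```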
Key Macros
RPC Node Compatibility
`call_sample_rpc_node(blockchain, node_provider, network, ...)` Macro:

- Calls the `sample_rpc_node` stored procedure to test RPC node capabilities and compatibility
- Accepts parameters for blockchain, node_provider, network, and various overrides
- Uses project variables as defaults when specific parameters are not provided
- Critical for determining what blockchain data fields are available during initial deployment to establish core table columns
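For example, the macro can be invoked directly from the dbt CLI during initial setup. A minimal sketch; the blockchain and provider values are placeholders:

```bash
dbt run-operation call_sample_rpc_node \
  --args '{"blockchain": "mychain", "node_provider": "quicknode", "network": "mainnet"}'
```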
`set_dynamic_fields(gold_model)` Macro:

- Dynamically determines which fields are available for specific gold models (`fact_blocks`, `fact_transactions`)
- Queries the `admin__fact_rpc_details` table to check field availability from RPC responses
- Returns a dictionary mapping field names to boolean availability status
- Enables conditional model building based on what data the blockchain actually provides
- Note: `field` in `all_fields` must be set manually if a new object key is presented in the node output for a blockchain
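In practice this enables conditional column logic inside a gold model. A sketch of the pattern, assuming a hypothetical `silver__blocks` model and using the `blob_gas_used` key purely for illustration:

```sql
{# Sketch of conditional model building with set_dynamic_fields #}
{% set fields = set_dynamic_fields('fact_blocks') %}

SELECT
    block_number,
    block_timestamp
    {% if fields['blob_gas_used'] %}
    , blob_gas_used  -- only selected when the chain's RPC returns this key
    {% endif %}
FROM {{ ref('silver__blocks') }}
```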
`create_sample_rpc_node_sp()` Macro:

- Creates the `admin.sample_rpc_node` stored procedure in Snowflake
- Only runs when the `UPDATE_UDFS_AND_SPS` variable is set to true
- The stored procedure tests RPC node endpoints for compatibility with various blockchain data types
- Creates the `admin.rpc_node_logs` logging table to track RPC testing results, which outputs data such as the `blocks_per_hour`, `range_tested`, and applicable fields for blocks, transactions, receipts (and traces if enabled)
- Grants appropriate permissions to internal roles for procedure usage
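Because the macro is gated on `UPDATE_UDFS_AND_SPS`, it is typically exercised by passing that variable at run time, e.g.:

```bash
dbt run-operation create_sample_rpc_node_sp --vars '{"UPDATE_UDFS_AND_SPS": true}'
```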
Livequery & Streamline Deployment
`livequery_grants()` Macro:

- Grants usage permissions on the `_live` and `_utils` schemas to project-specific roles
- Only executes when the `UPDATE_UDFS_AND_SPS` variable is set to true
- Applies grants to the `AWS_LAMBDA_{PROJECT}_API`, `DBT_CLOUD_{PROJECT}`, and `INTERNAL_DEV` roles
- Essential for enabling livequery functionality in deployed projects
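The grants it applies are conventional Snowflake usage grants along these lines (illustrative of the statements the macro emits, not its literal output; `MYCHAIN` stands in for the templated project name):

```sql
GRANT USAGE ON SCHEMA _live  TO ROLE AWS_LAMBDA_MYCHAIN_API;
GRANT USAGE ON SCHEMA _utils TO ROLE DBT_CLOUD_MYCHAIN;
GRANT USAGE ON SCHEMA _utils TO ROLE INTERNAL_DEV;
```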
`create_evm_streamline_udfs()` Macro:

- Creates the `streamline` schema and essential UDFs when `UPDATE_UDFS_AND_SPS` is true
- Deploys core streamline functions such as `bulk_rest_api_v2`, `bulk_decode_logs_v2`, and `bulk_decode_traces_v2`
- Required for streamline data ingestion processes to function properly
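The same `UPDATE_UDFS_AND_SPS` gate applies here; a one-off deployment of the streamline UDFs might look like:

```bash
dbt run-operation create_evm_streamline_udfs --vars '{"UPDATE_UDFS_AND_SPS": true}'
```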
GitHub Actions Workflow Management
`drop_github_actions_schema()` Macro:

- Safely drops all existing tasks in the `github_actions` schema
- Used for clean deployments when recreating GitHub Actions automation and updating workflows or schedules
`generate_workflow_schedules(chainhead_schedule)` Macro:

- Generates cron schedules for all GitHub Actions workflows based on a root chainhead schedule
- Creates unique schedules per repository, using the database name as a seed to avoid conflicts
- Supports various cadences: hourly, every 4 hours, daily, weekly, monthly, and custom
- Returns workflow names with their corresponding cron schedules and cadence types
- Respects variable overrides for custom scheduling when defined
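A sketch of how the returned schedules might be consumed, assuming only the documented return shape (workflow names with cron schedules and cadence types); the exact structure is defined inside fsc-evm:

```sql
{# Sketch: log the generated schedule for inspection #}
{% set schedules = generate_workflow_schedules('*/10 * * * *') %}
{% for wf in schedules %}
    {% do log(wf, info=True) %}
{% endfor %}
```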
`create_workflow_table(workflow_values)` Macro:

- Creates the `github_actions.workflows` table with workflow names from the `.yml` files in `.github/workflows/`
- Applied during make command execution to catalog available workflows
- Grants appropriate permissions to internal and project-specific roles
`create_gha_tasks()` Macro:

- Creates Snowflake tasks that trigger GitHub Actions workflows on defined schedules
- Reads from the `github_actions__workflow_schedule` model to get task definitions
- Tasks execute GitHub workflow dispatches via the `workflow_dispatches` function
- Supports optional automatic task resumption via the `RESUME_GHA_TASKS` variable
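A plausible deployment sequence, using only the model and variable names from the bullets above:

```bash
# Materialize the schedule definitions, then create (and resume) the tasks
dbt run -s github_actions__workflow_schedule
dbt run-operation create_gha_tasks --vars '{"RESUME_GHA_TASKS": true}'
```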
`alter_gha_tasks(task_names, task_action)` Macro:

- Modifies specific GitHub Actions tasks by name with actions like RESUME or SUSPEND
- Accepts a comma-separated list of task names for batch operations
`alter_all_gha_tasks(task_action)` Macro:

- Applies the same action (RESUME/SUSPEND) to all GitHub Actions tasks in the project
- Reads the task list from the `github_actions__workflow_schedule` model for comprehensive management
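Both macros are convenient as run-operations, e.g. to pause automation during maintenance. The task names below are illustrative:

```bash
dbt run-operation alter_gha_tasks \
  --args '{"task_names": "TRIGGER_DAILY,TRIGGER_SCORES", "task_action": "SUSPEND"}'
dbt run-operation alter_all_gha_tasks --args '{"task_action": "RESUME"}'
```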
These macros all work together through the deployment phases (`deploy_chain_phase_1` through `deploy_chain_phase_4`) to establish a fully functional EVM blockchain data pipeline, with proper scheduling, monitoring, and data processing capabilities.
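In practice the phases are driven by the automated make commands mentioned in the overview; an end-to-end run might look like the following (a sketch; see New Project Set Up for the exact targets and what each phase runs):

```bash
# Sketch: phase targets as named in this wiki
make deploy_chain_phase_1
make deploy_chain_phase_2
make deploy_chain_phase_3
make deploy_chain_phase_4
```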
Key Packages
`main_package` Package:

- Core foundational package that provides essential blockchain data models and infrastructure
- Contains fundamental data layers: bronze (raw ingestion), silver (cleaned/transformed), and gold (analytics-ready)
- Includes critical sub-packages such as (subject to change):
  - core - Primary blockchain data models including `fact_blocks`, `fact_transactions`, `fact_event_logs`, `fact_traces`, `ez_token_transfers`, `ez_native_transfers`, and `dim_contracts`
  - streamline - Data ingestion pipeline models for real-time and historical blockchain data collection
  - admin - Administrative models for variable management, RPC node details, and system metadata
  - prices - Price data models including `ez_prices_hourly`, `ez_asset_metadata`, and `dim_asset_metadata`
  - labels - Address labeling and categorization through `dim_labels`
  - observability - Data quality monitoring and pipeline health tracking
  - utils - Utility models and helper functions
  - github_actions - Workflow automation and task management models
`decoder_package` Package:

- Specialized package for smart contract interaction decoding and ABI management
- Enables interpretation of raw blockchain data into human-readable formats
- Contains primary sub-packages such as (subject to change):
  - abis - Contract ABI collection, management, and storage across bronze, silver, gold, and streamline layers
  - decoded_logs - Event log decoding pipeline with models like `ez_decoded_event_logs` that translate raw log data using collected ABIs
`curated_package` Package:

- Higher-level analytics package providing protocol-specific and statistical models
- Built on top of core blockchain data to deliver business intelligence
- Contains specialized sub-packages such as (subject to change):
  - protocols - Protocol-specific models (e.g., Vertex exchange analytics with models like `ez_market_stats`, `ez_account_stats`, `ez_liquidations`, `ez_market_depth_stats`)
  - stats - Blockchain-wide statistical analysis including `ez_core_metrics_hourly` for network health and usage metrics
`balances_package` Package:

- Specialized package for tracking account balances and state changes
- Provides comprehensive native asset and token balances tables
Each package follows the standard dbt layered medallion architecture (bronze → silver → gold) and integrates seamlessly with the fsc-evm variable management system to provide configurable, scalable blockchain data infrastructure.
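Packages are toggled per project through the variable management system; one plausible shape for that configuration is a vars block in `dbt_project.yml` (the flag names below are illustrative placeholders, not fsc-evm's actual variables):

```yaml
# dbt_project.yml (sketch) -- variable names are placeholders
vars:
  MAIN_PACKAGE_ENABLED: true      # always on; provides core models
  DECODER_PACKAGE_ENABLED: true   # enable smart contract decoding
  CURATED_PACKAGE_ENABLED: false  # opt in when protocol models are needed
```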
Deployment Process
Please see New Project Set Up for an in-depth look at the new project deployment process, including step-by-step instructions and developer notes.