RPC Optimization Techniques
In-depth technical insights into optimizing Solana RPC usage.
Transaction Optimization Patterns
Follow these best practices for optimizing Solana transaction sending and confirmation. For a detailed guide, visit Helius Transaction Optimization Guide.
Optimize Compute Unit (CU) Usage
Simulate CUs Used: Test your transaction to determine CU usage. Example:
Set a CU Limit: Add a margin (~10%) to the simulated value:
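The two steps above can be sketched with raw JSON-RPC over fetch. This is a hedged sketch: the endpoint URL and API key are placeholders, and the transaction is assumed to already be signed and base64-encoded.

```javascript
// Simulate a transaction to estimate CU usage, then add ~10% headroom.
// RPC_URL is a placeholder for your provider endpoint.
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Ask the node how many compute units the transaction consumes.
async function simulateUnits(base64Tx) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "simulateTransaction",
      params: [base64Tx, { encoding: "base64" }],
    }),
  });
  const { result } = await res.json();
  return result.value.unitsConsumed;
}

// Add a ~10% margin so the limit survives small runtime variations.
function withMargin(simulatedUnits, margin = 0.1) {
  return Math.round(simulatedUnits * (1 + margin));
}

// withMargin(200000) === 220000
```

With @solana/web3.js, the resulting value would be passed to `ComputeBudgetProgram.setComputeUnitLimit({ units })` as an instruction in the transaction.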
Serialize & Encode Transactions
Serialize and Base58 encode your transaction for APIs:
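A minimal sketch of the encoding step. In practice you would get the raw bytes from `tx.serialize()` in @solana/web3.js and encode them with the `bs58` package; the tiny Base58 encoder below is included only so the example is self-contained.

```javascript
// Minimal Base58 encoder for illustration; use the bs58 package in production.
const ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

function encodeBase58(bytes) {
  // Interpret the byte array as one big integer.
  let n = 0n;
  for (const b of bytes) n = n * 256n + BigInt(b);
  // Repeatedly divide by 58, mapping remainders to alphabet characters.
  let out = "";
  while (n > 0n) {
    out = ALPHABET[Number(n % 58n)] + out;
    n /= 58n;
  }
  // Preserve leading zero bytes as '1' characters.
  for (const b of bytes) {
    if (b !== 0) break;
    out = "1" + out;
  }
  return out;
}

encodeBase58(Buffer.from("hello")); // "Cn8eVZg"
```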
Set Priority Fees
Get Fee Estimate: Fetch a recommended priority fee from Helius:
Apply the Fee:
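A hedged sketch of both steps, fetching a recommended fee from Helius's `getPriorityFeeEstimate` method (parameter shape as documented by Helius; the URL, API key, and account list are placeholders):

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Ask Helius for a recommended priority fee for the accounts you touch.
async function getRecommendedFee(accountKeys) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getPriorityFeeEstimate",
      params: [{ accountKeys, options: { recommended: true } }],
    }),
  });
  // The estimate is denominated in micro-lamports per compute unit.
  return (await res.json()).result.priorityFeeEstimate;
}
```

The fee is then applied with @solana/web3.js via `ComputeBudgetProgram.setComputeUnitPrice({ microLamports: fee })`, added as an instruction to the transaction.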
Send & Confirm Transactions
Assemble, serialize, and send your transaction using sendTransaction or sendRawTransaction.
Tip: Set skipPreflight: true to reduce transaction time by ~100ms, but note the loss of pre-validation.
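A hedged sketch of the send step using the raw `sendTransaction` RPC method (the endpoint is a placeholder; the transaction is assumed to be signed and base64-encoded):

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Submit a base64-encoded signed transaction, skipping preflight simulation.
async function sendRawTx(base64Tx) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "sendTransaction",
      params: [base64Tx, {
        encoding: "base64",
        skipPreflight: true, // roughly 100ms faster, but no pre-validation
        maxRetries: 0,       // handle rebroadcasting yourself (see below)
      }],
    }),
  });
  return (await res.json()).result; // the transaction signature
}
```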
Monitor & Rebroadcast
If a transaction isn’t confirmed:
Use getSignatureStatuses to check its status.
Rebroadcast until the blockhash expires.
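A hedged sketch of a status-check-and-rebroadcast loop (the endpoint is a placeholder; `lastValidBlockHeight` comes from the blockhash used to build the transaction):

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Small JSON-RPC helper.
async function rpc(method, params) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

// Resend the raw transaction until it confirms or the blockhash expires.
async function confirmOrRebroadcast(base64Tx, signature, lastValidBlockHeight) {
  for (;;) {
    const status = (await rpc("getSignatureStatuses", [[signature]])).value[0];
    if (status && (status.confirmationStatus === "confirmed" ||
                   status.confirmationStatus === "finalized")) {
      return status;
    }
    // Once the chain passes lastValidBlockHeight, the blockhash is dead.
    const blockHeight = await rpc("getBlockHeight", []);
    if (blockHeight > lastValidBlockHeight) throw new Error("blockhash expired");
    await rpc("sendTransaction", [base64Tx, { encoding: "base64", skipPreflight: true }]);
    await new Promise((r) => setTimeout(r, 2000)); // pause before re-checking
  }
}
```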
For a comprehensive breakdown, visit our guide.
When to Use Jito Tips
Key Facts
Bundles: Groups of up to 5 transactions executed sequentially and atomically via Jito.
Tips: Incentives for block builders to execute bundles.
Auctions: Bundles compete via out-of-protocol auctions every 200ms based on tip amounts.
Validators: Only Jito-Solana client validators (currently >90% of stake) can process bundles.
Prioritization: Tips, compute efficiency, and account locking patterns determine bundle order.
Use Cases: Ideal for landing transactions at the top of a block.
When to Use Jito Tips
MEV Opportunities: Arbitrage trading, liquidation transactions, front-running protection, specific transaction ordering.
Time-Critical DeFi Operations: Token launches, high-volatility trades, NFT mints.
High-Stakes Transactions: Immediate settlement or time-sensitive interactions.
Examples of Jito Usage
Arbitrage Trading: A trader identifies a price difference between two decentralized exchanges (DEXs). Using Jito tips ensures their arbitrage transaction is processed at the top of the block, securing profit before others can react.
NFT Minting: During a high-demand mint, competition is fierce. Adding Jito tips guarantees priority placement in the block, increasing the chances of a successful mint.
Liquidation Transactions: In lending protocols, liquidations can be time-critical. Jito tips allow the liquidator’s transaction to execute ahead of others, ensuring timely liquidation and profit capture.
Best Practices for Jito Tips
Use Tips for Priority: Apply Jito tips only when transaction timing and order are critical.
Avoid Overusing Tips: Routine actions like token transfers or minor interactions typically don’t benefit from Jito tips.
Optimize for Efficiency:
Assess the urgency and value of the transaction.
Ensure compute resources and account locking patterns are optimal to avoid unnecessary tip spending.
Monitor Network Conditions: High congestion may necessitate higher tips to compete effectively.
Evaluation Checklist Before Using Jito Tips
Is the transaction time-sensitive?
Does the transaction require specific ordering in the block?
Are the accounts being accessed under high contention (hot state)?
What is the potential ROI compared to the cost of the tip?
Are current network conditions favorable for using Jito?
JavaScript/TypeScript Optimization Tips for Solana
Lazy Loading
Load components only when needed to reduce initial load time:
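A minimal sketch using a dynamic `import()`, cached after first use (the module path is a hypothetical placeholder):

```javascript
// Load a heavy module only when it is first needed, then cache it.
let heavyModule;

async function getHeavyModule() {
  if (!heavyModule) {
    heavyModule = await import("./heavy-module.js"); // placeholder path
  }
  return heavyModule;
}
```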
Optimized Loops
Use for loops instead of forEach for better performance with large datasets:
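For example, summing a large array with a plain for loop avoids the per-element callback invocation that forEach incurs:

```javascript
// One million hypothetical balances (values cycle 0..9).
const balances = Array.from({ length: 1_000_000 }, (_, i) => i % 10);

// Plain index-based loop: no callback allocation or invocation per element.
let total = 0;
for (let i = 0; i < balances.length; i++) {
  total += balances[i];
}

// total === 4_500_000
```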
Use Map or Set for Lookups
For frequent lookups, Map and Set are faster than arrays:
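For example (the addresses and balances are hypothetical):

```javascript
// O(1) membership checks instead of Array.prototype.includes (O(n)).
const owners = ["addr1", "addr2", "addr3"];
const ownerSet = new Set(owners);
ownerSet.has("addr2"); // true

// O(1) keyed lookups instead of Array.prototype.find (O(n)).
const balancesByOwner = new Map([
  ["addr1", 10],
  ["addr2", 25],
]);
balancesByOwner.get("addr2"); // 25
```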
Prefer const
const enables better optimizations than var or let:
Manage Memory
Clear unused intervals or subscriptions to avoid leaks:
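For example, keep handles to timers and subscriptions so they can be released:

```javascript
// Keep the handle so the timer can be cleared later.
const intervalId = setInterval(() => {
  // ...poll something...
}, 1000);

function cleanup() {
  clearInterval(intervalId); // stop the timer so it can be garbage-collected
  // For web3.js subscriptions, the equivalent is
  // connection.removeAccountChangeListener(subscriptionId).
}

cleanup();
```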
Batch or Debounce API Calls
Minimize redundant RPC calls:
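For example, a small debounce helper collapses a burst of calls into a single trailing invocation (the `fetchBalances` name in the comment is a hypothetical consumer):

```javascript
// Collapse bursts of calls into one trailing invocation after waitMs of quiet.
function debounce(fn, waitMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// e.g. const debouncedFetch = debounce(fetchBalances, 250);
// Rapid UI events then trigger at most one RPC call per 250ms of quiet.
```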
Simplify Object Handling
Avoid deep cloning of large objects; use shallow copies:
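For example, a spread-based shallow copy copies top-level fields while sharing nested data (the account shape here is a simplified illustration):

```javascript
const accountInfo = { lamports: 1000, owner: "SomeProgramId", data: [1, 2, 3] };

// Shallow copy: top-level fields are copied, nested objects stay shared.
const copy = { ...accountInfo, lamports: 2000 };

copy.lamports;                  // 2000
accountInfo.lamports;           // 1000 (original unchanged)
copy.data === accountInfo.data; // true: nested data is shared, not cloned
```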
Optimize JSON Handling
For large payloads, use libraries like json-bigint:
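To keep this example dependency-free, the sketch below demonstrates the problem json-bigint solves and a stdlib-only workaround (quoting long integer literals before parsing); the regex heuristic is an illustration, not production-grade:

```javascript
// JS numbers lose precision above Number.MAX_SAFE_INTEGER (2^53 - 1).
const raw = '{"lamports":9007199254740993}';
JSON.parse(raw).lamports; // 9007199254740992, off by one!

// Workaround: quote long integer literals, then revive them as BigInt.
// json-bigint does this robustly; this regex version is only a sketch.
function parseBigInts(json) {
  const quoted = json.replace(/:\s*(\d{16,})/g, ':"$1"');
  return JSON.parse(quoted, (_, v) =>
    typeof v === "string" && /^\d{16,}$/.test(v) ? BigInt(v) : v
  );
}

parseBigInts(raw).lamports; // 9007199254740993n (exact)
```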
Why:
Better performance and scalability
Reduced resource usage
Cleaner, maintainable code
Data Transfer Optimizations
Base64 Is Faster than Base58
For serialized transaction data on Solana, Base64 is faster and more efficient than Base58. Base64 avoids complex calculations and is widely supported by Solana APIs.
Use Base64 Encoding:
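For example, with Node's built-in Buffer:

```javascript
// Base64 round-trip with Buffer: no expensive base-conversion arithmetic.
const txBytes = Buffer.from([1, 2, 3, 255]); // stand-in for serialized tx bytes

const encoded = txBytes.toString("base64");    // "AQID/w=="
const decoded = Buffer.from(encoded, "base64");

decoded.equals(txBytes); // true
```

Pass the base64 string to RPC methods with `encoding: "base64"` in the options object.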
Why:
Faster encoding/decoding
Native support in APIs
Ideal for performance-critical tasks
Efficient Token Balance Lookup
Instead of:
In the above approach, you make one call to fetch the token accounts, then make multiple additional calls—one per token account—to retrieve balances. This approach quickly becomes expensive for wallets with many token accounts (e.g., NFTs or multiple SPL tokens).
Use:
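A hedged sketch of the single-call approach using raw JSON-RPC (the endpoint and owner address are placeholders; the program id is the standard SPL Token program):

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";
const TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA";

// One call returns every token account for the owner, with balances parsed.
async function getTokenBalances(owner) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getTokenAccountsByOwner",
      params: [owner, { programId: TOKEN_PROGRAM_ID }, { encoding: "jsonParsed" }],
    }),
  });
  const { result } = await res.json();
  // Balances are already parsed: no per-account getTokenAccountBalance calls.
  return result.value.map((a) => ({
    mint: a.account.data.parsed.info.mint,
    amount: a.account.data.parsed.info.tokenAmount.uiAmount,
  }));
}
```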
By requesting jsonParsed data in a single RPC call, you eliminate the need for separate getTokenAccountBalance calls for each account. This drastically reduces both the round-trip overhead and the total data transferred (from ~2 KB per account to ~200 B).
Why:
Fewer RPC calls: Collapses N calls into 1.
Less data: Fetching parsed token data directly avoids redundant information.
Smart Program Account Selection
Instead of:
This strategy downloads the entire dataset and filters it on the client side, which can be slow and expensive.
Use:
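A hedged sketch of server-side filtering. The sizes and offsets shown are for SPL token accounts (165 bytes, owner field at offset 32); the endpoint is a placeholder:

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";
const TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA";

// Let the RPC node filter and slice accounts before sending anything back.
async function getOwnedTokenAccounts(ownerAddress) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getProgramAccounts",
      params: [TOKEN_PROGRAM_ID, {
        encoding: "base64",
        filters: [
          { dataSize: 165 },                               // token accounts only
          { memcmp: { offset: 32, bytes: ownerAddress } }, // owned by this wallet
        ],
        dataSlice: { offset: 0, length: 64 },              // mint + owner fields only
      }],
    }),
  });
  return (await res.json()).result;
}
```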
Using server-side filters (dataSize and memcmp) and slicing the data significantly reduces the volume of data your application processes locally.
Why:
Reduced data transfer: Server-side filtering avoids downloading unneeded data.
Better performance: Less CPU usage on the client, fewer bytes over the wire.
Better Transaction History Search
Instead of:
Calling getTransaction for each signature quickly adds up to hundreds or thousands of RPC calls.
Use:
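A hedged sketch using a JSON-RPC batch (an array of requests in one HTTP round trip), which most RPC providers accept; with @solana/web3.js, `Connection.getTransactions` offers a similar bulk helper. The endpoint is a placeholder:

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Fetch many transactions in one HTTP request via a JSON-RPC batch.
async function getTransactionsBatch(signatures) {
  const batch = signatures.map((signature, id) => ({
    jsonrpc: "2.0",
    id,
    method: "getTransaction",
    params: [signature, { maxSupportedTransactionVersion: 0 }],
  }));
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch), // one request instead of N
  });
  const replies = await res.json();
  // Batch responses may arrive out of order; restore the original order by id.
  return replies.sort((a, b) => a.id - b.id).map((r) => r.result);
}
```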
By batching all signatures into a single getTransactions call, you drastically reduce total latency.
Why:
Fewer round trips: One request instead of 1000.
Server-side optimization: The RPC node handles bulk processing more efficiently than many small requests.
Real-Time Account Monitoring
Instead of:
A polling approach can waste both bandwidth and compute resources if the account rarely changes.
Use:
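A hedged sketch of accountSubscribe over a raw WebSocket (a global `WebSocket` is assumed, as in browsers or recent Node; the endpoint is a placeholder). With @solana/web3.js, the equivalent is `connection.onAccountChange`:

```javascript
const WS_URL = "wss://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Receive a push notification whenever the account actually changes.
function watchAccount(address, onChange) {
  const ws = new WebSocket(WS_URL);
  ws.addEventListener("open", () => {
    ws.send(JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "accountSubscribe",
      params: [address, { encoding: "jsonParsed", commitment: "processed" }],
    }));
  });
  ws.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data);
    if (msg.method === "accountNotification") {
      onChange(msg.params.result.value); // fires only on real changes
    }
  });
  return ws; // call ws.close() to unsubscribe
}
```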
Using WebSockets (onAccountChange) pushes updates to your application in near real time and eliminates repetitive polling.
Why:
Lower latency: Changes are delivered as they happen, rather than on a fixed schedule.
Less network overhead: You only receive data when it changes, rather than every second.
Block Info Streaming
Instead of:
Polling for each new block can become costly over time.
Use:
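A hedged sketch using slotSubscribe (a global `WebSocket` is assumed; the endpoint is a placeholder):

```javascript
const WS_URL = "wss://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Get notified of every new slot instead of polling for blocks.
function onNewSlot(callback) {
  const ws = new WebSocket(WS_URL);
  ws.addEventListener("open", () => {
    ws.send(JSON.stringify({ jsonrpc: "2.0", id: 1, method: "slotSubscribe", params: [] }));
  });
  ws.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data);
    if (msg.method === "slotNotification") callback(msg.params.result.slot);
  });
  return ws;
}

// In the callback, fetch only what you need via getBlock, e.g. with
// transactionDetails: "signatures" instead of "full".
```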
By subscribing to slot changes, your application gets block data in real time without constant polling.
Why:
Eliminates polling: New data is pushed as soon as the RPC node observes a new block.
Finer control: You can decide which transaction details to fetch (signatures, full, etc.).
Advanced Query Patterns
Token Holder Breakdown
Instead of:
This approach unnecessarily downloads data for every token account in existence.
Use:
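A hedged sketch that fetches only the accounts of one mint: the mint field sits at offset 0 of a 165-byte SPL token account, so a memcmp filter on it isolates the holders. The endpoint and mint are placeholders:

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";
const TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA";

// Fetch only the token accounts of one mint, sliced down to owner + amount.
async function getHolders(mintAddress) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getProgramAccounts",
      params: [TOKEN_PROGRAM_ID, {
        encoding: "base64",
        filters: [
          { dataSize: 165 },                            // token accounts only
          { memcmp: { offset: 0, bytes: mintAddress } }, // this mint only
        ],
        dataSlice: { offset: 32, length: 40 },          // owner (32B) + amount (8B)
      }],
    }),
  });
  return (await res.json()).result;
}
```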
Why:
Targeted queries: Only fetch accounts for the specified mint.
Significant bandwidth savings: Up to a 99% reduction in data transfer.
Program State Analysis
Instead of:
Filtering locally means downloading a large dataset first.
Use:
Why:
Reduced data transfer: Leverage the RPC node to filter by dataSize and memcmp.
Faster client processing: Only download essential fields via dataSlice.
Validator Performance Check
Instead of:
Fetching block production metrics individually is inefficient.
Use:
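A hedged sketch using a single getBlockProduction call (the endpoint is a placeholder; pass an identity to narrow to one validator):

```javascript
const RPC_URL = "https://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// One call returns leader slots and blocks produced, keyed by validator identity.
async function getBlockProductionStats(identity) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "getBlockProduction",
      params: identity ? [{ identity }] : [],
    }),
  });
  // value.byIdentity maps identity -> [leaderSlots, blocksProduced].
  return (await res.json()).result.value.byIdentity;
}
```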
Why:
One request: Retrieves aggregated block production stats in bulk.
Fewer network calls: Lowers overhead and speeds up data processing.
Account Updates Analysis
Instead of:
Re-fetching historical data for every transaction can be slow and memory-intensive.
Use:
Why:
Streaming approach: Capture state changes as they occur.
Less data: Only fetch slices of the account if you need partial info.
Memory Optimization Patterns
Processing Large Data
Instead of:
Processing thousands of accounts at once can lead to out-of-memory errors.
Use:
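A minimal sketch of the chunking pattern (`handler` stands in for whatever per-batch work you do, such as a batched RPC call):

```javascript
// Split a large list into fixed-size chunks.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Process chunks sequentially so only `size` items are in flight at a time.
async function processInChunks(items, size, handler) {
  const results = [];
  for (const part of chunk(items, size)) {
    results.push(...(await handler(part)));
  }
  return results;
}

// chunk([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```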
Chunking ensures that you only load manageable subsets of data at a time.
Why:
Prevents OOM: Keeps memory usage in check by processing smaller batches.
Improved throughput: Parallel processing of chunks can speed up overall operation.
Transaction Graph Analysis
Instead of:
Sequentially processing a large number of transactions can be slow.
Use:
Why:
Faster: Batching transactions reduces overhead.
Controlled memory usage: Large sets are split into smaller requests.
Managing Program Buffers
Instead of:
Holding all buffer data in memory can become very large, very quickly.
Use:
Why:
Lazy loading: Only fetch buffer contents when needed.
90% reduction in initial memory usage: You avoid loading all buffers at once.
Token Account Reconciliation
Instead of:
Fetching all token accounts globally is often unnecessary.
Use:
Why:
Targeted queries: Only query token accounts for known owners.
Less memory usage: An 80% reduction compared to pulling every token account on chain.
Compressed NFT Indexing
Instead of:
If you have a large number of compression trees, this can be a bottleneck.
Use:
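A hedged sketch of the parallel-with-timeout pattern (`indexOne` is a hypothetical per-tree indexing function):

```javascript
// Reject a promise if it does not settle within ms.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)),
  ]);
}

// Index all trees in parallel; a slow or stuck tree becomes a 'rejected'
// entry in the results instead of blocking the entire flow.
async function indexTrees(treeIds, indexOne, timeoutMs = 5000) {
  return Promise.allSettled(
    treeIds.map((id) => withTimeout(indexOne(id), timeoutMs))
  );
}
```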
Why:
Parallel execution: Processes multiple trees simultaneously.
Timeouts: Prevents tasks from blocking the entire flow.
Network Optimization Patterns
Smart Retry Logic
Instead of:
A rigid retry pattern can fail in scenarios with variable network conditions or rate limits.
Use:
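A sketch of adaptive exponential backoff with jitter; the doubled delay on rate-limit errors is one reasonable policy, not the only one:

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Retry fn with exponential backoff; back off harder on rate limits.
async function withRetry(fn, { retries = 5, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // give up after `retries` failures
      const rateLimited = String(err).includes("429");
      const delay = baseMs * 2 ** attempt * (rateLimited ? 2 : 1);
      await sleep(delay + Math.random() * 100); // jitter avoids thundering herd
    }
  }
}

// e.g. const tx = await withRetry(() => rpcCall("getTransaction", [sig]));
```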
Why:
Adaptive backoff: Dynamically extends wait time for repeated failures.
Handles rate limits: Checks for specific errors (e.g., "429 Too Many Requests").
WebSocket Optimization
Instead of:
Too many individual subscriptions can strain the WebSocket connection.
Use:
Why:
Fewer connections: Consolidates multiple subscriptions into one.
Lower overhead: Reduces the complexity of maintaining many WebSocket channels.
Custom Data Feeds
Instead of:
Receiving updates for every account in a program can flood your application with unneeded data.
Use:
Why:
Reduced bandwidth: Filter out accounts you don’t care about.
Less processing: Limits the data you must handle on each event.
Transaction Monitoring
Instead of:
Polling for signatures can lead to duplicate checks and wasted requests.
Use:
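A hedged sketch of push-based monitoring via logsSubscribe (a global `WebSocket` is assumed; the endpoint is a placeholder):

```javascript
const WS_URL = "wss://mainnet.helius-rpc.com/?api-key=<API_KEY>";

// Get each new signature pushed as soon as logs mentioning the address land.
function watchLogs(address, onSignature) {
  const ws = new WebSocket(WS_URL);
  ws.addEventListener("open", () => {
    ws.send(JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "logsSubscribe",
      params: [{ mentions: [address] }, { commitment: "confirmed" }],
    }));
  });
  ws.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data);
    if (msg.method === "logsNotification") {
      onSignature(msg.params.result.value.signature);
    }
  });
  return ws;
}
```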
Why:
Push-based: Gets new signatures immediately via logs.
Less duplication: Eliminates repeated polling intervals.
Best Practices
Use Appropriate Commitment Levels
processed
for WebSocket subscriptions.confirmed
for general queries.finalized
only when absolute certainty is required.
Implement Robust Error Handling
Use exponential backoff for retries.
Handle rate limit (HTTP 429) errors gracefully.
Validate responses to avoid processing incomplete or corrupted data.
Optimize Data Transfer
Utilize dataSlice wherever possible to limit payload size.
Leverage server-side filtering (memcmp and dataSize).
Choose the most efficient encoding option (base64, jsonParsed, etc.).
Manage Resources
Batch operations to reduce overhead.
Cache results to avoid redundant lookups.
Bundle multiple instructions into a single transaction where applicable.
Monitor Performance
Track RPC usage and latency.
Monitor memory consumption for large dataset processing.
Log and analyze errors to detect bottlenecks.
Circuit Breakers & Throttling
Employ circuit breakers to halt or pause operations under excessive error rates.
Throttle requests to respect rate limits and ensure stable performance.
By following these techniques and best practices, you can significantly reduce operational costs, enhance real-time responsiveness, and scale more effectively on Solana.