pub struct SparkSdk<S: SparkSigner + Send + Sync = DefaultSigner> { /* private fields */ }
SparkSdk is the main struct for the Spark wallet
Implementations§
impl<S: SparkSigner + Send + Sync> SparkSdk<S>
pub fn cleanup(&self) -> Result<(), SparkSdkError>
impl SparkSdk
pub async fn cooperative_exit(&self) -> Result<(), SparkSdkError>
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn create_tree(
    &self,
    deposit_transaction: Option<Transaction>,
    parent_node: Option<Arc<RwLock<TreeNode>>>,
    vout: u32,
    split_level: u32,
    parent_signing_public_key: Vec<u8>,
) -> Result<CreateTreeSdkResponse, SparkSdkError>
Creates a new tree of deposit addresses from either a deposit transaction or parent node.
This function handles the creation of a new tree of deposit addresses by:
- Generating the deposit address tree structure based on the split level
- Finalizing the tree creation by signing all nodes and registering with the Spark network
§Arguments
- deposit_transaction - Optional Bitcoin transaction containing the deposit UTXO
- parent_node - Optional parent tree node to create child nodes from
- vout - The output index in the deposit transaction or parent node to use
- split_level - The number of levels to split the tree into (determines the number of leaves)
- parent_signing_public_key - The public key of the parent node or deposit UTXO
§Returns
- Ok(CreateTreeSdkResponse) - Contains the finalized tree response, including data for all created nodes, both branch and leaf
- Err(SparkSdkError) - If there was an error during tree creation
§Errors
Returns SparkSdkError if:
- Neither deposit transaction nor parent node is provided
- Failed to generate deposit addresses
- Failed to finalize tree creation with signatures
- Network errors when communicating with Spark operators
§Example
let deposit_tx = Transaction::default(); // Your deposit transaction
let split_level = 2; // Creates 4 leaf nodes
let parent_pubkey = vec![/* public key bytes */];
let tree = sdk.create_tree(
Some(deposit_tx),
None,
0,
split_level,
parent_pubkey
).await?;
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn generate_deposit_address(
    &self,
) -> Result<GenerateDepositAddressSdkResponse, SparkSdkError>
Generates a new deposit address for receiving funds into the Spark wallet.
This function handles the generation of a new deposit address by:
- Creating a new signing keypair for the deposit address
- Requesting a deposit address from the Spark network
- Validating the returned address and proof of possession
§Returns
- Ok(GenerateDepositAddressSdkResponse) - Contains the validated deposit address and signing public key
- Err(SparkSdkError) - If there was an error during address generation
§Errors
Returns SparkSdkError if:
- Failed to generate new signing keypair
- Network errors when communicating with Spark operators
- Address validation fails (e.g. invalid proof of possession)
§Example
let deposit_address = sdk.generate_deposit_address().await?;
println!("New deposit address: {}", deposit_address.deposit_address.address);
pub async fn claim_deposits(&self) -> Result<(), SparkSdkError>
Claims any pending deposits for this wallet by:
- Querying unused deposit addresses from Spark
- Checking the mempool for transactions to those addresses
- Finalizing any found deposits by creating tree nodes
§Errors
Returns SparkSdkError if:
- Failed to connect to Spark service
- Failed to query mempool
- Failed to finalize deposits
§Example
sdk.claim_deposits().await?;
pub async fn finalize_deposit(
    &self,
    signing_pubkey: Vec<u8>,
    deposit_tx: Transaction,
    vout: u32,
) -> Result<Vec<TreeNode>, SparkSdkError>
Finalizes a deposit by creating a tree node and transferring it to self
§Arguments
- signing_pubkey - The public key used for signing
- deposit_tx - The Bitcoin transaction containing the deposit
- vout - The output index in the transaction
§Errors
Returns SparkSdkError if:
- Failed to create tree node
- Failed to transfer deposits
§Returns
Returns an empty vector of TreeNodes on success
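§Example
A hedged usage sketch for finalize_deposit, inferred from the signature and description above; the transaction, pubkey bytes, and vout are placeholders rather than values from this crate's docs:

```rust
// Hypothetical inputs: a confirmed deposit transaction, the signing pubkey
// used when its deposit address was generated, and the deposit's vout.
let deposit_tx = Transaction::default(); // placeholder transaction
let signing_pubkey = vec![/* 33-byte compressed public key bytes */];
let nodes = sdk.finalize_deposit(signing_pubkey, deposit_tx, 0).await?;
```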
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn new(
    network: SparkNetwork,
    signer: Arc<S>,
) -> Result<Self, SparkSdkError>
Creates a new instance of the Spark SDK.
This is the main entry point for interacting with the Spark protocol. It initializes the SDK with the provided network configuration, signer implementation, and optional data storage path.
§Arguments
- network - The Spark network to connect to (e.g. Regtest or Mainnet)
- signer - Implementation of the SparkSigner trait wrapped in an Arc for thread-safe access
- data_path - Optional path to store wallet data. If None, defaults to WALLET_DB_PATH
§Returns
Returns a Result containing either:
- The initialized SparkSdk instance
- A SparkSdkError if initialization fails
§Examples
use spark_wallet_sdk::{SparkSdk, SparkNetwork, DefaultSigner};
use std::sync::Arc;
use parking_lot::RwLock;
async fn init_sdk() {
let signer = Arc::new(RwLock::new(DefaultSigner::new().unwrap()));
let sdk = SparkSdk::new(
SparkNetwork::Regtest,
signer,
None
).await.unwrap();
}
pub fn get_identity_public_key(&self) -> &[u8]
Returns the identity public key of the wallet.
The identity public key is a 33-byte compressed secp256k1 public key that uniquely identifies this wallet instance. It is used for authentication and authorization with Spark operators. This key is generated when the wallet is first created and remains constant throughout the wallet’s lifetime.
The identity public key serves several purposes:
- Authenticates the wallet with Spark operators during API calls
- Used in deposit address generation to prove ownership
- Required for validating operator signatures
- Helps prevent unauthorized access to wallet funds
§Returns
A byte slice containing the 33-byte compressed secp256k1 public key in SEC format. The first byte is either 0x02 or 0x03 (the parity), followed by the 32-byte X coordinate.
§Examples
let signer = Arc::new(RwLock::new(DefaultSigner::new().unwrap()));
let sdk = SparkSdk::new(SparkNetwork::Regtest, signer, None).await.unwrap();
let pubkey = sdk.get_identity_public_key();
assert_eq!(pubkey.len(), 33);
pub fn get_network(&self) -> SparkNetwork
Returns the Bitcoin network that this wallet is connected to.
The network determines which Spark operators the wallet communicates with and which Bitcoin network (mainnet or regtest) is used for transactions.
§Network Types
- SparkNetwork::Mainnet - Production Bitcoin mainnet environment
- SparkNetwork::Regtest - Testing environment using Lightspark’s regtest network
The network is set when creating the wallet and cannot be changed after initialization. All transactions and addresses will be created for the configured network.
§Returns
Returns a SparkNetwork enum indicating whether this is a mainnet or regtest wallet.
§Examples
let signer = Arc::new(RwLock::new(DefaultSigner::new().unwrap()));
let sdk = SparkSdk::new(SparkNetwork::Regtest, signer, None).await.unwrap();
assert_eq!(sdk.get_network(), SparkNetwork::Regtest);
impl<S: SparkSigner + Send + Sync> SparkSdk<S>
pub fn get_available_leaves_count(&self) -> Result<u32, SparkSdkError>
Returns the count of available leaves in the wallet.
An available leaf is one that has a status of LeafNodeStatus::Available, meaning it can be
used for transfers or other operations. This excludes leaves that are locked, pending, or in other states.
§Returns
- Ok(u32) - The number of available leaves
- Err(SparkSdkError) - If there was an error accessing the leaf manager
§Example
let available_count = sdk.get_available_leaves_count().unwrap();
println!("Number of available leaves: {}", available_count);
pub fn get_btc_balance(&self) -> Result<u64, SparkSdkError>
Returns the balance of the wallet in satoshis.
This function calculates the total value of all available leaves in the wallet.
§Returns
- Ok(u64) - The total balance in satoshis
- Err(SparkSdkError) - If there was an error accessing the leaf manager
§Example
let balance = sdk.get_btc_balance().unwrap();
println!("Balance: {}", balance);
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn pay_lightning_invoice(&self, invoice: String, amount_sats: u64) -> Result<String, SparkSdkError>
pub async fn create_lightning_invoice(&self, amount_sats: i64, memo: Option<String>, expiry_seconds: Option<i32>) -> Result<String, SparkSdkError>
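§Example
These two Lightning functions carry no documentation above; going only by their signatures, usage might look like the following sketch (the amounts, memo, and expiry are illustrative assumptions, not values from this crate's docs):

```rust
// Create a Lightning invoice for 50_000 sats with an optional memo
// and a hypothetical one-hour expiry.
let invoice = sdk
    .create_lightning_invoice(50_000, Some("test invoice".to_string()), Some(3600))
    .await?;

// Pay a received invoice, stating its amount in sats.
let payment_result = sdk.pay_lightning_invoice(invoice, 50_000).await?;
```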
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn split(&self, target_value: u64) -> Result<(), SparkSdkError>
Splits a leaf node into two nodes of specified target value.
This function allows splitting a leaf node into two nodes when the SSP (Spark Service Provider) is down. The split operation is only permitted during SSP downtime as a fallback mechanism.
§Arguments
target_value- The target value in satoshis for one of the split nodes. Must be greater than the dust amount.
§Returns
- Ok(()) - If the split operation succeeds
- Err(SparkSdkError) - If:
  - The SSP is online (splits not allowed)
  - The target value is below the dust amount
  - No suitable leaf node is available for splitting
  - Other errors occur during the split process
§Example
// Split a leaf node into two nodes, one with 10000 sats
sdk.split(10000).await?;
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn request_leaves_swap(&self, target_amount: u64, leaf_ids: Vec<String>) -> Result<(), SparkSdkError>
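§Example
request_leaves_swap has no documentation above; based solely on the signature, a hedged usage sketch (the target amount and leaf IDs below are placeholders):

```rust
// Request a swap of the listed leaves for leaves matching `target_amount`.
// Leaf IDs here are hypothetical placeholders.
let leaf_ids = vec!["leaf-id-1".to_string(), "leaf-id-2".to_string()];
sdk.request_leaves_swap(100_000, leaf_ids).await?;
```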
impl<S: SparkSigner + Send + Sync + Clone + 'static> SparkSdk<S>
pub async fn query_pending_transfers(
    &self,
) -> Result<Vec<Transfer>, SparkSdkError>
Queries all pending transfers where the current user is the receiver.
This function retrieves all pending transfers that are waiting to be accepted by the current user. A pending transfer represents funds that have been sent to the user but have not yet been claimed. The transfers remain in a pending state until the receiver claims them, at which point the funds become available in their wallet.
§Returns
- Ok(Vec<Transfer>) - A vector of pending Transfer objects if successful
- Err(SparkSdkError) - If there was an error querying the transfers
§Example
let pending = sdk.query_pending_transfers().await?;
for transfer in pending {
println!("Pending transfer: {} satoshis", transfer.amount);
}
pub async fn transfer(
    &self,
    amount: u64,
    receiver_identity_pubkey: Vec<u8>,
) -> Result<String, SparkSdkError>
Initiates a transfer of funds to another user.
This function handles the process of transferring funds from the current user’s wallet to another user, identified by their public key. The transfer process involves several steps:
- Selecting appropriate leaves (UTXOs) that contain sufficient funds for the transfer
- Locking the selected leaves to prevent concurrent usage
- Generating new signing keys for the transfer
- Creating and signing the transfer transaction
- Removing the used leaves from the wallet
The transfer remains in a pending state until the receiver claims it. The expiry time is set to
30 days by default (see DEFAULT_TRANSFER_EXPIRY).
§Arguments
- amount - The amount to transfer in satoshis. Must be greater than the dust limit, and the wallet must have a leaf with exactly this amount.
- receiver_identity_pubkey - The public key identifying the receiver of the transfer. This should be the receiver’s identity public key, not a regular Bitcoin public key.
§Returns
- Ok(String) - The transfer ID if successful. This ID can be used to track the transfer status.
- Err(SparkSdkError) - If the transfer fails. Common error cases include:
  - No leaf with the exact amount available
  - Failure to lock leaves
  - Failure to generate new signing keys
  - Network errors when communicating with Spark operators
§Example
let amount = 100_000; // 100k satoshis
let receiver_pubkey = vec![/* receiver's public key bytes */];
let transfer_id = sdk.transfer(amount, receiver_pubkey).await?;
println!("Transfer initiated with ID: {}", transfer_id);
§Notes
Currently, the leaf selection algorithm only supports selecting a single leaf with the exact transfer amount. Future versions will support combining multiple leaves and handling change outputs.
pub async fn transfer_leaf_ids(&self, leaf_ids: Vec<String>, receiver_identity_pubkey: Vec<u8>) -> Result<String, SparkSdkError>
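§Example
transfer_leaf_ids has no accompanying docs; judging from the signature, it transfers specific leaves by ID rather than by amount. A hedged sketch (the IDs and pubkey bytes are placeholders):

```rust
// Send specific leaves, identified by their IDs, to a receiver.
let leaf_ids = vec!["leaf-id-1".to_string()];
let receiver_pubkey = vec![/* receiver's identity public key bytes */];
let transfer_id = sdk.transfer_leaf_ids(leaf_ids, receiver_pubkey).await?;
```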
pub async fn claim_transfer(
    &self,
    transfer: Transfer,
) -> Result<(), SparkSdkError>
Claims a pending transfer that was sent to this wallet.
This function processes a pending transfer and claims the funds into the wallet. It performs the following steps:
- Verifies the transfer is in the correct state (SenderKeyTweaked)
- Verifies and decrypts the leaf private keys using the wallet’s identity key
- Generates new signing keys for the claimed leaves
- Finalizes the transfer by:
- Tweaking the leaf keys
- Signing refund transactions
- Submitting the signatures to the Spark network
- Storing the claimed leaves in the wallet’s database
§Arguments
- transfer - The pending transfer to claim; must be in SenderKeyTweaked status
§Returns
- Ok(()) - If the transfer was successfully claimed
- Err(SparkSdkError) - If there was an error during the claim process
§Errors
Returns SparkSdkError::InvalidInput if:
- The transfer is not in SenderKeyTweaked status
May also return other SparkSdkError variants for network, signing or storage errors.
§Example
let pending = sdk.query_pending_transfers().await?;
for transfer in pending {
sdk.claim_transfer(transfer).await?;
}
pub async fn claim_transfers(&self) -> Result<(), SparkSdkError>
Trait Implementations§
Auto Trait Implementations§
impl<S> Freeze for SparkSdk<S>
impl<S = DefaultSigner> !RefUnwindSafe for SparkSdk<S>
impl<S> Send for SparkSdk<S>
impl<S> Sync for SparkSdk<S>
impl<S> Unpin for SparkSdk<S>
impl<S = DefaultSigner> !UnwindSafe for SparkSdk<S>
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps T in a tonic::Request
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
fn with_current_subscriber(self) -> WithDispatch<Self>
impl<T> ErasedDestructor for T where T: 'static
impl<T> MaybeSendSync for T
Layout§
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...) attributes. Please see the Rust Reference's “Type Layout” chapter for details on type layout guarantees.
Size: 40 bytes