The pubkey script alone is not enough to verify a transaction; you'll need:
The pubkey script, to evaluate the scripts
The amount, in order to check that the sum of the inputs is greater than or equal to the sum of the outputs
The index (in the TxOut list) and the tx hash of the transaction being spent
The block height, so that you can apply the correct consensus rules
After gathering all of the above (which should be found in the node's UTXO database), you can start evaluating whether the provided script is valid. This isn't limited to signature verification: it means running/evaluating the script, which includes checking the correctness of the script, the OP codes, the OP count, the script size, and so on. A lot of this verification can be found in the validation.cpp file. This is also how I do it in the Bitcoin.Net library.
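As a rough illustration of the structural checks mentioned above (this is a sketch, not the actual validation.cpp or Bitcoin.Net logic), here is a minimal version of two of them, using two well-known consensus limits: a script may be at most 10,000 bytes, and at most 201 non-push opcodes are counted during evaluation:

```python
# Minimal sketch of two structural script checks (size and op count).
# MAX_SCRIPT_SIZE and MAX_OPS_PER_SCRIPT are real Bitcoin consensus limits;
# the parsing below is illustrative only.
MAX_SCRIPT_SIZE = 10_000   # maximum script size in bytes
MAX_OPS_PER_SCRIPT = 201   # maximum number of counted (non-push) opcodes
OP_16 = 0x60               # only opcodes above OP_16 count toward the op limit

def check_script_limits(script: bytes) -> bool:
    """Return True if the script passes the size and op-count checks."""
    if len(script) > MAX_SCRIPT_SIZE:
        return False
    op_count = 0
    i = 0
    while i < len(script):
        op = script[i]
        if 0x01 <= op <= 0x4b:                     # direct push of `op` bytes
            i += 1 + op
        elif op == 0x4c and i + 1 < len(script):   # OP_PUSHDATA1
            i += 2 + script[i + 1]
        elif op == 0x4d and i + 2 < len(script):   # OP_PUSHDATA2
            i += 3 + int.from_bytes(script[i + 1:i + 3], "little")
        elif op == 0x4e and i + 4 < len(script):   # OP_PUSHDATA4
            i += 5 + int.from_bytes(script[i + 1:i + 5], "little")
        elif op in (0x4c, 0x4d, 0x4e):
            return False                           # truncated push opcode
        else:
            if op > OP_16:
                op_count += 1
                if op_count > MAX_OPS_PER_SCRIPT:
                    return False
            i += 1
    return i == len(script)                        # a push must not overrun the end

# A standard P2PKH scriptPubKey:
# OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
p2pkh = bytes.fromhex("76a914" + "00" * 20 + "88ac")
```

The real evaluation in validation.cpp (via the script interpreter) does much more, of course; this only shows that "script verification" already involves several purely structural checks before any signature is touched.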
Both have coinbase transactions that pay to the same output, d5d27987d2a3dfc724e359870c6644b40e497bdc0589a033220fe15429d88599-0. Also, the first transaction's output is not spent by the time the second transaction is added to the blockchain.
What does this mean? If d5d27987d2a3dfc724e359870c6644b40e497bdc0589a033220fe15429d88599-0 is ever used as an input, how much bitcoin does it contain? 50 or 100?
Given an arbitrary signed raw transaction, how can we easily verify whether all inputs are correctly signed (assuming all inputs exist, are unspent, and the fee is higher than zero)? Bitcoin Core's RPC command testmempoolaccept checks that all inputs are available to be spent in the mempool/blockchain, so it's impossible to test transactions whose parents have not yet been broadcast.
I appreciate this is a rather odd question, so I will try to clarify as much as possible. Please also be assured that this is purely for my own education; I'm not about to rush off and do crazy things in our software on the back of it.
I have a customer requirement for a transaction time of <10ms on a system that is based around an SQL database – in our specific implementation it is Oracle DB. I’m aware that this is not a useful or meaningful requirement, so with my business hat on I’ll be dealing with that. I fully expect that the requirement will be revised to something more useful and achievable.
However, I am curious on a technical level. Could you squeeze transaction time on an SQL DB down below 10ms? Let's be generous and say this is pure SQL execution time: no comms, no abstraction layers, etc. Right now, running select 1 from dual on one of our systems gives a reported execution time of 10-20ms, and I'd assume that's about the simplest query possible. What, if anything, might you do to reduce that time (a) within Oracle/SQL or the server environment, or (b) by making a different tech choice? I'd assume a higher clock speed on the CPU might help, but I wouldn't bet on it.
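Not an Oracle answer, but a point of comparison for (b): an in-process engine with no network round-trip can execute a trivial query far below the 10ms target. A quick sketch using Python's built-in sqlite3 to measure this (the absolute numbers will of course vary by machine):

```python
import sqlite3
import time

# Time the simplest possible query against an in-memory, in-process database,
# i.e. with no client/server round-trip at all.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("SELECT 1")  # warm-up

N = 10_000
start = time.perf_counter()
for _ in range(N):
    cur.execute("SELECT 1").fetchone()
elapsed = time.perf_counter() - start

print(f"average per query: {elapsed / N * 1000:.4f} ms")
```

On typical hardware this reports a few microseconds per query, which suggests that most of the 10-20ms reported above is likely round-trips, client tooling, and measurement overhead rather than raw SQL execution.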
Inspired by the thread (Redeeming a raw transaction step by step example required), I wanted to understand transaction signatures at the byte level. So I created my own Linux scripts that:
generate key pairs
accumulate all the necessary bytes that a raw transaction needs
leverage OpenSSL to sign the transactions
Everything that deals with private keys I do on an offline computer, while I look at the blockchain explorer on the screen of a different, online system. For my test transaction, I wanted to redeem one UTXO of an address that I control on mainnet, and send some satoshis (just above the 'dust' level) to some other addresses using P2PKH. Here is the transaction that I created and signed manually:
… which corresponds to hex representation 01000000010f8d95f4d84a7725f90251c4998f51eb47839fcc282b9bf917e2d61276a67e12010000006a47304402200c441b33dc180ec93e1df07df575399f74112dbf4a0a200151c9c4f1afc7c71e02200fc2fcf42847d5c504f06edef7b8fa81b092e7b8b00169d7f8868a02da6ad124012102dece727c6ddde3140abfcc554ffe50768ab29faa7439c411772fe3c7b93f7cb2ffffffff0220030000000000001976a914b5e5e05c83c470ffd21c3330fb99a6a0101351ad88ac84030000000000001976a914b5d9896cc07a30e1d739097df0c1d47181cbbe7588ac00000000
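As an aside, the txid of a legacy (non-segwit) transaction like this one is simply the double SHA-256 of this serialization, displayed byte-reversed; a minimal sketch using only the standard library:

```python
import hashlib

def txid(raw_hex: str) -> str:
    """Double SHA-256 of the serialized transaction, byte-reversed for display."""
    raw = bytes.fromhex(raw_hex)
    return hashlib.sha256(hashlib.sha256(raw).digest()).digest()[::-1].hex()

# The manually created transaction from above
manual_tx = "01000000010f8d95f4d84a7725f90251c4998f51eb47839fcc282b9bf917e2d61276a67e12010000006a47304402200c441b33dc180ec93e1df07df575399f74112dbf4a0a200151c9c4f1afc7c71e02200fc2fcf42847d5c504f06edef7b8fa81b092e7b8b00169d7f8868a02da6ad124012102dece727c6ddde3140abfcc554ffe50768ab29faa7439c411772fe3c7b93f7cb2ffffffff0220030000000000001976a914b5e5e05c83c470ffd21c3330fb99a6a0101351ad88ac84030000000000001976a914b5d9896cc07a30e1d739097df0c1d47181cbbe7588ac00000000"
print(txid(manual_tx))
```

This should match the txid a block explorer shows for the broadcast transaction.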
Electrum has a menu item to 'load transaction by text', which is what I did with my hex string. It correctly displayed the result as a "transaction unrelated to your wallet", because I had not imported the private key yet. Electrum offered me the option to broadcast the transaction, but I decided not to (*), because I wanted to compare the manually created transaction with the transaction that Electrum would generate when I used its UI features. (*) I later broadcast my manual transaction via blockchain.com. So, in order to be able to compare, I imported the private key into Electrum and created a transaction with identical parameters (using the 'Pay to many' feature). Here is the transaction that Electrum created:
… which corresponds to hex representation 02000000010f8d95f4d84a7725f90251c4998f51eb47839fcc282b9bf917e2d61276a67e12010000006a47304402200dd93baf0a38e4a352a7029c2a37a9bb8ef06bc32ab33fabb8278c6733193e4a02203393c4f5b73345a2a76694de9dff429d65b4de77581601ccc748c642a0dac308012102dece727c6ddde3140abfcc554ffe50768ab29faa7439c411772fe3c7b93f7cb2feffffff0220030000000000001976a914b5e5e05c83c470ffd21c3330fb99a6a0101351ad88ac84030000000000001976a914b5d9896cc07a30e1d739097df0c1d47181cbbe7588ace5b70900
And here comes my question: when I compare the two decoded transactions, I get the following differences:
Can anyone explain these differences to me (or point me to the appropriate BIP), and why does Electrum use them? I'm aware that signatures are not deterministic and must consequently differ, and thus the hash/txid must differ as well. But what about the version, the locktime, and the significance of the e5b70900 suffix after the last scriptPubKey (see the hex representation of the Electrum tx)?
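Two of these fields can be read straight off the serialization: in a legacy transaction, the first four bytes (little-endian) are nVersion and the last four are nLockTime, so the trailing e5b70900 is not part of the last scriptPubKey but the locktime field. A small sketch (plain Python) applied to both hex strings above:

```python
def version_and_locktime(raw_hex: str) -> tuple:
    """Read nVersion (first 4 bytes) and nLockTime (last 4 bytes),
    both little-endian, from a serialized legacy transaction."""
    raw = bytes.fromhex(raw_hex)
    return int.from_bytes(raw[:4], "little"), int.from_bytes(raw[-4:], "little")

manual_tx = "01000000010f8d95f4d84a7725f90251c4998f51eb47839fcc282b9bf917e2d61276a67e12010000006a47304402200c441b33dc180ec93e1df07df575399f74112dbf4a0a200151c9c4f1afc7c71e02200fc2fcf42847d5c504f06edef7b8fa81b092e7b8b00169d7f8868a02da6ad124012102dece727c6ddde3140abfcc554ffe50768ab29faa7439c411772fe3c7b93f7cb2ffffffff0220030000000000001976a914b5e5e05c83c470ffd21c3330fb99a6a0101351ad88ac84030000000000001976a914b5d9896cc07a30e1d739097df0c1d47181cbbe7588ac00000000"
electrum_tx = "02000000010f8d95f4d84a7725f90251c4998f51eb47839fcc282b9bf917e2d61276a67e12010000006a47304402200dd93baf0a38e4a352a7029c2a37a9bb8ef06bc32ab33fabb8278c6733193e4a02203393c4f5b73345a2a76694de9dff429d65b4de77581601ccc748c642a0dac308012102dece727c6ddde3140abfcc554ffe50768ab29faa7439c411772fe3c7b93f7cb2feffffff0220030000000000001976a914b5e5e05c83c470ffd21c3330fb99a6a0101351ad88ac84030000000000001976a914b5d9896cc07a30e1d739097df0c1d47181cbbe7588ace5b70900"

print(version_and_locktime(manual_tx))    # (1, 0)
print(version_and_locktime(electrum_tx))  # (2, 636901) -- 0x0009b7e5
```

Note that 636901 looks like a block height: Electrum sets nLockTime to a recent height as anti-fee-sniping, and for a locktime to be enforced at least one input must have a non-final sequence, which is why the input's nSequence differs as well (feffffff vs ffffffff).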
I've been looking at other questions about fees and batching, but it seems no one has asked whether there is some way to determine the optimal size of a transaction to save on fees (assuming, for simplicity, that all inputs spend from native-segwit UTXOs).
To expand a bit on what I have in mind: I need to pay various amounts to different people (let's say their number can be anywhere between one and infinity), and I guess that by making one "big" transaction paying them all at once, instead of one transaction for each of them, I can save on fees. But is it the case that the more people I add to the same transaction the better, or would the savings diminish at some point as my transaction keeps getting bigger? Is there some model to calculate the "optimal" size for my batched transaction, or does that not make sense?
Here's my best guess for now: adding more people really means adding outputs, which makes the transaction grow linearly (assuming all outputs are pretty standard and roughly the same size). So, assuming the fees are split between the receivers, the more people you can onboard onto the same transaction, the cheaper it gets for everyone.
But at some point the sum of the amounts of the outputs will grow bigger than the one input I added at first, and I'll need to add another input. If I have relatively big outputs to spend, this is probably OK, but if I only have small outputs and/or I'm adding relatively big amounts to the outputs, at some point adding one more output could require adding one or more inputs, making the transaction's growth in size not worth the savings in fees, so I'd rather stop and send the batch at that point.
(EDIT) I came across this article that seems to confirm what I was thinking: all other things being equal, it will always save fees to add a new output.
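For what it's worth, a back-of-the-envelope model makes the intuition above concrete. The vbyte sizes below are commonly cited approximations for native-segwit P2WPKH (roughly 68 vb per input, 31 vb per output, about 11 vb of fixed overhead), not exact values:

```python
# Rough batching model; the vbyte sizes are common P2WPKH approximations,
# not exact consensus figures.
OVERHEAD_VB = 11    # version, locktime, counts, segwit marker/flag (approx.)
INPUT_VB = 68       # one P2WPKH input (approx.)
OUTPUT_VB = 31      # one P2WPKH output (approx.)

def cost_per_recipient(n_outputs: int, n_inputs: int = 1,
                       feerate: float = 10.0) -> float:
    """Fee share per recipient (in sat, at `feerate` sat/vb)
    if the total fee is split evenly among the recipients."""
    vsize = OVERHEAD_VB + n_inputs * INPUT_VB + n_outputs * OUTPUT_VB
    return vsize * feerate / n_outputs

# The fixed overhead and the inputs are amortized over everyone, while each
# extra recipient only adds a flat OUTPUT_VB * feerate, so the per-recipient
# cost falls monotonically as the batch grows.
for n in (1, 2, 10, 100):
    print(n, round(cost_per_recipient(n), 1))  # 1100.0, 705.0, 389.0, 317.9
```

This matches the guess above: per-recipient cost only ever falls as outputs are added, but it falls toward the flat OUTPUT_VB * feerate floor, and each time an extra output forces an extra input the total jumps by INPUT_VB * feerate in one step.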