ECDSA In Bitcoin

Digital signatures are considered the foundation of online sovereignty. The advent of public-key cryptography in 1976 paved the way for the creation of a global communications tool – the Internet, and a completely new form of money – Bitcoin. Although the fundamental properties of public-key cryptography have not changed much since then, dozens of different open-source digital signature schemes are now available to cryptographers.

How ECDSA was incorporated into Bitcoin

When Satoshi Nakamoto, the mysterious creator of the first cryptocurrency, started working on Bitcoin, one of the key decisions was selecting a signature scheme for an open, public financial system. The requirements were clear: the algorithm had to be widely used, well understood, sufficiently secure, easy to work with and, most importantly, open source.
Of all the options available at the time, he chose the one that met these criteria: the Elliptic Curve Digital Signature Algorithm, or ECDSA.
At that time, native support for ECDSA was provided in OpenSSL, an open set of encryption tools developed by experienced cryptographers to improve the confidentiality of online communications. Compared to other popular schemes, ECDSA offered advantages such as smaller keys and lower computational requirements.
These are extremely useful features for digital money, and they come with a proportional level of security: a 256-bit ECDSA key provides roughly the same level of security as a 3072-bit RSA (Rivest, Shamir and Adleman) key at a fraction of the size.

Basic principles of ECDSA

ECDSA is a scheme that uses elliptic curves over finite fields to “sign” data in such a way that third parties can easily verify the authenticity of the signature, while only the signer, who holds the private key, can create signatures. In the case of Bitcoin, the “data” being signed is a transaction that transfers ownership of bitcoins.
ECDSA has two separate procedures, one for signing and one for verifying. Each procedure is an algorithm consisting of a few arithmetic operations. The signing algorithm uses the private key, and the verification algorithm uses only the public key.
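As an illustration of these two procedures (not Bitcoin’s actual signing code), here is a minimal sketch in Python using the third-party `ecdsa` package on the secp256k1 curve; the message and variable names are placeholders:

```python
import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# Signing uses the private key.
private_key = SigningKey.generate(curve=SECP256k1)   # random private key
public_key = private_key.get_verifying_key()         # the corresponding public key

message = b"send 1 BTC to Alice"                     # placeholder "transaction" data
signature = private_key.sign(message, hashfunc=hashlib.sha256)

# Verification uses only the public key.
try:
    public_key.verify(signature, message, hashfunc=hashlib.sha256)
    print("signature valid")
except BadSignatureError:
    print("signature invalid")
```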
To use ECDSA, a protocol such as Bitcoin must fix a set of parameters for the elliptic curve and its finite field, so that every user of the protocol knows and applies the same parameters. Otherwise, everyone would be solving their own equations, the results would never match, and users would never agree on anything.
For all of these parameters, Bitcoin uses very, very large numbers. This is important: in fact, all practical applications of ECDSA use huge numbers, because the security of the algorithm relies on these values being far too large for a key to be found by simple brute force. A 384-bit ECDSA key is considered secure enough for the most sensitive classified information of the NSA (the US National Security Agency).

Replacement of ECDSA

Thanks to the hard work done by Pieter Wuille (a well-known cryptography specialist) and his colleagues on libsecp256k1, an optimized implementation of the secp256k1 elliptic curve, Bitcoin’s use of ECDSA has become even faster and more efficient. However, ECDSA still has some shortcomings, which may be sufficient grounds for replacing it entirely. After several years of research and experimentation, a new signature scheme was proposed to improve the privacy and efficiency of Bitcoin transactions: the Schnorr signature scheme.
Schnorr signatures take the use of “keys” to a new level. A Schnorr signature occupies only 64 bytes in a block, which reduces the space taken up by transactions by roughly 4%. And because Schnorr signatures are all the same size, the total size of the part of a block that contains such signatures can be calculated in advance. Being able to pre-calculate block size is key to increasing it safely in the future.
Keep up with the news of the crypto world at CoinJoy.io. Follow us on Twitter and Medium. Subscribe to our YouTube channel. Join our Telegram channel. For any inquiries mail us at [[email protected]](mailto:[email protected]).
submitted by CoinjoyAssistant to btc

Technical: Upcoming Improvements to Lightning Network

Price? Who gives a shit about price when Lightning Network development is a lot more interesting?????
One thing about LN is that because there's no need for consensus before implementing things, figuring out the status of things is quite a bit more difficult than on Bitcoin. On one hand, it lets larger groups of people work on improving LN faster without having to coordinate so much. On the other hand, it leads to some fragmentation of the LN space, with compatibility problems occasionally coming up.
The below is just a small sampling of LN stuff I personally find interesting. There's a bunch of other stuff, like splice and dual-funding, that I won't cover --- the post is long enough as-is, and besides, some of the below aren't as well-known.
Anyway.....

"eltoo" Decker-Russell-Osuntokun

Yeah the exciting new Lightning Network channel update protocol!

Advantages

Myths

Disadvantages

Multipart payments / AMP

Splitting up large payments into smaller parts!

Details

Advantages

Disadvantages

Payment points / scalars

Using the magic of elliptic curve homomorphism for fun and Lightning Network profits!
Basically, currently on Lightning an invoice has a payment hash, and the receiver reveals a payment preimage which, when inputted to SHA256, returns the given payment hash.
Instead of using payment hashes and preimages, just replace them with payment points and scalars. An invoice will now contain a payment point, and the receiver reveals a payment scalar (private key) which, when multiplied with the standard generator point G on secp256k1, returns the given payment point.
This is basically Scriptless Script usage on Lightning, instead of HTLCs we have Scriptless Script Pointlocked Timelocked Contracts (PTLCs).
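A rough sketch of the point/scalar relationship, using the Python `ecdsa` package’s secp256k1 parameters (the names are illustrative, and all of the contract plumbing is omitted):

```python
import secrets
from ecdsa import SECP256k1

G = SECP256k1.generator          # standard generator point
n = SECP256k1.order              # group order

# The receiver picks a secret payment scalar and puts the payment point in the invoice.
payment_scalar = secrets.randbelow(n - 1) + 1
payment_point = G * payment_scalar

# Completing the payment forces the receiver to reveal payment_scalar;
# anyone can check that it matches the point in the invoice.
assert G * payment_scalar == payment_point
```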

Advantages

Disadvantages

Pay-for-data

Ensuring that payers cannot access data or other digital goods without proof of having paid the provider.
In a nutshell: the payment preimage used as a proof-of-payment is the decryption key of the data. The provider gives the encrypted data, and issues an invoice. The buyer of the data then has to pay over Lightning in order to learn the decryption key, with the decryption key being the payment preimage.
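A toy version of that flow in Python, using the `cryptography` package’s Fernet for the encryption step; in a real implementation the secret would be the 32-byte Lightning payment preimage and the hash would be committed to in the invoice:

```python
import hashlib
from cryptography.fernet import Fernet

# Provider side: encrypt the data; the decryption key doubles as the payment preimage.
preimage = Fernet.generate_key()                     # stands in for the payment preimage
payment_hash = hashlib.sha256(preimage).hexdigest()  # goes into the invoice
ciphertext = Fernet(preimage).encrypt(b"the digital goods")

# The buyer receives `ciphertext` up front and pays the invoice over Lightning.
# The successful payment reveals `preimage`, which both proves payment and decrypts the goods.
assert hashlib.sha256(preimage).hexdigest() == payment_hash
plaintext = Fernet(preimage).decrypt(ciphertext)
print(plaintext)
```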

Advantages

Disadvantages

Stuckless payments

No more payments getting stuck somewhere in the Lightning network without knowing whether the payee will ever get paid!
(that’s actually a bit of an overstatement: payments can still get stuck, but what "stuckless" really enables is that we can now safely run another parallel payment attempt until any one of the payment attempts gets through).
Basically, by using the ability to add points together, the payer can enforce that the payee can only claim the funds if it knows two pieces of information:
  1. The payment scalar corresponding to the payment point in the invoice signed by the payee.
  2. An "acknowledgment" scalar provided by the payer to the payee via another communication path.
This allows the payer to make multiple payment attempts in parallel, unlike the current situation where we must wait for an attempt to fail before trying another route. The payer only needs to ensure it generates different acknowledgment scalars for each payment attempt.
Then, if at least one of the payment attempts reaches the payee, the payee can then acquire the acknowledgment scalar from the payer. Then the payee can acquire the payment. If the payee attempts to acquire multiple acknowledgment scalars for the same payment, the payer just gives out one and then tells the payee "LOL don't try to scam me", so the payee can only acquire a single acknowledgment scalar, meaning it can only claim a payment once; it can't claim multiple parallel payments.
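The algebra that makes this work is simply that scalars, and their corresponding points, add; a sketch with the Python `ecdsa` package, with illustrative names:

```python
import secrets
from ecdsa import SECP256k1

G, n = SECP256k1.generator, SECP256k1.order

payment_scalar = secrets.randbelow(n - 1) + 1   # invoice secret, known to the payee
ack_scalar = secrets.randbelow(n - 1) + 1       # fresh per attempt, known only to the payer

# Each payment attempt is locked to the *sum* of the two points ...
locked_point = G * payment_scalar + G * ack_scalar

# ... so the payee can only claim it after learning both scalars.
claim_scalar = (payment_scalar + ack_scalar) % n
assert G * claim_scalar == locked_point
```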

Advantages

Disadvantages

Non-custodial escrow over Lightning

The "acknowledgment" scalar used in stuckless can be reused here.
The acknowledgment scalar is derived as an ECDH shared secret between the payer and the escrow service. On arrival of payment to the payee, the payee queries the escrow to determine if the acknowledgment point is from a scalar that the escrow can derive using ECDH with the payer, plus a hash of the contract terms of the trade (for example, to transfer some goods in exchange for Lightning payment). Once the payee gets confirmation from the escrow that the acknowledgment scalar is known by the escrow, the payee performs the trade, then asks the payer to provide the acknowledgment scalar once the trade completes.
If the payer refuses to give the acknowledgment scalar even though the payee has given over the goods to be traded, then the payee contacts the escrow again, reveals the contract terms text, and requests to be paid. If the escrow finds in favor of the payee (i.e. it determines the goods have arrived at the payer as per the contract text) then it gives the acknowledgment scalar to the payee.
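A sketch of how such an acknowledgment scalar could be derived from an ECDH shared secret plus a hash of the contract terms; the exact derivation below is an illustrative assumption, not something the scheme pins down:

```python
import hashlib
import secrets
from ecdsa import SECP256k1

G, n = SECP256k1.generator, SECP256k1.order

payer_priv = secrets.randbelow(n - 1) + 1
escrow_priv = secrets.randbelow(n - 1) + 1
payer_pub, escrow_pub = G * payer_priv, G * escrow_priv

contract_hash = hashlib.sha256(b"goods X in exchange for the Lightning payment ...").digest()

def ack_scalar(my_priv, their_pub):
    # ECDH: both sides compute the same shared point, then mix in the contract hash.
    shared = their_pub * my_priv
    data = shared.x().to_bytes(32, "big") + contract_hash
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Payer and escrow independently derive the same acknowledgment scalar,
# so the escrow can hand it to the payee if it rules in the payee's favour.
assert ack_scalar(payer_priv, escrow_pub) == ack_scalar(escrow_priv, payer_pub)
ack_point = G * ack_scalar(payer_priv, escrow_pub)   # what the payee checks against
```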

Advantages

Disadvantages

Payment decorrelation

Because elliptic curve points can be added (unlike hashes), for every forwarding node we can add a "blinding" point / scalar. This prevents multiple forwarding nodes from discovering that they have been on the same payment route. This is unlike the current payment hash + preimage scheme, where the same hash is used along the entire route.
In fact, the acknowledgment scalar we use in stuckless and escrow can simply be the sum of each blinding scalar used at each forwarding node.
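A compressed sketch of the decorrelation idea (again using the Python `ecdsa` package; the actual PTLC plumbing is omitted):

```python
import secrets
from ecdsa import SECP256k1

G, n = SECP256k1.generator, SECP256k1.order
rand = lambda: secrets.randbelow(n - 1) + 1

payment_scalar = rand()
blind_a, blind_b = rand(), rand()   # one blinding scalar per forwarding node

# Each forwarding node sees a different lock point, so two nodes on the same
# route cannot tell that they are handling the same payment.
lock_at_node_a = G * ((payment_scalar + blind_a) % n)
lock_at_node_b = G * ((payment_scalar + blind_a + blind_b) % n)
assert lock_at_node_a != lock_at_node_b
```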

Advantages

Disadvantages

submitted by almkglor to Bitcoin

Best General RenVM Questions of March 2020

*These questions are sourced directly from Telegram*

Q: How do I shut down my Chaosnet Darknode? A: Please follow these directions: https://docs.renproject.io/chaosnet/chaosnet-darknode/untitled-3

Q: Can I run a Chaosnet Darknode and a Mainnet Darknode at the same time (on the same computer)? A: No, if you want to do that you’ll have to run them on separate computers.

Q: You mentioned DCEP in your latest piece and "12 App Ideas", but it's going to run on a centralized private network. The Bank of England also just released a report on how they're thinking about their CBDC and DLT/centralization, and stress that a DLT could add resilience, but there's also no reason a currency couldn't be more centralized. The Block reported that other central banks (like the EU and Singapore) are considering third-party chains like Corda. Can you comment on which CBDC designs may or may not be compatible with RZL? You previously said "RZL sMPC provides ECDSA signatures because that’s what it is used by Ethereum, Bitcoin, etc. Whatever solution they come up with, will be the solution that RZL has to be upgraded to use (the whole point of RenVM is not to tell other chains how to do things, and still provide interop; this means waiting on them to define their solution and then working with that)." So, what does centralization mean for RZL, and how can we think about compatibility between these designs on the technical side?
A: The topic of centralisation in interoperability comes down to the compounding effect of using multiple networks. Put another way “you’re only as decentralised as your most centralised component”. While there are nuances to this, the core idea rings true.
RenVM can be used to interoperate many different kinds of chains; anything using ECDSA, or naturally supporting lively threshold signatures, is a candidate to be included in RenVM. However, a centralised currency that has been bridged to a decentralised chain is not decentralised. The centralised entity that controls the currency might say “nothing transferred to/from this other chain will be honoured”. That’s a risk that you take with centralised currencies (take a look at the T&Cs for USDC, for example).
The benefit of RenVM in these instances is to become a standard. Short-term, RenVM brings interoperability to some core chains. Medium-term, it expands that to other more interesting chains based on community demands. Long-term, it becomes the standard for how to implement interop. For example: you create a new chain and don’t worry about interop explicitly because you know RenVM will have your back. For centralised currencies this is still advantageous, because the issuing entity only has to manage one chain (theirs) but can still get their currency onto other chains/ecosystems.
From a technical perspective, the Darknodes just have to be willing to adopt the chain/currency.

Q: dApps will have their own risk tolerances for centralized assets. Eg USDC was a bigger deal for MakerDAO than Uniswap. If CBDC liquidity were suddenly bridgeable, some dApps would be more eager to adopt it than others - even despite the risks - because they provide native liquidity and can be used to store/hedge in it without cashing it out. My question is more technical as it relates to RenVM as the "Universal Stablecoin Converter". You sound convinced that RenVM can bridge Libra, DCEP, maybe other CBDCs in the future, but I'm skeptical how RenVM works with account-based currencies. (1) Are we even sure of DCEP's underlying design and whether it or other CBDCs even plan to use digital signatures? And (2) wouldn't RenVM need a KYC-approved account to even get an address on these chains? It seems like DCEP would have to go through a Chinese Circle, who would just issue an ERC20.
A: As far as underlying blockchain technology goes (eg the maths of it) I don’t see there being any issues. Until we know more about whether or not KYCd addresses are required (and if they are, how they work), then I can’t specifically comment on that. However, it is more than possible not to require RenVM to be KYCd (just like you can’t “KYC Ethereum”) and instead move that requirement to addresses on the host blockchain (eg KYC Ethereum addresses for receiving the cross-chain asset). Whether this happens or not would ultimately be up to whether the issuer wanted interoperability to be possible.

Q: In that scenario, how would RenVM even receive the funds to be transferred to the KYC'd Ethereum address? For Alice to send DCEP to Bob's KYC'd Ethereum address, RenVM would need a DCEP address of its own, no?
A: Again, this is impossible to say for certain without knowing the implementation of the origin chain. You could whitelist known RenVM scripts (by looking at their form, like RenVM itself does on Bitcoin). But most likely, these systems will have some level of smart contract capability, and this allows very flexible control. You can just whitelist the smart contract address that RenVM watches for cross-chain events. In origin chains with smart contracts, the smart contract holds the funds (and the keys the smart contract uses to authorise spends are handled as business logic). So there isn’t really a “RenVM public address” in the same sense that there is in Bitcoin.
Q: The disbonding period for Darknodes seems long; what happens if there is a bug?
A: It’s actually good for the network to have a long disbonding period in the face of a bug. If people were able to panic sell, then not only would the bug cause potential security issues, but so too would a mass exodus of Darknodes from the network.
Having time to fix the bug means that Darknodes may as well stick around and continue securing the network as best they can. Because their REN is at stake (as you put it) they’re incentivised to take any of the recommended actions and update their nodes as necessary.
This is also why it’s critical for the Greycore to exist in the early days of the network and why we are rolling out SubZero the way that we are. If such a bug becomes apparent (more likely in the early days than the later days), then the Greycore has a chance to react to it (the specifics of which would of course depend on the specifics of the bug). This becomes harder and slower as the network becomes more decentralised over time.
Q: Not mcap, but the price of bonded REN. Furthermore, the price will be determined by how much in fees Darknodes have collected. BTW, loongy, could you unveil based on what profit ratio/APR the price will be calculated?
A: This is up to the Darknodes to govern softly. This means there isn’t a need for an explicit oracle. Darknodes assess L vs R individually and vote to increase fees to drive L down and drive R up. L is driven down by continue fees, whereas R is driven up by minting/burning fees.

Q: How do you think RenVM would perform on a day like today, when even CEXs are stretched? Would the system be able to keep up?
A: This will really depend on the number of shards that RenVM is operating. Shards operate in parallel so more shards = more processing power.

Q: The main limiting factor is the speed of the underlying chain, rather than RenVM?
A: That’s generally the case. Bitcoin peaks at about 7 TPS so as long as we are faster than this, any extra TPS is “wasted”. And you actually don’t want to be faster than you have to be. This lets you drop hardware requirements, and lowering the cost of running a Darknode. This has two nice effects: (a) being an operator generates more profit because costs are lower, and (b) it’s more accessible to more people because it’s a little cheaper to get started (albeit this is minor).

Q: Just getting caught up on governance, but what about: unbonded REN = 1 vote, bonded REN = (1 vote + time_served). That'd be > decentralization of Darknodes alone, an added incentive to be registered, and counter exchanges wielding too much control.
A: You could also have different decaying rates. For example, assuming that REN holders have to vote by “backing” the vote of Darknodes:
Let X be the amount of REN used to vote, backed behind a Darknode and bonded for T time.
Let Y be the amount of time a Darknode has been active for.
Voting power of the Darknode could = Sqrt(Y) * Log(X + T)
Log(1,000,000,000) = ~21 so if you had every REN bonded behind you, your voting power would only be 21x the voting power of other nodes. This would force whales to either run Darknodes for a while and contribute actively to the ecosystem (or lock up their REN for an extended period for addition voting power), and would force exchanges to spread their voting out over many different nodes (giving power back to those running nodes). Obviously the exchange could just run lots of Darknodes, but they would have to do this over a long period of time (not feasible, because people need to be able to withdraw their REN).
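Reading Log as the natural logarithm (consistent with Log(1,000,000,000) ≈ 21), the suggested formula is easy to play with; the inputs below are made-up examples, not real network figures:

```python
import math

def voting_power(ren_backed, bond_time, days_active):
    """Sqrt(Y) * Log(X + T) from the suggestion above; units are whatever the
    governance scheme picks, and these inputs are purely illustrative."""
    return math.sqrt(days_active) * math.log(ren_backed + bond_time)

# A whale backing 1,000,000,000 REN behind a brand-new node vs. a long-running
# node backed by 100,000 REN:
print(voting_power(1_000_000_000, 0, 1))   # ~20.7
print(voting_power(100_000, 0, 365))       # ~220
```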

Q: Like having superdelegates, i.e, nodes trusted by the community with higher voting power? Maybe like council nodes
A: Well, this is essentially what the Greycore is. Darknodes that have been voted in by the community to act as a secondary signature on everything. (And, interestingly enough, you could vote out all members to remove the core entirely.)

Q: Think the expensive ren is a security feature as well. So, doubt this would impact security potentially? I don’t know. I wouldn’t vote to cut my earnings by 40% for example lol
A: It can lead to centralisation over time though. If 100K REN becomes prohibitively expensive, then you will only see people running Darknodes that can afford a large upfront capital investment. In the mid/long-term this can have adverse effects on the trust in the system. It’s important that people “external” to the system (non-Darknodes) can get themselves into the system. Allowing non-Darknodes to have some governance (even if it’s not overall things) would be critical to this.

Q: That darknode option sounds very interesting, although it could get more centralized as the price of 100k REN rises. For instance, darknodes may not want to vote to lower the threshold from 100k to 50k once REN gets too expensive.
A: A great point. And one of the reasons it would be ideal to be able to alter those parameters without just the Darknodes voting. Otherwise, you definitely risk long-term centralisation.

Q: BTC is deposited into a native BTC address, but who controls this address (where/how is this address’s private key stored)?
A: This is precisely the magic behind RenVM. RenVM uses an MPC algorithm to generate the controlling private key. No one ever sees this private key, and no one can sign things with it without consensus from everyone else.
submitted by RENProtocol to RenProject

Best General RenVM Questions of January 2020

*These questions are sourced directly from Telegram*
Q: When you say RenVM is Trustless, Permissionless, and Decentralized, what does that actually mean?
A: Trustless = RenVM is a virtual machine (a network of nodes, that do computations), this means if you ask RenVM to trade an asset via smart contract logic, it will. No trusted intermediary that holds assets or that you need to rely on. Because RenVM is a decentralized network and computes verified information in a secure environment, no single party can prevent users from sending funds in, withdrawing deposited funds, or computing information needed for updating outside ledgers. RenVM is an agnostic and autonomous virtual broker that holds your digital assets as they move between blockchains.
Permissionless = RenVM is an open protocol; meaning anyone can use RenVM and any project can build with RenVM. You don't need anyone's permission, just plug RenVM into your dApp and you have interoperability.
Decentralized = The nodes that power RenVM (Darknodes) are scattered throughout the world. RenVM has a peak capacity of up to 10,000 Darknodes (due to REN’s token economics). Realistically, there will probably be 100 - 500 Darknodes run in the initial Mainnet phases, which is amply decentralized nonetheless.

Q: Okay, so how can you prove this?
A: The publication of our audit results will help prove the trustlessness piece; permissionless and decentralized can be proven today.
Permissionless = https://github.com/renproject/ren-js
Decentralized = https://chaosnet.renproject.io/

Q: How does Ren sMPC work? Shamir's secret sharing? TSS?
A: There is some confusion here that keeps arising, so I will do my best to clarify. TL;DR: *SSS is just data. It’s what you do with the data that matters. RenVM uses sMPC on SSS to create TSS for ECDSA keys.* SSS and TSS aren’t fundamentally different things. It’s kind of like asking: do you use numbers, or equations? Equations often (but not always) use numbers or at some point involve numbers.
SSS by itself is just a way of representing secret data (like numbers). sMPC is how to generate and work with that data (like equations). One of the things you can do with that work is produce a form of TSS (this is what RenVM does).
However, TSS is slightly different because it can also be done *without* SSS and sMPC. For example, BLS signatures don’t use SSS or sMPC but they are still a form of TSS.
So, we say that RenVM uses SSS+sMPC because this is more specific than just saying TSS (and you can also do more with SSS+sMPC than just TSS). Specifically, all viable forms of turning ECDSA (a scheme that isn’t naturally threshold based) into a TSS needs SSS+sMPC.
People often get confused about RenVM and claim “SSS can’t be used to sign transactions without making the private key whole again”. That’s a strange statement and shows a fundamental misunderstanding about what SSS is.
To come back to our analogy, it’s like saying “numbers can’t be used to write a book”. That’s kind of true in a direct sense, but there are plenty of ways to encode a book as numbers and then it’s up to how you interpret (how you *use*) those numbers. This is exactly how this text I’m writing is appearing on your screen right now.
SSS is just secret data. It doesn’t make sense to say that SSS *functions*. RenVM is what does the functioning. RenVM *uses* the SSSs to represent private keys. But these are generated and used and destroyed as part of sMPC. The keys are never whole at any point.

Q: Thanks for the explanation. Based on my understanding of SSS, a trusted dealer does need to briefly put the key together. Is this not the case?
A: Remember, SSS is just the representation of a secret. How you get from the secret to its representation is something else. There are many ways to do it. The simplest way is to have a “dealer” that knows the secret and gives out the shares. But, there are other ways. For example: we all act as dealers, and all give each other shares of our individual secret. If there are N of us, we now each have N shares (one from every person). Then we all individually add up the shares that we have. We now each have a share of a “global” secret that no one actually knows. We know this global secret is the sum of everyone’s individual secrets, but unless you know every individual’s secret you cannot know the global secret (even though we have all just collectively generated shares for it). This is an example of an sMPC generation of a random number with collusion resistance against all-but-one adversaries.
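A bare-bones sketch of that collective generation step (plain additive sharing over integers mod a prime, which is what the example above describes; real sMPC protocols layer much more on top of this):

```python
import secrets

P = 2**127 - 1          # a prime modulus, purely for illustration
N_PARTIES = 4

# Each party picks its own secret and deals one additive share of it to every party.
individual_secrets = [secrets.randbelow(P) for _ in range(N_PARTIES)]

def deal(secret):
    shares = [secrets.randbelow(P) for _ in range(N_PARTIES - 1)]
    shares.append((secret - sum(shares)) % P)   # shares sum to the secret mod P
    return shares

dealt = [deal(s) for s in individual_secrets]   # dealt[i][j] = share of party i's secret held by party j

# Each party j adds up the shares it received: a share of a "global" secret no one knows.
global_shares = [sum(dealt[i][j] for i in range(N_PARTIES)) % P for j in range(N_PARTIES)]

# Only by pooling ALL the shares does the global secret appear.
assert sum(global_shares) % P == sum(individual_secrets) % P
```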

Q: If you borrow REN, you can profit when REN falls. That means you could profit from breaking the network and from the falling REN price (because breaking the network would cause the REN price to drop, lowering the amount to be repaid when the bond gets slashed).
A: Yes, this is why it’s important that there is a large number of Darknodes before moving to full decentralisation (large-scale borrowing becomes harder). We’re exploring a few other options too, that should help prevent these kinds of issues.

Q: What are RenVM’s Security and Liveliness parameters?
A: These are discussed in detail in our Wiki, please check it out here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#analysis

Q: What are the next blockchain under consideration for RenVM?
A: These can be found here: https://github.com/renproject/ren/wiki/Supported-Blockchains

Q: I've just read that Aztec is going to be live this month and currently tests txs with third parties. Are you going to participate in early access or you just more focused on bringing Ren to Subzero stage?
A: At this stage, our entire focus is on Mainnet SubZero. But, we will definitely be following up on integrating with AZTEC once everything is out and stable.

Q: So how does RenVM compare to tBTC, Thorchain, WBTC, etc..?
A: An easy way to think about it is..RenVM’s functionality is a combination of tBTC (+ WBTC by extension), and Thorchain’s (proposed) capabilities... All wrapped into one. Just depends on what the end-user application wants to do with it.

Q1: What are the core technical/security differences between RenVM and tBTC?A1: The algorithm used by tBTC faults if even one node goes offline at the wrong moment (and the whole “keep” of nodes can be penalised for this). RenVM can survive 1/3rd going offline at any point at any time. Advantage for tBTC is that collusion is harder, disadvantage is obviously availability and permissionlessness is lower.
tBTC can only mint/burn lots of 1 BTC and requires an on-Ethereum SPV relay for Bitcoin headers (and for any other chain it adds). No real advantage trade-off IMO.
tBTC has a liquidation mechanism that means nodes can have their bond liquidated because of ETH/BTC price ratio. Advantage means users can get 1 BTC worth of ETH. Disadvantage is it means tBTC is kind of a synthetic: needs a price feed, needs liquid markets for liquidation, users must accept exposure to ETH even if they only hold tBTC, nodes must stay collateralized or lose lots of ETH. RenVM doesn’t have this, and instead uses fees to prevent becoming under-collateralized. This requires a mature market, and assumed Darknodes will value their REN bonds fairly (based on revenue, not necessarily what they can sell it for at current —potentially manipulated—market value). That can be an advantage or disadvantage depending on how you feel.
tBTC focuses more on the idea of a tokenized version of BTC that feels like an ERC20 to the user (and is). RenVM focuses more on letting the user interact with DeFi and use real BTC and real Bitcoin transactions to do so (still an ERC20 under the hood, but the UX is more fluid and integrated). Advantage of tBTC is that it’s probably easier to understand and that might mean better overall experience, disadvantage really comes back to that 1 BTC limit and the need for a more clunky minting/burning experience that might mean worse overall experience. Too early to tell, different projects taking different bets.
tBTC supports BTC (I think they have ZEC these days too). RenVM supports BTC, BCH, and ZEC (docs discuss Matic, XRP, and LTC).
Q2: These are my assumed differences between tBTC and RenVM, are they correct? Some key comparisons:
-Both are vulnerable to oracle attacks
-REN federation failure results in loss or theft of all funds
-tBTC failures tend to result in frothy markets, but holders of tBTC are made whole
-REN quorum rotation is new crypto, and relies on honest deletion of old key shares
-tBTC rotates micro-quorums regularly without relying on honest deletion
-tBTC relies on an SPV relay
-REN relies on federation honesty to fill the relay's purpose
-Both are brittle to deep reorgs, so expanding to weaker chains like ZEC is not clearly a good idea
-REN may see total system failure as the result of a deep reorg, as it changes federation incentives significantly
-tBTC may accidentally punish some honest micro-federations as the result of a deep reorg
-REN generally has much more interaction between incentive models, as everything is mixed into the same pot.
-tBTC is a large collection of small incentive models, while REN is a single complex incentive model
A2: To correct some points:
The oracle situation is different with RenVM, because the fee model is what determines the value of REN with respect to the cross-chain asset. This asset is what is used to pay the fee, so no external pricing is needed for it (because you only care about the ratio between REN and the cross-chain asset).
RenVM does rotate quorums regularly, in fact more regularly than in tBTC (although there are micro-quorums, each deposit doesn’t get rotated as far as I know and sticks around for up to 6 months). This rotation involves rotations of the keys too, so it does not rely on honest deletion of key shares.
Federated views of blockchains are easier to expand to support deep re-orgs (just get the nodes to wait for more blocks for that chain). SPV requires longer proofs which begins to scale more poorly.
Not sure what you mean by “one big pot”, but there are multiple quorums so the failure of one is isolated from the failures of others. For example, if there are 10 shards supporting BTC and one of them fails, then this is equivalent to a sudden 10% fee being applied. Harsh, yes, but not total failure of the whole system (and doesn’t affect other assets).
Would be interesting what RenVM would look like with lots more shards that are smaller. Failure becomes much more isolated and affects the overall network less.
Further, the amount of tBTC you can mint is dependent on people who are long ETH and prefer locking it up in Keep for earning a smallish fee instead of putting it in Compound or leveraging with dydx. tBTC is competing for liquidity while RenVM isn't.

Q: Do I understand correctly that RenVM (sMPC) can get up to a 50% security threshold? Can you tell me more?
A: The best you can theoretically do with sMPC is 50-67% of the total value of REN used to bond Darknodes (RenVM will eventually work up to 50% and won’t go for 67% because we care about liveliness just as much as safety). As an example, if there’s $1M of REN currently locked up in bonded Darknodes you could have up to $500K of tokens shifted through RenVM at any one specific moment. You could do more than that in daily volume, but at any one moment this is the limit. Beyond this limit, you can still remain secure but you cannot assume that players are going to be acting to maximize their profit. Under this limit, a colluding group of adversaries has no incentive to subvert safety/liveliness properties because the cost to attack roughly outweighs the gain. Beyond this limit, you need to assume that players are behaving out of commitment to the network (not necessarily a bad assumption, but definitely weaker than the maximizing-profits assumption).

Q: Why is using ETH as collateral for RenVM a bad idea?
A: Using ETH as collateral in this kind of system (like having to deposit say 20 ETH for a bond) would not make any sense because the collateral value would then fluctuate independently of what kind of value RenVM is providing. The REN token on the other hand directly correlates with the usage of RenVM which makes bonding with REN much more appropriate. DAI as a bond would not work as well because then you can't limit attackers with enough funds to launch as many darknodes as they want until they can attack the network. REN is limited in supply and therefore makes it harder to get enough of it without the price shooting up (making it much more expensive to attack as they would lose their bonds as well).
A major advantage of Ren's specific usage of sMPC is that security can be regulated economically. All value (that's being interopped at least) passing through RenVM has explicit value. The network can self-regulate to ensure an attack is never worth it.

Q: Given the fee model proposal/ceiling, might there be a liquidity issue with renBTC? More demand than possible supply?
A: I don’t think so. As renBTC is minted, the fees being earned by Darknodes go up, and therefore the value of REN goes up. Imagine that the demand is so great that the amount of renBTC is pushing close to 100% of the limit. This is a very loud and clear message to the Darknodes that they’re going to be earning good fees and that demand is high. Almost by definition, this means REN is worth more.
Profits of the Darknodes, and therefore security of the network, is based solely on the use of the network (this is what you want because your network does not make or break on things outside the systems control). In a system like tBTC there are liquidity issues because you need to convince ETH holders to bond ETH and this is an external problem. Maybe ETH is pumping irrespective of tBTC use and people begin leaving tBTC to sell their ETH. Or, that ETH is dumping, and so tBTC nodes are either liquidated or all their profits are eaten by the fact that they have to be long on ETH (and tBTC holders cannot get their BTC back in this case). Feels real bad man.

Q: I’m still wondering which asset people will choose: tbtc or renBTC? I’m assuming the fact that all tbtc is backed by eth + btc might make some people more comfortable with it.
A: Maybe :) personally I’d rather know that my renBTC can always be turned back into BTC, and that my transactions will always go through. I also think there are many BTC holders that would rather not have to “believe in ETH” as an externality just to maximize use of their BTC.

Q: How does the liquidation mechanism work? Can any party, including non-nodes act as liquidators? There needs to be a price feed for liquidation and to determine the minting fee - where does this price feed come from?
A: RenVM does not have a liquidation mechanism.
Q: I don’t understand how the price feeds for minting fees make sense. You are saying that the inputs for the fee curve depend on the amount of fees derived by the system. This is circular in a sense?
A: By evaluating the REN based on the income you can get from bonding it and working. The only thing that drives REN value is the fact that REN can be bonded to allow work to be done to earn revenue. So any price feed (however you define it) is eventually rooted in the fees earned.

Q: Who’s doing RenVM’s Security Audit?
A: ChainSecurity | https://chainsecurity.com/

Q: Can you explain RenVM’s proposed fee model?
A: The proposed fee model can be found here: https://github.com/renproject/ren/wiki/Safety-and-Liveliness#fees

Q: Can you explain in more detail the difference between "execution" and "powering P2P Network". I think that these functions are somehow overlapping? Can you define in more detail what is "execution" and "powering P2P Network"? You also said that at later stages semi-core might still exist "as a secondary signature on everything (this can mathematically only increase security, because the fully decentralised signature is still needed)". What power will this secondary signature have?
A: By execution we specifically mean signing things with the secret ECDSA keys. The P2P network is how every node communicates with every other node. The semi-core doesn’t have any “special powers”. If it stays, it would literally just be a second signature required (as opposed to the one signature required right now).
This cannot affect safety, because the first signature is still required. Any attack you wanted to do would still have to succeed against the “normal” part of the network. This can affect liveliness, because the semi-core could decide not to sign. However, the semi-core follows the same rules as normal shards. The signature is tolerant to 1/3rd for both safety/liveliness. So, 1/3rd+ would have to decide to not sign.
Members of the semi-core would be there under governance from the rest of our ecosystem. The idea is that members would be chosen for their external value. We’ve discussed in-depth the idea of L<3. But, if RenVM is used in MakerDAO, Compound, dYdX, Kyber, etc. it would be desirable to capture the value of these ecosystems too, not just the value of REN bonded. The semi-core as a second signature is a way to do this.
Imagine if the members were from those projects, because those projects want to help secure renBTC, because it’s used in their ecosystems. There is a very strong incentive for them to behave honestly. To attack RenVM you first have to attack the Darknodes “as per usual” (the current design), and then somehow convince 1/3rd of these projects to act dishonestly and collapse their own ecosystems and their own reputations. This is a very difficult thing to do.
Worth reminding: the draft for this proposal isn’t finished. It would be great for everyone to give us their thoughts on GitHub when it is proposed, so we can keep a persistent record.

Q: Which method or equation is used to calculate REN value based on fees? I'm interested in how REN value is calculated as well, to maintain the L < 3 ratio?
A: We haven’t finalized this yet. But, at this stage, the plan is to have a smart contract that is controlled by the Darknodes. We want to wait to see how SubZero and Zero go before committing to a specific formulation, as this will give us a chance to bootstrap the network and field inputs from the Darknodes owners after the earnings they can make have become more apparent.
submitted by RENProtocol to RenProject

Bottos 2020 Research and Development Scheme

As 2020 is now here, Bottos has officially released its “2020 Research and Development Scheme”. On one hand, we adhere to the principle of transparency so that the whole community can understand our next steps; more importantly, it also helps our whole team to think deeply about the future and reach consensus. We strongly believe that consistently following up on these plans will help us achieve the best results.
Building on Bottos’s efficient development so far, the team’s technical achievements in consensus algorithms and smart contracts will be used to deepen and optimize the existing technical architecture. At the same time, we will draw on the community’s technical capabilities to develop horizontally, expanding into new functional modules and technical directions while staying closely integrated with the whole community.
In the future, we will keep striving for in-depth thinking, comprehensive planning, and flexible adjustment.


Overview of Technical Routes

User feedback from the community is the driving force behind Bottos’s progress. Guided by the development direction of the community and the industry, we have formulated a technical roadmap that points the team in the right direction among the many possible routes of modern technology.
As part of our 2020 research and development objective we have the following arrangements:
1. Strengthening support for large numbers of smart contracts and related infrastructure
After many years of development, smart contracts have gradually become a core, standard feature of blockchain projects. The strength, ease of use, and stability of its smart contracts represent the key capabilities of a blockchain project. Bottos has already made great progress in the field of smart contracts, but we still need to increase our development efforts, making the ease of use and stability of smart contracts the top priority of our future development.
Reducing the barrier to entry for developers and ordinary users, shortening the contract development cycle, and saving users time is another important task for the team. To this end, we have planned an efficient and easy-to-use one-stop tool for contract development, debugging, and deployment, which will provide multiple access methods and interfaces to the test network to support rapid deployment and rapid debugging.
2. Establishing an excellent client and user portal
The main goal here is to add an entry point for creating and deploying smart contracts in the wallet client. To this end, the wallet needs to be extended: a local compiler for smart contracts must be added, and an easy-to-use UI provided for creating, deploying, and managing contracts, so that users can do so with a single mouse click.
3. Expanding distributed storage
Distributed storage is another focus of our development in the upcoming year. Only a distributed architecture can completely solve the performance and scalability issues of stand-alone storage. Distributed storage suitable for blockchain needs to provide no less than single-machine performance, extremely high availability, no single point of failure, easy expansion, and strongly consistent transactions. These are the key points, and the main difficulties, for Bottos in the field of distributed storage in the coming year.
4. Reinforcing multi-party secured computing
Privacy-preserving computation is another very important branch to address. In this research direction, Bottos has invested a lot of time and produced many research results on multi-party secured computing, such as technical articles and test cases. In the future, we will continue to put effort into multi-party secured computing and fold mature technical achievements into the functionality of the chain.

2020 Bottos — Product Development

Support for smart contract deployment in wallets
The built-in smart contract compiler inside the wallet supports compilation of the smart contracts in all languages provided by Bottos and integrates with the functions in the wallet. It also supports one-click deployment of the compiled contract source code in the wallet.
When compiling a contract, one can choose whether to pre-execute the contract code. If pre-execution is selected, it will connect to the remote contract pre-execution service and return the execution result to the wallet.
When deploying a contract, one can choose to deploy to the test network or main network and the corresponding account and private key of the test network or main network should be provided.

2020 Bottos-Technical Research

1. Intelligent smart contract development platform (BISDP)
The smart contract development platform BISDP is mainly composed of user-oriented interfaces, as well as back-end compilation and deployment tools, debugging tools, and pre-execution frameworks.
The user-oriented interface provides access via web, PC, and mobile apps, allowing developers to quickly and easily compile and deploy contracts, and it provides contract template management functions. Contracts can also be managed remotely, with views of contract execution status, consumed resources, and other information.
In the compilation and deployment tool, a complete solution for editing, running, debugging, and deploying smart contract source code is provided, along with smart contract templates for common tasks, which greatly lowers the barrier for developers learning and using smart contracts. At the same time, developers and ordinary users are provided with a smart contract pre-execution framework, which can check for logical defects and security risks in smart contracts before actual deployment and promptly alert users to problems even before the smart contracts are actually run.
In the debugging tool, there are built-in local and remote debugging tools. Multiple breakpoints can be set; when the code reaches a breakpoint, one can view the variables and their contents in the current execution stack. One can also set conditional breakpoints based on the value of a variable, so that execution does not pause until the variable reaches a preset value.
In the pre-execution framework, developers can choose to pre-execute contract code in a virtual environment or on a test net, catching problems that cannot be detected at compile time and performing deeper code inspection. The pre-execution framework can also tell the user in advance how much time and space the execution will require.
2. Supporting Python and PHP in BVM virtual machine for writing smart contracts
We have added smart contract writing tools based on the Python and PHP languages. Code in these languages can be compiled into the corresponding BVM instruction set for execution, so both can serve as programming languages for smart contracts.
For the Python language, the basic language elements supported by the first phase are:
- Logic control: If, Else, Elif, While, Break, method calls, for x in y
- Arithmetic and relational operators: ADD, SUB, MUL, DIV, ABS, LSHIFT, RSHIFT, AND, OR, XOR, MODULE, INVERT, GT, GTE, LT, LTE, EQ, NOTEQ
Data structure:
- Supports creation, addition, deletion, replacement, and calculation of length of list data structure
- Supports creation, append, delete, replace, and calculation of length of dict data structure
Function: Supports function definition and function calls
For the PHP language, the basic language elements supported by the first phase are :
- Logic control: If, Else, Elseif, While, Break, method calls
- Arithmetic and relational operators: ADD, SUB, MUL, DIV, ABS, LSHIFT, RSHIFT, AND, OR, XOR, MODULE, INVERT, GT, GTE, LT, LTE, EQ, NOTEQ
Data structure:
- Support for creating, appending, deleting, replacing, and calculating length of associative arrays
Function: Supports the definition and calling of functions
For these two above mentioned languages, the syntax highlighting and code hinting functions are also provided in BISDP, which is very convenient for developers to debug any errors.
3. Continuous exploration of distributed storage solutions
Distributed storage in blockchain technology actually refers to a distributed database. Compared with a traditional DBMS, a distributed database provides not only the ACID properties of the traditional DBMS but also the high availability and horizontal scalability of a distributed system. The CAP theorem for distributed systems describes an impossible triangle: of consistency, availability, and partition tolerance, only two can be fully achieved at once. A distributed database used in a blockchain must provide strong consistency; this follows from the characteristics of the blockchain system itself, because it needs to provide reliable distributed transaction capabilities. Given these technical issues, before we can ensure that the distributed storage solution reaches 100% availability, we will continue to invest more time and engineering strength, do more functional and performance testing, and conduct targeted tests of distributed storage systems.
4. Boosting secured multi-party computing research and development
Secure multi-party computation (MPC) is a cryptographic mechanism that enables multiple entities to compute over shared data while protecting the confidentiality of that data and without exposing any secret encryption keys. Its properties, such as security and reliability, are important for realizing a blockchain: data on the distributed ledger must be shared transparently while privacy, including the privacy of the client wallet’s private key, is protected.
At present, Bottos’s research and development on privacy-enhanced secure multi-party computation is based on the BIP32/44 standards used in Bitcoin wallets, implementing distributed management of client wallet keys and privacy protection.
To achieve a higher level of data security, and because the distributed blockchain ledger is public data shared by every node, further research and development is planned on:
(1) Using RSA, Paillier, ECDSA, and other public-key cryptosystems with homomorphic properties, together with the GC, OT, and ZKP protocols, to generate and verify transaction signatures between two parties;
(2) Introducing mainstream international public-key systems with higher security and performance, national-standard public-key encryption systems, and low-interaction or non-interactive ZKP protocols, to achieve secure multi-party computation with more than two parties, allowing more nodes to participate in the privacy protection of ledger data.

Summary

After years of exploration, we are now fully confident in our current research and development direction, and we are determined to move forward through continuous hard work. Finally, all members of Bottos want to thank all the friends in the community for their continuous support and outstanding contributions. Your confidence in us is our greatest comfort and strongest motivation.

Be smart. Be data-driven. Be Bottos.
If you aren’t already in our group, please join now! https://t.me/bottosofficial
Join Our Community and Stay Updated!
Bottos Website | Twitter |Facebook | Telegram | Reddit
submitted by BOTTOS_AI to Bottos

Anyone know of any good links that explain how vast the bitcoin address space is?

I guess I'm asking how big is 2^256? Is it all the atoms in the universe? More?
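For a rough sense of scale, a couple of lines of Python make the comparison concrete; the 10^80 figure for atoms in the observable universe is a commonly cited rough estimate, not an exact number:

```python
key_space = 2**256
atoms_estimate = 10**80        # commonly cited rough estimate, not an exact figure

print(len(str(key_space)))            # 78 decimal digits, i.e. about 1.16 * 10**77
print(atoms_estimate // key_space)    # ~863: fewer keys than atoms, but within a few orders of magnitude
```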
submitted by highly_suspicious to Bitcoin

I'm writing a series about blockchain tech and possible future security risks. This is the third part of the series introducing Quantum resistant blockchains.

Part 1 and part 2 will give you useful basic blockchain knowledge that is not explained in this part.
Part 1 here
Part 2 here
Quantum resistant blockchains explained.
- How would quantum computers pose a threat to blockchain?
- Expectations in the field of quantum computer development.
- Quantum resistant blockchains
- Why is it easier to change cryptography for centralized systems such as banks and websites than for blockchain?
- Conclusion
The fact that whatever is registered on a blockchain can’t be tampered with is one of the great reasons for the success of blockchain. Looking ahead, awareness is growing in the blockchain ecosystem that quantum computers might cause the need for some changes in the cryptography that is used by blockchains to prevent hackers from forging transactions.
How would quantum computers pose a threat to blockchain?
First, let’s get a misconception out of the way. When talking about the risk quantum computers could pose for blockchain, some people think about the risk of quantum computers out-hashing classical computers. This, however, is not expected to pose a real threat when the time comes.
This paper explains why: https://arxiv.org/pdf/1710.10377.pdf "In this section, we investigate the advantage a quantum computer would have in performing the hashcash PoW used by Bitcoin. Our findings can be summarized as follows: Using Grover search, a quantum computer can perform the hashcash PoW by performing quadratically fewer hashes than is needed by a classical computer. However, the extreme speed of current specialized ASIC hardware for performing the hashcash PoW, coupled with much slower projected gate speeds for current quantum architectures, essentially negates this quadratic speedup, at the current difficulty level, giving quantum computers no advantage. Future improvements to quantum technology allowing gate speeds up to 100GHz could allow quantum computers to solve the PoW about 100 times faster than current technology.
However, such a development is unlikely in the next decade, at which point classical hardware may be much faster, and quantum technology might be so widespread that no single quantum enabled agent could dominate the PoW problem."
The real point of vulnerability is this: attacks on signatures wherein the private key is derived from the public key. That means that if someone has your public key, they can also calculate your private key, which is unthinkable using even today’s most powerful classical computers. So in the days of quantum computers, the public-private keypair will be the weak link. Quantum computers have the potential to perform specific kinds of calculations significantly faster than any normal computer. Besides that, quantum computers can run algorithms that take fewer steps to get to an outcome, taking advantage of quantum phenomena like quantum entanglement and quantum superposition. So quantum computers can run these certain algorithms that could be used to make calculations that can crack cryptography used today. https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#Quantum_computing_attacks and https://eprint.iacr.org/2017/598.pdf
Most blockchains use Elliptic Curve Digital Signature Algorithm (ECDSA) cryptography. Using a quantum computer, Shor's algorithm can be used to break ECDSA. (See for reference: https://arxiv.org/abs/quant-ph/0301141 and pdf: https://arxiv.org/pdf/quant-ph/0301141.pdf ) Meaning: they can derive the private key from the public key. So if they got your public key (and a quantum computer), then they got your private key and they can create a transaction and empty your wallet.
RSA has the same vulnerability, although breaking RSA requires a stronger quantum computer than breaking ECDSA.
At this point in time, it is already possible to run Shor’s algorithm on a quantum computer. However, the number of qubits available right now limits its practical application. But it has been proven to work: we have exited the era of pure theory and entered the era of practical applications.
So far Shor's algorithm has the most potential, but new algorithms might appear which are more efficient. Algorithms are another area of development that makes progress and pushes quantum computer progress forward. A new algorithm called Variational Quantum Factoring is being developed and it looks quite promising. " The advantage of this new approach is that it is much less sensitive to error, does not require massive error correction, and consumes far fewer resources than would be needed with Shor’s algorithm. As such, it may be more amenable for use with the current NISQ (Noisy Intermediate Scale Quantum) computers that will be available in the near and medium term." https://quantumcomputingreport.com/news/zapata-develops-potential-alternative-to-shors-factoring-algorithm-for-nisq-quantum-computers/
It is however still in development, and only works for 18 binary bits at the time of this writing, but it shows new developments that could mean that, rather than a speedup in quantum computing development posing the most imminent threat to RSA and ECDSA, a speedup in the mathematical developments could be even more consequential. More info on VQF here: https://arxiv.org/abs/1808.08927
It all comes down to this: when your public key is visible, which is always necessary to make transactions, you are at some point in the future vulnerable for quantum attacks. (This also goes for BTC, which uses the hash of the public key as an address, but more on that in the following articles.) If you would have keypairs based on post quantum cryptography, you would not have to worry about that since in that case not even a quantum computer could derive your private key from your public key.
The conclusion is that future blockchains should be quantum resistant, using post-quantum cryptography. It’s very important to realize that post-quantum cryptography is not just adding some extra characters to standard signature schemes. It’s the mathematical concept that makes it quantum resistant. To become quantum resistant, the algorithm needs to be changed. “The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems can be easily solved on a sufficiently powerful quantum computer running Shor's algorithm. Even though current, publicly known, experimental quantum computers lack processing power to break any real cryptographic algorithm, many cryptographers are designing new algorithms to prepare for a time when quantum computing becomes a threat.” https://en.wikipedia.org/wiki/Post-quantum_cryptography
Expectations in the field of quantum computer development.
To give you an idea of what the expectations of quantum computer development are in the field, take note of the fact that the type and error rate of the qubits is not specified in the articles. It is not said whether these will be enough to break ECDSA or RSA. What these articles do show is that a huge speed-up in development is expected.
When will ECDSA be at risk? Estimates are only estimates, and there are several to be found, so it's hard to really tell.
The National Academy of Sciences (NAS) has made a very thorough report on the development of quantum computing. The report came out at the end of 2018. They brought together a group of over 70 scientists from different interconnecting fields in quantum computing who, as a group, have come up with a close-to-200-page report on the development, funding, implications and upcoming challenges for quantum computing development. But even though this report is one of the most thorough to date, it doesn't make an estimate on when the risk for ECDSA or RSA would occur. They acknowledge this is quite impossible due to the fact that there are a lot of unknowns and due to the fact that they have to base any findings only on publicly available information, obviously excluding any non-available advancements from commercial companies and national efforts. So if this group of specialized scientists can’t make an estimate, who can make that assessment? Is there any credible source to make an accurate prediction?
The conclusion at this point of time can only be that we do not know the answer to the big question "when".
Now if we don't have an answer to the question "when", then why act? The answer is simple. If we’re talking about security, most take certainty over uncertainty. To answer the question when the threat materializes, we need to guess. Whether you guess soon, or you guess not for the next three decades, both are guesses. Going for certain means you'd have to plan for the worst, hope for the best. No matter how sceptical you are, having some sort of a plan ready is a responsible thing to do. Obviously not if you're just running a blog about knitting. But for systems that carry a lot of important, private and valuable information, planning starts today. The NAS describes it quite well. What they lack in guessing, they make up in advice. They have a very clear advice:
"Even if a quantum computer that can decrypt current cryptographic ciphers is more than a decade off, the hazard of such a machine is high enough—and the time frame for transitioning to a new security protocol is sufficiently long and uncertain—that prioritization of the development, standardization, and deployment of post-quantum cryptography is critical for minimizing the chance of a potential security and privacy disaster."
Another organization that looks ahead is the National Security Agency (NSA). They made a threat assessment in 2015. In August 2015, the NSA announced that it is planning to transition "in the not too distant future" (statement of 2015) to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." The NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.” https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography#cite_note-nsa-suite-b-1
What both of these organizations advise is to start taking action. They don't say "implement this type of quantum resistant cryptography now". They don't say when at all. As said before, the "when" question is a hard one to specify. It depends on the system you have, the value of the data, and the consequences of postponing a security upgrade. Like I said before: do you just run a blog, or a bank, or a cryptocurrency? It's an individual risk assessment that's different for every organization and system. Assessments do need to be made now, though. What time frame should organisations think about when changing cryptography? How long would it take to go from the current level of security to fully quantum resistant security? What changes does it require to handle bigger signatures, and is it possible to use certain types of cryptography that require keeping state? Do your users need to act, or can all work be done behind the user interface? These are important questions that one should start asking. I will elaborate on these challenges in the next articles.
Besides the unanswered question of "when", the question of what type of quantum resistant cryptography to use is unanswered too. This also depends on the type of system you use. The NSA and NAS both point to NIST as the authority on developments and standardization of quantum resistant cryptography. NIST is running a competition right now that should end up in one or more standards for quantum resistant cryptography. The NIST competition handles criteria that should filter out a type of quantum resistant cryptography that is feasible for a wide range of systems. This takes time, though. There are some new algorithms submitted, and assessing the new and the more well-known ones must be done thoroughly. They intend to wrap things up around 2022 - 2024. From a blockchain perspective it is important to notice that a specific type of quantum resistant cryptography is excluded from the NIST competition: stateful hash-based signatures (LMS and XMSS). This is not because these are no good. In fact they are excellent, and XMSS is accepted to be provably quantum resistant. It's due to the fact that implementations will need to be able to securely deal with the requirement to keep state. And this is not a given for most systems.
At this moment NIST intends to approve both LMS and XMSS for a specific group of applications that can deal with the stateful properties. The only loose end at this point is advice on which applications LMS and XMSS are recommended for and for which applications they are discouraged. These questions will be answered in the beginning of April this year: https://csrc.nist.gov/news/2019/stateful-hbs-request-for-public-comments This means that quite likely LMS and XMSS will be the first type of standardized quantum resistant cryptography ever. To give a small hint: keeping state is pretty much a naturally added property of blockchain.
Quantum resistant blockchains
“Quantum resistant” should only be used to describe networks and cryptography that are secure against an attack by a quantum computer of any size, in the sense that no algorithm is known that would make it possible for a quantum computer to break the applied cryptography and thus that system.
Also, to determine whether a project is fully quantum resistant, you would need to take into account not only whether each separate element implemented in that blockchain is quantum resistant, but also the way it is implemented. As with any type of security check, there should be no backdoors, in which case your blockchain would be just a cardboard box with bulletproof glass windows. Sounds obvious, but since this is kind of new territory, there are still some misconceptions. What is considered safe now might not be safe in the age of quantum computers. I will address some of these in the following chapters, but first I will elaborate a bit on the special vulnerability of blockchain compared to centralized systems.
Why is it easier to change cryptography for centralized systems such as banks and websites than for blockchain?
Developers of a centralized system can decide from one day to the next to make changes and update the system, without the need for consensus from the nodes. They are in charge, and they can dictate the future of the system. But a decentralized blockchain will need to reach consensus amongst the nodes to update. That means the majority of the nodes will need to upgrade and thus force the blockchain to accept only the new signatures as valid. We can’t have the old signature scheme stay valid alongside the new quantum resistant signature scheme, because that would mean the blockchain would still allow the use of vulnerable old public and private keys and thus the old vulnerable signatures for transactions. So at least the majority of the nodes need to upgrade to make sure that blocks constructed using the old rules, and thus the old vulnerable signature scheme, are rejected by the network. This will eventually result in a fully upgraded network which only accepts the new post-quantum signature scheme in transactions. So, consensus is needed. The most well-known example of how that can be a slow process is Bitcoin’s need to scale. Even though everybody agrees on the need for a certain result, reaching consensus amongst the community on how to get to that result is a slow and political process. Going quantum resistant will be no different, and since it will reduce performance due to bigger signatures and will quite likely need hardware upgrades, it will more likely be postponed than done fast and smoothly due to lack of consensus. And because there are several quantum resistant signature schemes to choose from, agreement isn't an automatic given: the discussion will be which one to use, and how and when to implement it. The need for consensus is a problem that only decentralized systems like blockchain will face.
Another issue for decentralized systems that change their signature scheme is that users of decentralized blockchains will have to manually migrate their coins/tokens to a quantum safe address, and that way decouple their old private key and activate a new quantum resistant private key that is part of an upgraded quantum resistant network. Users of centralized networks, on the other hand, do not need to do much, since it would be taken care of by their centrally managed system. As you know, for example, if you forget the password of your online bank account, or of some website, they can always send you a link, or a secret question, or in the worst case they can send you mail by post to your house address and you would be back in business. With decentralized systems, there is no centralized entity who has your data. It is you who has this data, and only you. So in the centralized system there is a central entity who has access to all the data, including all the private access data, and therefore this entity can pull all the strings. It can all be done behind your user interface, and you probably wouldn’t notice a thing.
And a third issue will be the lost addresses. Since no one but you has access to your funds, your funds will become inaccessible once you lose your private key. From that point, an address is lost, and the funds on that address can never be moved. So after an upgrade, those funds will never be moved to a quantum resistant address, and thus will always be vulnerable to a quantum hack.
To summarize: banks and websites are centralized systems. They will face challenges too, but decentralized systems like blockchain will face some extra challenges that won't apply to centralized systems.
All of these issues are specific to blockchain and do not apply to banks, websites or any other centralized system.
Conclusion
Bitcoin and all currently running traditional cryptocurrencies are not excluded from this problem. In fact, it will be central to ensuring their continued existence over the coming decades. All cryptocurrencies will need to change their signature schemes in the future. When is the big guess here. I want to leave that for another discussion. There are enough certain specifics we can discuss right now on the subject of quantum resistant blockchains and the challenges that existing blockchains will face when they need to transfer. This won’t be an easy transfer. There are some huge challenges to overcome and this will not be done overnight. I will get to this in the next few articles.
Part 1, what makes blockchain reliable?
Part 2, The two most important mathematical concepts in blockchain.
Part 4A, The advantages of quantum resistance from genesis block, A
Part 4B, The advantages of quantum resistance from genesis block, B
Part 5, Why BTC will be vulnerable sooner than expected.
submitted by QRCollector to CryptoTechnology [link] [comments]

I decided to post this here as I saw some questions on the QRL discord.

Is elliptic curve cryptography quantum resistant?
No. Using a quantum computer, Shor's algorithm can be used to break Elliptic Curve Digital Signature Algorithm (ECDSA). Meaning: they can derive the private key from the public key. So if they got your public key, they got your private key, and they can empty your funds. https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#Quantum_computing_attacks https://eprint.iacr.org/2017/598.pdf
Why do people say that BTC is quantum resistant, while it uses elliptic curve cryptography? (This is where the idea comes from that never reusing a private key from elliptic curve cryptography (and its public key, since they form a pair) would be quantum resistant.)
Ok, just gonna start with the basics here. Your address, where you have your coins stored, is locked by your public-private key pair. See it as your e-mail address (public key) and your password (private key). Many people have your email address, but only you have your password. If you have your email address and your password, then you can access your mail and send emails (transactions). Now if there were a quantum computer, people could use it to calculate your password/private key, if they have your email address/public key.
What is the case with BTC: they don't show your public key anywhere until you make a transaction. So your public key stays private until you make a transaction. How do they do that while your funds must be registered on the ledger? Well, they only show the hash of your public key. (A hash is the outcome of an equation. Usually one-way hash functions are used, where you cannot derive the original input from the output. But every time you use the same hash function on the same original input (for example IFUHE8392ISHF), you will always get the same output (for example G).) That way you can have your coins on public key IFUHE8392ISHF, while on the chain they are on G. So your funds are registered on the blockchain under the "hash" of the public key. The hash of the public key is also your "email address" in this case. So you give "G" as your address to send BTC to.
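As an illustration of the "funds sit on the hash of the public key" idea, here is a minimal Python sketch using hashlib. It assumes Bitcoin's legacy HASH160 construction (RIPEMD-160 over SHA-256); the public key bytes are a made-up placeholder, and RIPEMD-160 availability depends on the local OpenSSL build, so a fallback is included just to keep the demo running:

```python
import hashlib

# Illustrative sketch of hashing a public key into an address-like value.
# Bitcoin's legacy pay-to-pubkey-hash addresses use HASH160 = RIPEMD160(SHA256(pubkey)).
# The public key below is a made-up placeholder, not a real key.

public_key = bytes.fromhex("02" + "ab" * 32)   # placeholder 33-byte compressed pubkey

sha = hashlib.sha256(public_key).digest()
try:
    hash160 = hashlib.new("ripemd160", sha).hexdigest()   # availability depends on the OpenSSL build
except ValueError:
    hash160 = hashlib.sha256(sha).hexdigest()             # fallback just to keep the demo running

print("public key :", public_key.hex())
print("pubkey hash:", hash160)   # same input always gives the same output; reversing it is infeasible
```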
By the way, in the early days you could use your actual public key as your address. And miners would receive coins on their public key, not on the hashed public key. That is why all the Satoshi funds are vulnerable to quantum attacks even though these addresses have never been used to make transactions from. These public keys are already public instead of hashed. Also certain hard forks have exposed the public keys of unused addresses. So it's really a false sense of security that most people hang on to in the first place.
But it's actually a false sense of security over all.
Since it is impossible to derive a public key from the hash of a public key, your coins are safe from quantum computers as long as you don't make any transaction. Now here follows the biggest misconception: pretty much everyone will think, great, so BTC is quantum secure! It's not that simple. Here it is important to understand two things:
1 How is a transaction sent? The owner has the private key and the public key and uses that to log into the secured environment, the wallet. This can be online or offline. Once he is in his wallet, he states how much he wants to send and to what address.
When he sends the transaction, it will be broadcasted to the blockchain network. But before the actual transaction that will be sent, it is formed into a package, created by the wallet. This happens out of sight of the sender.
That package ends up carrying roughly the following info: The public key to point to the address where the funds will be coming from, the amount that will be transferred, the public key of the address the funds will be transferred to.
Then this package carries the most important thing: a signature, created by the wallet, derived from the private-public key combination. This signature proves to the miners that you are the rightful owner and you can send funds from that public key.
So this package is then sent out of the secure wallet environment to multiple nodes. The nodes don’t need to trust the sender or establish the sender’s "identity." And because the transaction is signed and contains no confidential information, private keys, or credentials, it can be publicly broadcast using any underlying network transport that is convenient. As long as the transaction can reach a node that will propagate it into the network, it doesn’t matter how it is transported to the first node.
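As a rough sketch of what such a signed package might look like, here is a toy example using the third-party Python `ecdsa` package. The field names and JSON layout are invented for illustration; real Bitcoin transactions use a compact binary serialization, and the wallet logic is far more involved:

```python
import json
from ecdsa import SigningKey, SECP256k1   # third-party `ecdsa` package (pip install ecdsa)

# Illustrative only: the field names and the JSON layout are made up for this example;
# real Bitcoin transactions use a compact binary serialization, not JSON.

sender_sk = SigningKey.generate(curve=SECP256k1)    # private key (stays in the wallet)
sender_pk = sender_sk.get_verifying_key()           # public key (revealed in the transaction)

package = {
    "from_pubkey": sender_pk.to_string().hex(),     # where the funds come from
    "to_pubkey_hash": "G",                          # toy address from the text above
    "amount": 1_000_000,                            # e.g. satoshis
}
message = json.dumps(package, sort_keys=True).encode()

# The wallet signs the package with the private key; nodes verify with the public key only.
package["signature"] = sender_sk.sign(message).hex()

assert sender_pk.verify(bytes.fromhex(package["signature"]), message)
print(json.dumps(package, indent=2))
```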
2 How is a transaction confirmed/fulfilled and registered on the blockchain?
After the transaction is sent to the network, it is ready to be processed. The nodes have a bundle of transactions to verify and register on the next block. This is done during a period called the block time. In the case of BTC that is 10 minutes.
If you comprehend the information written above, you can see that there are two moments where the public key is actually visible, while the transaction is not yet fulfilled and registered on the blockchain:
1: during the time the transaction is sent from the sender to the nodes
2: during the time the nodes verify the transaction.
This paper describes how you could hijack a transaction and make a new transaction of your own, using someone else's address to send his coins to an address you own, during moment 2: the time the nodes verify the transaction:
https://arxiv.org/pdf/1710.10377.pdf
"(Unprocessed transactions) After a transaction has been broadcast to the network, but before it is placed on the blockchain it is at risk from a quantum attack. If the secret key can be derived from the broadcast public key before the transaction is placed on the blockchain, then an attacker could use this secret key to broadcast a new transaction from the same address to his own address. If the attacker then ensures that this new transaction is placed on the blockchain first, then he can effectively steal all the bitcoin behind the original address."
So this means that practically, you can't call BTC a quantum secure blockchain. Because as soon as you will touch your coins and use them for payment, or send them to another address, you will have to make a transaction and you risk a quantum attack.
Why would Nexus be any different?
If you ask the wrong person they will tell you "Nexus uses a combination of the Skein and Keccak algorithms which are the 2 recognized quantum resistant algorithms (Keccak is used by the NSA), so instead of SHA-256, Nexus has SK-1024, making it much harder to break." Which would be the same as saying BTC is quantum resistant because it uses a hash function to hash the public key as long as no transaction is made.
No, this is their solid try at being quantum resistant: Nexus states it's different because it has instant transactions (so there wouldn't be a period during which the nodes verify the transaction; this period would be instant). Also they use a particular order in which the miners verify transactions: First-In-First-Out (FIFO). So even if instant is not instant after all, and you would be able to catch a public key and derive the private key, you wouldn't be able to have your transaction signed before the original one. The original one is first in line, and will therefore be confirmed first. Also, for some reason Nexus has standardized fees which are burned after a transaction. So if FIFO wouldn't do the trick, you would not be able to use a higher fee to get prioritized and get an earlier confirmation.
So, during the time the nodes verify the transaction, you would not be able to hijack a transaction. GREAT, you say? Yes, great-ish. Because there is still moment #1: the time the transaction is sent from the sender to the nodes. This is where network based attacks could do the trick:
There are network based attacks that can be used to delay or prevent transactions from reaching the nodes. In the meantime the transactions can be hijacked before they reach the nodes. Thus one could capture the non-quantum-secure public keys (they are openly included in sent, signed transactions), which can then be used to derive the private keys before the original transaction is processed. So this means that even if Nexus has instant transactions in FIFO order, it is totally useless, because the public key would be obtained by the attacker before it reaches the nodes. Conclusion: Nexus is not quantum resistant. You simply can't be without using a post-quantum signature scheme.
Performing a DDoS attack or BGP routing attacks or NSA Quantum Insert attacks on a peer-to-peer network would be hard. But when provided with an opportunity to steal billions, hackers would find a way. For example:
https://bitcoinmagazine.com/articles/researchers-explore-eclipse-attacks-ethereum-blockchain/
For BTC:
https://eprint.iacr.org/2015/263.pdf
"An eclipse attack is a network-level attack on a blockchain, where an attacker essentially takes control of the peer-to-peer network, obscuring a node’s view of the blockchain."
That is exactly the recipe for what you would need to create extra time to find public keys and derive private keys from them. Then you could sign transactions of your own and confirm them before the originals do.
By the way, yes, this seems to be fixed now, but it most definitely shows it's possible. And there are other creative options. Either you stop transactions from getting out at the source, while the sender thinks they're sent, or you blind the network and catch transactions there. There are always options, and they will be exploited when billions are at stake. The keys can also be hijacked when a transaction is sent from the user's device to the blockchain network using a MITM attack. The result is the same as for network based attacks, only now you don't mess with the network itself. These attacks make it possible to 1) retrieve the original public key that is included in the transaction message, and 2) stop or delay the transaction message from arriving at the blockchain network. So, using a quantum computer, you could hijack transactions and create forged transactions, which you then send to the nodes to be confirmed before the nodes even receive the original transaction. There is nothing you could change in the Nexus network to prevent this. The only thing they can do is implement a quantum resistant signature scheme. They plan to do this in the future, like any other serious blockchain project. Yet Nexus is the only one of these future quantum resistant projects to prematurely claim to be quantum resistant. There is only one way to get quantum resistance: POST QUANTUM SIGNATURE SCHEMES. All the rest is just a shitty shortcut that won't work in the end.
(If you apply this to BTC, you will find that the 10-minute block time that is used to estimate when BTC will be vulnerable to quantum attacks can actually be more than 10 minutes if you catch the public key before the nodes receive it. This makes BTC vulnerable sooner than the 10-minute block time would make you think.)
By the way, Nexus using FIFO and standardized fees which are burned after the transaction comes with some huge downsides.
Why are WOTS+ signatures (and by extension XMSS) more quantum resistant?
First of all, this is where the top notch mathematicians work their magic. Cryptography is mostly maths. As Jackalyst puts it, talking about post-quantum signature schemes: "Having papers written and cryptographers review and discuss it to nauseating levels might not be important for butler, but it's really important with signature schemes and other cryptographic methods, as they're highly technical in nature."
If you don't believe in math, think about Einstein using math to predict things most couldn't even imagine, let alone measure, back then.
Then there is implementing it the right way into your blockchain without leaving any backdoors open.
So why is WOTS+ and by extension XMSS quantum resistant? Because math papers say so. With WOTS it would even take a quantum computer too much time to derive a private key from a public key. https://en.wikipedia.org/wiki/Hash-based_cryptography https://eprint.iacr.org/2011/484.pdf
What is WOTS+?
It's basically an optimized version of Lamport signatures. WOTS+ (Winternitz one-time signature) is a hash-based, post-quantum signature scheme. So it's a post-quantum signature scheme meant to be used once.
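To make the "hash-based one-time signature" idea concrete, here is a minimal sketch of the plain Lamport scheme that WOTS+ optimizes (WOTS+ itself adds hash chains, a checksum and bitmasks, none of which are shown here):

```python
import hashlib, secrets

# Minimal Lamport one-time signature sketch (the simpler ancestor WOTS+ optimizes).
# Not production code: real schemes add checksums, chaining and careful parameterization.

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg: bytes, sk):
    digest = int.from_bytes(H(msg), "big")
    # reveal one secret per message bit -- this is why the key is one-time only
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(msg: bytes, sig, pk):
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(b"one-time message", sk)
print(verify(b"one-time message", sig, pk))   # True
print(verify(b"tampered message", sig, pk))   # False (with overwhelming probability)
```

Because signing reveals one of the two secrets at every bit position, a second signature over a different message leaks many of the remaining secrets, which is exactly the reuse risk described in the next answer.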
What are the risks of WOTS+?
Because each WOTS publishes some part of the private key, they rapidly become less secure as more signatures created by the same public/private key are published. The first signature won't have enough info to work with, but after two or three signatures you will be in trouble.
IOTA uses WOTS. Here's what the people over at the cryptography subreddit have to say about that:
https://www.reddit.com/crypto/comments/84c4ni/iota_signatures_private_keys_and_address_reuse/?utm_content=comments&utm_medium=user&utm_source=reddit&utm_name=u_QRCollector
With the article:
http://blog.lekkertech.net/blog/2018/03/07/iota-signatures/
Mochimo uses WOTS+. They kinda solved the problem: A transaction consists of a "Source Address", a "Destination Address" and a "Change Address". When you transact to a Destination Address, any remaining funds in your Source Address will move to the Change Address. To transact again, your Change Address then becomes your Source Address.
But what if someone already has your first address and is unaware of the fact that you already sent funds from that address? He might just send funds there. (I mean, in a business environment this would make Mochimo highly impractical.) They need to solve that. Who knows, it's still a young project. But then again, for some reason they also use FIFO and fixed fees, so there I have the same objections as for Nexus.
How is XMSS different?
XMSS uses WOTS in a way that you can actually reuse your address. WOTS creates a quantum resistant one time signature and XMSS creates a tree of those signatures attached to one address so that the address can be reused for sending an asset.
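A heavily simplified sketch of that "tree of one-time keys behind one address" idea: hash a batch of one-time public keys into a Merkle root and use the root as the reusable address. Real XMSS additionally uses WOTS+ leaves, bitmasks and authentication paths, so this is only the intuition:

```python
import hashlib, secrets

# Conceptual sketch only: XMSS proper uses WOTS+ leaves, bitmasks and authentication
# paths; here we just show how many one-time public keys collapse into one reusable root.

H = lambda b: hashlib.sha256(b).digest()

# Pretend these are the hashes of 8 one-time (e.g. WOTS+) public keys.
leaves = [H(secrets.token_bytes(32)) for _ in range(8)]

def merkle_root(nodes):
    while len(nodes) > 1:
        nodes = [H(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

address = merkle_root(leaves).hex()
print("reusable address (Merkle root):", address)
# Each signature uses one leaf key plus the sibling hashes needed to recompute this root.
```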
submitted by QRCollector to QRL [link] [comments]

Peer-to-peer smart derivatives for any asset over any network!

Taurus0x Overview
Distributed off-chain / on-chain protocol powering smart derivatives from end to end, for any asset over any network.
Background of Taurus0x
Remember around September 2017 when the world lost its cool over Bitcoin prices? It was nearly an ideological war for many. It occurred to me to create an app for people to bid on Bitcoin prices, and I would connect that app to a smart contract to execute bids on the blockchain. It took me a long couple of weeks to figure out how many licenses I would need to acquire to run such a business in the United States. It became evident that market making is a huge undertaking and is better off decentralized in an open-standard protocol to generate liquidity.
The protocol needed to be fully decentralized as a primary requirement. Why? Because I believe in the philosophy of decentralization and creating fair market makers, governed by a public community. It is the right thing to do in order to create equal opportunity for consumers without centralized control and special privileges.
It comes as no surprise to anyone at this point that the vast majority of “ICOs” were empty promises. Real-life utility was and is a necessity for any viable project. Transitioning from a centralized world to a tokenized and decentralized one cannot be abrupt. The protocol needed to support both worlds and allow for a free market outcome as far as adoption. Scalability-wise, and as of today, Ethereum cannot handle a real-time full DEX that could compete with advanced and well-known centralized exchanges. And quite frankly, maybe it’s not meant to. This is when the off-chain thinking started, especially after witnessing a couple of the most successful projects adopting this approach, like Lightning and 0xProject. The trade-off was the complexity of handling cryptographic communications without the help of the blockchain.
I had met my co-founder Brett Hayes at the time. I would need another 3 or 4 articles to explain Brett for you.
To the substance.
What is Asymmetrical Cryptography?
Asymmetrical cryptography is a form of cryptography that uses public and private key pairs. Each public key comes with its associated and unique private key. If you encrypt a piece of data with a private key, only the associated public key may be used to decrypt the data. And vice versa.
If I send you a “hello” encrypted with my private key, you can try to decrypt it with my public key (which is no secret). If it decrypts fine, then you can be positive that this “hello” came from me. This is what we call digital signatures.
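A minimal sketch of that "hello" example using the third-party Python `ecdsa` package (strictly speaking ECDSA signs and verifies rather than encrypting with the private key, but the guarantee described here is the same):

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError  # third-party `ecdsa` package

# Only the holder of the private key could have produced a signature
# that the matching public key accepts.

my_private_key = SigningKey.generate(curve=SECP256k1)
my_public_key = my_private_key.get_verifying_key()    # this part is no secret

signature = my_private_key.sign(b"hello")

print(my_public_key.verify(signature, b"hello"))      # True -> "hello" really came from me

try:
    impostor_key = SigningKey.generate(curve=SECP256k1).get_verifying_key()
    impostor_key.verify(signature, b"hello")
except BadSignatureError:
    print("a different public key rejects the signature")
```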
The figure below is from Taurus0x whitepaper and describes the chosen digital signature algorithm (ECDSA).
https://preview.redd.it/n8kavgofbm211.png?width=1000&format=png&auto=webp&s=289695a17cd413b68105b249d615b82bae1fe1dc
What are Smart Derivatives?
Well, what are derivatives in the first place?
In the financial world, a derivative is a contract between two or more parties based upon an asset. Its price is determined by fluctuations in the underlying asset. The most common underlying assets include stocks, bonds, commodities, currencies, interest rates and market indexes. Futures contracts, forward contracts, options, swaps, cryptocurrency prices and warrants are common derivatives.
Smart Derivatives are smart contracts that behave like financial derivatives. They possess enough information and funds to allow for execution with guaranteed and trusted outcomes.
What is Taurus0x?
Taurus0x is a distributed off-chain / on-chain protocol powering smart derivatives from end to end. Taurus0x is both asset and network-agnostic. The philosophy is to also become blockchain-agnostic as more blockchains come to life.
Distributed = fully decentralized set of smart contracts and libraries.
Off-chain = ad-hoc protocol not limited to a blockchain.
On-chain= trusted outcome without intermediaries.
Asset-agnostic = supports any asset, not limited to cryptocurrency.
Network-agnostic = contracts can be transmitted over any network (email, text, twitter, facebook, pen and paper, etc.)
Who can use Taurus0x?
Taurus0x protocol is ultimately built to serve end consumers who trade derivative contracts. Participants may engage in peer-to-peer derivative contracts with each other without the need for a house in the middle.
The Taurus0x team and advisors realize that the migration from a centralized world to a decentralized one cannot be abrupt, specifically in FinTech. Taurus0x is built to support existing business models as well as C2C peer-to-peer. Exchanges that want to take on the derivative market may use an open-source protocol without worrying about building a full backend to handle contract engagement and settlement. Taurus0x exchanges would simply connect participants to each other, using matching algorithms.
Taurus0x intends to standardize derivative trading in an open way. Having more exchanges using the protocol allows for creating public and permissioned pools to generate compounded liquidity of contracts. This helps smaller exchanges by lowering the barrier to entry.
How does Taurus0x work?
The process is simple and straightforward. Implementation details are masked by the protocol making it very easy to build on top. The first 2 steps represent off-chain contract agreement, while 3 and 4 solidify and execute the contract on-chain.
1- Create
A producer creates a contract from any client using Taurus0x protocol, whether from an app, a website or a browser extension. The producer specifies a condition that is expected to happen sometime in the future. For example, I (the producer) might create a binary contract with the following condition:
Apple stock > $200 by July 1, 2018 with a premium of 10 TOKENs (any ERC20 token)
The contract will be automatically signed with my private key, which confirms that I created it. I can then share it (a long hexadecimal text) with anyone over any network I choose.
2- Sign
When the consumer receives the signed contract, they will be able to load it via any client using Taurus0x. If the consumer disagrees with the producer on the specified condition, they will go ahead and sign the contract with their private key. Back to our example above: the consumer would think that Apple stock will remain under $200 by July 1, 2018. Now that we have collected both signatures, the contract is ready to be published on the blockchain.
3- Publish
Anyone who possesses the MultiSig contract and its 2 signatures can go ahead and publish it to the Ethereum blockchain. That would most likely be either the producer, the consumer or a party like an exchange in the middle hosting off-chain orders. As soon as the contract is published, Taurus0x proxy (an open-source smart contract) will pull necessary funds from participating wallets into the newly created Smart Derivative. The funds will live in the derivative contract until successful execution.
4- Execute
If at any point before the contract expiration date the specified condition becomes true (i.e. Apple Stock > $200), the producer can go ahead and execute the derivative contract. The contract will calculate the outcome and transfer funds accordingly. In this binary derivative example, the producer will receive 20 TOKENs in their wallet upon executing the contract. If the expiration date comes and the producer had never successfully executed the contract, the consumer may execute it themselves and collect the 20 TOKENs.
This figure from the Taurus0x whitepaper depicts the process:
https://preview.redd.it/vr2y9b8ibm211.png?width=1250&format=png&auto=webp&s=1b7a8144fe2a41116a4f64d7418d3dacb4f42fc5
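A toy Python model of this create → sign → publish → execute flow, just to make the payout logic of the binary example concrete. All class, field and token names are invented for illustration and are not the actual Taurus0x contract interface:

```python
from dataclasses import dataclass, field
import time

# Toy model of the binary-derivative flow described above. All names and amounts are
# illustrative; the real Taurus0x protocol uses signed hex payloads and Ethereum contracts.

@dataclass
class BinaryDerivative:
    condition: str                          # e.g. "AAPL > 200 USD"
    premium: int = 10                       # tokens escrowed by each side
    expiry: float = time.time() + 86400 * 30
    signatures: dict = field(default_factory=dict)
    escrow: int = 0

    def sign(self, party):                  # off-chain: producer signs, then consumer
        self.signatures[party] = f"sig({party})"

    def publish(self):                      # on-chain: pull funds from both wallets
        assert {"producer", "consumer"} <= self.signatures.keys()
        self.escrow = 2 * self.premium

    def execute(self, condition_is_true, now=None):
        now = time.time() if now is None else now
        if condition_is_true and now <= self.expiry:
            return f"producer receives {self.escrow} TOKENs"
        if now > self.expiry:
            return f"consumer receives {self.escrow} TOKENs"
        return "nothing to execute yet"

d = BinaryDerivative(condition="AAPL > 200 USD by 2018-07-01")
d.sign("producer"); d.sign("consumer")
d.publish()
print(d.execute(condition_is_true=True))    # producer collects 20 TOKENs
```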
Summary
Taurus0x is a highly versatile and modular protocol built using Ethereum-based smart contracts and wrapper JS libraries to bootstrap developer adoption. While Smart Derivatives are the first application of Taurus0x, it is worth noting that the protocol is not limited to cryptocurrencies or even derivatives for that matter. It is an ad-hoc and scalable contract management solution meant to guarantee trusted outcomes in the future based on conditions specified today. The semi off-chain nature of the protocol helps remediate Ethereum’s scalability limitations and makes it a viable product.
Finally, the plan for Taurus0x is to be governed by a Decentralized Autonomous Organization or DAO as outlined in the roadmap on https://taurus0x.com. This is an area of research and development as of today. Decentralization does not fulfill its purpose if governance remains centralized, therefore it is without compromise that Taurus0x follows a decentralized governance structure.
submitted by Taurus0x to Taurus0x [link] [comments]

Nice Article About How HPB Performs Vs EOS (and so ETH)

HPB: Unique Blockchain Infrastructure
Most public chains now acknowledge that TPS (transactions per second) is the central problem holding blockchain back. Traditional blockchains suffer from poor performance: efficiency is sacrificed in order to reach consensus. But if you want to build an ecosystem of countless DApps on top of a public chain, that is almost impossible without performance guarantees.
Bitcoin never set out to build a DApp ecosystem, and it does not need to. Bitcoin is only a digital currency, and it has largely fulfilled its historical mission: it has become a store of value and it has opened up the world of the blockchain.
Ethereum started with the goal of building a world-wide computer that provided the infrastructure for building decentralized applications, but so far it has only succeeded in the crowdfunding field. Due to performance, cost, scalability, and other issues, it is not yet possible to become a DAPP infrastructure. By the end of 2017, a simple encrypted cat game would have caused Ethereum to jam. Ethereum tried to get rid of the predicament through techniques such as fragmentation, Plasma, and PoS consensus.
Newcomers such as EOS highlight their high performance, emphasizing the possibility of reaching million-level TPS. What is needed, then, is an infrastructure on which a prosperous DApp ecosystem can be built, meeting the user and business needs of different scenarios.
What kind of approach is the better choice? This is what Blue Fox has been paying attention to. Blue Fox focuses on the HPB blockchain project, which takes a completely different path from other public chains and infrastructure projects. This path is worth the attention of anyone who follows blockchain.
This path is a combination of hardware and software. It is more demanding and the practice is more difficult. However, if it is truly grounded, it may be a good path.
HPB to become a high-performance blockchain infrastructure
HPB and EOS share the same goal: both aim to provide a high-performance infrastructure for the decentralized ecosystem. Why? Mainly because of what it takes to bring blockchain to mainstream business scenarios. The current blockchain has achieved some success in security and decentralization, but there are natural constraints in terms of efficiency. This hinders its application in mainstream scenarios.
This is also the direction that Blockchain 3.0 has been exploring: higher performance, lower costs, and better scalability to meet the needs of more decentralized application scenarios.
The current bitcoin and Ethereum's throughput are both worrying. Bitcoin supports about 7 transactions per second on average, and Ethereum has about 15 throughputs. If you make the block bigger, you can also increase the throughput, but it will cause the problem of block bloat. Last year, an encrypted cat game made everyone see the blockchain congestion problem. From a performance point of view, it takes a long time for blockchains to reach the mainstream.
In addition to insufficient TPS, the transaction costs of blockchains are high. Neither ordinary users nor developers can afford gas costs that are too high. When Ethereum's crypto-cat game became hot, for example, the transaction fees themselves were painfully expensive.
The HPB and EOS goals are similar, but their paths are completely different. HPB uses a combination of hardware and software and has its own dedicated chip-based hardware server, which theoretically gives it higher performance.
HPB is also trying to create an operating-system-like architecture on which applications can be built. This architecture includes accounts, identity and authorization management, policy management, databases, asynchronous communication, program scheduling on CPUs, FPGAs or clusters, and hardware acceleration technology. It realizes low latency and high concurrency, achieving million-level TPS to meet the needs of commercial scenarios.
It is different from EOS. Its architecture combines a software architecture with a hardware architecture: a hardware-software blockchain architecture that draws on high-performance computing and cloud computing concepts. The hardware system includes a distributed core node composed of high-performance computing hardware, a general communication network, and a cloud terminal supported by high-performance computing hardware.
The core node supports a standard blockchain software architecture, including consensus algorithms, network communications, and task processing. It also introduces a hardware acceleration engine. It works with software to achieve high-performance tps through BOE technology (Blockchain Offload Engine) and consensus algorithm acceleration, data compression, and data encryption.
BOE makes HPB unique
In the HPB's overall architecture, compared with other blockchain infrastructures, there are obvious differences. One of the important points is its BOE technology.
BOE, mentioned above, is the Blockchain Offload Engine. The BOE engine includes BOE hardware, BOE firmware, and a matching software system. It is a heterogeneous processing system that achieves high performance and highly concurrent computational acceleration by combining the CPU's serial capabilities with the parallel processing capabilities of an FPGA/ASIC chip.
The BOE module parses TCP and UDP packets without involving the CPU, which saves CPU resources. The BOE module performs integrity checking, signature verification, and account balance verification on received messages such as transactions and blocks, performs fragmentation of large data to be transmitted, and encapsulates the fragments to ensure the integrity of received data. At the same time, statistics are kept on the received traffic of each TCP connection, and corresponding incentives are provided according to the contribution to the system.
BOE has played its own role in signature verification speed, encryption channel security, data transmission speed, network performance, and concurrent connections.
The BOE acceleration engine embeds the ECDSA module. The main purpose of this module is to improve the speed of signature verification. ECDSA is also an elliptic curve digital signature algorithm. Although it is a mature algorithm that is widely used at present, the pure software method can only be performed thousands of times per second and cannot meet the high performance requirements. So the combination of BOE and ECDSA is a good attempt.
In the process of data transmission between different nodes, BOE needs to establish an encrypted channel. In this process, it uses a hardware random number generator to implement the security of the encrypted channel, because the seed of the random number of the key exchange becomes unpredictable.
The BOE acceleration engine also uses block data fragmentation broadcasting technology. Block fragmentation includes a complete block header, which facilitates the broadcast of newly generated blocks to all nodes. With block data fragmentation, network data can be quickly transmitted between different nodes.
The BOE technology can perform traffic statistics of node connections based on hardware, and can calculate the network bandwidth provided by different nodes. Only nodes that provide network bandwidth to the system will have the opportunity to become high contribution value nodes. In this way, incentives for the contribution of the nodes are provided.
In terms of concurrency, BOE is expected to maintain more than 10,000 TCP sessions and handle 10,000 concurrent sessions through an acceleration engine. BOE's dedicated parallel processing hardware replaces the traditional software serial processing functions such as transaction data broadcasting, unverified blockwide network broadcasting, transaction confirmation broadcasting, and the like.
According to HPB estimates, through the BOE acceleration engine, the session response speed and the number of session maintenance can reach more than 100 times the processing power of the common computing platform node. If the actual environment can be achieved, it is a very significant performance improvement.
Consensus algorithm for internal and external bi-level elections
HPB not only significantly improves performance through BOE, but also adopts a dual-layer internal and external voting mechanism in consensus algorithms. It attempts to achieve more efficient consensus efficiency on the premise of ensuring security and privacy.
Outer election refers to the selection of high-contribution-value node members from many candidate nodes, and the election will use node contribution value evaluation indicators. Inner-layer election refers to an anonymous voting mechanism based on a hash queue. When a block is generated, it calculates which high-contribution value node preferentially generates a block. Nodes with high priority have the right to generate blocks preferentially.
So, how are high contribution value nodes chosen? This is where the contribution value evaluation indicators come in. The indicators include whether a BOE acceleration engine is configured, network bandwidth contribution (data throughput over a fixed period of time), reputation, and the total holding time of the node's tokens. The creditworthiness of the node is obtained through analysis of data such as participation in transactions, packaged blocks and transaction forwarding. The total holding time of the node's tokens can be obtained through real-time statistics on the account information.
The outer election adopts an adaptive, consistency-based election scheme. That is, by maintaining the consistency of the “ledger”, the consistency of the outer election is ensured; this reduces network synchronization and also makes use of the on-chain data of each node. First, the four evaluation indicators mentioned above are put into the block. By keeping the ledgers consistent, the current ranking of all participating candidate nodes can be calculated, and the higher-ranking nodes become the official high contribution value nodes in the next round.
With the formal high contribution value node, the goal of the inner election is to find the high contribution value node corresponding to each block as soon as possible. The entire process is divided into three phases: nominations, statistics, and calculations. These three phases combine security, privacy, and performance.
The first phase is nomination. At the beginning of the voting period, the BOE acceleration engine generates a random Commit. Each high contribution value node submits its Commit, and the Commit is synchronized with the chain generated by the high-performance nodes. The second phase is statistics: after the voting period is over, the Commits in the blockchain are collected and the ticket pool is created. The last phase is calculation, which mainly uses a weight algorithm to compute each node's block-generation priority. The node with the highest priority obtains the right to package the block.
Other nodes can verify the random number and address signature according to the principle of verifiable random function, which not only guarantees security, but also guarantees the unpredictability and privacy of high contribution value nodes.
In general, HPB's consensus algorithm combines security, privacy, and speed through a combination of hardware and software. Using the BOE acceleration engine to generate random numbers, contribution value evaluation indicators, coherence ledgers, anonymous voting mechanisms, weight algorithms, signature verification, etc., privacy, reliability, security, and high efficiency are achieved.
Universal virtual machine design: support for different blockchains
The HPB virtual machine adopts a plug-in design mechanism and can support multiple virtual machines. It can implement the combination of the underlying virtual machine and upper level program language translation and support, and support the basic application of virtual machines. In addition, the external interface of the virtual machine can be realized through customized API operations, which can interact with the account data and external data.
The advantage of this mechanism is that it can realize the high performance of native code execution when the smart contract runs, and it can also implement the common virtual machine mechanism supporting different blockchains. For example, it can support Ethereum virtual machine EVM. The smart contract on EVM can also be used on HPB.
Neo's virtual machine NeoVM can also be used on HPB. When high-performance scenarios are needed, users of both EVM and NeoVM need only a few adaptations to interact with other HPB applications.
HPB has also made some improvements to smart contracts, such as lifecycle management, auditing, and common templates. It realizes full lifecycle management of smart contracts, with complete and controllable process management and an integrated rights management mechanism for smart contract submission, deployment, use, and logout.
In smart contract auditing, HPB conducts a protective audit that combines automated tool auditing with professional code design. In terms of templates, HPB gradually formed a generic smart contract template to support the flexible configuration of various common business scenarios.
Incentives for a positive cycle of token economy
When a high contribution value node generates a block, it receives a token reward from the system. By design, the HPB system will issue no more than 6% additional tokens per year, and the newly issued tokens are proportional to the total number of high contribution nodes and candidate nodes.
In order to obtain the token reward from the system, it must first become a high contribution value node, and only the high contribution value node has the right to generate a block.
In order to obtain the right to generate a block, it is necessary to contribute, including holding a certain number of HPB tokens, having a BOE hardware acceleration engine, and contributing network bandwidth to the system.
From its mechanism, we can see that HPB's token economic system is designed to form a positive incentive loop: holding HPB tokens, running a BOE hardware acceleration engine, and contributing network bandwidth to the system all help maintain the safe operation of the overall HPB system.
HPB landing: supports a variety of high-frequency scenes
In essence, HPB is a high-performance blockchain platform and is an infrastructure where various blockchain applications can be explored. Including blockchain finance, blockchain games, blockchain entertainment, blockchain big data, blockchain anti-fake tracking, blockchain energy and many other fields.
In terms of finance, decentralized lending, decentralized asset management, etc. can all be built on the HPB platform to meet high-frequency lending and transaction scenarios.
In terms of games, although putting all game operations on chain is not practical, putting assets such as game props on chain and trading them is an important scenario. Once game assets are on chain, you can ensure that they are transparent, unique, cannot be tampered with, never disappear, and so on, providing great convenience for trading game items.
Compared with traditional centralized service providers, there are many advantages. For example, there is no need to worry about the loss, confiscation, or change of virtual game products. The transaction process is also simple and convenient. Since HPB has a high-performance blockchain, it is expected to support millions of concurrents, and many high-frequency scenarios can also be satisfied.
For blockchain entertainment, it can support the securitization of celebrity assets, such as star-related token assets. In terms of blockchain big data, it can support data rights confirmation, ensuring that the data owner controls ownership, and ensuring the authenticity, traceability and immutability of the data, finally enabling data transactions according to the needs of different entities while protecting personal privacy and data security.
Based on HPB's high-performance blockchain infrastructure, blockchain applications can be built in multiple scenarios. The HPB design provides a blockchain application program interface and application development package. In the HPB blockchain base layer, it provides blockchain data access and interactive interfaces, and supports various applications and development languages using JSON-RPC and RESTful APIs. It also supports multi-dimensional blockchain data query and transaction submission, and the interactive access interface can be integrated with the privilege control system.
The application development package includes comprehensive functional service packages that operate on blockchains based on different development languages. For example, it provides functional interfaces such as encryption, data signature, and transaction generation, and can seamlessly support integration and function expansion of various language service systems. , supports multiple language SDKs such as Java, JavaScript, Ruby, Python, and .NET.
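As an illustration only, a JSON-RPC 2.0 call from Python might look like the sketch below; the node URL and method name are hypothetical placeholders rather than documented HPB endpoints, so consult the official HPB API reference for the real ones.

# Illustrative only: NODE_URL and the method name below are placeholders,
# not documented HPB endpoints.
import json
import urllib.request

NODE_URL = "http://127.0.0.1:8545"   # assumed local JSON-RPC endpoint of an HPB node

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",     # placeholder, Ethereum-style method name
    "params": [],
    "id": 1,
}
request = urllib.request.Request(
    NODE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))   # e.g. {"jsonrpc": "2.0", "id": 1, "result": "0x..."}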
Conclusion
If blockchain is to reach a mainstream audience, it needs high-performance public chains or infrastructure to support a real application ecosystem. Ethereum's dream of building a decentralized ecosystem cannot be achieved on its current foundation; it is trying to improve performance and scalability through sharding, Plasma, and a proof-of-stake (PoS) consensus mechanism.
This status quo has also spawned other public-chain efforts, including EOS and HPB. Among them, HPB has adopted a distinctive combination of hardware and software: with dedicated BOE hardware acceleration, its signature verification speed, encrypted channel security, data transmission speed, network performance, and support for high concurrency all have advantages over software-only solutions.
On the software side, consensus algorithms based on internal and external elections, a flexible virtual machine design, application programming interfaces, and development kits together provide the infrastructure for building blockchain application scenarios.
Overall, HPB's goal is to provide high-performance infrastructure that brings blockchain to a mainstream audience. Only with such infrastructure can blockchains be deployed in many high-frequency scenarios, grow richer application ecosystems, and have a chance of reaching mainstream users.
The HPB team has a strong technical background. Founder Wang Xiaoming was an early blockchain evangelist who took part in founding UnionPay Big Data and Beltal, where he served as CTO. Co-founder and CTO Xu Li has more than 10 years of experience in chip-industry R&D and management; he was responsible for logic design, R&D, and FPGA chip marketing of core products at top global equipment suppliers and the world's largest component distributor. Technical VP Shu Shanlin previously worked at Inspur, a well-known Chinese server manufacturer, as chief embedded engineer, and has extensive R&D experience in embedded and low-level software. Another co-founder, Li Jinxin, is a former blockchain analyst at Guotai Junan with extensive experience in digital asset investment.
The team members' backgrounds fit HPB's combined software-and-hardware path. According to the latest monthly report, the basic PCB layout of the BOE board, the overall BOE architecture design, and the ECC acceleration scheme have been completed, and several rounds of testing of the BOE hardware acceleration engine have been finished.
Hopefully HPB will develop rapidly, carve out a path of its own in the coming competition over blockchain infrastructure, support more decentralized applications, and eventually build a prosperous ecosystem.
Risk Warning: Blue Fox articles do not constitute investment advice. Investing carries risk; readers are advised to examine projects in depth and make their own investment decisions carefully.
Source: https://mp.weixin.qq.com/s/RSuz6R6MTotEL_U__Al_Wg
submitted by azerbajian to HPBtrader [link] [comments]

ECDSA Playground

ECDSA Playground
ECDSA Playground https://8gwifi.org/ecsignverify.jsp

Elliptic Curve Digital Signature Algorithm or ECDSA is a cryptographic algorithm used by Bitcoin to ensure that funds can only be spent by their rightful owners.
This tool is capable of generating key pairs for the following curves:


https://preview.redd.it/9fwcnzijrgu11.png?width=1127&format=png&auto=webp&s=a4f36c49b74f3122b2bc903f7582c17ea041dec1
"c2pnb272w1", "c2tnb359v1", "prime256v1", "c2pnb304w1", "c2pnb368w1", "c2tnb431r1", "sect283r1", "sect283k1", "secp256r1", "sect571r1", "sect571k1", "sect409r1", "sect409k1", "secp521r1", "secp384r1", "P-521", "P-256", "P-384", "B-409", "B-283", "B-571", "K-409", "K-283", "K-571", "brainpoolp512r1", "brainpoolp384t1", "brainpoolp256r1", "brainpoolp512t1", "brainpoolp256t1", "brainpoolp320r1", "brainpoolp384r1", "brainpoolp320t1", "FRP256v1", "sm2p256v1" 
secp256k1 refers to the parameters of the elliptic curve used in Bitcoin’s public-key cryptography, and is defined in Standards for Efficient Cryptography (SEC).
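For reference, these domain parameters can also be inspected programmatically. The short sketch below uses the third-party python-ecdsa package (mentioned further down this page) to print secp256k1's prime, coefficients, generator, and order, which should match the OpenSSL dump shown later.

from ecdsa import SECP256k1

curve = SECP256k1.curve               # the curve y^2 = x^3 + a*x + b over the prime field F_p
G = SECP256k1.generator               # the base point G

print("p =", hex(curve.p()))          # field prime
print("a =", curve.a(), "b =", curve.b())
print("Gx =", hex(G.x()))
print("Gy =", hex(G.y()))
print("n =", hex(SECP256k1.order))    # order of G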
A few concepts related to ECDSA (a short Python sketch follows this list):
  • private key: A secret number, known only to the person that generated it. A private key is essentially a randomly generated number. In Bitcoin, a private key is a single unsigned 256 bit integer (32 bytes).
  • public key: A number that corresponds to a private key, but does not need to be kept secret. A public key can be calculated from a private key, but not vice versa. A public key can be used to determine if a signature is genuine (in other words, produced with the proper key) without requiring the private key to be divulged.
  • signature: A number that proves that a signing operation took place.
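To make these three concepts concrete, here is a minimal sketch using the python-ecdsa package on the secp256k1 curve; the message text is just an illustrative placeholder.

import hashlib
from ecdsa import SECP256k1, SigningKey

sk = SigningKey.generate(curve=SECP256k1)    # private key: a random 256-bit integer
vk = sk.get_verifying_key()                  # public key: derived from the private key

message = b"send 1 BTC to Alice"             # placeholder message
signature = sk.sign(message, hashfunc=hashlib.sha256)

# Anyone holding only the public key can check that the signature is genuine.
assert vk.verify(signature, message, hashfunc=hashlib.sha256)

print("private key:", sk.to_string().hex())  # 32 bytes
print("public key :", vk.to_string().hex())  # 64 bytes: x and y coordinates concatenated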
OpenSSL: Generating EC Keys and Parameters
$ openssl ecparam -list_curves
  secp256k1 : SECG curve over a 256 bit prime field
  secp384r1 : NIST/SECG curve over a 384 bit prime field
  secp521r1 : NIST/SECG curve over a 521 bit prime field
  prime256v1: X9.62/SECG curve over a 256 bit prime field
An EC parameters file can then be generated for any of the built-in named curves as follows:
$ openssl ecparam -name secp256k1 -out secp256k1.pem
$ cat secp256k1.pem
-----BEGIN EC PARAMETERS-----
BgUrgQQACg==
-----END EC PARAMETERS-----
To generate a private/public key pair from a pre-existing parameters file, use the following:
$ openssl ecparam -in secp256k1.pem -genkey -noout -out secp256k1-key.pem
$ cat secp256k1-key.pem
-----BEGIN EC PRIVATE KEY-----
MHQCAQEEIKRPdj7XMkxO8nehl7iYF9WAnr2Jdvo4OFqceqoBjc8/oAcGBSuBBAAK
oUQDQgAE7qXaOiK9jgWezLxemv+lxQ/9/Q68pYCox/y1vD1fhvosggCxIkiNOZrD
kHqms0N+huh92A/vfI5FyDZx0+cHww==
-----END EC PRIVATE KEY-----
Examine the specific details of the parameters associated with a particular named curve
$ openssl ecparam -in secp256k1.pem -text -param_enc explicit -noout
Field Type: prime-field
Prime:
    00:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:
    ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:fe:ff:
    ff:fc:2f
A:    0
B:    7 (0x7)
Generator (uncompressed):
    04:79:be:66:7e:f9:dc:bb:ac:55:a0:62:95:ce:87:
    0b:07:02:9b:fc:db:2d:ce:28:d9:59:f2:81:5b:16:
    f8:17:98:48:3a:da:77:26:a3:c4:65:5d:a4:fb:fc:
    0e:11:08:a8:fd:17:b4:48:a6:85:54:19:9c:47:d0:
    8f:fb:10:d4:b8
Order:
    00:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:
    ff:fe:ba:ae:dc:e6:af:48:a0:3b:bf:d2:5e:8c:d0:
    36:41:41
Cofactor:    1 (0x1)
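The key pair produced by these OpenSSL commands can also be used directly from code. The small sketch below assumes the secp256k1-key.pem file generated above is in the working directory and loads it with the python-ecdsa package.

import hashlib
from ecdsa import SigningKey

with open("secp256k1-key.pem") as f:
    sk = SigningKey.from_pem(f.read())       # parses the EC PRIVATE KEY block above

vk = sk.get_verifying_key()
message = b"hello openssl"
signature = sk.sign(message, hashfunc=hashlib.sha256)
print(vk.verify(signature, message, hashfunc=hashlib.sha256))   # True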
submitted by anish2good to u/anish2good [link] [comments]

InterValue: Analysis Of A New Anti-quantum Attack Cipher Algorithm

InterValue: Analysis Of A New Anti-quantum Attack Cipher Algorithm
https://preview.redd.it/50gpnoe1wdl11.jpg?width=900&format=pjpg&auto=webp&s=c636ddc4a1c49658cba067084009e557a113b8a8

InterValue aims to provide infrastructure for a global Internet of value. To address the various problems of existing blockchain infrastructure, InterValue optimizes blockchain protocols and mechanisms at every layer, building a supporting protocol for a value-transmission network. The InterValue 2.0 testnet has now been released, with a newly designed and implemented HashNet consensus mechanism: transaction throughput exceeds 280,000 TPS within a single shard and 4 million TPS across the whole network. Security, and in particular resistance to quantum attacks, is a highlight of InterValue's goal of building low-level infrastructure for the entire ecosystem.
What is a quantum attack?
Quantum computing is a new way of building computers that uses the quantum properties of particles to perform operations on data, in some cases with extraordinary algorithmic speedups. This is what makes certain problems that are hard on classical computers easy to solve on a quantum computer. That superior computing power threatens the security of existing public-key cryptography, whose security rests on computational hardness. This is what a quantum attack means.
What does anti-quantum attack mean?
Algorithms have always been the core foundation of blockchain technology, and most algorithms in use today cannot withstand quantum attacks, meaning all of a user's information would be exposed to a quantum computer. An anti-quantum algorithm means that personal information stays safe: at least with current technology, it cannot be cracked. Anti-quantum algorithms mean security. The impact of quantum attacks on digital currencies would be devastating: they would break the existing information-security system, expose digital assets including mining proceeds, and crack wallet keys so that wallets are no longer secure. In short, the existing security system would collapse. It is therefore imperative to develop quantum-resistant algorithms in advance, as a necessary technical means of protecting user privacy.
At the quantum-resistance level, InterValue uses new anti-quantum cryptographic algorithms: it replaces the ECDSA signature algorithm with the NTRUsign signature algorithm based on integer lattices, and replaces the existing SHA-series algorithms with the Keccak-512 hash algorithm, reducing the threat posed by fast quantum computation.

Adopting the NTRUsign digital signature algorithm
Current ECDSA signature algorithm
Current blockchains mainly use the elliptic-curve-based ECDSA digital signature algorithm. The scheme works as follows: first, a public/private key pair is generated; the user keeps the private key and can distribute the public key to others. Second, the private key is used to sign a specific message. Finally, anyone holding the public key can verify the signature. ECDSA has the advantages of small system parameters, fast processing, small keys, strong resistance to classical attacks, and low bandwidth requirements. However, a quantum computer can run Shor's algorithm very efficiently against ECDSA, so ECDSA cannot resist quantum attacks.
Adopting the new NTRUSign-251 signature algorithm
At present, public-key cryptosystems that resist Shor's quantum algorithm mainly include lattice-based public-key cryptography, code-based systems represented by the McEliece cryptosystem, and multivariate-polynomial systems represented by MQ public-key cryptography. The security of the McEliece cryptosystem rests on the hardness of decoding error-correcting codes; it is strong in security but computationally inefficient. The MQ cryptosystem, that is, the multivariate quadratic polynomial public-key cryptosystem, relies on the intractability of solving multivariate quadratic equations over finite fields and has clear disadvantages in terms of security. In contrast, lattice-based public-key systems are simple, fast, and require little storage. InterValue uses NTRUSign-251, a signature algorithm based on lattice theory. The algorithm proceeds as follows:

https://preview.redd.it/byyzx8k3wdl11.png?width=762&format=png&auto=webp&s=d454123cabbe730271b66362a55e17b861ad50b4
It has been shown that breaking the NTRUSign-251 signature algorithm is ultimately equivalent to solving the shortest vector problem in a 502-dimensional integer lattice. Shor's algorithm does not apply to the shortest vector problem, and no other fast quantum solution is known; the best heuristic algorithms remain exponential, and the time complexity of attacking NTRUSign-251 is about 2^168. InterValue therefore uses NTRUSign-251, which can resist Shor-style attacks under quantum computing.

Adopting the Keccak512 hash algorithm
The common anti-quantum hash algorithm
The most effective quantum attack on hash algorithms is Grover's algorithm, which reduces the attack complexity of a hash function from O(2^n) to O(2^(n/2)). Bitcoin currently uses RIPEMD-160, whose output is only 160 bits, so under a quantum attack (roughly 2^80 work) the hash used in the currency system is not safe. An effective way to resist quantum attacks is to blunt Grover's algorithm by increasing the hash output length; it is generally believed that a hash algorithm can effectively resist quantum attacks as long as its output is at least 256 bits. Beyond quantum attacks, a number of widely used hash functions such as MD4, MD5, SHA-1, and HAVAL have been broken by classical techniques such as differential analysis, modular differences, and message-modification attacks, so a blockchain's hash algorithm must also resist traditional attacks.
The winning hash algorithm: Keccak512
Early blockchain projects such as Bitcoin, Litecoin, and Ethereum used SHA-series hash algorithms that have known (though not fatal) design weaknesses. More recent blockchain projects have instead adopted algorithms from the SHA-3 competition run by the U.S. National Institute of Standards and Technology (NIST), a new family of hash algorithms.
InterValue adopts Keccak512, the winning algorithm of the SHA-3 competition, which incorporates many of the latest design ideas in hash functions and cryptography. Its design is simple, which makes hardware implementation convenient. The algorithm was submitted by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche in October 2008. Keccak512 uses the standard sponge construction, which maps an input of arbitrary length to a fixed-length output, and it is fast, averaging 12.5 cycles per byte on an Intel Core 2 processor.

https://preview.redd.it/z0nnrjp4wdl11.jpg?width=724&format=pjpg&auto=webp&s=bef29aafeb1ef74b21bacb6db3f07987bf0a7ba5
As shown in the figure, in the absorbing phase of the sponge construction each message block is XORed into the r rate bits of the state, combined with the fixed c capacity bits into a 1600-bit state, and processed by the round function f; the construction then moves to the squeezing phase, where a fixed n-bit output is produced by iterating 24 rounds. Only the last step of each round R applies the round constant, and the round constant is often ignored in collision attacks. The algorithm has been shown to have good differential properties, and so far third-party cryptanalysis has revealed no security weaknesses in Keccak512. Under a quantum computer, the complexity of a first-preimage attack on Keccak512 is 2^256 and that of a second-preimage attack is 2^128, so by adopting Keccak512 InterValue can resist Grover's algorithm under quantum computing.
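As a rough illustration of the long output that blunts Grover's algorithm, the sketch below hashes a sample payload with Keccak-512 using the third-party pycryptodome package. This is only a stand-in, not InterValue's own implementation, and note that Keccak-512 with its original padding differs from the standardized SHA3-512.

from Crypto.Hash import keccak        # third-party pycryptodome package

h = keccak.new(digest_bits=512)
h.update(b"InterValue sample transaction payload")   # placeholder payload
print(h.hexdigest())                  # 128 hex characters = 512 bits of output
# A 512-bit output leaves Grover's algorithm roughly 2**256 work for a preimage,
# comfortably above the 256-bit threshold mentioned above.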

Final thoughts
Quantum computing has taken forty years to move from theory toward practice, and in terms of accumulated technology, business environment, and performance it is now passing from quantitative to qualitative change. For blockchain, the most lethal threat is not investor doubt but the accelerating development of quantum computers. Quantum computers are likely to upend the traditional technical path of classical computing and open a much larger field of development. We are mindful of their destructive power over existing blockchains, and we look forward to helping the whole blockchain industry shape a new ecosystem. As we enter the new era of the "quantum age, trusted society", the InterValue team believes that only by thoroughly understanding the essence of quantum cryptography (quantum communication) and post-quantum cryptography can we plan ahead calmly and from a position of strength.
submitted by intervalue to InterValue [link] [comments]

  • Elliptic Curve Diffie Hellman - YouTube
  • elliptic curve addition
  • Blockchain tutorial 11: Elliptic Curve key pair generation ...
  • rfid via ecdsa
  • Elliptic Curve Cryptography Overview - YouTube

The reason the hackers were able to do so was that Sony misimplemented ECDSA by using a static nonce instead of choosing a random one for every signature. Security of ECDSA: a note on existential unforgeability. There is no known proof of ECDSA's security in the RO model. This may be surprising, given ECDSA's usage in bitcoin ... I am using python-ecdsa incorrectly; No, the python-ecdsa package just uses a different encoding and you are surprised. The bitcoin wiki is incorrect; No, the bitcoin wiki assumed a particular encoding chosen for their protocol. python-ecdsa is not implemented correctly; Definitely not an implementation bug, at least not with regard to the size of the signature. Bitcoin uses very large numbers for its base point, prime modulo, and order. In fact, all practical applications of ECDSA use enormous values. The security of the algorithm relies on these values ... There is no known algorithm for determining x, so your only option is to keep adding P to itself until you get X, or keep subtracting P from X until you get P. On average, x will be somewhere between 0 and 2²⁵⁶-1, about 2¹²⁸. Thus, it will take you on average 2¹²⁸ point addition operations to determine x no matter your approach. Even if your computer could do one trillion point ... ECDSA ('Elliptic Curve Digital Signature Algorithm') is the cryptography behind private and public keys used in Bitcoin. It consists of combining the math behind finite fields and elliptic ...
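To see why a repeated nonce is fatal, here is a self-contained sketch, assuming Python 3.8+ and the python-ecdsa package, that signs two messages with the same k and then recovers the private key from the two signatures alone; this is the same class of mistake described above.

import hashlib
from random import SystemRandom

from ecdsa import SECP256k1
from ecdsa.ecdsa import Private_key, Public_key

G = SECP256k1.generator
n = SECP256k1.order
rng = SystemRandom()

def h(message: bytes) -> int:
    # Hash the message and interpret the digest as an integer, as ECDSA does.
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

# The victim's key pair.
d = rng.randrange(1, n)                       # private key
pub = Public_key(G, G * d)
priv = Private_key(pub, d)

# The mistake: the same nonce k is reused for two different messages.
k = rng.randrange(1, n)
h1, h2 = h(b"message one"), h(b"message two")
sig1, sig2 = priv.sign(h1, k), priv.sign(h2, k)
assert sig1.r == sig2.r                       # the identical r value betrays the reuse

# The attacker works only from public data: the two signatures and message hashes.
r, s1, s2 = sig1.r, sig1.s, sig2.s
k_recovered = (h1 - h2) * pow(s1 - s2, -1, n) % n
d_recovered = (s1 * k_recovered - h1) * pow(r, -1, n) % n
assert d_recovered == d
print("recovered private key:", hex(d_recovered))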


Elliptic Curve Diffie Hellman - YouTube

  • The RSA Encryption Algorithm (1 of 2: Computing an Example)
  • ECDSA Authentication Explained Using ATECC108 (CryptoAuthentication)
  • Verifying a card via ECDSA; testing the response on a Cortex-M0. Total code size 21 kB.
  • A short video I put together that describes the basics of the Elliptic Curve Diffie-Hellman protocol for key exchanges. There is an error at around 5:30 where...
  • This is part 11 of the Blockchain tutorial explaining how to generate a public/private key pair using Elliptic Curve. In this video series different topics will ...
  • Elliptic Curve Digital Signature Algorithm ECDSA, Part 10, Cryptography Crashcourse (Dr. Julian Hosp)
