The company uses a decentralized finance protocol known as MonoX that lets users trade digital currency tokens without some of the requirements of traditional exchanges. Specifically, the hack exploited the fact that the same token could be supplied as both tokenIn and tokenOut, the swap parameters that specify which token a user sends and which token they receive in exchange. MonoX updates prices after each swap by calculating new prices for both tokens. When the swap is completed, the price of tokenIn—that is, the token sent by the user—decreases and the price of tokenOut—or the token received by the user—increases.
By using the same token for both tokenIn and tokenOut, the hacker greatly inflated the price of the MONO token, because the price update for tokenOut overwrote the price update for tokenIn. The attack succeeded despite MonoX having received three security audits this year. Smart contracts need testable evidence that they do what you intend, and only what you intend. That means defined security properties and techniques employed to evaluate them.
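The overwrite can be illustrated with a small sketch. This is a simplified Python model of the flawed update sequence described above, not MonoX's actual Solidity code; the function and variable names are hypothetical:

```python
# Simplified model of the flawed price update: after a swap, tokenIn's
# price is decreased and tokenOut's price is increased.
def swap_update(prices, token_in, token_out, drop, rise):
    new_in = prices[token_in] - drop      # tokenIn: price goes down
    new_out = prices[token_out] + rise    # tokenOut: price goes up
    prices[token_in] = new_in
    prices[token_out] = new_out           # same token: this overwrites the drop

prices = {"MONO": 100.0}
for _ in range(10):                       # "swap" MONO for MONO repeatedly
    swap_update(prices, "MONO", "MONO", 5.0, 5.0)
print(prices["MONO"])                     # only the increase survives: 150.0
```

Because the second assignment lands on the same key, each round nets a pure price increase, which is exactly the inflation the attacker needed.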
The Daily Swig has reached out to the security researcher for more comment.

Jessica Haworth
This issue can prevent users from storing the block chain and participating in the Bitcoin network. Users affected by the bug will typically see an error to the effect of "Corruption: block checksum mismatch" or "missing start of fragmented record(2)". These log messages both stem from the same underlying problem within LevelDB, which writes its files sequentially and never modifies them in place: data is never overwritten, only recreated in another file.
In a straightforward implementation, this sequential writing could be accomplished with the write system call. Instead, LevelDB buffers output through memory-mapped regions of a fixed size, such that the number of pages mapped in any one file never exceeds a particular constant. At any given time, the writing process is filling whichever region is mapped; it then unmaps the region before moving on to the next. Of course, it's possible for a user's write to straddle the boundary between the current region and the next region to be mapped.
When such an overlap occurs, the writer places whatever it can from the user's write into the current region, unmaps the current region, maps the next region, and finishes placing the data. On Linux, this sequence of operations is safe because the virtual memory subsystem keeps these pages around until they can be written to disk. Programmers refer to these pages with unwritten data as being "dirty". However, it's entirely possible for a POSIX-conformant system to simply discard any dirty pages that the user unmapped.
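The straddling case can be sketched as follows. This is a Python model of the splitting logic only; the region size and function name are illustrative, and the real implementation operates on mmap'd buffers:

```python
REGION = 4096  # illustrative fixed region size

def split_write(offset, data, region=REGION):
    """Split one user write into chunks that never cross a region
    boundary, mirroring the fill / unmap / map-next sequence."""
    chunks = []
    while data:
        room = region - (offset % region)      # space left in current region
        head, data = data[:room], data[room:]  # what fits now vs. what waits
        chunks.append((offset, head))
        offset += len(head)                    # next chunk starts at the boundary
    return chunks

# A 100-byte write starting 40 bytes before a boundary straddles two regions.
print([(off, len(buf)) for off, buf in split_write(4056, b"x" * 100)])
# [(4056, 40), (4096, 60)]
```

The tail chunk is exactly the data placed into the freshly mapped region, which is why a lost dirty page manifests as a hole just before a boundary.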
And therein lies the problem. On Mac OS X, the dirty pages seem to be discarded without being flushed to disk. If the munmap call discards the data at the tail of a memory-mapped region, there will be zero-filled holes in the output file. For SST files, these zero-filled holes will likely cause block checksums to not match, triggering the first error message. Manifest and log files, to which the second error above refers, use a slightly different on-disk format to aid in error recovery.
Each write is represented by one or more "fragments" such that no fragment extends across a 32KB boundary. Whenever a write would extend across a 32KB boundary, it is broken up into at least two consecutive fragments, one on each side of the boundary. Any data discarded is most likely to be immediately prior to the 32KB boundary. When reading the log or manifest, the reader will miss the first fragment (it's all zeroes), but will see the second fragment and complain that the first fragment is missing.
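A toy reader makes the failure mode concrete. This is a hedged sketch: the fragment kinds and the skipping of all-zero data follow the description above, not LevelDB's exact record format:

```python
def read_record(frags):
    """frags: list of (kind, payload) fragments. An all-zero payload
    models the hole left where a discarded dirty page used to be."""
    saw_first = False
    for kind, payload in frags:
        if all(b == 0 for b in payload):   # zero-filled hole: reader skips it
            continue
        if kind == "LAST" and not saw_first:
            return "missing start of fragmented record(2)"
        if kind == "FIRST":
            saw_first = True
    return "ok"

# A record straddling a 32KB boundary whose FIRST fragment was zeroed out:
print(read_record([("FIRST", bytes(16)), ("LAST", b"tail-of-record")]))
# missing start of fragmented record(2)
```

With both fragments intact the reader returns "ok"; zeroing only the fragment before the boundary reproduces exactly the second error message users report.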
The short-term fix is to always sync data to disk before calling munmap. Our patch does this for OS X, but the fix could easily be applied on other systems, too. We're already in talks with the upstream LevelDB authors about a more portable solution to avoid further errors of this kind. Since the bounty was posted, people have come out of the woodwork to propose various non-fixes to this problem.
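The same sync-before-unmap discipline can be demonstrated from Python, whose mmap.flush wraps msync (the actual patch is C++ inside LevelDB's environment layer):

```python
import mmap
import os
import tempfile

# Write through a memory mapping, then flush dirty pages *before* unmapping.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)                # size the file so it can be mapped
    m = mmap.mmap(fd, 4096)
    m[:5] = b"hello"
    m.flush()                             # msync: dirty pages reach the disk
    m.close()                             # munmap: now safe on any POSIX system
    with open(path, "rb") as f:
        print(f.read(5))                  # b'hello'
finally:
    os.close(fd)
    os.remove(path)
```

Dropping the flush() call is harmless on Linux, which keeps dirty pages alive after munmap; the bug report is precisely that this assumption does not hold on OS X.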
In this section, we explain why these proposals do not explain the behavior and why they do not fix it. I first learned of this bug many months back in a manner unrelated to Bitcoin. I've been trying to find a live sample of this corruption for a few months.
Because OS X is a development, but not a production, platform for our users, it was hard to isolate a live test case. This Pull Request comment makes the purpose clear (emphasis mine):

As a result, we now have some redundancy in the block double-spending consensus code, as Cases 1B and 2B are checked twice: once in CheckTransaction and once in ConnectInputs. The goal of this change was to distinguish between consensus errors, like double spending, and system errors, like running out of disk space, as this PR comment makes clear (emphasis mine):
By this time, ConnectInputs had been modularized into multiple methods, and this function became the one checking for double-spending. The key change here is that what was once an error became an assert, which halts the program entirely. Why would a programmer want to halt the program here? Well, this is where the purpose of the Pull Request comes in. Here is the relevant snippet of code from that time; it handles Cases 1B and 2B as before. As we already saw, UpdateCoins does the second double-spend check.
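The behavioural difference between the two styles can be sketched like this (a hedged Python illustration, not Bitcoin Core's actual code; the function names are hypothetical):

```python
# Before: a consensus failure is reported to the caller, which rejects
# the offending block and keeps running.
def update_coins_as_error(input_already_spent):
    if input_already_spent:
        return False              # caller treats the block as invalid
    return True

# After: the same condition is treated as evidence of local corruption,
# so the program halts rather than risk writing bad data.
def update_coins_as_assert(input_already_spent):
    assert not input_already_spent   # AssertionError aborts the node
    return True

print(update_coins_as_error(True))   # False: block rejected, node keeps going
try:
    update_coins_as_assert(True)
except AssertionError:
    print("node halted")
```

The error path keeps the node alive; the assert path trades availability for the guarantee that a corrupted state is never written out.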
CheckBlock does the first double-spend check by calling CheckTransaction. Indeed, that seems to be the reason for the change to an assert. Thus, PR correctly surmised that reaching this state in UpdateCoins must be a system error, not a consensus error. In that case, to prevent further data corruption, the correct thing to do is to halt the program. In , PR was introduced as part of Bitcoin 0. As Segwit was going to make blocks larger, this was one of many changes intended to speed up block validation times.
The code change was pretty small: you can see that the boolean fCheckDuplicateInputs was added to speed up block checking. Unfortunately, the check in UpdateCoins had been changed by PR into a system-corruption check and was no longer meant to be a consensus check. What was once a redundant check was now solely responsible for catching a block-level single-tx double-spend (Case 2B), and it halts the program. This still technically enforces the consensus rules, just very badly, by halting the program.
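The interaction can be sketched as follows (simplified Python, not the real C++; the outpoint tuples and the helper are illustrative stand-ins, with the skip flag playing the role of fCheckDuplicateInputs):

```python
def check_transaction(inputs, check_duplicate_inputs=True):
    """Reject a transaction that spends the same outpoint twice,
    unless the caller opts out of the (expensive) duplicate scan."""
    if check_duplicate_inputs:
        seen = set()
        for outpoint in inputs:           # outpoint = (txid, output index)
            if outpoint in seen:
                return False              # Case 2B: in-transaction double-spend
            seen.add(outpoint)
    return True

double_spend = [("aa", 0), ("aa", 0)]
print(check_transaction(double_spend))                                # False
# Block validation skipped the scan for speed, so Case 2B slips through:
print(check_transaction(double_spend, check_duplicate_inputs=False))  # True
```

With the flag off during block validation, the duplicate-input transaction reaches the later machinery with no consensus-level check left except the assert.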
How did PR get through? Greg Maxwell referred me to this chat on IRC. TL;DR: when discussing PR , the developers were predisposed to think that a block-level single-tx double-spend (Case 2B) was being checked somewhere other than PR , without taking PR into account. This caused the developers not to look as closely at PR , and it meant that Core 0. was vulnerable.
Because of where the code is situated, to cause a crash, an attacker would have to:

At best, as an attacker, you take down a narrow slice of full nodes, at the cost of mining a block that the rest of the network rejects, and keeping them down requires the attacker to continue creating such blocks at the same cost. In other words, while the vulnerability is certainly there, the economic incentives for a DoS were pretty low.
Starting at 0., instead of crashing when a block containing a single-transaction double-spend came in, the software saw the block as valid. This means that a pathological transaction (the same UTXO being spent multiple times in the same transaction, or Case 2B above), which crashes 0., would instead be accepted. Introduced in 0., the reworked UTXO storage brought many changes, including one to the UpdateCoins function from earlier. Notice how the code around the assert was taken out entirely.
Noticing this, PR , also in 0., changed this code again. The conditions under which the assert fails now depend on inputs.SpendCoin, which looks like this:

These are not obvious terms, but thankfully, core developer Andrew Chow explains here:

FRESH coins are ones that entered the memory pool.

An attacking miner can crash the nodes through that assert statement in UpdateCoins.
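One way to see the two failure modes is with a toy cache. This is a drastically simplified Python stand-in for the coin cache, hedged heavily: the flag handling follows the description above, not the real C++ data structures:

```python
FRESH, DIRTY = "FRESH", "DIRTY"

class ToyCoinCache:
    def __init__(self):
        self.entries = {}                      # outpoint -> [coin, flag]

    def add(self, outpoint, coin, fresh):
        self.entries[outpoint] = [coin, FRESH if fresh else DIRTY]

    def spend_coin(self, outpoint):
        entry = self.entries.get(outpoint)
        if entry is None:
            return False                       # nothing there to spend
        if entry[1] == FRESH:
            del self.entries[outpoint]         # parent never saw it: erase
        else:
            entry[0] = None                    # keep a DIRTY "spent" marker
        return True                            # even if it was already spent!

def update_coins(cache, outpoint):
    assert cache.spend_coin(outpoint)          # the assert discussed above

# FRESH coin spent twice: second spend finds nothing -> assert -> crash (DoS).
c = ToyCoinCache(); c.add("utxo", "50 BTC", fresh=True)
update_coins(c, "utxo")
try:
    update_coins(c, "utxo")
except AssertionError:
    print("node crashed")

# Non-FRESH coin spent twice: both spends "succeed" -> the inflation path.
c = ToyCoinCache(); c.add("utxo", "50 BTC", fresh=False)
update_coins(c, "utxo"); update_coins(c, "utxo")
print("double-spend accepted")
```

In this model the same assert either halts the node or silently waves the duplicate spend through, depending only on whether the coin happened to be marked FRESH.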
The economics of this attack seem significantly better than in the denial-of-service case, as the attacker could potentially create BTC out of thin air. You still need mining equipment to execute the attack, but the potential for inflation might make this worthwhile, or so it seems. Because of these irregularities, however, people on the network would soon have tracked this down, probably alerted some developers, and the core developers would have fixed it. If there was a fork, the social consensus about which is the right chain would start getting discussed, and the chain creating unexpected inflation would likely have lost out.
If there was a stall, there likely would have been a voluntary rollback to punish the attacker. If the attacker had double-spent a bigger amount, say BTC, there would be even less chance that the inflation block would stick around, as the attack would be much more blatant.