As a casual observer of Firefox development (well, maybe not casual, I skim through the pushlogs occasionally), I find it interesting to compare this to some Firefox security bug guidelines.[0][1]
Although it looks like in these cases the issues were reverse-engineered in <1 day (making release timing less of a solution), two items in those guidelines stand out to me as potentially helpful: "land tests in-tree later" and "bundle with other changes in the same area."
These kinds of security-by-obscurity guidelines are well-known and oft-touted, but only debatably useful when dealing with a codebase as closely monitored and security-sensitive as V8. Many professionals instead recommend full disclosure as early as possible, including publishing CVEs and vulnerability warnings, so that affected users can take appropriate steps as soon as a patch is available or deploy mitigations (for example, in very severe cases, disabling the V8 JIT, which is a well-known RCE mitigation).
Also, in this case, both bugs would need to be combined with a zero-day Chrome sandbox escape vulnerability to achieve exploitability.
If anything, I'm more worried that the fixes weren't backported more urgently and that CVEs do not seem to have been assigned. But this seems to have been a success for defense in depth, at least.
The reason for omitting tests is that they are essentially the exploits themselves; omission raises the barrier to entry, as somebody has to be skilled enough to craft an exploit based on the source code change alone.
While security fixes are typically pushed out on all supported branches within a very short window of time, anything that buys time is useful imho.
Sibling comments address this pretty well, but the ideas I mentioned still seem like good ideas for a few reasons:
- The article mentions the sample exploit is based on the test case
- The point of obfuscation isn't to prevent exploitation, it's to delay it
- A sibling mentions the delay between patch and release as an issue; given that the delay here was only ~1 day, the fix seems to have been intentionally merged right before a release, so buying even a day (or two or three, to account for time to update) would be enough to mitigate most of the impact
There's a big difference between reading a snippet of JS and understanding the inner workings of a JIT, especially under time pressure.
Generally, security fixes like this are back-merged to the stable branch almost immediately, i.e. within hours. The issue is that the stable branch isn't integrated into Chromium, built, and released into stable until the next spin, which could be a couple days or even weeks.
The principal sin, IMHO, is that JavaScript technically has only one number type: double. But doubles are pretty slow, particularly when used as array indexes, so JS engines do a ton of numerical analysis (range analysis) to determine when it is safe to do integer math. That reasoning is extremely tricky, and I've made gazillions of bugs myself. (I'm the original author of the SimplifiedLoweringPass, which has evolved significantly since I last worked on that part of the code, circa 2015.) Unfortunately, static analysis of any kind doesn't really help, and the implementation language (C++) isn't really at fault, because the bugs manifest at the compiler IR level, i.e. in how it reasons about JS values in the representation of JS code.
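To make the range-analysis point concrete, here's a minimal sketch in plain JS (nothing like V8's actual IR or code, just an illustration) of the kind of interval reasoning involved: the compiler tracks a [min, max] range per value and uses it to decide whether plain int32 math is safe.

    // Hypothetical sketch, not V8 code: track a [min, max] interval per value.
    const INT32_MIN = -(2 ** 31);
    const INT32_MAX = 2 ** 31 - 1;

    // Adding two ranges: the endpoint sums can exceed int32 even when both
    // inputs fit, exactly the kind of corner case where these bugs hide.
    function addRanges(a, b) {
      return { min: a.min + b.min, max: a.max + b.max };
    }

    function fitsInt32(r) {
      return r.min >= INT32_MIN && r.max <= INT32_MAX;
    }

    // If the compiler wrongly concludes fitsInt32(...) holds for a value
    // that later indexes an array, the bounds check it elided on that
    // basis becomes an out-of-bounds access.
    fitsInt32(addRanges({ min: 0, max: 2 ** 30 }, { min: 0, max: 2 ** 30 })); // false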
BigInt isn't super relevant here, though: it's new and a distinct type, so it doesn't come into play for any typical computations. You can't use it as the size of an array, etc.
Technically it’s a number. But a BigInt can be much larger than a 64-bit integer, so from the CPU’s perspective it’s an array rather than a single integer.
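A few lines show how separate the two types actually are (standard behavior, easy to check in any modern engine):

    typeof 1n;              // "bigint": a distinct primitive, not "number"
    2 ** 53 + 1;            // 9007199254740992: the double loses precision
    2n ** 53n + 1n;         // 9007199254740993n: BigInt stays exact
    // 1n + 1;              // TypeError: cannot mix BigInt and other types
    // new ArrayBuffer(8n); // TypeError: cannot convert a BigInt to a number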
This is also why wasm in the browser is not especially "safe": it goes through the same JIT optimization and execution pipeline as JavaScript, so vulnerabilities there are sometimes usable from wasm.
Speed is a tradeoff, and it's why any system that embeds V8's (or other engines') wasm runtimes must be able to update.
Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
It frustrates me too! No scrollbar either. I noticed once I clicked on the main content area it scrolled normally (by which I mean with the keyboard), but still... unnecessary frustration.
Is there some reason this sort of user-hostile design is becoming more and more common?
Is there something that I can do, client side perhaps, to fix it? I don't know anything about how the modern web works, but perhaps there's an element I can remove with uBlock Origin to restore sane behavior?
I'm aware of the HN guidelines re: complaining about tangential annoyances, so apologies in advance if this (my) comment is "part of the problem", but I suspect the readership here would be the sort of people who might have an answer.
Edit: Making matters even worse, I can't even save the page as a PDF for reading later! It only captures what's visible at the time. :( Somebody oughta do something about this...
I noticed the wonky scroll behavior too, but whether it counts as broken is debatable.
On most web pages, you can scroll the entire document by putting the cursor anywhere on the page.
On this web site, you can only scroll if your cursor is within the <main> element, which has a rather narrow width. So most of the page area is not scrollable. Also, the CSS hides the scrollbar so that no affordances are given.
The author was probably trying to make the header and sidebar sticky by doing that; there are better ways, though, such as literally the position: sticky CSS property.
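As for a client-side fix: a rough devtools-console workaround might do it. This is a sketch assuming the page clips scrolling via overflow/height rules on the document and <main>; the selectors and properties are guesses, not taken from the actual site's CSS.

    // Hypothetical console workaround; adjust selectors to the actual page.
    for (const el of [document.documentElement, document.body]) {
      el.style.overflow = 'visible';
      el.style.height = 'auto';
    }
    const main = document.querySelector('main');
    if (main) {
      main.style.overflow = 'visible';
      main.style.height = 'auto';
    }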
It's easy to turn off the JIT if you want - you can even do it on the iPhone now. It makes the web dogshit slow and is a fundamental nonstarter of an idea.
A slightly more feasible way forward here is replacing JavaScript with, dear god, anything else, but that's a long way off. WebAssembly seems to have stalled.
Are you on a very old iPhone model? I haven’t found disabling the JIT to be noticeably slower on my iPhone 13 running iOS 16 in “Lockdown Mode”. The Microsoft Edge team has benchmarks suggesting the impact is minimal on Chromium-based browsers too (https://microsoftedge.github.io/edgevr/posts/Super-Duper-Sec...).
For the stuff JavaScript code does directly? Especially now that we have WASM sitting around for heavy math? Hell yeah, make it 2x slower. We've advanced the speed so much and have plenty to spare.
...but if that only solves half the vulnerabilities I don't think it's worth it. Still worth keeping in mind in case that ratio shifts significantly.
[0] https://firefox-source-docs.mozilla.org/bug-mgmt/processes/s...
[1] https://firefox-source-docs.mozilla.org/bug-mgmt/processes/f...