I've no experience with Synology and have no opinion regarding their motivations, execution, or handling of customers.
However...
Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.
[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)
Synology is consumer and SMB focused, though. For high-end storage that level of integration makes sense, but for Synology it's just not something most of their customers care about or want.
That being said, there aren't many major HDD manufacturers anymore, nor do they have many models. Synology is using vanilla linux features like md and lvm. You don't think those manufacturers have tested their drives against vanilla linux?
One man's ongoing journey coaxing an LLM to write Common Lisp code. Bonus "AI generated" poem by Stanislaw Lem, and a five-paragraph story actually generated by LLM: "The Unspeakable Syntax: A Tale of Lispian Horror." [3] Surprisingly entertaining.
> I think a new approach might be to ignore the specifics of the old system, implement a new system
It doesn't work like that. When you're revamping large, important, fingers-in-everything-and-everybody's-fingers-in-it systems you can't ignore anything. A (presumably) hypothetical example is sorting names. Simple, right? You just plop an ORDER-BY in the SQL, or call a library function. Except for a few niggling details:
1. This is an old IBM COBOL system. That means EBCDIC, not UTF or even ASCII.
1.A Fine, we'll mass-convert all the old data from EBCDIC to UTF. Done.
1.A.1 Which EBCDIC character set? There are multiple variants. Often based on nationality. Which ones are in use? Can you depend on all records in a dataset using the same one (hint: no.) Can you depend on all fields in a particular record using the same one? (hint: no.) Can you depend on all records using the same one for a particular field? (hint...) Can you depend on any sane method for figuring out what a particular field in a particular record in a particular dataset is using? Nope nope nope.
1.A.2 Looking at program A, you find it reads data from source B and merges it with source C. Source B, once upon a time, was from a region with lots of French names, and used code page 279 ('94 French). Except for those using 274 (old Belgium). And one really ancient set of data with what appears to be a custom code set only used by two parishes. Program A muddles through well enough to match up names with C, at least well enough for programs D, E, and F.
1.A.3 But it's not good enough for program G (when handling the Wednesday set of batches). G has to cross-reference the broken output from A with H to figure out what's what.
1.B You have now changed the output. It works for D and F, but now E is broken, and all the adhoc, painstakingly hand-crafted workarounds in G are completely clueless.
1.C Oh, and there's consumer J that wasn't properly documented, you don't know exists, and handles renewals for 60-70 year old pensioners who will be very vocal when their licenses are bungled.
2. Speaking of birth years, here's a mishmash of 2-, 4-, and even 3-digit years....
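To make the code-page problem above concrete, here's a minimal Python sketch. The bytes are invented for illustration; CP037 (US/Canada EBCDIC) and CP500 (International EBCDIC) are two real variants that disagree on several byte values, so identical raw data decodes to different text depending on which one a record happens to use:

```python
# The same raw EBCDIC bytes, decoded under two common variants.
# Byte values 0x4A and 0x5A are chosen because the code pages disagree on them.
raw = b"\x4a\x5a"

us_canada = raw.decode("cp037")      # US/Canada EBCDIC
international = raw.decode("cp500")  # International EBCDIC

print(us_canada)       # "¢!"
print(international)   # "[]"
```

Now imagine that ambiguity per field, per record, per dataset, with no reliable metadata saying which variant applies.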
Yes, that's why the new system has to be a complete replacement. Part of its spec COULD be to provide backwards-compatible interfaces too, in case things can't all be cut over at once, but that would increase the project scope and also tie things to the old system.
Part of a full replacement system would be the option to use a _different_ set of rules, which better reflect current desires and are, hopefully, easier to implement.
Yes, the old data would need to be _transcribed_ during its migration to the new system, and human bureaucratic layers can likely handle issues. Heck, they could do a deferred implementation: one long weekend the new system is brought up, and any issues that are noticed get worked out as kinks. When there aren't any _noticed_ kinks in those tests, send the results out to the stakeholders and solicit feedback on whether there are any inaccuracies. That might take a year or two of renewals, updates, and annual business as they see whether the new notices are correct or not.
The "twin paradox" [1] is a prime example. The two twins depart from a common point in time and space, go about their separate travels, and meet again at a common point in space-time. Despite both twins always having the same constant speed of light, one of the twins takes a shorter path through time to get to the meeting point--one twin aged less than the other. In the paradox case, the shorter/longer paths are due to differences in acceleration. But the same thing happens due to differences in gravitation along two paths. (In fact, IIUC, acceleration and gravitational differences are the same thing.)
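The aging difference is easy to compute in the idealized special-relativity version of the paradox (ignoring the acceleration phases). A sketch with illustrative numbers, not taken from any particular source:

```python
import math

c = 1.0          # work in units where c = 1
v = 0.8 * c      # traveling twin's cruise speed (illustrative)
t_earth = 10.0   # years elapsed for the stay-at-home twin

# Lorentz factor; the traveler's proper time is t_earth / gamma
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_traveler = t_earth / gamma

print(t_traveler)  # 6.0 years: the traveler ages four years less
```

Same meeting point in space-time, but a shorter path through time for the twin who took the detour.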
Just thinking about the math makes my head hurt, but it's apparent that two different photons can have taken very different journeys to reach us. For example, the universe was much denser in the dim past. Old, highly red-shifted photons have spent a lot of time slogging through higher gravitational fields. As a layman, that would suggest to me that, on average, time would have moved slower for them... they would be even older than naive appearances suggest. I don't think the actual experts are naive, so either that's been accounted for or there are confounding factors. But I could also imagine that more chaotic differences, such as supernovas in dense galactic centers vs. the suburbs, or from galaxies embedded in huge filaments, could be hard to calculate.
I regret to inform you that waterfall planning is often considered a fail state in the toilet development world (and a messy one to clean up).
I can however recommend the Spiral Model [1] as a lesser known Waterfall variation, which carries a heavier focus on risk management. It resembles a conch shell, and may require up to three attempts [2] to get your toilet development process correct.
Pretty astounding, isn't it? I don't see a paper, but there was a webinar [1]. There's a technical synopsis at 8:00. The phenomenon they're measuring is actually significant. It's the total number of (free?) electrons between the satellite and the receiver. Typically it's about 10^12 electrons/m^3 (@8:00 in video). The disturbance from the 2011 earthquake and tsunami was, if I'm reading the movie/chart correctly, about +/- 1 TECU, which is 10^16 electrons/m^3 (@10:40). The water elevation may only be a few feet in open ocean, but it's over a vast area. That's a lot of power.
They're measuring it by looking for phase differences in the received L-band (~2GHz) signals, rather than amplitude. That eliminates lots of noise. And they're looking for a particular pattern, which lets you get way below the noise floor. For example, the signal strength of the GNSS (GPS) signal itself might be -125 dBm, while the noise level is -110 dBm [2]. That means the signal is about 3 x 10^-13 _milliwatts_, and the noise is about 30 times larger. But by looking for a pattern the receiver gets a 43 dB processing boost, putting the effective signal well above the noise.
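The link-budget arithmetic can be sketched like this (power levels taken from above; the 43 dB figure is the despreading gain from correlating against the known code pattern):

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert a power level in dBm to milliwatts."""
    return 10.0 ** (dbm / 10.0)

signal_dbm = -125.0  # received GNSS signal power
noise_dbm = -110.0   # noise floor in the receiver bandwidth
gain_db = 43.0       # processing gain from correlation

snr_before_db = signal_dbm - noise_dbm  # -15 dB: buried in the noise
snr_after_db = snr_before_db + gain_db  # +28 dB: comfortably above it

print(dbm_to_mw(signal_dbm))  # ~3.2e-13 mW
```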
>> They're measuring it by looking for phase differences in the received L-band (~2GHz) signals
The "L-Band signals" are GNSS signals, for example GPS L1 and L2, which use carrier frequencies of 1575.42 MHz and 1227.6 MHz, respectively. Both L1 and L2 signals are emitted at the same time, but experience differing levels of delay in the ionosphere during their journey to the receiver. The delay is a function of total electron content (TEC) in the ionosphere and the carrier frequency. Since we already know precisely how carrier frequency affects the ionospheric delay, comparing the delay between L1 and L2 signals allows us to calculate the TEC along the signal path.
Another way to think of it is: we have an equation for signal path delay with two unknowns (TEC, freq). Except, it is only one unknown (TEC). Use two signals to solve simultaneously for this unknown. Use additional signals (like L5) to reduce your error and check your variance.
OK, the "typically 10^12 TEC" vs. a +/- 1 TECU (10^16 TEC) disturbance was really bugging me. I think the slide has an error, or there's an apples/oranges issue. The +/- 1 TECU looks to be consistent, but the typical background level is "a few TECU to several hundred" [1]. A Wikipedia page shows the levels over the US being between 10 - 50 TECU on 2023-11-24, and says that "very small disturbances of 0.1 - 0.5 TEC units" are "primarily generated by gravity waves propagating upward from lower atmosphere." [2]
The red line is axial acceleration. The rocket rapidly slows to terminal velocity, reaching it at about 25 sec., then continues to slowly decelerate as t.v. decreases as the air gets thicker. [edit: *] The black line is estimated velocity, as integration of the acceleration. It gives up trying to calculate that at about 45 sec. Based on the barometer readings, it looks like it was going about 650 fps at impact.
What I find interesting is the 4-second delay before igniting the second stage. This is very inefficient compared to immediately igniting it when the first stage burns out. Max-Q (airspeed pressure) issues? 30,000 ft permit ceiling?
Edit: * At 25 sec. it's still going up, so the velocity is decreasing due mainly to gravity, but the rocket is ballistic so the accelerometer is slightly negative due to air friction adding to the gravity deceleration. At about 40 sec. it has reached max altitude and velocity is zero. Accelerometer is still close to zero. Velocity picks up, as shown by barometric altitude curve. Eyeballing it, at about 65 sec. it's reached terminal velocity, as shown by barometer curve being pretty flat. Decrease after that is due to decreasing t.v.
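The black line is presumably something like a running integration of the accelerometer samples, which is also why it eventually "gives up": any sensor bias integrates into unbounded error. A minimal sketch of the idea (sample values invented):

```python
def integrate_velocity(accel_samples, dt, v0=0.0):
    """Estimate velocity by trapezoidal integration of acceleration.

    accel_samples: readings in m/s^2, taken every dt seconds.
    Any constant sensor bias accumulates linearly in the result, which
    is why accelerometer-only velocity estimates diverge over time.
    """
    v = [v0]
    for a_prev, a_next in zip(accel_samples, accel_samples[1:]):
        v.append(v[-1] + 0.5 * (a_prev + a_next) * dt)
    return v

# Constant 10 m/s^2 sampled at 10 Hz for 1 second -> final velocity ~10 m/s
velocities = integrate_velocity([10.0] * 11, 0.1)
print(velocities[-1])
```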
With solid motors lower in the atmosphere with high velocity it's often optimal to delay second stage ignition so that your sustainer motor isn't working against as much atmosphere. So, kinda Max-Q issues, but for performance reasons.
[This is a link to a Mastodon infosec topic. I've completely editorialized the page title, so am posting as Tell HN instead.] [Edit: Well, I submitted it that way. HN stripped the "Tell HN:". The original page's title is pretty useless, so don't know what the proper thing to do is.]
EXT (all versions) has a filesystem flag telling the kernel to panic on FS error. In the link, Will Dormann demonstrates inserting a USB key with a malicious image and instantly rebooting the PC.
In this case, the laptop had USB auto-mounting enabled. However, I believe this should apply to any mounts against user-modifiable or -specifiable sources. NFS, FUSE, user namespaces, even local files with "-o loop" option. And the MOUNT(8) man page has this interesting tidbit:
Since util-linux 2.35, mount does not exit when user permissions are
inadequate according to libmount’s internal security rules. Instead, it
drops suid permissions and continues as regular non-root user. This
behavior supports use-cases where root permissions are not necessary
(e.g., fuse filesystems, user namespaces, etc).
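If you want to see whether a filesystem is configured to panic, or set a safer behavior, the knob lives in the ext superblock and is accessible via tune2fs. A sketch, with /dev/sdX1 as a placeholder device (requires root):

```shell
# Show the current on-error behavior of an ext2/3/4 filesystem
tune2fs -l /dev/sdX1 | grep -i 'errors behavior'

# Valid modes are continue, remount-ro, and panic; pick a less
# explosive default than panic for anything user-attachable:
tune2fs -e remount-ro /dev/sdX1
```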
For folks jumping on saying "that's not a carrier thing". All comms are a carrier thing. Whether it's ETWS, SMS, or IP, it's going through the carrier, they process it, and they do extensive traffic management. Carriers absolutely can and will inspect, proxy, aggregate, and do anything else that will tease out another few % of "free" capacity.
[Edit:] All too real scenario: Carrier knows about particular IP addresses and ports used by alert service. Carrier makes provision for separate path for it. Carrier also tries to shave said provisioning to the bone, calculates a worst-case, and adds 5% capacity. Which doesn't get updated when that particular app gets a 6% boost in subscriptions. Back in the old days the traffic management folks would be on top of it, but that's all been outsourced...
PWS is tower based broadcast. Everyone within range of a tower gets the alert. Data source is supposed to be local government weather authority, I think USGS and NOAA in US. Or the Meteorological Agency in Japan.
You can do a location-based two way warning system and there are such services, but it's going to be laggy and won't scale to 100M+ simultaneous subscribers. One-way broadcast scales to the planet if wanted.
It varies, a lot, and depends upon a lot of things. I'm not current on all the current details, but many moons ago was involved in push notification development.
* Notification path. iOS at the time was pretty protective of the user's battery, and had specific services you had to use. I imagine there's special treatment now for emergency communications.
* Phone state. How deeply asleep is it? Are there other background apps frequently contacting the mothership? Multiple apps can get their requests batched together, so as to minimize phone wake-ups. You can also benefit from greedy apps--VoIP apps, for example, might be allowed frequent check-ins (or have figured out a hack to get them), and the other apps might see a latency benefit.
* Garbage carriers. Hopefully emergency alerts have a separate path, but I've noticed my provider (who shall remain nameless but is a three-letter acronym with an ampersand in the middle) sometimes delays SMS messages by tens of minutes. (TBF, in my case there might also be a phone problem [Android], but since nameless provider forced it on me when they went 4G-only they're still getting the blame.)
In your case, my money would be on the carrier. Pushing a notification to all phones in an area can be taxing, and cheaping out on infrastructure is very much a thing.
For docs, your best bet would be to go to the developer sites and pull up the "thou shalt..." rules, particularly regarding network activity, push notification, and permitted background activities. And yeah, Apple was much more dictatorial, for good reasons.
I believe they did announce that, and also claim that the letters no longer mean anything (which makes sense, as telegraph is long dead and the telephone network is primarily spam). However, their website, including investor relations, has the ampersand everywhere, so maybe they backpedaled.
Or maybe ampersand was dropped before SBC bought the remaining parts of the old business and reformed T-1000 with the ampersand?
I thought that was when they dropped the ampersand: the biggest Baby Bell bought the remaining Baby Bells to reform the mothership, but couldn't use the ampersand since that was the entity that got broken up in the first place. You can't be too obvious about it and flaunt it in everyone's face. Subtlety is an art. And that art is clearly lost on the FTC.