Since .NET's introduction, Microsoft seems to have lacked the kind of culture that Apple and Google have for steering their platforms toward safer languages (e.g. how constrained the NDK happens to be, or first-class bindings to all OS APIs in Swift).
It appears that every attempt to do so ends up being sabotaged in some way, to ensure C++'s reign over Microsoft and Windows subsystems.
Note that Windows is the only desktop/mobile OS whose GUI stack is still fully C++-based, and they even make a point of it.
> WinUI is powered by a highly optimized C++ core that delivers blistering performance, long battery life, and responsive interactivity that professional developers demand. Its lower system utilization allows it to run on a wider range of hardware, ensuring your sophisticated workloads run with ease.
That doesn't justify why Managed DirectX, XNA, Silverlight (on WP7) and .NET Native got the axe.
Even the Longhorn failure, which resulted in the "everything COM" approach that then evolved into WinRT (basically COM + IInspectable + .NET metadata + sandboxing), could have worked out if everyone had actually worked together.
I don't believe that, had there actually been willingness from the Windows/C++ crowd, they couldn't have helped push the .NET runtime toward improvements similar to .NET Native.
Or, to put it another way, something like the efforts Apple and Google made improving Objective-C/Swift and ART, respectively.
In fact, the reasoning behind bringing Midori's learnings into .NET Core (thus making C# more D-like) probably has more to do with C++/CLI being Windows-only, and with competition from managed languages outside Windows, than anything else.
> could have worked out if everyone actually worked together
Apparently, Sinofsky suffers dreadfully from "not invented here" syndrome. He simply does not trust anything that his team has not built, and Midori is the tip of the iceberg. There are some instances where his attitude worked, but they were few and far between.
You'll notice that there is a very distinct "firewall" between core architecture and "other teams" on the projects he managed, even today (e.g. .NET Office extensions are just COM).
Nah, they had Drawbridge running on Midori, which allowed nearly perfect app compat. It really was a case of internal politics, as far as I know.
Though another way of looking at it is that rewriting the entirety of Windows in Midori would have been a monumental feat, and by the time Satya became CEO it was clear Windows wasn't the future of the company, so that kind of investment didn't make sense.
Maybe, but my recollection from the time (as related to me by Bryan Willman) was that a group of base team architects did an analysis of the .NET runtime and created a list of technical concerns that they felt precluded its use in any critical component of the OS.
Of course, all of these concerns could presumably have been addressed or mitigated, but by the time the analysis was released it was too late.
That is exactly my view: instead of addressing the issues, as Apple and Google have been doing for the last decade, the decision was to double down on C++ and COM.
Back in the day, Nokia did road shows to gather employee feedback before going public.
One point regarding Maemo that I and others raised was the missing radio link.
Naturally that was a no-go, as it would have eaten into Symbian's turf.
I happened to be in Espoo shortly after the "burning platforms" memo, and it did not land well, especially since the Qt and PIPS effort was finally gathering some support.
Not really a disagreement with pjmlp, but MS has had several internal projects like this, where they create an "ideal state" for some part of their software portfolio, then incorporate learned lessons into their existing software packages. The blog posts linked elsewhere in the discussion threads and the one I've included below give examples of how both Singularity and Midori contributed to other projects, including C++, C#, .NET, and Windows.
I have been noticing recently people experimenting with things like WASM kernel modules, and even having the kernel itself (or maybe it was a supporting root-owned process?) compile Rust code in a way that validates it doesn't do anything dodgy, and then loading the result into the kernel. It makes me wonder whether there might be a way to achieve many of the benefits of microkernels without the inefficiencies and complexity ("everything is now a distributed system") that have held them back in the real world. It sounds like Singularity was pushing in this direction, so maybe I should pay more attention to its spiritual successor Midori.
I also wonder whether there's anything stopping _Linux_ slowly becoming something more like this over time, by supporting kernel modules that have direct but _limited_ access to talk to each other in more ways that can be verified at load time. In theory even some kinds of "arbitrary shared memory" patterns could work just fine, and get you pretty close to the best of both worlds even while supporting "binary blob" drivers etc. Instead of requiring modules to be compiled to a special kind of bytecode (which would require compiler support), maybe you could even create some special conventions for how memory is accessed by these sandboxed kernel modules (fake syscalls or something?) that let them be rewritten at load time.
I guess it boils down to: how much could you statically verify the behavior of a kernel module without preventing it from having the flexibility to do what it needs to do?
> I also wonder whether there's anything stopping _Linux_ slowly becoming something more like this over time by supporting kernel modules that have direct but _limited_ access to talk to each other in more ways that can be verified at load time.
You are more or less describing eBPF, and the evolution you are wondering about has indeed been slowly happening for some years now.
It does, however, use a special bytecode, but getting compiler support is not such a hard problem compared to building the safe VM.
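For a concrete taste, here's a minimal eBPF program in restricted C (a sketch following libbpf conventions; the tracepoint and helper are real, but the program is just illustrative). The kernel's verifier statically checks every memory access and bounds every loop before it agrees to load the object:

```c
// Minimal eBPF sketch: log every execve() on the system. The in-kernel
// verifier validates this at load time; unbounded loops or stray memory
// accesses would cause the load to be rejected outright.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(void *ctx)
{
    bpf_printk("execve observed");  // helper call, type-checked by the verifier
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

You'd compile this with `clang -target bpf` and load it via libbpf or bpftool; the point is that safety is proven before execution rather than enforced by an MMU at runtime.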
It's rumored that Midori was built on Singularity, so it's much more than a "spiritual" successor if that's the case. (Sadly discontinued as well.)
Sure, my remark was more intended to convey that I usually expect the phrase "spiritual successor" to be used for something where there's no direct shared ancestry/code between two projects other than "hey, you had some good ideas", which I would be astonished to learn was the case here.
I couldn't tell you how much of Singularity remained in Midori, but to give some context, this was a team that was not afraid to rewrite, often significantly. It was essentially a closed environment, so you could make what would be a breaking change in any other setting and be sure that everything still built and ran.
Example: I completely rewrote the process loader to load binaries from "the cloud", and other than a heisenbug caused by a misconfigured F5 router that had us scratching our heads for a couple of months, everything just kept working.
I've long believed that OS/400 is a path not taken in systems programming, one that would have been much more influential had it happened outside of IBM.
The core concept described here significantly overlaps with an OS idea from the 90s that sadly didn't go anywhere. The system was Opal, from the OS group at UWashington CS&E and in particular Jeff Chase.
The key idea in Opal was to put all processes in the same address space (like Singularity does), but to use hardware mechanisms to provide memory protection (whereas Singularity relies on "software mechanisms"). Communication between processes then becomes as simple as one process giving the other a pointer (obviously it has to refer to a page with appropriate access). As with Singularity, the overhead of TLB flushes and page table management goes away, making context switches much faster (especially for processes with large working sets). To quote:
> Protection in Opal is independent of the single address space; each Opal thread executes within a protection domain that defines which virtual pages it has the right to access. The rights to access a page can be easily transmitted from one process to another. The result is a much more flexible protection structure, permitting different (and dynamically changing) protection options depending on the trust relationship between cooperating parties. We believe that this organization can improve both the structure and performance of complex, cooperating applications.
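To make that concrete, here's a hypothetical sketch in C of what it could look like (all of these names are invented for illustration; they are not Opal's actual API): the sender widens the receiver's protection domain to cover a page, and then the "message" is just a pointer into the single shared address space.

```c
#include <stddef.h>

typedef int domain_t;  // a protection domain: the set of pages a thread may access

// Invented stand-ins for Opal-style kernel primitives:
int grant_pages(domain_t peer, void *base, size_t len, int rights); // extend peer's domain
int send_pointer(domain_t peer, void *ptr);                         // IPC payload is just an address

// Share a buffer with another domain: no copy, no remapping, and (because
// everyone lives in one address space) no TLB flush on the context switch.
void share_buffer(domain_t peer, void *data, size_t len)
{
    grant_pages(peer, data, len, /* READ */ 1);
    send_pointer(peer, data);
}
```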
Fun fact: Wrocław University of Science and Technology, one of the leading STEM-focused colleges in Poland, teaches about Singularity as something really new and innovative. They also teach service-oriented architecture and peer-to-peer networks that might, at some point, enable decentralized transfer of money, as if Bitcoin never happened. When I was looking at that specific course, it definitely gave me early-2000s "make everything p2p" era vibes.
This is doubling down on the mistake of .NET, it won't end well.
We need a microkernel based OS that uses hardware protection to keep all the user code well contained. We need a default assumption of NO when it comes to access to resources, instead using capabilities and powerboxes.
What happens if actual native code somehow gets run by a user process in this system? The system is owned, instantly. There are no secondary layers of protection at all. It's like counting on a single cable to support a bridge.
Per the article, it ended in 2008. Beyond that, there are probably dozens of embedded systems I interact with every day where I would choose performance over the level of security you're suggesting. Beyond even that, I lived through an era of multi-process machines before MMUs were ubiquitous, and the main problem wasn't that malware had root access, it was that poorly written code would corrupt the memory of other processes and the OS, which this system solves.
Unless there's a bug in the JIT compiler, managed code can't accidentally write into another process's memory or jump to an arbitrary address, meaning that when an application goes off the rails it only crashes itself.
> We need a microkernel based OS that uses hardware protection to keep all the user code well contained. We need a default assumption of NO when it comes to access to resources, instead using capabilities and powerboxes.
> Each SIP is actually sealed - they can't be modified from outside.
There's no shared memory between different SIPs, no signals, only explicit IPC.
There are also no code modifications from within - no JIT, class loaders, dynamic libraries
So obviously this would never fly in the real world. Did they ever resolve how to make JITed code work? Maybe in a separate container space that didn't have full privileges?
Our current software is designed around a particular set of operations being the "expensive" ones. Maybe other configurations are workable too, with enough rearchitecting. For example, Singularity IPC looks like it should have about the same cost as a typical userspace dynamic library function call.
(IMO the main problem with Singularity's approach is instead that it leaves everything wide open to Spectre...)
At least in Midori, there was an escape hatch that, IIRC, was only used for the JavaScript engine, but that allowed some of the constraints to be broken.
Even WinRT added back VirtualProtect with execute permissions after trying to ban it. And on the Xbox One and Xbox 360, they wanted to make the hypervisor enforce not making arbitrary pages executable (requiring the equivalent of VirtualProtect to provide crypto signatures), but needed an escape hatch for their back compat emulator.
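For reference, this is the pattern in question, the bread and butter of any JIT on Win32 (a minimal, real example; error handling omitted). It's exactly what "no code modifications from within" rules out:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    // mov eax, 42 ; ret  (works as-is on both x86 and x64)
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    // 1. Allocate writable (but not executable) memory and emit code into it.
    void *mem = VirtualAlloc(NULL, sizeof(code),
                             MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    memcpy(mem, code, sizeof(code));

    // 2. Flip the page to executable: the step a sealed SIP forbids.
    DWORD old;
    VirtualProtect(mem, sizeof(code), PAGE_EXECUTE_READ, &old);

    // 3. Call into the freshly generated code.
    int (*fn)(void) = (int (*)(void))mem;
    printf("%d\n", fn());  // prints 42

    VirtualFree(mem, 0, MEM_RELEASE);
    return 0;
}
```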
iOS is just willing to cut itself off from entire classes of applications in a way that a lot of systems aren't.
It has nothing to do with Spectre. Spectre exploits Intel's (and tbf others') mistake of changing the processor's global state when speculatively executing code. Intel could fix it either by always checking permissions before spec-ex, or by transactionally rolling back all global state changes (e.g. cache lines) when permission failures are detected.
Edit: I misunderstood your question. Yes, Spectre cannot be fixed with software. But if Spectre were fixed in hardware (as it must be in new designs), software process isolation could still work barring new undiscovered hardware vulnerabilities.
>> can we prevent unexpected interactions between applications?
Of course you can. Are there incentives, though, for MS to do so?
>> Goals [...] Lack of robustness
Did Windows 11 _have to_ become dependent on users' internet connection? I don't think so.
Are there incentives that would guide MS towards not introducing a complete and utter dependence on the net, thereby making it possible to opt out of their telemetry?
Are you by any chance using Windows 11 Professional, a version my mom, her sister, their brother and 99% of my cousins will not get their hands on, because they are not "pro" users, they are "home" users?
Almost all devices containing a CPU run an OS. Can none of them, like IoT devices, have service-based accounts? This would make almost all current consoles illegal as well. I understand your ire at Microsoft, but I don't think massacring huge swathes of the current device ecosystem is proportionate or necessary.
> This would make almost all current consoles illegal as well.
Good! The inability to play a local game by just putting in a disc is enormously stupid, possibly even more so than with a general-purpose computer.
I'm pretty sure you can play games on consoles without creating an account on some service, at least in last gen consoles. Did that change with the current gen?
I'm not taking an absolutist stance on this, I think there's scope for regulation of how accounts can or cannot be used and privacy protection of user data. I just don't think eliminating online accounts is practical.
Microsoft has such impressive research and such great developers. Makes it all the more frustrating what their products end up like (I'm guessing due to leadership).
If there's one "conspiracy theory" I'm inclined to believe, it's that Microsoft deliberately makes every other Windows version subpar to the previous one in order to make the market (which has the memory of a goldfish) applaud the follow-up for... going back to how things used to be, with a few tweaks in between.
It's as if they thought that continually improving the things that work would make them hit a ceiling, so instead of climbing steadily they take one step back to delay the climb.
And the reason I think it's deliberate is that none of their other products follow that pattern: Azure faces fierce competition, so they cannot afford to stumble; C# is amazing, compared to Java at least; Office has alternatives breathing down its neck; VS Code... used to have competition, but now that it's super popular I'm scared it'll be the next product to follow the "Windows progression pattern".
One of my favorite "just why" moments in Teams: let's say I have an image, `foo.jpg`.
I share it once and it gets uploaded to Sharepoint (as `foo.jpg`). This happens even if you're just sharing a funny image in a private message. It gets uploaded to Sharepoint. Ugh.
Anyway, let's say I share the same image again. Teams says "hey do you want to overwrite `foo.jpg`, or keep the existing one?" I select keep. Teams now creates `foo-1.jpg`.
We're already off to a bad start, but bear with me. Here's where it gets amazing:
1. I share `foo.jpg` a third time.
2. Teams says "Do you want to overwrite or keep?" I select keep.
3. Teams then again asks "Do you want to overwrite or keep?", because `foo-1.jpg` is taken. I select keep.
4. Teams tries to create `foo-2.jpg`. Return to step #2 until an unused integer is found.
That chain of dialogs can continue for a while for common file names on large teams.
One of the biggest reasons I find Teams unusable for teams. And it's so easy to fix: just assume that when the user doesn't want to overwrite the first file, they don't want to overwrite the subsequent ones either, as sketched below.
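Something like this is all the collision handling needs to be (a hypothetical sketch; `name_taken()` stands in for whatever SharePoint existence check Teams already performs):

```c
#include <stdbool.h>
#include <stdio.h>

bool name_taken(const char *name);  // assumed: does this name already exist in the folder?

// After a single "keep both" answer, silently probe for the next free
// "base-N.ext" instead of re-prompting the user once per collision.
void next_free_name(const char *base, const char *ext, char *out, size_t outlen)
{
    snprintf(out, outlen, "%s.%s", base, ext);
    for (int i = 1; name_taken(out); i++)
        snprintf(out, outlen, "%s-%d.%s", base, i, ext);  // foo-1.jpg, foo-2.jpg, ...
}
```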
Aside from the annoyance factor, this is a huge gaping hole allowing information leaks. Let's replace `foo.jpg` with `Annual Review 2021.pdf`. You send employee #1 their review, and then send employee #2 theirs, but forget about this quirk, and instead of keeping, you overwrite. Now employee #1 can open employee #2's confidential info in their link.
Thankfully (I guess?) that isn't a problem: the files are uploaded to Sharepoint folders specific to conversations or channels.
So the issue I detailed would only rear its head in your scenario if you sent employee #1 their review twice.
Humorously, this issue does not occur when uploading files directly in the Sharepoint interface, despite that interface looking like it stepped out of Windows 95.
I'll have to verify this tomorrow, but I am 99% certain I have seen this message when sharing the same file name between different 1:1 conversations. I guess it's possible that it is a configuration issue since our Teams/SharePoint is self-hosted at my company.
Yeah, I am not a licensed Microsoft config guy whatsoever, but do know that Sharepoint can be configured to the moon and back. That was, after all, the reason my old org went with Teams over Slack.
So it's not hard for me to imagine that it could be set up in a way that causes this issue across discussions. Yuck.
Man, I hate this. I sort of get wanting to support robust file sharing for teams, but let's be honest: 99% of file uploads on Teams are going to be silly GIFs.
It is also SO SLOW. It's so slow. Uploading images takes full seconds before they can be sent because Teams has to put them onto Sharepoint first (which itself is slow).
For those still on Teams, my condolences, here's a workaround: copy the image onto your clipboard and paste it into the textbox. It skips the uploading to Sharepoint, prevents the issue I described, and is sendable in milliseconds. Downsides: a. they're randomly ephemeral and sometimes just permanently disappear, b. no animated images - it'll just send the first frame of a GIF/APNG/etc.
And for those on macOS, that means you can get screenshots sent quickly. Ctrl-Cmd-Shift-3, Cmd-V into the text field. Send.
I have stumbled over the same problem and wondered why, at a minimum, it isn't able to recognize that this is the identical file and not ask. Also, it should be able to reshare a file that has already been uploaded.
Only the OS and applications intending to be the only application ever running on a box should do that. When normal applications do that, they don't have the information to share when their peers need those resources too.
Sylabs launched their Singularity product in 2015,[1] whereas Microsoft Singularity is from the mid 2000s,[2] so the ripping off is in the other direction.
> Utilize a safe programming language - no more of C's shenanigans, we don't want to "cook" pointers out of integers, no more manually freeing memory and no more buffer overflows.
The C language is little more than a (portable) assembler. To the CPU, the difference between pointers and integers is in the code, not in the type.
These people think C is unsafe. C is as safe as your design and skills. Yes, you need to be a very careful C programmer, and you'll still get bugs, but you're also able to squeeze all the juice out of your hardware.
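To illustrate what both sides are pointing at, this deliberately broken (and purely illustrative) snippet compiles cleanly, which is the whole argument in miniature: the language trusts the programmer's design and skill completely.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int *p = (int *)(uintptr_t)0x1000;  // "cooking" a pointer out of an integer
    char buf[4];
    strcpy(buf, "overflow");            // buffer overflow: undefined behavior, no diagnostic required
    printf("%d\n", *p);                 // dereferencing the cooked pointer: also undefined behavior
    return 0;
}
```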
The problem is that C's safety shouldn't depend on your design and skills; modern languages provide assurances that prevent programmers from shooting themselves in the foot, and as we know, all programmers do that sometimes.