Hacker News | kklimonda's comments

Aren't WinRM/PowerShell/RDP the equivalent of SSH, and isn't dpkg/apt-get basically .msi with Group Policy for installation? This has probably been there for decades?

Group Policies also allow you to enforce things like browser configuration (proxy, homepage, search engine etc.), wallpapers, screen locks etc.

Can this be done on Linux? Honestly, I have no idea - I think GNOME with gsettings/dconf can do that, but can KDE?


The point I want to convey is that while Windows has tools like MSI, which arrived many years after Linux had dpkg, it's not the same thing. On Linux the package manager rules the filesystem and keeps a complete database of which package owns which file. There are no exceptions, not on the parts of the filesystem where the package manager rules. Even the operating system itself and all patches are handled by the package manager.

That's first and foremost a cultural difference, not a technical one. Sure, there's nothing to prevent a Linux vendor from writing "install scripts" that copy files willy-nilly across the filesystem, and many vendors have done this, but always with disastrous results. Since Linux people hate it, those products are either repackaged or confined to a separate directory far away from other files.

This means that installing software at scale (any number of systems), or how to cleanly uninstall software, is not a question you should ever ask in a Linux environment. The questions you ask in a Linux environment are different. That is why the tools look different.

Tools like gsettings are culturally alien to the Unix world. Instead, home directories are seeded with dotfiles, and dotfiles are kept in version control. Yes, that means Unix people can't answer the question of how to lock the proxy settings so the user is unable to change them. Instead, should a sensitive system require it, they would manage by policy and disallow any traffic outside said proxy.
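The usual seeding mechanism is /etc/skel: whatever lives there is copied into a new user's home by useradd. A rough sketch of how an admin might wire that to a version-controlled dotfiles repo (the repo URL and file names are illustrative, and this needs root):

```shell
# Files in /etc/skel are copied into a new user's $HOME by useradd.
# Populate it from a version-controlled dotfiles repo (URL is hypothetical):
git clone https://git.example.com/it/dotfiles.git /tmp/dotfiles
install -m 0644 /tmp/dotfiles/.bashrc /tmp/dotfiles/.profile /etc/skel/

# New accounts now start with the seeded defaults:
useradd -m alice
```

Note that this only sets defaults; nothing stops the user from editing the copies in their own home afterwards, which is exactly the gap the parent comment describes.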


I mean, Linux package managers are so great that we have at least two different ways of delivering software (especially GUI software) to Linux distributions that depend on "app images". To me that shows that none of those approaches solve 100% of the problems you encounter in the wild.

> This means installing software at scale (any number of systems), or the question how to cleanly uninstall software it not a question you should ever ask in a Linux environment.

And yet this is a problem that so many third-party vendors who try to support multiple Linux distributions have been struggling with for years.

> Tools like gsettings are culturally alien to the unix world.

Sure, Linux and UNIX come from different roots, but "cultural" means nothing in large organizations, where computers are basically tools not far removed from printers, projectors, even hammers: a way to do someone's job. I may hate locked-down systems, but then I don't have to support users who can no longer find their trash bin on the desktop.

You can seed dotfiles for all users, but without policy enforcement you can't really ensure that a user cannot, for example, move their taskbar from the bottom to the top of the screen. gsettings/dconf may be culturally alien to this world, but it is (or at least was) solving an actual problem. A problem we may not care about, but some companies do.

Now, I think there is an interesting discussion to be had here: given this latest push from Windows to Linux as a way of distancing Europe from the US, would adding features that bridge this policy-enforcement gap between Linux and Windows be desirable?

15-20 years ago I would have said yes, but back then I cared much more about Linux as a Windows alternative for office use. Today I actually prefer the Linux Wild West and how hard it is to lock it into any sort of MDM.


> To me that shows that none of those approaches are solving 100% of problems that you encounter in the wild.

The problem is a self-inflicted one by developers. They chase the newest updates instead of focusing on stability, bundle security and feature changes together, and want to push those updates instead of letting people pull them in.

> And yet this is a problem that so many third-party vendors who try to support multiple Linux distributions have been struggling with for years.

Are those complaints made in good faith? Most distros allow for custom repositories, and writing a build script is not that difficult. If Calibre, VLC, Firefox, and Blender can be everywhere, so can those applications.

> A problem we may not care about, but some companies do.

Do they? Or is it just IT playing with the knobs?


Firefox has /usr/lib/firefox/distribution/policies.json, which lets the sysadmin lock down what users can do with the browser. For example, if you wanted to block all extensions except for an allowlist, you could control that via that file.
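As a rough sketch (the exact schema lives in Mozilla's policy-templates documentation; the forced-install entry uses uBlock Origin's ID and AMO download URL as an example), an allowlist-style policies.json might look like:

```json
{
  "policies": {
    "ExtensionSettings": {
      "*": {
        "installation_mode": "blocked",
        "blocked_install_message": "Extensions are managed by IT."
      },
      "uBlock0@raymondhill.net": {
        "installation_mode": "force_installed",
        "install_url": "https://addons.mozilla.org/firefox/downloads/latest/ublock-origin/latest.xpi"
      }
    }
  }
}
```

The `"*"` entry blocks everything by default; each named extension ID then opts back in.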

There's a bazillion tools that let you manage files like that across thousands of servers/desktops, but the hot one right now in enterprises is Ansible (which makes it trivial to push out an update to such a configuration).
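A minimal sketch of what that push could look like in Ansible (the inventory group, file layout, and Firefox path are assumptions about a particular fleet):

```yaml
# playbook.yml - push a managed Firefox policy file to all workstations
- name: Enforce Firefox policies
  hosts: workstations
  become: true
  tasks:
    - name: Deploy policies.json
      ansible.builtin.copy:
        src: files/policies.json
        dest: /usr/lib/firefox/distribution/policies.json
        owner: root
        group: root
        mode: "0644"
```

Changing the policy fleet-wide is then just editing files/policies.json and re-running the playbook.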

Chrome has a similar file: /etc/opt/chrome/policies/managed/lockdown.json
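Chrome's managed policy files use top-level policy names rather than a "policies" wrapper; a hedged sketch of an extension lockdown (the 32-character extension ID is a placeholder, not a real extension):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["abcdefghijklmnopabcdefghijklmnop"]
}
```

Any JSON file dropped into /etc/opt/chrome/policies/managed/ is treated as mandatory policy that users cannot override from the settings UI.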

"Ah yes, but what stops the user from downloading the portable version of a browser and using that?"

You can mount all user-writable directories with noexec. Also, AppArmor lets you control which applications can make network connections if you want to get really fine-grained.
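For instance, an /etc/fstab entry along these lines keeps binaries in /home from being executed (the device path is illustrative; nosuid/nodev are common companions, and whether this is workable depends on what users legitimately run):

```
/dev/mapper/vg0-home  /home  ext4  defaults,noexec,nosuid,nodev  0  2
```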

Other applications have similar policy files. For example, Visual Studio Code has /etc/code/policy.json, which would let your company lock down which extensions can be used or installed.


> Group Policies also allow you to enforce things like browser configuration (proxy, homepage, search engine etc.) wallpapers, screen locks etc.

Unix has always been about treating users like adults. The administration tools are more about the whole system and the hardware. You can always provide default or sample configs, or prevent anything in $HOME from being executed, but enforcing wallpapers is silly. You can still do it by patching the software, though.


The Linux version of AD is FreeIPA, with group policies translating to dconf - at least that was the direction "enterprise" Linux vendors (like Red Hat or Canonical) were moving in.

Now, how well dconf is integrated with all the software you want to run is another thing (it was done by GNOME and ignored by KDE), and whether this is still the way they are all moving is yet another question, but the infrastructure was being built.
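That infrastructure works roughly like this (a sketch following dconf's standard system-database layout; the proxy values are illustrative, and a dconf profile must reference the "local" system database for any of this to apply): defaults go into a keyfile, locks make the keys read-only for users, and `dconf update` compiles the database.

```ini
# /etc/dconf/db/local.d/00-proxy -- system-wide defaults
[system/proxy]
mode='manual'

[system/proxy/http]
host='proxy.example.com'
port=8080
```

```ini
# /etc/dconf/db/local.d/locks/proxy -- keys users cannot override
/system/proxy/mode
/system/proxy/http/host
/system/proxy/http/port
```

After `sudo dconf update`, dconf-aware applications (GNOME, in practice) see the locked values regardless of what the user tries to set.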


Ceph can scale to pretty large numbers for storage capacity, writes, and reads. I was running a 60PB+ cluster a few years back, and it was still growing when I left the company.


datablocks.dev has a page explaining what white-label and recertified disks are [1]. Those are not disks that have been used for years under heavy load.

1: https://datablocks.dev/blogs/news/white-label-vs-recertified...


For me, and probably a lot of other people who moved from other search engines, the long-term viability of Kagi is important - heck, that's the reason I decided it's worth paying some money for search. Given that, I'd expect them to be very frugal with their spending. Burning money on T-shirts, another browser, AI "improvements", Kagi Email (wtf? first time I've heard of it) shows that they have an incredibly startupy mindset, and will end up like every other company that takes VC money - bloated, money-focused, and deaf to their community.


Every entrepreneur obsesses about some competitor or some business model.

You can see various baubles glint in Vlad's eye.

If you are a collection of 10x devs, you can afford to make multiple bets and test for traction. You can sample the Brave waters, or try to head off Proton claiming ownership of privacy first, or get in front of perplexity and phind. Arguably, only products you've shipped can tell you the truth about product market fit.

Which is to say, I don't think these "let 1000 flowers bloom" experiments are a bad thing... so long as the core product has no appearance of inattention and never goes backwards in usability or quality while "net promotion" is still part of the growth plan.


You can't drive two LG 5K screens with a single cable, due to the lack of DSC support.


So this is not about forcing Apple to make clients for competing platforms, but to allow businesses to send spam to more users? Well, thanks Google.


I don't think this is an accurate take. The DMA is about businesses and their relationships with consumers, so any regulation has to be targeted to that.

The argument is that Apple not allowing businesses to use a protocol on par with iMessage is the issue here.


“ businesses and their relationships with consumers”

Sounds like spam to me


You realize there are actual legitimate uses of SMS and people choose to use it, right?


There are few, and it opens the door to spam, yes. I'd rather not even have the possibility.


Any sufficiently advanced networked computer will have the potential for spam or malicious users. iMessage already sees this without being an open protocol; its deeply integrated nature makes it a prime vector for malware and 0-click spyware. Adversaries like NSO Group actively exploit this.

The goal isn't a more locked-down phone, it's transparent communications infrastructure that inherently resists attackers. Anything else is an imperfect solution that relies on trust more than mechanical security. If Apple wants to lead the way on that, they should do the world a favor and propose their own open SMS encryption standard. As it stands, their "ours is better but we won't show you" approach is about as obvious as security theater gets.


How is this related to spam? Apple has the same ability to filter SMS messages as it does iMessage, this is purely about the format.


It’s a cudgel.

Apple has an iMessage for Business service. You can use it to chat directly to representatives of enrolled businesses. Right from iMessage.

Google wants to use that and the DMA to create precedents that they can use against Apple in their quest to get access to iMessage.

It’s fairly transparent, IMHO.


Some people enjoy the internet of old, where you had to put some effort into finding venues for collaborating. There is nothing wrong with some places prioritizing building communities over the algorithmic reach of the larger platforms.


"The people" are fine with lowering age of retirement, and in general not that interested in rising enough new people to sustain economy and social policies of their countries. Granted, immigration policies in most of the western countries has been a disaster, but those did not arrive out of nowhere. It would be great to see discussions and planning on how to shape policies, but this would only hopefully change the reception of immigrants, and not the fact that they're there to stay.


Immigration is there to keep wages down. Also, third-world immigration is preferred because those immigrants are more tolerant of crap wages, crap working conditions, and a crap existence than domestic workers.

If the elites really wanted more people they could increase the incentives for domestics to have children.


Immigrants are indeed more willing to do the jobs that domestic workers are no longer interested in doing, but their effect on wages is minimal, mostly affecting the lower class, and mostly other immigrants. It's unclear if anything can be done to incentivise people to have more children to the extent that it makes a difference at the macro level. And this is kind of a moot point anyway; most western countries dealing with a lack of labor don't have time to wait for children, if their citizens want to keep their level of support.


It's always been like this in the GPU space - all reviews have always mentioned the number of compute units (be it SMs or "CUDA cores"), and the total available for a given architecture is also known. A lot can be told about the relative performance of two cards based on that, so this information is useful not only to investors.


AFAIK it's been like that in the CPU space too - e.g. some 6-core CPUs are actually 8-core CPUs with 2 cores deactivated, either because of defects or because they needed more 6-core CPUs.


It's always like that in consumer semiconductors. Intel has something like 3 to 5 actual silicon variants per generation that cover all dozen or two SKUs.


This sort of yield-enhancement-by-binning extends to almost every form of semiconductor, from amplifiers to server CPUs.


Sure, but Intel doesn't advertise the number of dead cores.

