Hacker News | mintyc's comments

Opus 12 is undeniably useful.

Some additional features would be useful.

e.g. those of Attribute Changer (https://www.petges.lu/), which can modify metadata between source and destination without changing the otherwise identical underlying files.

e.g. personally I find WinMerge and WinSCP a simpler solution for some aspects of comparison and synchronisation (though they need setup).

Overall I do use Opus regularly, but largely just to get a count of the contents of a directory (avoiding a manual right-click on the folder for file properties).

I appreciate the software development costs, but Β£25 for a single node locked license is steep. I'd like to see that extended to at least a couple of PCs + a laptop as many advanced users (who can regex :) ) are likely to have at least a couple of PCs round the house in these work from home days.

Overall 3.5 out of five: undeniably powerful, but the interface is too fussy.


> appreciate the software development costs, but Β£25 for a single node locked license is steep.

Total Commander is 50% more expensive than that. I was going to say that it's worse value because of it, then I realised that not only is it a multi-computer license for the registered user, but that also the past 15-20 years of updates would have been free if I bought it.

That's actually decent. I'm in the Linux ecosystem for now (so I can't run it other than via Wine), but if I were on Windows, I'd likely invest in a license. Considering that it's still being updated for new machines after so long, and retains support all the way back to Windows 95, it's good value for money.


I've been using TC since when NC was a thing. Got a proper license for it some 10 years ago, when I could finally afford it. One of the most irreplaceable bits of software ever (at least for me).


> I appreciate the software development costs, but Β£25 for a single node locked license is steep. I'd like to see that extended to at least a couple of PCs + a laptop as many advanced users (who can regex :) ) are likely to have at least a couple of PCs round the house in these work from home days.

Ignoring the cost, Directory Opus is one of the few pieces of software that I have not encountered licensing issues with. Need to reinstall Windows? The certificate is applied without a fuss. Need to transfer the license to a new computer? The certificate is applied without a fuss. Using an old version? The certificate is applied without a fuss, and disabling the update manager means it won't pester you to upgrade. The only time I was grumpy with their support policies was when I had to upgrade to update SSH support.


Some further thoughts and a more positive spin based on watching the (breathtakingly fast) what's new video...

Several of the features regarding merging, synchronisation and metadata handling are available now (though not in the 'DOpus 12 light' release from 2017 that I was using).

I will gladly pay my money for the pro version and look forward to using the latest version with my own cat videos.

(I still consider the features too configurable and not easily 'discoverable', particularly for 'normal' rather than 'tech' users.)

I also recognise how amazingly talented the small DOpus team is. They've implemented something that Microsoft should have done...


Wow. A fantastic set of descriptions, services and detection tooling from PortSwigger and colleagues.

The fact that HTTP/1 downgrading is so prevalent, and unlikely to change in the short term, is exactly the point.


An extension of the Zig ideas is making self-contained, portable and very small binaries that run everywhere from bare metal to any OS, using a cross-platform libc and configuring GCC or Clang appropriately.

Check out cosmopolitan at https://justine.lol/cosmopolitan/index.html and background at https://justine.lol/ape.html

Aside: Zig and Zag were humorous puppet characters with attitude, for adults, in the UK. They always made me smile on 'The Big Breakfast' in the 90s.


It looks impressive, but it isn't clear how much you have to pay them for services. It isn't free, and you aren't in control: your snapshots and ability to roll back etc. are likely to depend on their storage servers.

They certainly should monetise, but not making that clear is what I object to. I've raised an issue asking for clarification in their community wiki.

https://github.com/nix-community/wiki/issues/34


Mmmm, no. Nix is not a SaaS that builds snapshots you have to pay for (or that "they need to monetise"; what's wrong with people?), it's a whole ecosystem ranging from a full OS (NixOS) down to a package manager (Nix Package Manager) and a programming language (just "Nix").

Not sure what needs clarification here; it's pretty up-front about its mission and features already.

> Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers

Not sure where you get this from. Snapshots are stored locally unless you specify otherwise, and then you get to choose whatever storage servers you want to use. There are absolutely no "hidden" costs with Nix, as it's an MIT-licensed project, and I don't think they even offer any "value-added" services or paid customer support.

Edit: reading the issue you just created (https://github.com/nix-community/wiki/issues/34), I'm even more confused. Where is "given the need to monetise" coming from? Nowhere do they say that they have to monetise but don't know how or where, so where do you get this from? Not everything is about money.


I'm really impressed with it and am ONLY seeking clarity. I'll be extremely happy if I can use it standalone. Copyright and licensing are all I'm looking at.

e.g. Something like AGPL is considered copyleft and not compatible with 'open source' ethos. Still 'free', but the additional non-compete cloud service clause is both a sensible move and something I'd just like to understand.

Your 'what's wrong with people' is what gets me: borderline defensive. I'm definitely not knocking the product; quite the reverse. It's so good I want to embrace it wholeheartedly.

I have no problem paying for things or contributing voluntarily to a great product.

It's good to see that various organisations are committed to funding the infrastructure costs etc. (which negates my comment about storage servers).

As to monetisation, it's kind of irrelevant now, but I was referring to the paid services at the bottom of https://nixos.wiki/wiki/Nix_Ecosystem


> e.g. Something like AGPL is considered copyleft and not compatible with 'open source' ethos. Still 'free'

I’ve had to read this several times and still don’t understand what you’re talking about. AGPL is a Free Software license. All Free Software licenses (to my knowledge) provide source access. AGPL is also OSI-approved, so it is also an open source license.

It's unclear what point you're trying to make.


Issue 34 is now closed. My thanks to the people there, who patiently clarified and improved the visibility of the licensing.


As noted below, my initial wording was unhelpful and misplaced.

The infrastructure costs are covered by sponsors. The 'they should monetise' referred to a 'they' that I now understand to be external organisations rather than NixOS itself.


Update: The original post is from January 2019.

Ashley Broadley's GitHub page at https://github.com/ls12styler sadly doesn't contain a repo with his Rust dev work to date (I will ask him, as the article has some really good stuff in it).

----

Very nice. I'm doing something similar at the moment. Maybe take a look at

https://www.reddit.com/r/rust/comments/mifrjj/what_extra_dev...

A list of useful cargo built-in and third-party subcommands.

As you note, commonly recommended app crates (source) should be gathered separately.

I have several other links and ideas, e.g. supporting different targets such as x86_64-unknown-linux-musl, but too long for this post!


https://blog.logrocket.com/rust-compression-libraries/

Although the implementations are in Rust, I assume the provided benchmarks are representative of any optimised implementation...

Many compression algorithms are compared on several data sets.

The results tables show compressed size, compression and decompression times for a number of normal and pathological cases.

You get a good feel for their strengths and weaknesses.

Some algorithms really go downhill in pathological cases, such as random data.

Do consider encryption too, though you probably want to apply it to the compressed data where possible.

Sometimes externally imposed encryption means you will be stuck with something close to the pathological case...
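A quick stdlib sketch of that pathological case, using Python's zlib (DEFLATE) as a stand-in for the benchmarked Rust crates; the shape of the result holds for most general-purpose compressors. Random bytes here also stand in for already-encrypted data, which is why compressing before encrypting matters:

```python
import random
import zlib

random.seed(0)

# "Normal" data: repetitive text, the kind compressors love.
text = b"the quick brown fox jumps over the lazy dog. " * 200

# Pathological data: random bytes. Already-encrypted data looks the
# same to a compressor, so compress first, then encrypt.
rand = bytes(random.getrandbits(8) for _ in range(len(text)))

for label, data in (("text", text), ("random", rand)):
    out = zlib.compress(data, level=9)
    print(f"{label}: {len(data)} -> {len(out)} bytes")
```

The repetitive text collapses to a small fraction of its size, while the random input actually grows slightly once container overhead is added.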


To be truly portable, it is worth considering whether your newly created dev environment can work without the internet, or in an air-gapped environment.

VS Code can be problematic in that respect: dependencies used by an extension often assume they can reach out to other servers.

I treat the initial internet-facing container creation as a separately managed snapshot process that grabs dependencies; the particular dev, build, test, runtime and release containers are then built offline by that dependency-collecting container.

I.e. something like VS Code isn't installed in the internet-facing container; it is installed during the offline build of a dev container. This is where the difficulties lie in my approach.


Jodie Foster knows how to pick a great movie. Both Contact and Anna and the King are right up there with the best.


Cap'n Proto has a number of drawbacks: the 'one-man band' risk (witness the two-year lull when Kenton went sandstorming), an awkward API, and limited language support.

Personally I don't think there is a perfect protocol, because different people want different things: self-description, easy/optimised memory management, zero copy, partial decode. The list goes on...

At a pinch, FlatBuffers with FlexBuffers evolution would be close to my goals, but I'd much prefer having a meta-description of messages (and perhaps access, authentication and transport security) and using that (e.g. an OpenAPI v3.1 spec) to generate an implementation in protobuf, MessagePack, JSON, ASN.1 etc., whichever suits the use case, over an appropriate transport, whether QUIC, TCP or UDP.
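To make the self-describing vs schema'd trade-off concrete, here's a toy Python comparison. The record and its binary layout are invented for illustration, not taken from any particular schema language:

```python
import json
import struct

# A hypothetical message: (id, timestamp, reading).
record = {"id": 42, "ts": 1700000000, "value": 3.14}

# Self-describing (JSON): field names travel with every message.
as_json = json.dumps(record).encode()

# Schema'd binary (struct): layout agreed out of band, as with
# protobuf/FlatBuffers, but hand-rolled here: u32 id, u64 ts, f64 value.
as_bin = struct.pack("<IQd", record["id"], record["ts"], record["value"])

print(len(as_json), "vs", len(as_bin), "bytes")

# Decoding the binary form needs that same out-of-band schema.
rid, rts, rvalue = struct.unpack("<IQd", as_bin)
```

The binary form is a fraction of the JSON size and trivially fast to decode, but the receiver must already know the layout; JSON costs more bytes yet survives schema drift and ad-hoc inspection. Which trade wins depends entirely on the use case, which is the point.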

Some of the high-performance work I've seen uses ASN.1 on a very large virtual server at 100 Gb/s line rates, because the messages lend themselves to parallel decode.

I think Mike Acton had it right by suggesting things are tailored to the data needs and not overgeneralised.


> Witness the two year lull when Kenton went sandstorming

There wasn't really a lull; development on Cap'n Proto continued that whole time by the Sandstorm team in service of Sandstorm. There was an absence of official releases since Sandstorm always used the latest Cap'n Proto code from git. The same story continues today with Sandstorm replaced by Cloudflare Workers. TBH I should probably give up on "official releases" and just advise everyone to use git...


Wasn't a criticism, just an observation from a 'user' about a key bit of infrastructure being directed by a single person or small team.

You are opening up a technology that you are crafting for your own purposes in the best spirit of sharing.

Personally I'm wary of relying on any project driven in this way.

Inspiring, but not something I'm personally keen on using directly.

I am grateful that you've given this as an option, and I feel slightly parasitic in not providing a tangible positive contribution in return.


A couple more links that may be helpful in this pursuit:

https://sagargv.blogspot.com/2014/09/on-building-portable-li... and

https://blog.ksub.org/bytes/2016/07/23/ld.so-glibcs-dynanic-...

Watch out for the implications of licenses etc.

