Hacker News | muvlon's comments

This doesn't register as corpo talk to me, more tongue-in-cheek nerdy mission control talk. See also "rapid unscheduled disassembly".

There are a bunch of subbrands but there are also a lot of genuine small Android phone companies, especially in China.

Some of these serve some interesting niches that might now disappear due to this DRAM supply issue, e.g. Unihertz for extra small phones or CAT for extra durable worksite phones.


Is there any 'guide' to this ecosystem...because 'odd niche communications gear' is always interesting.

Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".


> because addicts pay up.

I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.


Sometimes they do pay up. Google Gemini estimates that 25% of daily active YouTube users pay for ad-free service. I know my wife and I do, and we watch a huge range of YouTube material, more hours a month than all the other streaming services we subscribe to combined. There is no area of human knowledge or human interest that YouTube doesn't have a ton of material for; and of course, the animal videos… The irony, given that the Sora service is being cancelled, is that neither my wife nor I watch AI-generated material.

I think the real answer is that Sora-style AI slop videos just aren't as addictive as we thought they'd be.

I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.

It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).


Yes, fortunately slop is pretty unwatchable after the novelty wears out. Even the lowest common denominator stuff NFLX churns out is in a different league.

I was talking to other people about the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.

Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.

For short-format, low-stakes stuff like online ads, though, AI slop probably does work.

Same for, say, making a power point. LLMs can quickly spit out a passable deck, I'm sure. For a lot of BS job use cases, that's probably fine. But if it's the key element of a sales pitch, it's really just advanced auto-formatting/autocomplete, and the human element is still the most important part. For example, I doubt all the AI startups are using AI-generated sales pitches when they go to VCs for funding.


IMO slop fits best for "art that isn't the point".

A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.

Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.

Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.


Agreed, it's good at placeholder art for which entertainment consumption is not the point. Clip Art for the new generation.

>> you can get it on a ton of other platforms for free, so people don't want to pay for it.

What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.

OpenAI just proved you cannot burn money indefinitely.


The monetization of social media has always been about steering otherwise non paying users into making purchases elsewhere. So if the AI slop can make people spend money on other products that's accomplished the goal.

I actually think this isn't even surprising from OpenBSD philosophically. They still subscribe to the Unix philosophy of old, more so than FreeBSD and much, much more than Linux.

That is, "worse is better" and it's okay to accept a somewhat leaky abstraction or less helpful diagnostics if it simplifies the implementation.

This is why `ed` doesn't bother to say anything but "?" to erroneous commands. If the user messes up, why should it be the job of the OS to handhold them? Garbage in, garbage out. That attitude may seem out of place today but consider that it came from a time when a program might have one author and 1-20 users, so their time was valued almost equally.


> That attitude may seem out of place today

It absolutely doesn't. Everywhere I've worked we were instructed to give terse error messages to the user. Perhaps not a single "?", but "Oops, something went wrong!" is pretty widespread and equally unhelpful.


It is normal to return a terse message to a remote user via an API. The remote user may be hostile, actively trying to gather information useful for breaking in.

But the local user who operates pf is already trusted; normally it would be root.

In either case, no error should be silently swallowed. Details should be logged in a secure way, else troubleshooting becomes orders of magnitude harder.
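A minimal Python sketch of that split (the function, messages, and log name are illustrative, not from pf or any real API): the possibly-hostile remote caller gets a terse reply, while the specifics go to a log only trusted operators read:

```python
import logging

# In a real deployment this would go to a protected log file or syslog;
# here the default stderr handler stands in for it.
log = logging.getLogger("demo")
logging.basicConfig(level=logging.ERROR)

def handle_request(payload: str) -> dict:
    try:
        return {"ok": True, "value": int(payload)}
    except ValueError as exc:
        # Never silently swallow: record the specifics for operators...
        log.error("rejected payload %r: %s", payload, exc)
        # ...but hand the remote caller only a generic error.
        return {"ok": False, "error": "invalid request"}
```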


> That attitude may seem out of place today

That attitude was out of place at every point in time. It was perhaps excusable when RAM and disk space were scarce, but it isn't today; now it has only drawbacks.


Code size would balloon if you tried to format verbose error messages. I often look at the binaries of old EPROMs, and I notice that 1) the ASCII text makes up a big fraction of the binary, and 2) even so, the messages are just categories ("Illegal operation"). For the 1970s, we're talking user programs that fit in 2K.

I write really verbose diagnostic messages in my modern code.


There was also an implicit assumption back then that an error message could be looked up in some other system (typically, a printed manual). You didn't need to write 200 chars to the screen if you could display something much shorter, like SYS-3175, and be confident that the user could look it up in the manual and understand what they were being told and what to do about it.

IBM were experts at this, right up to the OS/2 days. And as machines got more powerful, it was easy to put in code to display the extra text by a lookup in a separate file/resource. Plus it made internationalization very easy.
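That scheme is easy to mimic; here's a toy Python version (the catalog contents are invented for illustration, only the SYS-3175 code appears above): the program emits short codes, and the long text lives in an external, per-locale catalog:

```python
# Hypothetical message catalog: the binary stores only short codes;
# the long explanations live in a swappable, per-locale table
# (historically: a printed manual or a separate resource file).
CATALOG = {
    "en": {"SYS-3175": "The program attempted an invalid memory access."},
    "de": {"SYS-3175": "Unzulaessiger Speicherzugriff durch das Programm."},
}

def render_error(code: str, locale: str = "en") -> str:
    table = CATALOG.get(locale, CATALOG["en"])
    # Unknown codes still print: the bare code is the fallback.
    return f"{code}: {table[code]}" if code in table else code
```

Swapping the table gives internationalization for free, which is the point the comment above makes about separate message files.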


Even in that scenario that attitude seems out of place, considering a feature is implemented once and used many times.


So you're saying instead of assessing the current capabilities of the technology, we should imagine its future capabilities, "accept" that they will surely be achieved and then assess those?


I would assess the directionality and rate of the trend. If it's getting better fast and we don't see a limit to that trend then it will eventually pass whatever threshold we set for adoption.


As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.

> running arbitrary commands to invoke language-specific package managers.

This is exactly what we do in Nix. You see this everywhere in nixpkgs.

What sets Nix apart from Docker is not that it works well at a finer granularity, i.e. the source-file level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.

In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build but would be a complete dealbreaker at linux-distro-scale, which is what Nix operates at.
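A toy Python sketch of the difference (not how either tool is actually implemented): a hermetic cache keys on a hash of all declared inputs, so equal keys really do mean equal outputs, whereas keying on the command text alone (the Docker-layer approach) says nothing about what `apt update` fetched that day:

```python
import hashlib
import json

def hermetic_cache_key(inputs: dict) -> str:
    # All inputs are declared and pinned; hash them and nothing else.
    # Same key implies same output, so the cache can be trusted.
    blob = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def layer_cache_key(command: str) -> str:
    # Keys only on the command text. "apt update" hashes identically
    # every day, even though its network-dependent result changes.
    return hashlib.sha256(command.encode()).hexdigest()
```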


Fundamentally speaking, the key point is really just hermeticity and reliable caching. Running arbitrary commands is never the problem anyways. What makes gcc a blessed command but the compiler for my own language an "arbitrary" command anyways?

And in languages with insufficient abstraction power like C and Go, you often need to invoke a code generation tool to generate the sources; that's an extremely arbitrary command. These are just non-problems if you have hermetic builds and reliable caching.
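As a toy illustration (the table and names are made up), a codegen step is just a function from declared inputs to source text; under a hermetic build it's no more "arbitrary" than the compiler itself:

```python
# Stand-in for codegen tools like stringer or protoc: turn a data
# table into source code before the real compiler ever runs.
COLORS = ["RED", "GREEN", "BLUE"]

def generate_source(names: list) -> str:
    lines = [f'static const char *name_{i} = "{n}";'
             for i, n in enumerate(names)]
    return "\n".join(lines)
```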


I mean, I guess at a theoretical level. In practice, it's just not a large problem.


Well, arbitrary granularity is possible with Nix, but the build systems of today simply do not utilise it. I've for example written an experimental C build system for Nix which handles all compiler orchestration and it works great, you get minimal recompilations and free distributed builds. It would be awesome if something like this was actually available for major languages (Rust?). Let me know if you're working on or have seen anything like this!


A problem with that is that Nix is slow.

On my nixos-rebuild, building a simple config file for /etc takes much longer than a typical gcc invocation to compile a C file. I suspect that is due to something in Nix's Linux sandbox setup being slow, or at least I remember some issue discussions around that; I think the worst part got improved, but it's still quite slow today.

Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.

Another issue is that some programming languages have build systems that are better than the "oneshot" compilation used by most languages (one compiler invocation per file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart recompilation avoidance (per-function; comment changes don't affect compilation, etc.) and avoidance of repeated steps (e.g. parsing/deserialising the inputs to a module's compilation only once and keeping them in memory) as well as compiler startup cost.
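A toy Python model of one such check (not what ghc actually does internally, just the idea): key the compilation cache on a hash of the source with comments stripped, so comment-only edits produce the same key and skip recompilation:

```python
import hashlib
import re

def compile_key(source: str) -> str:
    # Drop Haskell-style "--" line comments and blank lines before
    # hashing, so edits that only touch comments don't change the key.
    stripped = "\n".join(
        line for line in source.splitlines()
        if not re.match(r"\s*--", line) and line.strip()
    )
    return hashlib.sha256(stripped.encode()).hexdigest()
```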

Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.

To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way, e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked at each of the parser/desugar/codegen parts of the compiler.

I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.


What if you set up a sandbox pool? Maybe I'm rambling, I haven't read much Nix source code, but that should allow for only a couple of milliseconds of latency on these types of builds. I have considered forking Nix to make this work, but in my testing with my experimental build system, I never experienced much latency in builds. The trick to reduce latency in development builds is to forcibly disable the network lookups which normally happen before Nix starts building a derivation:

    preferLocalBuild = true;
    allowSubstitutes = false;
Set these in each derivation. The most impactful thing you could do in a Nix fork according to my testing in this case is to build derivations preemptively while you are fetching substitutes and caches simultaneously, instead of doing it in order.

If you are interested in seeing my experiment, it's open on your favourite forge:

https://github.com/poly2it/kein



I use crane, but it does not have arbitrary granularity. The end goal would be something which handled all builds in Nix.


Sure, and that's useful but neither revolutionary nor exclusive to 3D printers. You can use a milling machine to mill a bunch of pieces for a milling machine. You can use a PCB printer to print the PCBs for a PCB printer. A 3D printer is much, much closer to this than it is to a self-replicating machine.


> You can use a milling to mill a bunch of pieces for a milling machine.

Now that CNC mills get more affordable, people are starting to get vocal about their visions of a self-milling CNC mill. :-)


A classic manual Bridgeport mill, a foundry for making castings, a heat-treating furnace, a steel planer, a lathe, a drill press, a grinder, and a supply of steel is enough for a master machinist to reproduce all that. That's what was used to make machine tools in the first half of the 20th century.


... and now work on

- how these machining processes can be automated, and

- how the cost, space requirements and noise levels for these machines can be reduced so that every ambitious maker can have them in their apartment

Voila, the start of a home manufacturing revolution ...


I get the feeling there is something interesting here, but the website seems myopically focused on syntax. It doesn't really tell me what this language is good at or how you'd expect people to use it.


They are quite literally negotiable: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...

There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That means whenever you request a cert and there already is an issued certificate for the same set of identities.


Your comment is 100% correct, but I just want to point out that this doesn't negate the risks of bob's approach here.

LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.

Indeed, some rate limits don't apply for renewals but some still do.


> If you’ve hit a rate limit, we don’t have a way to temporarily reset it. [1]

From your link:

> move the adjustments to production twice monthly.

I don't know about your use case but I couldn't risk being unable to get a new certificate for at least a fortnight because my container was stuck in a restart loop.

[1]: https://letsencrypt.org/docs/rate-limits/

