There's a lot of work on Fisher information and physics, some good and some really bad. Vijay Balasubramanian and Jim Sethna may interest you, though Sethna is more condensed matter. And of course Amari.
Personally, I think the unit you measure divergence in just doesn't matter. Yes, nats are technically cleaner, but as long as you're consistent, all you really want is a measure of how similar A is to B.
In that sense I think many explanations of KL are very convoluted.
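To make the unit point concrete, here's a minimal sketch (the distributions p and q are made up): KL computed in nats and in bits is the same quantity up to a fixed rescaling by ln 2, so the choice of unit never changes which pair of distributions is "more similar."

```python
import math

def kl(p, q, base=math.e):
    """KL divergence D(p || q) for discrete distributions, in the given log base."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

nats = kl(p, q)            # natural log -> nats
bits = kl(p, q, base=2)    # log base 2  -> bits

# Same divergence, fixed unit conversion: bits = nats / ln(2)
assert abs(bits - nats / math.log(2)) < 1e-12
```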
It's been both normalized and suppressed. I'm old enough to remember not being able to point out SF crime problems without being called a fascist. It's denial, it's perverse. Noah Smith claims that our (US) "solution" to it, besides just ignoring it, was basically giving up on cities and moving to the suburbs.
If you want to hear from the man himself, see the link below. It was a fairly soft interview. I listened mainly because it was Noah and wasn't expecting him to be so pro-surveillance. So, even though I don't agree with him, it might be worth listening to his reasoning.
You basically have to be involved if you're Meta. Even if there's only a 5% chance this AI stuff is as disruptive as the labs claim, you can't afford to miss out. Even if you're lagging the frontier, you must develop the competency internally. Otherwise you'd have ignored a 5% chance of total annihilation, probably even exposing yourself to shareholder lawsuits.
This has consistently pissed me off. It seems like we've all just accepted that whatever they define as "functioning"/"OK" is suitable. I see the status shows now, but there should be a very loud third party ruthlessly running continuous tests against all of them. Ideally it would also provide proof of the degradation we all seem to agree happens (looking at you, Gemini). Like a leaderboard focused on actual live performance. Of course they'd probably quickly game that too. But something showing time to first response, "global capacity reached" rates, an effective throttling metric, an intelligence metric. Maybe crowdsourced stats, so they can't just improve the metrics for the IPs associated with this hypothetical third-party performance watchdog.
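One of those metrics, time to first response, is easy to define precisely. A minimal sketch (the `fake_provider` generator is a stand-in; a real probe would stream from the vendor's actual endpoint):

```python
import time

def time_to_first_chunk(stream):
    """Return (seconds until first chunk arrived, the chunk itself)."""
    start = time.perf_counter()
    first = next(stream)  # blocks until the provider sends something
    return time.perf_counter() - start, first

def fake_provider(delay_s=0.05):
    """Hypothetical stand-in for a streaming API response."""
    time.sleep(delay_s)  # simulated queueing/throttling before the first token
    yield "first token"
    yield "rest of response"

latency, chunk = time_to_first_chunk(fake_provider())
```

A watchdog would run probes like this on a schedule from many vantage points and publish the distribution, not just a green dot.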
The one that pissed me off the most was Gemini very clearly displaying 1) "user cancelled request" in the Gemini chat app and 2) "user quota reached" in the API. Both were blatant lies. In the latter case, you could find the actual cause, a global quota, buried later in the error message. I don't know why there isn't more outrage. I'm guessing this sort of behavior is not new, but it's never been so visible to me.
Gemini had a partial outage on their status page, and once it was over after 10 days, it retroactively became a single-day partial outage. For me, it was nearly two weeks of 95% failure rates.
Yup, it displays as an "auth" issue for me. Just a nice reminder that my original plan was to be provider agnostic, but everything was working so well with cc that I lost sight of it lol.
How can someone on HN think AI is a gimmick? I truly don't understand this. Even if you completely ignore the 99% of it you find distasteful, it seems to me you'd still have to conclude it's not a gimmick. Is AlphaFold a gimmick? Are Boltzmann generators gimmicks? Are improved weather predictions gimmicks? Are diffusion models gimmicks?