Hacker News | jmcqk6's comments

What do you think about the mis-alignment between goals here?

For medical research, the goal is to find general practices that will broadly help, and to identify the risks of an intervention. Even then, with many interventions, it's understood that they will affect people differently.

Individuals, on the other hand, don't care about variation across communities or standard medical practice; they are looking for relief for their specific condition.

Of course, declaring that just because something worked for one person, it should work for others, is wrong in both camps.

I feel like a big part of the disconnect here, and a big reason why people are talking past each other, is that they actually have different goals, and aren't really aware of that difference.


Well, to be honest I think the primary disconnect is in epistemological understanding. The OP did not merely declare peptides to be a personal revolution; he/she seemingly generalised their own experience to be widely applicable.

Basic human thought patterns usually lead people to think that anecdotes about their personal experience are valuable for understanding the world, but this is wrong. The scientific revolution basically illustrated the flaw in this premise outside of hypothesis generation. It takes specific education for human beings to truly believe that their anecdotal experiences are mostly irrelevant beyond understanding their immediate circumstances. The proportion of humanity that truly thinks this way is relatively small.

Understanding the world through anecdotes still works okay-ish in a lot of areas, but ascertaining the relatively subjective effects of experimental pharmaceuticals is not one of them. To many people, though, it's non-obvious that this is the case, and as a general method of thinking about this issue it is just the wrong way to go about things.

And that's the disconnect, in my opinion. The OP drew a conclusion from a thought pattern that comes easily to human beings, but that is just wrong in this situation. Of course, perhaps this is reinforced by underlying motivations, but that's not what makes people talk past each other. These kinds of discussions are usually driven by so-called "deep disagreements" in epistemological understanding, in my experience.


Am I understanding you correctly that you believe that all of academia has aligned behind "one true way?"

nope, you're definitely not understanding me correctly.

Credentials are a way to externalize trust. The trend with AI is to further erode trust. There will be a reaction against this eventually, and it's likely that more mechanisms to externalize trust will be found, not that they will become unimportant.


I think the problem is this: compare credentials with reputation. A programmer who possesses no credentials might create "good" software that is validated as "good" even by those possessing credentials. However, a person with credentials related to programming might produce malicious software that betrays the "trust" of the credential. Thus, just like the fallacy of the labor theory of value, credentials do not inherently relate to the production of heritable wealth. AI shines a light on this disconnect, and on the fact that the involuntary nature of certain credentials (as opposed, perhaps, to credentials themselves) creates certain classical impediments to the creation of wealth.

Thus, maybe credentials might be thought of as "necessary but insufficient" for achieving certain goals or validating trust. But even in the example above, the credential was not even necessary for the production of "good" software. AI of course simply exposes this truth as it gives the average person more direct access to skills and knowledge: the (involuntary) credential was never necessary for doing some of these things, and we are able to remove the necessity entirely in some cases.


It sounds like you have a particular bone to pick though you're only doing it by talking in generalities in the OP, and now you're talking about programming credentials.

I don't know of any required credential to write software or be a programmer.

When I think about credentials, I think about doctors and lawyers. In both cases, I'm going to demand that the people I work with are credentialed, and there is no way that I'm going to change that.

Can you give a specific example of something that requires a credential today that you would like to see relaxed?


It's mostly a problem with "requiring" the credentials by law.

I have no problem with you wanting your doctors or lawyers to have credentials for your own needs.

I take issue with people requiring me to seek doctors or lawyers with credentials, as that sets up a compulsory system which reduces the quality and quantity of law and medicine produced, and which has predictably created things like today's doctor shortages.

So AI will fill some of these shortages, if burdensome regulations (such as those that currently exist) aren't put in place to prevent it from doing so.


> That there exists a decoder that can decode the space of useful programs from a much smaller prompt space.

I love this. I've been circling this idea for a while and you put into words what I've struggled to describe.

> "A commercially successful team communication app built around the concept of channels, like in IRC."

> Without already knowing Slack, that's not decodable.

I would like to suggest that implicit shared context matters here. Or rather, humans tend to assume more shared context than LLMs actually have, and that misleads us when it comes to assessing the aforementioned decoder.

But I think it also suggests that there is a system that could be built with strong constraints and saliency that could really explode the compression ratio of vibe coding.
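To make the "prompt as compressed program" idea concrete, here's a toy sketch. The decoder function and the sizes used are purely hypothetical stand-ins (an LLM plus shared context would be the real decoder); the point is just that the ratio of program size to prompt size is what "compression ratio of vibe coding" measures.

```python
def compression_ratio(program_chars: int, prompt_chars: int) -> float:
    """How many characters of program each prompt character 'decodes' into.

    A higher ratio means the decoder (LLM + shared context) is doing
    more of the work of reconstructing the program from the prompt.
    """
    if prompt_chars <= 0:
        raise ValueError("prompt must be non-empty")
    return program_chars / prompt_chars

# Hypothetical sizes for illustration only: a 50,000-character codebase
# produced from a 500-character prompt.
ratio = compression_ratio(50_000, 500)
print(ratio)  # 100.0
```

The more context the decoder already shares with the prompter (e.g. knowing what "like Slack" means), the shorter the prompt can be for the same program, which is exactly why misjudging that shared context misleads us.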


They saw a social network full of bots and didn't want the competition.


Not the person you're replying to, but I think I can explain it this way:

The quality of life of a human being is directly related to the amount of free energy (i.e. thermodynamic free energy, not free as in no cost) they have access to. Life must be able to generate more energy than it needs; that's true even of tiny bacteria. As humans developed, we found more ways to access and utilize free energy.

There is a phrase for this: energy return on investment (EROI). You can map the development of humanity pretty cleanly onto increasing EROI over the entire course of our history.
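The EROI idea can be sketched in a few lines. The figures below are rough, commonly cited ballpark numbers used purely for illustration, not precise data; the key point is that an EROI of 1.0 is break-even, and only the surplus above that is free energy available to society.

```python
def eroi(energy_out: float, energy_in: float) -> float:
    """Energy return on investment: usable energy delivered per
    unit of energy spent obtaining it."""
    return energy_out / energy_in

# Illustrative ballpark figures only (arbitrary energy units):
sources = {
    "early-1900s oil": eroi(100.0, 1.0),  # gushers: huge return per unit invested
    "modern oil":      eroi(15.0, 1.0),   # harder-to-extract reserves
    "solar PV":        eroi(10.0, 1.0),
}

# EROI of 1.0 is break-even: no surplus left over for the rest of society.
surplus = {name: r - 1.0 for name, r in sources.items()}
```

On this framing, "maintaining our EROI" means replacing high-surplus sources with others of comparable surplus, not merely with sources that break even.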

Fossil carbon allowed us to explode our EROI and gave us access to never-before-seen amounts of free energy. Unless we find ways to maintain that EROI, our quality of life will necessarily diminish.

Obviously we need to cut our use of fossil carbon. And if we don't, we're simply going to run out, and then we'll be stuck anyway. But we also don't have anything with a comparable EROI to replace it with.

This is the root problem we're facing. If we had working fusion, it would be a whole lot easier to decarbonize.


Making global declarations about introverts isn't really useful beyond the basics. I'm an introvert, and my life has gotten noticeably better since I started intentionally talking to people more. I still need my own time to recharge. That hasn't changed. What changed is that I'm no longer inhabiting the self-imposed prison of thinking social interaction was not for me.


He has proven to be untrustworthy for much longer than his trip down the right-wing rabbit hole. For me, it started when he threw out the accusation of pedophilia against the cave diver trying to rescue students. Since then it's become clear that he will say whatever he wants without regard for reality in any meaningful way. Whether it was promising FSD over a decade ago, which he still hasn't delivered, lying about video game proficiency, or even his nonsensical statements about Twitter technology after he acquired the company, it's clear that he's entered the realm where consequences don't really matter to him and he will say or do whatever he wants. There is no trust to be found there.


> the Mozilla CEO shares your political views

I think treating every human with equal dignity goes beyond politics. The specific context here was political, but that is only the context, not the principle.


I own a Jeep Wrangler, and you're right, the electronics are terrible. The rest of the vehicle is really solid, though. The only problems I've had with it in three years are electronic in nature. And I've really pushed it to its limits: Colorado passes, the Utah desert, Montana backroads. I drove it to the Arctic Ocean and back on the Dempster.

Still there is no excuse for how terrible the electronics are in Jeep / Dodge (I'm assuming all Chrysler) vehicles. And it's been that way for decades.


I owned a Jeep 4XE, and I was glad the day we sold it, and I'm doubly glad today. The electronics and software were crap, and the powertrain was simply insufficient. At one point, they issued a notice that amounted to 'it might catch on fire, keep it away from your house.'


Yeah, I have family members with 2 JKs and a JL, unfortunately all plagued with issues, almost entirely related to the electronics. A Jeep Wrangler is a vehicle that sounds great on paper, but actually owning one is an exercise in frustration unless you just enjoy fucking with wiring harnesses. I am sure many others will come out of the woodwork to say that Jeeps are great; unfortunately, they are not.

