Hacker News | new | past | comments | ask | show | jobs | submit | f311a's comments | login

Why is this upvoted? The author did not even bother to read what he wrote.

> SOC 2 Type II ready

Huh? You vibecoded the repo in a week and claim it's ready?


I meant that since this is designed to be deployed in a company's private VPC, their data stays with them. Zero vendor data risk. Corrected it. Thanks for pointing it out.

> I still understand how everything works,

That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.

This reminds me of people who watch dozens of video courses about programming but can't code anything when it comes to a real job. They have an illusion of understanding how to code.

For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.

I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.

LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.


I disagree. I used to do a lot of math years ago. If you gave me some problems now, I probably wouldn't be able to recall exactly how to solve them. But if you gave me a written solution, I could still confirm with 100% confidence whether it is correct.

This is what it means to understand something. It's like P vs NP: I don't need to find the solution, I just need to be able to verify _a_ solution.
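
To make that asymmetry concrete, here is a toy sketch in Python using subset-sum (my own illustration, not anything from the thread): finding a solution takes a search, but checking a proposed one is immediate.

```python
# Finding a subset of `nums` that sums to `target` requires searching an
# exponential space in the worst case, but verifying a proposed
# certificate needs no search at all -- the verify-vs-solve asymmetry.
from itertools import combinations

def solve(nums, target):
    """Brute-force search: try every subset (exponential in len(nums))."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify(nums, target, certificate):
    """Check a proposed solution directly -- no search needed."""
    pool = list(nums)
    for x in certificate:
        if x not in pool:       # certificate must only use available numbers
            return False
        pool.remove(x)
    return sum(certificate) == target

cert = solve([3, 34, 4, 12, 5, 2], 9)
print(cert, verify([3, 34, 4, 12, 5, 2], 9, cert))
```

`solve` blows up as the list grows; `verify` just checks the certificate, which is the whole point of the comparison.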


I have a hard time using languages I know without an LSP when all I've been doing is leaning on the LSP and its suggestions.

I can't imagine how it is for people that try to write code manually after years of heavy LLM usage.


The GP seems to run a decentralized AI hosting company built on top of a crypto chain.

Can you get any faddier than that? Of course they love AI.


Well, I'm still using my brain from morning to evening, but I'm certainly using it differently.

This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!

But it’s probably the correct adaptation if not.


> That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.

YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.

I still do puzzles like Advent of Code or problems from competitive programming from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of "file this form into that filing cabinet": mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality, because otherwise everything collapses.


Want to try doing anything more complicated? I have seen a lot of delusional people who think their skills are still at the same level, but in interviews they bomb even simple technical topics when practical implementation is concerned.

If you don't code, of course you won't be as good at coding; that's a practical fact. Sure, beyond a certain skill level the decline may not be noticeable early on, thanks to years of built-up practice and knowledge.

But considering how much more interesting technology appears every year, if you don't keep improving with hands-on learning, and don't slow down to take stock, you won't be capable of anything more than delusional thinking about how awesome your skill level is.


I love the batteries that RoR or Django gives you, but then I also remember how much time it takes to maintain old projects. Updating a project that was started 5-6 years ago takes a lot of time. Part of that is managing dependencies. For Django, they can easily go above 100. Some of them have to be compiled with specific versions of system libraries. Even Docker does not save you from a lot of problems.

Right now, I would rather use Go with a simple framework, or even without one. With Go, it's so easy just to copy the binary over.


I'm working on a large (at least 300k+ loc) Django code base right now and we have 32 direct dependencies. Mostly stuff like lxml, pillow and pandas. It's very easy to use all the nice Django libs out there but you don't have to.

I was talking about total deps, not direct. By installing something like Celery, you get 8-10 extra dependencies that, in turn, can also have extra deps. And yeah, extra deps can conflict with each other as well.
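
To see the gap between direct and total deps for yourself, here is a rough sketch using only the standard library. `transitive_deps` and its name parsing are my own simplification; pip's real resolver handles markers, extras, and version solving far more carefully.

```python
# Rough sketch: recursively walk a package's declared requirements via
# importlib.metadata to count total (not just direct) dependencies.
import re
from importlib import metadata

def transitive_deps(package, seen=None):
    """Collect the (lowercased) names of all transitive dependencies."""
    seen = set() if seen is None else seen
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # not installed -- nothing to walk
    for req in requires:
        if "extra ==" in req:  # skip optional extras
            continue
        # Crude split: take the name before any version/marker syntax.
        name = re.split(r"[\s;<>=!~\[(]", req, maxsplit=1)[0]
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen
```

Running this against something like Celery in a real environment shows how one direct dependency fans out into many indirect ones.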

I find the thought daunting but the reality surprisingly easy.

You just keep up as you go, as long as you keep things close to the framework it's fine.


> You just keep up as you go

He said "Updating a project that was started 5-6 years ago takes a lot of time."


Yes but GP said "In reality it's not that much".

Not much work every few months turns into a lot over years, especially if you skip a few of those "every few months" events.

That is obviously true but doesn't mean as much as you seem to think. Washing laundry is also not much work but it adds up to a lot over the years, especially if you skip a few weeks of laundry every once in a while. That is not an excuse to not do it.

The answer is the same in both cases: acquire some discipline and treat maintenance with the respect it deserves.


I'm confused. It's too much work to upgrade dependencies, but not too much time to write from scratch and maintain, in perpetuity, original code?

Yes. I've probably spent more time maintaining a trivial Rails app originally written in 2007 than I spent writing it in the first place.

It is easy, and people tend to do what is easy. It takes more effort to minimise dependencies. Your boss or your client will not even notice.

Obviously there are some dependencies that you cannot easily avoid (like the things you mention). On the other hand, there is a lot of stuff used that is not that hard to avoid; things like wrappers for REST APIs are often not really necessary.


Sometimes I think the issue here is churn. Security fixes aside, what is it that updated dependencies really give? Can't some of these projects just... stop?

The issue with that is that the longer you wait to upgrade dependencies, the more pronounced the upgrade problems become, generally speaking, because more incompatibilities accumulate. If those 5-6 year old projects had been updated every now and then, the pain of updating them would be far less. As you point out, security is an aspect too: you can leave the project inactive, but then you might hit that problem.

Dependency hell. Usually how it goes is you have to develop a new feature, you find a library or a newer version of the framework that solves the problem but it depends on a version of another library that is incompatible with the one in your project. You update the conflicting dependency and get 3 new conflicts, and when you fix those conflicts you get 5 new conflicts, and repeat.

So churn causes more churn.

Also, breaking an API should be regarded very poorly. It isn't. But it should be.


I agree, but let's say you are looking for a library to solve your problem - you see one repo updated 2 weeks ago and the other one updated 5 years ago - which one do you choose?

Perhaps some kind of ‘this code is still alive’ flag is key. Even just updating the project. Watching issues. Anything showing ‘active but done’.

That depends. What problem do I have, exactly?

Do I need a library to sort an array? The 5 years ago option is going to be the more likely choice. A library updated 2 weeks ago is highly suspicious.

Do I need a library to provide timezone information? The 2 weeks ago option, unquestionably. The 5 years ago option will now be woefully out of date.


The real issue with Rails apps is keeping up with the framework and language versions. There are really two categories of dependencies.

One-off libraries that don't have a runtime dependency on Rails are typically very low-maintenance. You can mostly leave them alone (even a security vulnerability is unlikely to be exploitable for how you're using one of these, as often user input isn't even getting through to them). For instance, a gem you install to communicate with the Stripe API is not typically going to break when you upgrade Rails. Or adding httparty to make API requests to other services.

Then there are libraries that are really framework extensions, like devise for authentication or rspec for testing. These are tightly coupled to Rails, sometimes to its private internals, and you get all sorts of nasty compatibility issues when the framework changes. You have to upgrade Rails itself because you really do need to care about security support at that level, even for a relatively small company, so you can end up in a situation where leaving these other dependencies to fester makes upgrading Rails very hard.

(I run a startup that's a software-enabled service to upgrade old Rails apps).


I think you could only get around this by forcing your whole dependency chain to only add non-breaking security fixes (or backport them for all versions in existence). Otherwise small changes will propagate upwards and snowball into major updates.

Indeed, that's what a lot of Elixir and Erlang packages do: if it's done, then it's done.

"Security fixes aside" is too dismissive. Transitive dependencies with real CVEs can feel like the tail wagging the dog, but ignore them at your peril.

I have not had this experience as badly with Laravel. Their libraries seem much more stable to me. We've gone up 5 major versions of Laravel over the last year and a half and it was pretty simple for each major version.

Laravel is extremely stable and consistent.

It really depends how they were built. I have large Django apps running for a very long time that require minimal maintenance, but it’s because we were very deliberate about dependencies.

But I learned to do that by working on codebases that were the opposite.


Does batteries included somehow result in upgrading years old projects being a larger lift? I would think the opposite.

My experience has been the opposite, especially since Rails has included more batteries over the years. You need fewer non-Rails-default dependencies than ever, and the upgrade process has gotten easier every major version.

Rails is way more stable and mature these days. Keeping up to date is definitely easier. Probably 10x easier than a Node/JS project which will have far more churn.

I also think it's the opposite, since the dependencies are almost guaranteed to be compatible with each other. And I think Ruby libraries in particular are usually quite stable and maintained for a long time.

My medium-sized Django projects had close to 100 dependencies, and when you want to update to a new Django version, the majority of them must be updated too.

Thankfully, updating to a new Django version is usually simple. It does not require many code changes.

But finding small bugs after an update is hard, unless you have very good test coverage. New versions of middleware/Django plugins often behave slightly differently, and it's hard to keep track of all the changes when you have so many dependencies.


Different experience with Django. I am only using a handful of deps; dj-database-url, dj-static, gunicorn, and psycopg are the only "mandatory" or conserved ones IMO, as a baseline.

Use uv for dep management. Make sure you have tests.

In the past month I migrated a 20-year-old Python project (2.6, using the Pylons library) to modern Python in 2 days. It runs 40-80 times faster too.


Complete opposite of my experience

I have plenty of RoR in production with millions of users, yearly we upgrade the projects and it's fine, not as catastrophic as it sounds, even easier with Opus now

It used to take at least a day of work. In a post-2025/11 world, it is under an hour. Maybe even 15 minutes if you've landed on a high quality LLM session.

In my experience, the magic makes the easy parts easier and the hard parts harder

I tried to find any recent issues related to AI in the Vim repo, but did not find any.

Offending commit https://github.com/vim/vim/commit/fc00006777594f969ba8fcff67...

Just Claude as a co-author.


There's a bunch of commits that have "supported by AI claude." as well. Whatever that's supposed to mean.

The lesson here is: don't put those comments into your commits. Use whatever tools you want to write code, and just use them. It's nobody else's business. If someone overuses AI (which is common), it's quite obvious anyway.

> it's nobody else's business

Human origin certification is coming. It might be hard to enforce, but you should probably respect the intent if a project tries to enforce it.


Agreed. It's impossible to force a user to disclose whether their commit has any AI influence or output anyway. Hard forks like this are just a short-sighted reaction.

If the person behind this fork has been active in FOSS or commercial development at all in the last 3 years, the odds that they've never come across undisclosed AI-generated code that looked reasonable have to be close to zero.


It’s someone else's business when you want your works to have a relationship with the world. Or were you content with talking to nobody?

Am I required to disclose AI use in my own OSS projects on GitHub? No, I'm not.

Whether it's someone else's business that you're using AI does not require you to do anything in particular. This is a discussion on the relevance of motivations.

These "Co-Authored-By" messages are added automatically by Claude Code when it makes commits on its own, although you just need to instruct it not to do so.

Language grammars are ~200-250MB though. They are in a separate folder, and often they are all bundled to support all the languages. Some of them are HUGE.

  .rwxr-xr-x  4.6M aa    6 Mar 21:52  ocaml-interface.so
  .rwxr-xr-x  4.6M aa    6 Mar 21:52  rpmspec.so
  .rwxr-xr-x  4.9M aa    6 Mar 21:52  tlaplus.so
  .rwxr-xr-x  5.1M aa    6 Mar 21:52  ocaml.so
  .rwxr-xr-x  5.1M aa    6 Mar 21:52  c-sharp.so
  .rwxr-xr-x  5.3M aa    6 Mar 21:52  kotlin.so
  .rwxr-xr-x  5.4M aa    6 Mar 21:52  ponylang.so
  .rwxr-xr-x  5.5M aa    6 Mar 21:52  slang.so
  .rwxr-xr-x  6.1M aa    6 Mar 21:52  crystal.so
  .rwxr-xr-x  6.8M aa    6 Mar 21:52  fortran.so
  .rwxr-xr-x  9.2M aa    6 Mar 21:52  nim.so
  .rwxr-xr-x  9.5M aa    6 Mar 21:52  julia.so
  .rwxr-xr-x  9.9M aa    6 Mar 21:52  sql.so
  .rwxr-xr-x   16M aa    6 Mar 21:52  lean.so
  .rwxr-xr-x   18M aa    6 Mar 21:52  verilog.so
  .rwxr-xr-x   22M aa    6 Mar 21:52  systemverilog.so

That's exactly what I found. Why should these files exist at all? Some other IDEs just have a bunch of highlighting rules based on regular expressions and a folder of tiny XML grammar files instead of a folder of bloated shared libraries.

Because it's far more reliable to use proper parsers instead of a bunch of regular expressions. Most languages cannot be properly parsed with regexes.

Those files are compiled tree-sitter grammars, read up on why it exists and where it is used instead of me poorly regurgitating official documentation:

https://tree-sitter.github.io/tree-sitter
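
A toy illustration of why regexes fall short here (my own sketch, nothing to do with tree-sitter's internals): no single regular expression can match arbitrarily nested delimiters, while a trivial counter, let alone a real parser, can.

```python
# Why syntax highlighting wants a real parser: regular expressions cannot
# recognize unbounded nesting, but even a tiny stateful check can.
import re

def balanced(src: str) -> bool:
    """Check balanced parentheses with a depth counter -- something no
    single regex can do for arbitrary nesting."""
    depth = 0
    for ch in src:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

# A fixed-depth regex only works up to the nesting it was written for:
two_deep = re.compile(r"^\((?:[^()]|\([^()]*\))*\)$")
print(balanced("(a(b(c)))"))              # True
print(bool(two_deep.match("(a(b(c)))")))  # False: nesting exceeds the pattern
```

You can keep widening the regex for one more level of nesting, but there is always an input one level deeper; a grammar-driven parser handles it uniformly.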


Funny enough, they are less than 10MB when compressed. I guess they could use something like upx to compress these binaries.

The whole Linux release is 15MB, but it uncompresses to a 16MB binary and 200MB of grammars on disk.

Why do we need to have 40MB of Verilog grammars on disk when 99% of people don't use them?


That would waste CPU time and introduce additional delays when opening files.

They could probably lazily install the grammars like neovim does, but as someone who doesn't have much faith in the reliability of internet infrastructure, I'll personally take it...

Just ran `:TSInstall all` in neovim out of curiosity, and the results were predictable:

  ~/.local/share/nvim/lazy/nvim-treesitter/parser
  files 309
  size 232M

  /usr/lib/helix/runtime/grammars
  files 246
  size 185M

If disk space is important for your use case, I guess filesystem compression would save far more than just compressing binaries with upx. btrfs+zstd handle those .so well:

  $ compsize ~/.local/share/nvim/lazy/nvim-treesitter/parser
  Type       Perc     Disk Usage   Uncompressed Referenced
  TOTAL       11%       26M         231M         231M

  $ compsize /usr/lib/helix/runtime/grammars
  Type       Perc     Disk Usage   Uncompressed Referenced
  TOTAL       12%       23M         184M         184M

I mean, they could decompress each grammar once, the first time the language is used. It would still be fully offline, just with a bit of decompression.
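
That scheme is simple to sketch. The layout, names, and gzip choice below are hypothetical illustrations, not how Helix actually ships grammars:

```python
# Sketch of "decompress on first use": ship grammars compressed and
# inflate each one lazily into a cache the first time it is needed.
import gzip
import shutil
from pathlib import Path

def ensure_grammar(name: str, store: Path, cache: Path) -> Path:
    """Return the decompressed grammar path, inflating it on first use."""
    cache.mkdir(parents=True, exist_ok=True)
    out = cache / f"{name}.so"
    if not out.exists():  # only pay the decompression cost once
        with gzip.open(store / f"{name}.so.gz", "rb") as src, \
             open(out, "wb") as dst:
            shutil.copyfileobj(src, dst)
    return out
```

After the first open of a given language, subsequent startups hit the cached `.so` and pay nothing, and everything stays fully offline.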

If this is a concern, why not compress at the filesystem level?

For real parsing, a proper compiler codebase (via a language server implementation) should be used. Writing something by hand can't work properly, especially with languages like C++ and Rust, with their complex includes/imports and macros. Newer LSP revisions support semantic highlighting/colorizing, but if some LSP implementation doesn't support it, a regex-based fallback is mostly fine.

No, expectations have shifted. In a lot of companies, managers expect you to use LLMs to produce more features faster.

It's a mobile CPU. They did not modify it. Mobile devices run with a single USB port.

That's only true for M Macs. Intel Macs with 8 GB of RAM perform pretty poorly.

My Intel Mac Mini with 8GB of RAM has always seemed fine on the rare occasions I use it.

You don't want a browser with a bunch of RCEs that can be triggered by opening a web page...


You do want a browser that can execute remote code, but you want it to keep that code sandboxed. The hard part is executing the code safely.


It's good enough if you don't go wild and allow LLMs to produce 5k+ lines in one session.

In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you make isolated changes of 100-1000 lines.

Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But there are a lot of companies and tasks where backend bugs cost a lot, whether in big customers or direct revenue. No model will let you go wild in that case.

