Hacker News | thanksgiving's comments

Of course, anyone would agree that if wishes were fishes, QAs should not exist. We would all use agile with cross-functional teams. Every single team member can do any work that may be needed. All team members can take time off any time they need to because we have full coverage and the world is a beautiful place.

Of course, none of this is true in the real world.

For example, just last week we had a QA essentially bring down our web application on the staging environment, reliably reproducible with a sequence of four clicks. Follow the sequence with roughly the right timing and boom, exception.

Should this have been caught before a single line of code was written? Yes, it should have been. However, the reality is that it was not. Should it have been caught by a unit test? Integration test? End-to-end test? Code review? I'd argue that as we barrel toward a world of AI slop, we need to slow down more. We need QA more than ever.


If you need a Lightning layer on top anyway, there is no reason to use a blockchain. It would be better for every single person and business to have an account directly with the European Central Bank.


> like why they use bundles of analog copper wire for audio instead of digital fiber

Good quote. It got me to read the article because I was curious why...


Subject: Website Feedback and Issues

1. Wilson J. Holmes https://wilsonjholmes.com/ The HTTPS version is not working and the page fails to load. SSL Analysis: https://www.ssllabs.com/ssltest/analyze.html?d=wilsonjholmes...

2. Mndnm https://mndnm.io The About page is currently not working. The RSS feature looks like a good addition, though I have not tested it yet.


Somehow I knew in my heart this was about Ronald Reagan even though you said the seventies.


We remember Reagan because he was a colorful character and vociferous advocate of markets, but the changes we associate with him (e.g. Ralph Nader getting shut out of Congress) started under Carter and were continued under Clinton.


Er, that example most definitely isn't one of 'the changes we associate with him'.


We aren't talking about the initial downloads though. We are talking about updates. I am like 80% sure you should be able to send what changed without sending the whole game as if you were downloading it for the first time.


Helldivers' engine does have that capability: bundle patches include only modified files plus markers for deleted files. However, the problem with that, and likely the reason Arrowhead doesn't use it, is the lack of a process on the target device to stitch the patches back together. Instead, patch files just sit next to the original files. So the trade-off for smaller downloads is a continuously growing footprint on disk.


Generally, "small patches" and "well-compressed assets" sit at opposite ends of a trade-off spectrum.

More compression means large change amplification and less delta-friendly changes.

More delta-friendly asset storage means storing assets in smaller units with less compression potential.
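The trade-off can be sketched with a toy experiment. This is just a sketch: it assumes zlib as the codec, a fixed 1 KB chunk as the "delta-friendly" unit, and synthetic repetitive data; real pipelines use dedicated delta encoders like bsdiff rather than a byte-position comparison.

```python
import zlib

CHUNK = 1024
data_v1 = (b"the quick brown fox jumps over the lazy dog. ") * 2000
data_v2 = bytearray(data_v1)
data_v2[10 * CHUNK] ^= 0xFF  # a one-byte edit inside chunk 10
data_v2 = bytes(data_v2)

def chunks(buf):
    return [buf[i:i + CHUNK] for i in range(0, len(buf), CHUNK)]

# Delta-friendly storage: compress each chunk independently.
c1 = [zlib.compress(c) for c in chunks(data_v1)]
c2 = [zlib.compress(c) for c in chunks(data_v2)]
changed = sum(1 for a, b in zip(c1, c2) if a != b)

# Ratio-friendly storage: compress the whole stream at once.
w1 = zlib.compress(data_v1)
w2 = zlib.compress(data_v2)
first_diff = next((i for i, (a, b) in enumerate(zip(w1, w2)) if a != b),
                  min(len(w1), len(w2)))

print("chunked total:", sum(map(len, c1)), "bytes; chunks changed:", changed)
print("whole total:  ", len(w1), "bytes; output diverges at byte", first_diff)
```

Chunked storage pays a size penalty (each 1 KB chunk compresses on its own, so repetition across chunks is wasted), but the one-byte edit only perturbs a single chunk's compressed output; the whole-stream version compresses far smaller, but the edit ripples through the compressed stream from the change point onward.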

In theory, you could have the devs ship unpacked assets, then make the Steam client be responsible for packing after install, unpacking pre-patch, and then repacking game assets post-patch, but this basically gets you the worst of all worlds in terms of actual wall clock time to patch, and it'd be heavily constraining for developers.


From my understanding of the technique, you're wrong despite being 80% sure ;)

Any changes to the code or textures need the same preprocessing done, so a large patch is basically 1% actual changes + 99% preprocessed data regenerated for that optimization.


How about incorporating postprocessing into the update procedure instead of preprocessing?


I love this comment, because if you flip the article (see below) the concept remains the same, and yet I am eased into the premise with "obvious facts" I already believe. Plus, inverting like this focuses on what we want to do to survive rather than the negativity of what causes failure.

> Startups have a notorious failure rate – some estimates say 9 out of 10 startups eventually fail. Yet, contrary to what many first-time founders expect, startups rarely fail because a giant competitor swoops in or because of some external “homicide.” Instead, most startups die by “suicide,” meaning their demise is self-inflicted by internal issues. As YC founder Paul Graham once noted, “Startups are more likely to die from suicide than homicide.” In my experience building two startups, I’ve seen that the biggest threats usually come from within the company’s own walls, not from the outside world.

Updated by me:

Startups have an incredibly small survival rate. One in ten startups survives. The ones that survive don't survive simply because a giant competitor didn't kill them or because some external affliction didn't cause them to fail. Counterintuitively, the startups that survived didn't actively try to kill themselves through internal issues. (The rest I can copy-paste.) As YC founder Paul Graham once noted, “Startups are more likely to die from suicide than homicide.” In my experience building two startups, I’ve seen that the biggest threats usually come from within the company’s own walls, not from the outside world.


Happy Thanksgiving, everyone!


Exactly this. I can make a hundred commits that are one file per commit and I can later go back and

    git reset --soft HEAD~100 
and that will cleanly leave things as if the hundred commits never happened, with all their changes still staged.
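Scaled down to five commits instead of a hundred, the same trick looks like this (a self-contained sketch in a temp directory; file names and commit messages are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you

# Make five tiny commits, one file each.
for i in 1 2 3 4 5; do
  echo "$i" > "file$i.txt"
  git add "file$i.txt"
  git commit -qm "commit $i"
done

# Rewind the branch pointer four commits; the changes stay staged.
git reset --soft HEAD~4
git commit -qm "one squashed commit"

git rev-list --count HEAD   # the root commit plus the squashed one
```

`--soft` only moves the branch pointer; the index and working tree are untouched, which is why a single follow-up commit captures everything the rewound commits contained.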


Exactly, any work you do on top of this makes your work hostage to Windows.

