
We might not be able to quantitatively measure it, but we can run studies to evaluate what individuals and teams can handle. Human factors people do this; it's a subfield of industrial engineering. The military runs these studies, and I'd imagine air traffic control does as well. You sometimes get really surprising results.


That's something I've noticed: the way we treat systems developed to work on data is _completely_ different from systems developed to work on, idk, oil.

You can build data refineries (ETL pipelines) the same as an oil refinery. The difference is, the engineers who build the oil refinery create manuals and standard operating procedures to operate the refinery, because if they don't, then the new board operator will press the wrong button and blow out every window in a five-mile radius.

When you build a data refinery, no one documents _anything_, no matter how many times you ask the engineers on the team to do it. Will it blow up in a massive fireball if you do it wrong? No, but it will corrupt data, with real business consequences. You can keep the 40 different microservices for the data refinery in your head though, right?


Oil refineries don't have backup restore points.


Depending on what exactly it is that you're storing or processing, neither does data.

Think sensor data, where the sensor is a vital-signs monitor in a hospital. The service that reads its output and stores it glitches out due to some sort of misclick by a user, and the record no longer shows that the patient has an arrhythmia. Or a service that reads off medication dosages for a pharmacist gets stuck on a single message.
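That last failure mode (a consumer wedged on one bad message, stalling everything behind it) is usually handled with a dead-letter queue. A minimal sketch in Python, with hypothetical message shapes and a stand-in `process` function, just to show the pattern:

```python
# Dead-letter-queue sketch: instead of retrying one malformed message
# forever (and stalling the whole stream), give up after N attempts
# and park it for a human to inspect.

MAX_ATTEMPTS = 3

def process(msg):
    # Stand-in for real parsing/storage; fails on malformed input.
    if msg.get("dosage") is None:
        raise ValueError(f"malformed message: {msg}")
    return msg["dosage"]

def drain(queue):
    delivered, dead_letters = [], []
    for msg in queue:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                delivered.append(process(msg))
                break
            except ValueError:
                if attempt == MAX_ATTEMPTS:
                    dead_letters.append(msg)  # park it, keep the stream moving
    return delivered, dead_letters

ok, dead = drain([{"dosage": 5}, {"dosage": None}, {"dosage": 10}])
print(ok, dead)  # [5, 10] [{'dosage': None}]
```

Real brokers (RabbitMQ, SQS, Kafka consumers with retry topics) offer this natively; the point is that "stuck on a single message" is a configuration choice, not an inevitability.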


I strongly doubt they're debating microservice vs monolith in that area.

Maybe you should try an example with cat photos.


Used to work for an EMR company. This is very much a real-world concern, I assure you.



