
> 20 million rows

> 30 seconds

Ugh, that's terrible. Grepping the file, or reading and parsing a CSV, would probably be faster.



It was pretty awful watching the VPS choke to death when I tried implementing any feature using full set prices: pegged at 99% CPU usage, with the Go garbage collector frantically trying to keep the process from crashing. That was not an environment I wanted to take to production.


Could it be that the environment didn't have enough memory? I've seen frantic swapping and garbage collection before in VMs without enough RAM.


> vps

There's your problem. If you're at the "make it fast" part of "make it work, make it right, make it fast", you should almost certainly be on dedicated hardware.


That was actually the "make it work" portion: I was attempting to grab some 10 full set prices at a time for a gallery feature and managed to crash InfluxDB.



