Hacker News

I've found PostgreSQL to be extremely fast if you store time series in arrays (http://www.postgresql.org/docs/9.4/static/arrays.html) in a round-robin fashion. You can also limit the array size so that you have a fixed number of points per table row (thereby splitting your series across multiple rows), and if you size it to fit on one PG page, it's quite performant.
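A minimal sketch of what such a schema might look like (the table and column names here are hypothetical illustrations, not something the comment specifies):

```sql
-- Hypothetical schema: each row holds one fixed-size chunk of a series.
CREATE TABLE series_chunk (
    series_id  integer NOT NULL,
    chunk_no   integer NOT NULL,   -- which slice of the series this row holds
    points     float8[] NOT NULL,  -- e.g. 864 datapoints per row
    PRIMARY KEY (series_id, chunk_no)
);

-- Writing round-robin means updating a slot in place rather than appending,
-- e.g. (1-based array indexing, as Postgres uses by default):
-- UPDATE series_chunk SET points[17] = 42.0
--  WHERE series_id = 1 AND chunk_no = 3;
```

Keeping the array small enough that the row fits on a single page avoids TOASTing the array out-of-line, which is presumably where the performance benefit comes from.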


I don't think you can limit them. From the docs: "However, the current implementation ignores any supplied array size limits, i.e., the behavior is the same as for arrays of unspecified length."


By "limit" I mean your code would do it, not Postgres. E.g., if your series is 86400 datapoints long (the number of seconds in a day), you would store it as 100 rows of 864-element arrays.
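The split described above is just integer arithmetic. A small sketch (the function name and chunk size are illustrative, assuming the 100 × 864 layout from the example):

```python
# Map a 0-based sample index to (row_number, position_within_array)
# when a day of 86400 per-second datapoints is split into
# 100 rows of 864-element arrays.
POINTS_PER_ROW = 864  # hypothetical chunk size from the example

def locate(sample_index: int) -> tuple[int, int]:
    """Return which row and which array slot hold the given sample."""
    return divmod(sample_index, POINTS_PER_ROW)

print(locate(0))      # first sample of the day -> (0, 0)
print(locate(86399))  # last sample of the day  -> (99, 863)
```

The application enforces the "limit" by never writing past `POINTS_PER_ROW` slots in any one row; Postgres itself imposes no bound on the array.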



