
This is an extremely limited view of "scientific computing" that seems to focus only on analytics, which is a tiny part of sci comp.

Your "stack" does nothing for solving ODEs, PDEs, or DAEs, Fourier analysis, numerical integration, automatic differentiation, linear equation system solvers, preconditioners, nonlinear equation system solvers, the entire field of optimization, inverse problems, statistical methods, Monte Carlo simulations, molecular dynamics, PIC methods, geometric integration, lattice quantum field theory, ab initio methods, density functional theory, finite difference/volume/element methods, lattice Boltzmann methods, boundary integral methods, mesh generation methods, error estimation, uncertainty quantification...

Those are just off the top of my head, the list goes on and on.



I completely agree that there are many scientific libraries in Python which scale up. I was addressing the article, which showed a more advanced way to use Python with the goal of making it applicable to large datasets. If you were to implement a method from scratch, or scale one up to a larger dataset, you would end up using Numba, NumPy, and Dask. This is from a lower-level programming perspective: implementing and integrating methods yourself rather than pipelining calls to higher-level scientific libraries.
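To make the "scale up to a larger dataset" point concrete, here is a minimal sketch of the chunked, out-of-core pattern that Dask's array interface provides, emulated with plain NumPy so it is self-contained. The function name `chunked_mean` and the chunk size are illustrative choices, not anything from the article; with Dask installed, the equivalent would be roughly `da.from_array(x, chunks=chunk).mean().compute()`.

```python
import numpy as np

def chunked_mean(x, chunk=1000):
    """Mean of a 1-D array computed one chunk at a time.

    Emulates the out-of-core pattern dask.array provides: each block
    is processed independently, so the whole array never needs to be
    resident in memory at once (with Dask, blocks are loaded lazily
    and can be scheduled across cores or machines).
    """
    total, count = 0.0, 0
    for start in range(0, x.shape[0], chunk):
        block = x[start:start + chunk]  # Dask would materialize this lazily
        total += block.sum()
        count += block.size
    return total / count

x = np.arange(10.0)          # mean is 4.5
result = chunked_mean(x, chunk=3)
```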

Just for some context: https://www.scipy.org/about.html https://www.scipy.org/topical-software.html
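For instance, the first item on the list above, ODE solving, is covered by SciPy itself. A minimal sketch using `scipy.integrate.solve_ivp` (the problem and tolerances here are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve dy/dt = -y with y(0) = 1; the exact solution is exp(-t).
sol = solve_ivp(lambda t, y: -y, t_span=(0.0, 5.0), y0=[1.0],
                rtol=1e-8, atol=1e-10)

# Error of the numerical endpoint against the exact value exp(-5).
err = abs(sol.y[0, -1] - np.exp(-5.0))
```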


I have yet to see a situation where Numba makes real sense, as compared to just dropping down into C(++) or Fortran when you need to do the heavy lifting. Can you give me a good example?
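One commonly cited case (my illustration, not the parent's) is a tight loop that NumPy can only vectorize at a large memory cost, such as pairwise distances: vectorizing allocates O(n²) temporaries, while a JIT-compiled loop needs none. A sketch, with a pure-Python fallback so it runs even where Numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:
    # Fallback: a no-op decorator so the code still runs without Numba,
    # just without the compilation speedup.
    def njit(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

@njit
def pairwise_dist(X):
    """Euclidean distance matrix via explicit loops.

    In NumPy this is typically written with O(n^2 * d) broadcasting
    temporaries; Numba compiles the loops to machine code with no
    extra allocation beyond the output.
    """
    n, d = X.shape
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(d):
                diff = X[i, k] - X[j, k]
                s += diff * diff
            out[i, j] = s ** 0.5
    return out
```

The usual argument for Numba over dropping to C/Fortran is that this stays in one language and one build system; whether that outweighs the maturity of compiled extensions is exactly the parent's question.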




