
With the mention of memristors, this sounds similar to what Wolfram recently wrote about neural-network hardware that could serve as memory and compute simultaneously:

> But even within the framework of existing neural nets there’s currently a crucial limitation: neural net training as it’s now done is fundamentally sequential, with the effects of each batch of examples being propagated back to update the weights. And indeed with current computer hardware—even taking into account GPUs—most of a neural net is “idle” most of the time during training, with just one part at a time being updated. And in a sense this is because our current computers tend to have memory that is separate from their CPUs (or GPUs). But in brains it’s presumably different—with every “memory element” (i.e. neuron) also being a potentially active computational element. And if we could set up our future computer hardware this way it might become possible to do training much more efficiently.

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
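To make the "fundamentally sequential" point concrete, here's a minimal sketch (plain NumPy, with toy data and names like w_true that are purely illustrative, not from the quote) of the batch-by-batch training loop Wolfram is describing. Each gradient is computed against the weights left by the previous update, so the steps form a chain that can't run concurrently:

  import numpy as np

  # Toy problem: recover w_true from linear data y = X @ w_true.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(1024, 8))
  w_true = rng.normal(size=8)
  y = X @ w_true

  w = np.zeros(8)       # the "memory": weights stored apart from the compute below
  lr = 0.01
  batch_size = 32

  # The sequential dependency: each batch's gradient depends on the
  # weights produced by the *previous* step, so updates can't overlap.
  for start in range(0, len(X), batch_size):
      Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
      grad = Xb.T @ (Xb @ w - yb) / len(Xb)  # gradient of mean-squared error
      w -= lr * grad                         # must finish before the next batch

  print("weight error:", np.linalg.norm(w - w_true))

While any one batch is being processed here, the rest of the dataset (and, in a real net, every other layer's weights) just sits idle in memory, which is the inefficiency the memory-as-compute hardware idea is aimed at.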


