adrian_b's comments | Hacker News

The von Neumann report was written after von Neumann had several discussions with the ENIAC team about how to make a better computer as a successor to ENIAC.

The report was not published formally, but it was "leaked", so it does not credit anyone for the ideas contained in it.

Because of this, with few exceptions it is impossible to determine with certainty which parts of the report are original ideas of von Neumann and which are ideas that he learned during the discussions with the ENIAC team.

An example of an idea that certainly did not come from the ENIAC team was the proposal to use an iconoscope CRT as the main memory (first implemented in the British Manchester computers, so this kind of memory became known as the Williams-Kilburn tube). The ENIAC team had a different idea of what to use as a memory, i.e. delay lines taken from radar. Von Neumann replaced this suggestion with a CRT, because he thought that a random-access memory was better.

The von Neumann report had an exceptional importance because it defined with perfect clarity what a digital computer should be and what its structure should be, and then it provided a detailed description of how such a computer should be designed, good enough to enable anyone who read the report to build one. This effect really happened: in the following decade a great number of teams at universities, government agencies, independent research centers like IAS and various companies, both in the USA and in other countries, built electronic computers, exploring various design options.

There is no doubt that the clarity of the report is due to von Neumann; whatever the ENIAC team's ideas about a future computer were, they were much more jumbled.

Because the ENIAC team did not publish their ideas, it does not really matter what they thought. The world learned how to make general-purpose electronic computers from the von Neumann report.

ENIAC was a programmable computing automaton, but it was not a digital computer in the modern sense of the word, i.e. a digital system with 4 levels of closed positive-feedback loops. (The complexity of a digital system is determined by the number of levels of nested positive-feedback loops: combinational logic has 0 levels, a memory has 1 level, an automaton has 2 levels, a processor has 3 levels and a computer has 4 levels. These are minimum numbers, as a real device may have more levels than strictly necessary, to achieve various advantages.)
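
To make the first three levels concrete, here is a minimal Python sketch of my own (illustrative only): a combinational gate (0 loops), an SR latch that closes one feedback loop to store a bit, and a toggle automaton that closes a second loop through next-state logic.

    # Level 0: combinational logic; output depends only on current inputs.
    def nor(a: int, b: int) -> int:
        return int(not (a or b))

    # Level 1: memory; an SR latch closes one positive-feedback loop
    # around two NOR gates, so the stored bit persists when s = r = 0.
    class SRLatch:
        def __init__(self) -> None:
            self.q = 0

        def step(self, s: int, r: int) -> int:
            q, qn = self.q, int(not self.q)
            for _ in range(4):          # iterate until the loop settles
                q, qn = nor(r, qn), nor(s, q)
            self.q = q
            return q

    # Level 2: automaton; a second loop feeds the stored state back
    # through next-state logic (here a 1-bit toggle counter).
    class ToggleAutomaton:
        def __init__(self) -> None:
            self.latch = SRLatch()

        def clock(self) -> int:
            q = self.latch.q
            return self.latch.step(s=int(not q), r=q)

    fsm = ToggleAutomaton()
    print([fsm.clock() for _ in range(4)])   # [1, 0, 1, 0]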


Then IBM also built the hybrid electronic-electromechanical IBM SSEC (operational from January 1948), which was a truly general-purpose digital computer. It was available before any fully electronic computer, it was for a few years the most powerful computer in the world, and it solved many important problems.

While ENIAC, being completely electronic, remained faster than SSEC for a few problems, most problems could not be solved on ENIAC at all, because it had no large-capacity memory. So for most computing problems SSEC was the best choice until the completion of the first electronic computers with memories based on cathode-ray tubes, delay lines or magnetic drums.

IBM SSEC was available as a public computing service, so it was used by many companies and institutions. Besides SSEC, before the first electronic computers there were a few other electromechanical computers, e.g. at Bell Labs and at Harvard, but those were slower and had fewer users.


The publication of the von Neumann report has been many orders of magnitude more important for the computing industry than ENIAC.

Soon after that publication, many teams in many places and in several countries started designing computers, and they published a lot of research results, which led to the establishment of the computer industry.

The first computer company was a flop. UNIVAC was indeed the first important commercial computer (though not the first commercial computer). Nevertheless, UNIVAC was already mostly obsolete at the time of its introduction, due to its use of delay-line memory, for which better alternatives were already known.

In the USA, UNIVAC had the advantage of being first on the market, but the IBM computers that followed shortly after were more innovative, so IBM deserved to become the market leader instead of UNIVAC. Moreover, IBM was more open at that time and published a lot of useful technical information about its computers, which contributed to the advancement of the entire computing industry.

The ENIAC team and their successors made a rather minor contribution to the early years of the US computing industry, in comparison with research centers like IAS, universities like MIT, government agencies like NBS (the predecessor of NIST) and companies like IBM, AT&T and a few others, all of which introduced essential innovations in computers and also published the results of their work, enabling reuse by others.


The open source developers can definitely force you to do a lot of work.

Whenever they make changes that break backwards compatibility in the program they maintain (an example is replacing X11 with Wayland in the Linux distribution that you may have used for many years), the users affected by the changes are forced to do potentially a lot of work in order to find alternatives.

For some special application that you use from time to time, finding an alternative and switching to it may be simple. But when the incompatible changes affect a fundamental system component that must be used all the time and without which nothing works, e.g. Wayland or systemd, then you must change not a single application but the entire Linux distribution, and that can be time-consuming, because you may have to learn to do a lot of things differently than you are accustomed to.

So obviously, users are not happy about such changes, which push work onto them without any benefits.

The better Linux distributions, for example Gentoo, may offer their users choices even for important components like X11 vs. Wayland or OpenRC vs. systemd. But the most popular Linux distributions tend not to offer choices for this kind of system component, so when they replace such a component, the users must either accept the change or stop using that distribution, and both choices are bad, because either way they must adapt their workflow.
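
As an illustration of how Gentoo exposes such choices (a sketch based on its USE-flag and profile mechanisms; exact flag and profile names depend on the release):

    # /etc/portage/make.conf -- keep X11 + OpenRC, avoid Wayland/systemd
    USE="X elogind -wayland -systemd"

    # and select a non-systemd (OpenRC) profile, e.g.:
    #   eselect profile set default/linux/amd64/23.0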


Taking into account that Iran's own oil facilities were bombed earlier this week and that Iran had threatened to retaliate if that happened, what Iran is doing seems indeed fair play.

Before South Pars, Iran bombed:

    Ras Laffan LNG complex in Qatar
    Ras Tanura oil refinery in Saudi
    drone attacks on Saudi oil fields
    Ruwais refinery in Abu Dhabi
    Shah gas field in Abu Dhabi
    port of Fujairah
    Bapco oil refinery in Bahrain

Agreed. I don't know whether to call it hypocrisy, delusion, bullshit, etc., but it is ridiculous that when Iran fights back, the media and politicians act as if Iran started all this.

Example: the UK got a pat on the back for initially saying no to the US using their territory to launch bombing raids, and then later allowed it for "defensive" missions.

"Defensive"!?!

I'll punch you in the face, you punch me back, then when I punch you again I am only defending myself.


No.

All GDDR memory is intended only to be soldered around a GPU chip, on the same PCB. This is how it achieves a memory throughput that is 4 to 8 times higher than that of the DDR memories used in DIMMs or SODIMMs.
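
Back-of-the-envelope numbers behind that ratio (typical example configurations of my own choosing, not spec quotes): peak bandwidth is bus width times per-pin data rate.

    # peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gb/s
    def bw(bus_bits, gbps_per_pin):
        return bus_bits / 8 * gbps_per_pin

    print(bw(128, 6.4))   # dual-channel DDR5-6400 DIMMs: ~102 GB/s
    print(bw(256, 16))    # GDDR6 @ 16 Gb/s on a 256-bit GPU board: 512 GB/s
    print(bw(384, 28))    # GDDR7 @ 28 Gb/s on a 384-bit board: 1344 GB/s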


The dependencies needed for replacing equipment that has a lifetime of many decades are far less dangerous than the dependencies for consumables like fuel.

Critical minerals and metals are easy to stockpile, so a buffer sufficient for many years of infrastructure replacement can be kept.

Such dependencies may remain a problem during a war, when the infrastructure could be destroyed, but in normal times they would not be sufficient to enable the kind of blackmail that can be done with consumables like food and fuel.


I'm not sure the stockpiling you mention will work the way you propose. We already stockpile oil, yet we still see price shocks. Stockpiling metals can still lead to price shocks due to reluctance to release them and the need to eventually replenish them.

We also stockpile foods and medications, and that doesn't provide price stability.


Yes, but you are missing the critical bit.

Food is a constant need, and you can't exist for long without it.

Sure, we need to increase battery storage, but in ~5 years' time it will be maintenance, assuming the correct adoption rate. So yes, we will still _need_ batteries, but we don't need a constant supply of new batteries to keep the lights on.


Full adoption in ~5 years is not even close to possible.

Once you reach the maintenance phase you still have to replace stuff, so, just like with food, you will need a constant supply.


Right, but you realise that even if we managed to buy everything all at once, we would be looking at at least a 15-year operating lifespan.

Yes, things will need replacement, but not all at once and not at the same volume.
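
A toy calculation with made-up round numbers shows the difference between the build-out and maintenance phases:

    stock_gwh = 1000        # hypothetical installed storage target
    print(stock_gwh / 5)    # build-out over 5 years: 200 GWh/year of new cells
    print(stock_gwh / 15)   # replacement at a 15-year lifespan: ~67 GWh/year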


In my opinion, fingers are too big.

I need to do precise drawing work, like drawing electronic schematics or PCB layouts or other kinds of technical drawings.

I cannot imagine doing such things comfortably with a touchpad.

I have also not used a mouse for more than a decade, but that is because at some point I switched from a mouse to a trackball, and a few years ago I switched to something much better, i.e. a small graphics tablet (the size of a mouse pad) used in mouse mode, a.k.a. "Relative" mode, instead of its default "Absolute" mode.
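
For anyone wanting to try this on Linux/X11, the mode switch can be done with xsetwacom (the device name below is an example; list yours first):

    xsetwacom --list devices
    # e.g. "Wacom Intuos S Pen stylus    id: 10    type: STYLUS"
    xsetwacom set "Wacom Intuos S Pen stylus" Mode "Relative"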

With a sharp stylus, I can point much more precisely than I could ever do with my fingers. I can also move the cursor instantaneously to any part of the screen, with a minimal movement of my fingers, because the velocity of the point of the stylus is much greater than I could ever achieve for the tip of my finger.

Moreover, the position in which I hold the stylus is more comfortable than having to put a finger on a touchpad, and the movements of the fingers have a smaller amplitude than a touchpad requires, which also improves comfort. (To reach any point on the screen, only the 3 fingers holding the stylus have to move; no hand movement is necessary.)

So from my experience, a touchpad cannot compete with a stylus for accuracy, speed or comfort. (With a very light stylus, like those from Wacom, the transitions between touch typing on the keyboard and using the graphic pointing device are also much faster than with a mouse, because touch typing remains possible with the stylus between the fingers.)

A touchpad may be fine for text manipulation or Internet browsing, but for drawing it is too slow in comparison with a mouse or trackball, and even more so in comparison with a stylus, while the devices that are better for drawing can also easily handle the simpler tasks for which a touchpad is acceptable.


The current DIMM and SODIMM modules cannot be used at speeds much higher than those available now.

This is why there are several proposals for improved memory-module formats using different sockets, like LPCAMM2, which should be able to work with faster memories.

However, even LPCAMM2 is unlikely to reach the speeds of soldered GDDR7.
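
Rough peak-bandwidth arithmetic (typical figures of my own choosing, not spec quotes):

    print(128 / 8 * 7.5)   # LPCAMM2 with LPDDR5X-7500, 128-bit: ~120 GB/s
    print(384 / 8 * 28)    # soldered GDDR7 @ 28 Gb/s, 384-bit: 1344 GB/s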


Can't they make it easier to solder / desolder?

It is not very difficult to solder/desolder, but you need suitable tools, which are not cheap.

Moreover, when you do this manually, unless it is something that you do every day, it may be quite difficult to be certain that the soldering has been done well enough to remain reliable in long-term use. In industry, very expensive equipment, e.g. X-ray machines, is used to check the quality of soldering.

So unlike inserting a memory module into a socket, which is reasonably foolproof, soldering devices is not something that can be required for a product sold to the general population.

When I was young, there still existed computer kits where you soldered all the ICs onto the motherboard yourself, so you could get a computer at a much lower price than a fully assembled one. My first PC was of this kind.

However, at that time PCs were still bought by only a small fraction of the population, people who could be expected to be willing to learn things like how to solder and to accept the risk of damaging the product they had bought. Today PCs are addressed to the general public, so nobody would offer GPU cards that you must solder yourself.


You mean 500 GB/s, not Gb/s (actually 537 GB/s).

Unfortunately that does not matter. Even on a cheap desktop motherboard the memory bandwidth is higher than that of 16-lane PCIe 5.0.

Therefore the memory bandwidth available to a discrete GPU is determined by its PCIe slot, not by the system memory.

If you install multiple GPUs, on many motherboards that will halve the bandwidth of the PCIe slots, for an even lower memory throughput.
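
The arithmetic behind this (my own rough numbers): PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding, so an x16 slot tops out well below even a desktop's dual-channel DDR5.

    print(16 * 32 * 128 / 130 / 8)   # PCIe 5.0 x16, per direction: ~63 GB/s
    print(128 / 8 * 6.4)             # dual-channel DDR5-6400: ~102 GB/s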


> on many motherboards that will halve the bandwidth of the PCIe slots

Not on boards that have 12 channels of DDR5.

But yeah, squeezing an LLM from RAM through the PCIe bus is silly. I would expect it to be faster to just run a portion of the model on the CPU, in llama.cpp fashion.


It is much faster, yeah. llama.cpp supports swapping between system memory and the GPU, but it's recommended that you don't use that feature, because it's rarely the right call vs. using the CPU to do inference on the model parts that are in system memory.

Edit: the setting is "GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"... useful if you have unified memory, very slow if you do not.
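
For reference, the usual partial-offload invocation looks something like this (a sketch; the model path and layer count are placeholders):

    # run 20 transformer layers on the GPU, the rest on the CPU
    ./llama-cli -m model.gguf --n-gpu-layers 20 -p "your prompt"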


Talking about a dual-socket SP5 EPYC with 24 DIMM slots and 128 PCIe 5.0 lanes.

It's fast for hybrid inference, if you get the KV cache and the MoE layers tuned between the Blackwell card(s) and CPU offloading.

We have a prototype unit and it's very fast with large MoEs.
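
A sketch of that kind of tuning using llama.cpp's --override-tensor option (assuming a recent build; the tensor-name pattern depends on the model's tensor naming):

    # keep the attention/KV-heavy layers on the GPU, push the MoE
    # expert tensors into the large system RAM
    ./llama-cli -m big-moe.gguf --n-gpu-layers 99 -ot "ffn_.*_exps=CPU"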

