
And this is the key insight that "everyone knows" and you apparently do not: every system has exactly one bottleneck at any given point in time. The bottleneck can move, alternate, or be so close in severity to other near-bottlenecks that it's hard to spot, but there is exactly one.

Your perception that "there are no bottlenecks" is exactly the perception Deming set out to disprove.

Riddle me this: how can a system perform faster than its single slowest component?

It cannot. Ergo, there is a single bottleneck that sets the pace of the entire system.



> every system has exactly one bottleneck at any given point in time.

Consider this system that has 5 sequential steps with these durations:

Step 1: 10 seconds

Step 2: 5 hours

Step 3: 7 seconds

Step 4: 5 hours

Step 5: 18 seconds

It would seem that steps 2 and 4 are both bottlenecks. Are you saying that in reality one of those two steps would not typically take exactly the same duration, so one of them would be considered the actual bottleneck?
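If the steps are pipelined (each step can start on the next item as soon as it finishes the current one), steady-state throughput is set by the single slowest step. A minimal Python sketch of the example above, assuming an exact tie between steps 2 and 4:

```python
# Sketch: which step bounds throughput in a pipelined 5-step process.
# Durations (in seconds) are taken from the example above.
durations = {
    "step1": 10,
    "step2": 5 * 3600,
    "step3": 7,
    "step4": 5 * 3600,
    "step5": 18,
}

# In steady state a pipeline emits one item per max(step duration).
slowest = max(durations.values())
bottlenecks = [name for name, d in durations.items() if d == slowest]

print(bottlenecks)          # ['step2', 'step4'] -- an exact tie
print(slowest / 3600, "h")  # 5.0 h between finished items
```

With two steps tied exactly, the arithmetic alone can't pick a single bottleneck; that tie is what the rest of this thread argues about.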


In this example, assuming sequential steps, if step 2 must be performed before step 4, then it is step 2 which is the bottleneck.

After step 2 has been optimized, step 4 becomes the new bottleneck—assuming that optimization of step 2 is satisfactory.

While both steps 2 and 4 contribute to a slow system, a bottleneck means something else entirely: it is the single most significant point of slowdown for the rest of the process.

To put it another way, it’s hindering the overall execution. If both use the same amount of time, then whichever is closer to the front of the process is by definition hindering more of the process.


> every system has exactly one bottleneck at any given point in time

What? No, they don't. Does a straight glass have a bottleneck? No. Most bottles have one, but straight glasses don't, hence not every system has a bottleneck.

The same applies to IT systems, where the topology is much more complex, so they can often have many bottlenecks, or sometimes fewer.

> Riddle me this: how can a system perform faster than its single slowest component?

A perfectly optimized component can't be a bottleneck but can still be the slowest component; trying to optimize it further will not speed up the system at all.

Here we see that you will miss a lot of optimization opportunities if you assume the slowest component is the bottleneck and don't look any further.


I don't find the glass <> IT system analogy compelling (or even sensical) at all.

Describe to me how an IT system can produce results (e.g. tickets closed, if you wish) at a rate higher than the processing rate of the slowest component.

> A perfectly optimized component can't be a bottleneck but can still be the slowest component, trying to optimize that further will not speed up the system at all.

Correct -- but neither will optimizing anything else! That's the whole point!


> Describe to me how an IT system can produce results (e.g. tickets closed, if you wish) at a rate higher than the processing rate of the slowest component.

It can't, but the slowest component can be perfectly optimized and thus not be a bottleneck. You would fail to find the real bottleneck in this case, since you are just looking for the slowest component. Hence I have proven that your statement above was false: there are cases where the optimal strategy is not to just look at the slowest component.

If you have some other definition for bottleneck we can continue, but this "the slowest component" is not a good definition.


No, what you've done is you've failed to find a way to improve the system's behavior further. If you have the slowest component and you can't make it faster, then congrats: you cannot make the system faster.

You cannot build cars faster than you can mine metal, nor faster than you can put stickers on the windows on their way out the factory. You are done optimizing.


> If you have the slowest component and you can't make it faster, then congrats: you cannot make the system faster.

That isn't true: I can take the next-slowest component and make it faster, and now the system is faster.


Lol, no! You cannot!

The system's behavior will not change, you will just have wasted money improving the next-slowest component for quite literally no benefit.

Machine A (10 units per hour) -> Machine B (20 units per hour) -> Machine C (15 units per hour) => Produces 10 units per hour

Machine A (10 units per hour) -> Machine B (20 units per hour) -> Machine C (20 units per hour) => Produces 10 units per hour

Machine A (10 units per hour) -> Machine B (25 units per hour) -> Machine C (20 units per hour) => Produces 10 units per hour

Machine A (10 units per hour) -> Machine B (30 units per hour) -> Machine C (20 units per hour) => Produces 10 units per hour

Machine A (10 units per hour) -> Machine B (30 units per hour) -> Machine C (100 units per hour) => Produces 10 units per hour

Machine A (10 units per hour) -> Machine B (10,000 units per hour) -> Machine C (10,000 units per hour) => Produces 10 units per hour
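The pattern in the examples above can be stated in one line: the steady-state throughput of a serial pipeline is the minimum of its machines' rates. A quick sketch using the figures above (units per hour):

```python
# Steady-state throughput of a serial pipeline is the minimum machine rate.
# Rates are in units per hour, taken from the examples above.
def pipeline_throughput(rates_per_hour):
    return min(rates_per_hour)

print(pipeline_throughput([10, 20, 15]))        # 10 -- Machine A limits output
print(pipeline_throughput([10, 30, 100]))       # still 10
print(pipeline_throughput([10, 10000, 10000]))  # still 10
```

No matter how much capacity you add to B or C, output stays pinned at A's 10 units per hour.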


That is only true for parallel executions, not serial ones. If a process requires executing many components serially (which happens a ton), then it isn't enough to just look at the slowest component.

Anyway, thanks, now we know that you only considered throughput in a factory-like setting, where it is true. But your rule isn't true for software systems in general; optimizing latency and serial performance is extremely common in software.

Edit: Example:

Machine A takes 1 hour -> B 2 hours : System takes 3 hours.

Machine A takes 0.5 hour -> B 2 hours : System takes 2.5 hours, so faster even though we optimized the faster component.
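For a one-shot serial run, the arithmetic in the edit is just a sum: end-to-end latency is the total of the stage times, so shaving time off any stage shortens the run. A sketch of the two-machine example above:

```python
# One-shot latency of a serial run is the sum of the stage times (in hours).
def latency_hours(stages):
    return sum(stages)

print(latency_hours([1.0, 2.0]))  # 3.0 -- original system
print(latency_hours([0.5, 2.0]))  # 2.5 -- faster, even though we sped up
                                  #        the already-faster machine
```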


The example I gave was a serial process. And in fact, every parallel process is just a group of serial processes. Factories are obviously linear but in reality every single process is linear through time, including the chaotic, complex ones you see in IT or software orgs. (Unless you have a time machine, in which case ignore me)

Fastest/slowest doesn't mean "takes the longest in clock time," it means "has the lowest throughput."

In your example, if B is only able to produce something every 2 hours, then no, speeding up A will not increase the throughput. You will see a larger backlog of jobs from A waiting for B to become available. Ultimately only 0.5 units per hour will be produced by this process.

If B is able to produce more than something every 2 hours, e.g. it can produce multiple things in parallel, then yes, speeding up A will increase throughput. But that is only because B wasn't the bottleneck to begin with! Your own failure to serialize that parallel process hid that fact from you.

Either of your systems (unless you have invisible parallelism in B) will produce 0.5 units per hour.

If you're saying to yourself, "well this is a process that runs only once per day, so there's no backlog anywhere in here," then congrats: you've just discovered that the true constraint sits upstream of A!

Speeding this up might be a nice quality of life improvement for the people involved, but it will not yield different outcomes for the system as a whole, because there's not enough work coming into A to consume the capacity of B anyway.
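The backlog argument above can be checked with a tiny simulation: feed a steady stream of jobs through A and then B, one job at a time in each, and the long-run rate converges to B's 0.5 jobs per hour whether A takes 1 hour or 0.5. A sketch, assuming single-server stages and unlimited queueing between them:

```python
# Sketch: two-stage serial pipeline fed a steady stream of jobs.
# Stage B takes 2 h per job, so long-run throughput is ~0.5 jobs/h
# no matter how fast stage A gets (single-server stages assumed).
def completion_times(n_jobs, a_hours, b_hours):
    a_done = b_done = 0.0
    out = []
    for _ in range(n_jobs):
        a_done += a_hours                       # A works back to back
        b_done = max(b_done, a_done) + b_hours  # B starts when free AND fed
        out.append(b_done)
    return out

for a in (1.0, 0.5):
    times = completion_times(100, a, 2.0)
    print(round(100 / times[-1], 3))  # ~0.5 jobs/h in both cases
```

Speeding A up only grows the queue in front of B; the finished-job rate barely moves.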


Your entire argument is about throughput in sequential pipelines or parallel systems, on systems not shared with other independent tasks.

Yes, in those simple cases there is one bottleneck at any given time.

But most tasks do not fit those conditions, and economically and technically are best optimized reflecting other objectives than just throughput.

Many times parallel or sequential pipelined groups of tasks have optional subtasks, so there will be as many bottlenecks as there are components that may run while others don’t.

Many tasks run intermittently or, for resource reasons, need to run as an uninterrupted sequence with no opportunity to pipeline, and are optimized for latency. Which means any speedup of any subtask has value.

Many systems run multiple independent tasks to maximize return on resource costs, and so tasks are optimized to minimize active computational time. And any speed up of anything can improve that.

In all those cases, multiple modules can be usefully optimized at any given time.

And many factors can be relevant to making that choice: the relative benefit of an optimization versus the time needed to achieve it, the cost of making it, and project risk all come into play.

In practical reality, there is a long tail of such factors. For instance, the skill and interest levels of available developers, relative to the module optimization options.



