They were made optional in C11, while Google paid for the development effort to clean the Linux kernel of them, because they are just yet another source of possible CVEs.
Maybe you did not realize this, but I somewhat helped with this effort. In the kernel it may make (some) sense. The overall idea that VLA are always bad is incorrect.
> The overall idea that VLA are always bad is incorrect.
I mean, I'm yet to see a definitive example where an automatic* VLA was actually the right tool for the job.
I know they were supposedly introduced for numerical analysis, but I fail to see what problem they actually solved. Yes, the syntax is way neater than piecemeal allocation, but it could have been solved with functions and macros anyway. Did VLA just hit some sweet spot in performance between fixed size arrays and heap-allocated ones, which made number crunching better optimized?
* as opposed to VM types used in function parameters and for allocating multi-dimensional arrays on the heap
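For reference, the footnote's distinction can be sketched like this (a minimal example; the function names `scale` and `sum_scaled` are mine, invented for illustration): variably-modified types give you runtime bounds in a function's parameter list, and a pointer-to-VLA puts the multi-dimensional array on the heap, with no automatic (stack) VLA involved:

```c
#include <stdlib.h>

/* VM type in a parameter list: the bounds are runtime values,
   so the compiler can index a[i][j] for any rows/cols. */
static void scale(int rows, int cols, double a[rows][cols], double f)
{
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            a[i][j] *= f;
}

static double sum_scaled(int r, int c, double f)
{
    /* Pointer to VLA: the r-by-c matrix lives on the heap, not the stack. */
    double (*m)[c] = malloc(sizeof(double[r][c]));
    if (!m)
        return -1.0;
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
            m[i][j] = 1.0;
    scale(r, c, m, f);
    double s = 0.0;
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
            s += m[i][j];
    free(m);
    return s;
}
```

Note that none of this touches the stack with a dynamic size; only the `char buf[n]` style of automatic VLA does.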
Stack allocation is much faster than heap allocation, and compared to fixed-size arrays VLAs save stack space. For example, a recursive divide-and-conquer algorithm that allocates arrays on the stack may use log(n) stack space compared to n². Using VLAs instead of the heap simplifies the logic because you get automatic deallocation with proper scoping. VLAs and VM types were introduced as one feature, so I am not sure there was ever a question of adding only one of them.
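A minimal sketch of that pattern (the example is mine, not from any particular codebase): a merge sort whose merge step takes its scratch space as a VLA, so each frame allocates only as much as its subarray needs, and the buffer is released automatically when the frame returns, with no `free()` and no cleanup paths:

```c
#include <string.h>

/* Recursive merge sort using a VLA scratch buffer sized to the
   current subarray.  The buffer's lifetime is tied to this frame:
   it disappears automatically on return. */
static void merge_sort(int n, int a[n])
{
    if (n < 2)
        return;
    int mid = n / 2;
    merge_sort(mid, a);
    merge_sort(n - mid, a + mid);

    int tmp[n];                 /* VLA: exactly n ints, not a worst-case bound */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid)
        tmp[k++] = a[i++];
    while (j < n)
        tmp[k++] = a[j++];
    memcpy(a, tmp, (size_t)n * sizeof *tmp);
}
```

With a fixed-size array instead, every frame would have to reserve the worst-case size; with the heap, every merge would need a malloc/free pair plus error handling.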
In terms of security, there are some issues, but they are largely overblown and misunderstood. A dynamic buffer is always a sensitive piece of code when dealing with data from the network. VLAs were the language tool of choice and then got a bad reputation because they were involved in many CVEs. But correlation is not causation. The main real issue is that, if the attacker can control the size of the VLA, it is possible to overflow the stack into the heap. This can be avoided by using stack clash protection, which compilers have only supported for a few years. With stack protection I believe VLAs can be safer than fixed size arrays due to improved bounds checking.
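To make that failure mode concrete (a sketch with invented names; `MAX_MSG` and `read_message` are hypothetical, not from any real protocol): the attacker-controlled length has to be validated *before* execution reaches the VLA declaration, which is exactly the check the infamous CVEs were missing:

```c
#include <stdint.h>

#define MAX_MSG 4096   /* hypothetical upper bound for this protocol */

/* The size comes from untrusted input.  Unbounded, `char buf[len]`
   with a huge len could jump past the stack guard page into other
   memory; bounded, the VLA is no more dangerous than a fixed array. */
static int read_message(uint32_t len)
{
    if (len == 0 || len > MAX_MSG)   /* the check that makes it safe */
        return -1;
    char buf[len];                   /* len is now at most MAX_MSG */
    /* ... fill buf from the socket ... */
    (void)buf;
    return 0;
}
```

Stack clash protection defends against the unbounded case even when the check is missing, by probing each page as the stack grows.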
> The main real issue is that - if the attacker can control the size of the VLA -, it is possible to overflow the stack into the heap.
I think the main gripe most programmers have with VLA is lack of control over them. A fixed size array is, nomen omen, fixed and predictable - it can even be tested beforehand. malloc() at least returns NULL on failure (although memory overcommitment muddies the situation), so the program can take some action. But what happens when a VLA fails? Segfault? Stack clash protection if compiled with it, or worse if without? None of those options is graceful from the perspective of the end user.
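To spell out the asymmetry (function names are mine, purely illustrative): the heap path has a failure signal the program can act on, while the VLA path has none:

```c
#include <stdlib.h>

/* Heap path: allocation failure is observable and recoverable. */
static int with_malloc(size_t n)
{
    char *buf = malloc(n);
    if (!buf)
        return -1;   /* caller can degrade gracefully */
    /* ... use buf ... */
    free(buf);
    return 0;
}

/* VLA path: there is no return value to check.  If the stack cannot
   accommodate n bytes, the behavior is undefined - typically a crash,
   with or without a guard page in between. */
static int with_vla(size_t n)
{
    char buf[n];
    (void)buf;
    return 0;
}
```

So the only safe discipline is the one from the security discussion above: bound n before the VLA declaration, which makes the "failure" case unreachable rather than handleable.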
> This can be avoided by using stack clash protection which compilers support only since a couple of years.
As you yourself say, it's been only a few years since this protection made its way into compilers. But that's not the issue. The issue is that `-fstack-clash-protection` isn't part of the C language, it's part of the compiler. What's the incentive for a developer to use a less certain feature when there are easier alternatives?
Yes, this is something we need to work on: some way to detect stack overflow or prevent it from happening in C. And obviously you need to understand the resource consumption of your algorithm, but this also applies to other stack allocations, recursion, etc. And with memory over-commitment you also do not have control over malloc anymore. You simply get a fault once you access the memory...
Your second point I do not understand: how is it the fault of the C language if it is poorly implemented? And yes, if you use a poor implementation, you may need to avoid VLA. I am not blaming programmers who do this. My point is that it is not inherently a bad language feature, and with a good implementation it may also be better than the alternatives, depending on the situation.
VLA are a poorly defined feature for which some implementations offer additional protection.
And I agree, automatic VLA aren't inherently a bad feature; it's just a poor feature of C99, C11, C17 and sadly still C23*.
I have faith we will finally be able to tackle it in C2y, but until then, I'm with Eskil in the opinion that ISO C would have been better off without them all these years.
* Thank you for your work on N3121, hopefully we will vote it in during next meeting :)
I do not think VLA are any more poorly defined than most other stuff in the C standard. It generally allows a wide range of implementations. As a language feature, it has excellent properties (automatic lifetime and dynamic bounds).
Similar to UB and many other related issues, the main issues are primarily about what compilers do or do not do. I have countless examples where a C compiler could easily warn about issues or be safer, but doesn't. Very similar to the arguments against VLAs, there are people who want to throw the complete C language away. The story is always the same: compilers do a poor job, but the language is blamed. Then programmers blame WG14 instead of complaining to their vendor. I do not see how VLAs are any different in this regard compared to other parts of C.
And the answer can never be to go back on a fundamentally good language feature (a bounded buffer - dammit!), but always to push towards better implementations.
And I am part of WG14 (so a hapless goldbrick). The reason VLA were made optional was to make implementing C easier, as C99 was not adopted quickly. Although the reasons for the slow adoption lay elsewhere: MSVC did not really support C at all at C99 time because they thought everyone should transition to C++. Only very recently has C support caught up in MSVC.