Those links are very useful. However, I note that at each point the author refutes, he keeps being surprised by someone shipping a solution that does indeed do something new. Examples include interoperability with VMware hypervisors and userland networking performance on virtual networks.
Given the above and that most networking hardware is still configured by hardware engineers, this sector does seem ripe for disruption.
I'm a networking guy, and I wouldn't consider myself a hardware person by any means. Sure, I'm not a coder, but I don't think it's accurate to call the work "hardware." Very little network engineering is hardware-specific. Hardware fails and cabling has issues, but those problems will exist as long as networks have wires.
The challenging part of configuring networks is keeping track of all the stacks and protocols: MAC, IP, virtual L2 features, virtual L3 features, virtual L2 with L3 features, routing virtualization, L2 loop detection/prevention, L3 loop detection/prevention, maximizing bandwidth without adding complexity or cost, multicast delivery strategy, broadcast/multicast control, best practices, and integrating with the application/TCP layer as well. Nearly all of those problems exist with or without something like OpenFlow, and it's already software. I love this part of networking: every layer in the stack is modular and can be virtualized, so you're continually keeping track of protocols on top of protocols, and it all evolves quickly. Saying network engineers are no longer necessary once you virtualize networking is like trying to cut 90% of server support staff once you're 90% virtualized. The added complexity may just compensate for the lack of hardware.
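To make the "protocols on top of protocols" point concrete, here's a minimal sketch of how one common network-virtualization scheme, VXLAN, stacks a tenant's L2 frame inside a header that rides over the physical L3 network. The MACs, VNI, and payload are made-up illustrative values; a real deployment would also wrap this in UDP/IP and an outer Ethernet frame.

```python
import struct

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    # 14-byte Ethernet II header: dst MAC, src MAC, EtherType
    return dst_mac + src_mac + struct.pack("!H", ethertype)

def vxlan_header(vni: int) -> bytes:
    # 8-byte VXLAN header (RFC 7348): I flag set = VNI is valid,
    # followed by a 24-bit VXLAN Network Identifier
    return struct.pack("!II", 0x08 << 24, vni << 8)

# The tenant's "virtual L2" frame (illustrative MACs and payload)
inner_frame = ethernet_header(b"\xaa" * 6, b"\xbb" * 6, 0x0800) + b"inner payload"

# Wrap it in a VXLAN header; on the wire this would then sit inside
# UDP port 4789, IP, and an outer Ethernet frame -- L2 carried over L3.
overlay = vxlan_header(vni=5000) + inner_frame
print(len(overlay))  # 8-byte VXLAN header + 14-byte inner header + payload
```

None of the hard questions above go away: you still have two MAC tables, two failure domains, and loop prevention to reason about at both layers; the stack just gets one level deeper.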
The other possibility is that once virtualization density gets high enough that virtual routers become feasible and virtual switches add more value, you start to see "hardware" go away. Some very large cloud providers are at that scale and can benefit, but the everyday enterprise isn't quite there yet. That's why these technologies are still a few years from mainstream, though they will definitely catch on.
Outside the datacenter, and even in the datacenter core, I think we'll continue to see customized silicon rather than commodity processors pushing network traffic. Even the "software" Google switches run on something like a Broadcom chipset built specifically for network forwarding. Cisco is one of the few companies that develops its own chipsets, which is one of the points of contention in the article, but it keeps finding reasons to justify the cost.