> Instead the types allow Julia to dispatch your method call to the correct implementation for the type of your variable at runtime. This is Julia's secret sauce, and it's the main reason you'll hear Julia programmers talk about how composable Julia packages are. I can "reach into your package" and define the behaviour of your functions on my own custom types, and then whenever anyone calls the function/method with one of my types, it'll just work. I don't need to author a pull request to your package or futz around with anything that you've written.
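A minimal sketch of what "reach into your package" means in practice (the `Shapes` module, `area`, and the types here are all hypothetical, just for illustration):

```julia
# A "package" defines a generic function and methods for its own types.
module Shapes
export area
struct Circle
    r::Float64
end
area(c::Circle) = π * c.r^2
end # module

using .Shapes

# In my own code, I define a new type and extend Shapes.area for it —
# no pull request to the Shapes package, no editing its source.
struct Square
    side::Float64
end
Shapes.area(sq::Square) = sq.side^2

# Any code (in Shapes or anywhere else) that calls area(x) now works
# with Square too, because dispatch picks the method for the runtime type.
area(Square(3.0))  # 9.0
```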
That's not a bug, that's a feature. Libraries are abstractions. I get an API and it encapsulates the behavior of the library. That is a nice thing. Being able to modify the runtime behavior of internal libraries can lead to an insane jumble of spaghetti code that would bring the finest programmers to their knees.
I don't want implicit behavior at runtime. I want explicit behavior in the case of non-contractual externalities (e.g. wrong user input at runtime). I want the program to fail so it can be patched.
You still get an API that encapsulates the behavior. This is not like monkey-patching (directly changing the behavior of libraries); it's separating the abstraction layers. Every sufficiently complex system will have multiple layers (for example, in communications, if you're working on the network layer you don't need to focus on the physical layer below or the application layer above). Multiple dispatch allows the library ecosystem to work the same way:
For machine learning models we have the layer that handles the low-level operations (sums, multiplications), which are swappable (you can have an implementation that runs on the CPU - Julia's Base - an implementation that runs on the GPU - CUDA.jl - and even a TPU - XLA.jl - or Torch as a backend). Above that you have the tracker (the layer responsible for the autodifferentiation logic: Tracker, Zygote, ForwardDiff). Above that you have the libraries with rules for generating gradients (DiffRules, ChainRules), above that the ML building blocks (NNlib), above that the ML frameworks (Flux, Knet), and above that more specialized libraries like DiffEqFlux.
Whoever writes the ML framework doesn't need to care about the backend, and whoever writes the GPU backend doesn't need to care about the ML framework. This is not because the person writing the GPU backend patched the ML framework, but because the ML framework legitimately doesn't care how the low-level operations are executed; it doesn't work at that level of abstraction. And the user of the ML library can still see it as a monolith, not unlike PyTorch or TensorFlow, when he imports a library like Flux - until he wants to extend it, and then he will find that it is in fact many independent swappable systems that compose into something more than the sum of its parts.
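A toy sketch of that backend-swappability (`relu` and `predict` are hypothetical stand-ins for framework code, not actual Flux internals):

```julia
# "Framework" code written only against AbstractArray - it never
# mentions, or needs to know, which backend supplies the arrays.
relu(x) = max(x, zero(x))
predict(W, x) = relu.(W * x)

# CPU backend: plain Arrays from Julia's Base.
W = [1.0 -2.0; 0.5 1.0]
x = [1.0, 2.0]
predict(W, x)  # [0.0, 2.5]

# A GPU backend such as CUDA.jl supplies CuArray <: AbstractArray,
# so the exact same predict would dispatch to GPU implementations:
#   using CUDA
#   predict(cu(W), cu(x))
```

The framework author wrote `predict` once; the backend author never saw `predict`; yet the combination works, because dispatch on the array type picks the right `*` and broadcast implementations.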
I think the previous poster's use of the phrase "reach into your package" was a bit colorful. You're never actually modifying the internal code of a library. You're only extending a generic function from a library to work on your own custom type. So that extension only affects code that uses your new custom type. It's akin to using class inheritance in OOP languages.
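To make that concrete, here's a small example of extending Base generic functions for a custom type so it slots into existing generic code, much as a subclass would in an OOP language (the `Repeated` type is made up for illustration):

```julia
# A custom type that pretends to be a collection of `count` copies of `value`.
struct Repeated
    value::Int
    count::Int
end

# Extend Base's generic functions for our type - this only affects
# code that actually uses Repeated, nothing else in Base changes.
Base.length(r::Repeated) = r.count
Base.iterate(r::Repeated, i=1) = i > r.count ? nothing : (r.value, i + 1)

# Generic library code that only assumes iteration now just works:
sum(Repeated(7, 3))      # 21
collect(Repeated(7, 3))  # [7, 7, 7]
```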