During my university studies, I took courses in electro-acoustic music composition. A significant amount of time was spent on synthesis and signal processing, because those are critical elements in these kinds of compositions.
It's absolutely different from composition for traditional instruments in this regard, because the sounds you compose with are created by the composer just as much as the notes, rhythms, and structure of the composition are.
The first sentence of the foreword gets straight to the point of what the book is about:
"The Theory and Technique of Electronic Music is a uniquely complete source of
information for the computer synthesis of rich and interesting musical timbres."
Whereas tools like Max Mathews' MUSIC programs (Mathews, by the way, wrote the foreword) and their successors clearly separate music composition from instrument building (i.e. sound synthesis), later tools like Max, Pd, or SuperCollider blur this distinction. Nevertheless, the distinction is still maintained by all institutions where electronic music is studied and performed (e.g. IRCAM).
> "The Theory and Technique of Electronic Music is a uniquely complete source of information for the computer synthesis of rich and interesting musical timbres."
It's really a great book, but it is far from "complete", as it omits some very important synthesis techniques - most notably granular synthesis and physical modeling! To be fair, no single book could cover the entire spectrum of electronic sound synthesis. The second edition of "The Computer Music Tutorial" by Curtis Roads (https://mitpress.mit.edu/9780262044912/the-computer-music-tu...) comes close, but it is a massive book with over 1200 pages that took literally decades to write. (The second edition was published 27 years after the first!)
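For anyone curious what granular synthesis actually means in practice, here is a minimal sketch in Python/NumPy (not from Puckette's book, whose examples are all in Pd; the function and parameter names are my own): short, Hann-windowed "grains" are copied from a source signal to random positions in an output buffer, producing a grain cloud.

```python
import numpy as np

def granulate(source, sr=44100, grain_dur=0.05, density=200, out_dur=2.0, seed=0):
    """Scatter short windowed grains of `source` over `out_dur` seconds of output."""
    rng = np.random.default_rng(seed)
    grain_len = int(grain_dur * sr)
    window = np.hanning(grain_len)          # smooth envelope to avoid clicks
    out = np.zeros(int(out_dur * sr))
    for _ in range(int(density * out_dur)): # one grain per "event"
        src_start = rng.integers(0, len(source) - grain_len)
        dst_start = rng.integers(0, len(out) - grain_len)
        out[dst_start:dst_start + grain_len] += source[src_start:src_start + grain_len] * window
    return out / max(1.0, np.max(np.abs(out)))  # normalize to avoid clipping

# Example: turn two seconds of a 220 Hz sine into a grain cloud
sr = 44100
t = np.arange(2 * sr) / sr
cloud = granulate(np.sin(2 * np.pi * 220 * t), sr=sr)
```

Real granular instruments (in Pd, SC, etc.) of course add per-grain pitch shifting, panning, and streaming rather than offline rendering, but the core idea is just this.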
What I find really cool about Miller's book is that all examples are written in Pd so anyone can try them out and experiment further.
On the matter of institutions: IRCAM is the paradigmatic example of composer / technologist role demarcation, but I would question whether this extreme position "is still maintained by all institutions" -- it certainly was not at my alma mater, and I doubt it is at UCSD either. As you say, Max (coincidentally a product of Miller Puckette and IRCAM) and its more recent ilk have empowered composers to independently build their own instruments, and this practice has been ongoing within the academy for at least 35 years now.
As someone who studied computer music in the mid 2010s I can second that! All the composers in my generation who use live electronics do it themselves.
The divide between composer and programmer has disappeared for the most part, and I think the main reason is that both hardware and software have become so affordable and accessible. Back in the old days, you needed expensive computers, synthesizers and tape machines, and people who could assist you with operating them. Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!
That being said, institutions like IRCAM still have their place, as they allow composers to work with technology that is not easily accessible, e.g. large multi-channel systems or 360° projections. They also do a lot of research.
> Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!
And anyone can buy a laptop and contribute to the development of Pd, SuperCollider, Chuck, et al.
Not sure how much overlap there is between those two groups. Arguing against my earlier point: there still seems to be a separation between music systems users and music systems developers.
> there still seems to be a separation between music systems users and music systems developers.
That's true, but just like a pianist typically doesn't need to build their own piano, computer musicians rarely need to build their own DAWs or audio programming languages. However, computer musicians do build their own systems on top of Pd, SC, etc. and these can evolve into libraries or whole applications. So the line between computer musicians and audio application developers is blurry.
That being said, I can say for sure that only a few computer musicians end up contributing code to Pd, SC, etc., simply because most of them have no experience in other programming languages and are not really interested in actual software development. Of course, there are other important ways to contribute that are often overlooked, like being active on forums, filing bug reports, etc.
Maybe I'm a bit biased because I was there for a study visit in the eighties. Of course it depends on the use case; if the composition is fully electronic, the composer can essentially be the same person as the performer, conductor and producer, so there is little need for a score; live coding goes even further, and "the composition" appears during the performance; specific tools have been implemented for these use cases (e.g. Stanford has a long tradition of building such tools).
> It's absolutely different from composition for traditional instruments in this regard, because the sounds you compose with are created by the composer just as much as the notes, rhythms, and structure of the composition are.
So for me, the title makes perfect sense.