This is what Twitter does. It's possibly the least performant major website I've used.
Conditional loading certainly can improve perf. in theory. I've yet to see any evidence it does so in practice. The aggregate of bundle-size, bundle-parse, client-side execution resource-usage & added latency of the plethora of metadata normally bundled with API responses is more than enough to negate any actual perf. gains.
As for "easier to maintain", I've never seen anyone even try to make that argument in theory, never mind practice. Pretty sure it's widely accepted even by advocates of this architecture that it's a trade-off: perf. gains for ease-of-maintenance losses.
Just because Twitter does that doesn't mean it is the case everywhere else.
It moves some of the rendering work from the backend (having to query the data and generate markup) to the browser (query the API and generate the content based on the responses).
At my current job, it's made a significant improvement. The server returns compact JSON data instead of HTML, so it's easier to generate the data and uses less bandwidth.
It also looks faster to the user, because when they change search parameters, only part of the page updates rather than the entire page reloading.
As for "easier to maintain", that may be subjective. Code to generate a simple HTML template from results is replaced by JavaScript code to hit the API and generate the DOM. Although HTML5 templates make that much easier.
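The replacement code doesn't have to be large, though. A minimal sketch of the pattern (the response shape, endpoint, and function name are all invented for illustration, not from any particular codebase): the server returns compact JSON, and the client turns it into markup.

```javascript
// Assumed response shape: [{ title, url }, ...]
// Pure function: turn compact JSON results into an HTML fragment.
// In a real page you'd build nodes from an HTML5 <template> element and
// HTML-escape the values; string building keeps the sketch small.
function renderResults(results) {
  return results
    .map(r => `<li><a href="${r.url}">${r.title}</a></li>`)
    .join("\n");
}

// Usage (in the browser): fetch from a hypothetical search endpoint and
// swap only the results list, not the whole page.
// const res = await fetch(`/api/search?q=${encodeURIComponent(q)}`);
// document.querySelector("#results").innerHTML = renderResults(await res.json());
```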
I'm not saying it's impossible - glad to hear you've successfully implemented it in your workplace. I'm just saying that by-and-large it has the opposite effect to the stated intent.
If most examples of a strategy make things worse, and only one person uses that strategy to improve things, then going around saying "everyone is doing it wrong" rather than questioning the strategy isn't particularly sound.
I've built plenty of (small) client-side rendered UIs myself that lazy-load content; I know the trade-offs and I even believe I can achieve a performant outcome on my own. But that's anecdotal. In the wild, I have not seen a single major website improve perf. via lazy-fetched content rendering.
I can't say I've seen a single site where, in practice, that worked as advertised. It also sometimes introduces UX annoyances (e.g. the back button not working as expected).
It is one of those things where in theory if absolutely everything was done right and no other stuff was done differently it can work. E.g. if the only difference between a JS-enabled and a JS-disabled version of the site was the content change and nothing else (no additional JS frameworks, functionality or whatever) then yes it most likely can be faster (though for the difference to be noticeable the site needs to be rather heavy in the first place).
Problem being that in practice this comes with a bunch of other baggage that not only throws the benefit out of the window but introduces a bunch of other issues as well.
Is there any evidence of this? All the sites I've seen that use extra requests to load text always seem to take multiple seconds to load, whereas most pages that use server-side rendering generally load in under 100 ms.
It's a tradeoff; the basic question is "will most users need and read all the content or not?" Displaying everything at once without making extra queries is best, but not always possible.

The frontend queries the backend. It's going to say "Hey, send me all the comments from all the posts from November 2021." If there are 3, that's fine, but if there are 23,000 of them you can't really load everything at once; that's why we use pagination on the backend. We say "Hey, send me results 1 to 25 of the comments from all the posts from November 2021." This way the frontend only displays 25 comments for a quick page load, and we hope that will be enough. To display the other comments, either we ask the backend how many pages of 25 elements there are and render that many pagination links (pagination), or we simply have the frontend request the next page once the user reaches the bottom (infinite scroll).

Even when displaying all the content is possible, if there is content that only 1% of your users will read, you might want to offer faster loading for the 99% of users and add a few seconds of loading for the 1%.
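The arithmetic behind that is simple. A sketch of the offset/limit calculation (parameter names are illustrative, not any particular framework's API):

```javascript
// Given a 1-based page number and a page size, compute the slice the
// backend should return, plus how many pagination links to render.
function paginate(totalItems, page, pageSize = 25) {
  const totalPages = Math.max(1, Math.ceil(totalItems / pageSize));
  const current = Math.min(Math.max(1, page), totalPages); // clamp to valid range
  const offset = (current - 1) * pageSize;
  return { offset, limit: pageSize, totalPages, page: current };
}

// e.g. 23,000 comments in pages of 25, requesting page 3:
// paginate(23000, 3) → { offset: 50, limit: 25, totalPages: 920, page: 3 }
```

The backend then runs the equivalent of `LIMIT 25 OFFSET 50` and also returns `totalPages` so the frontend can render the links.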
>This way the frontend only displays 25 comments for a quick page load
Many years ago, smart frameworks implemented things like rendering only what is visible. For example, you could have a table with 1 million rows, but your HTML page would not create 1 million row elements; you create GUI widgets only for the visible part, and as the user scrolls, existing widgets are recycled.
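The core of that technique (often called windowing or list virtualization) is just deciding which rows intersect the viewport. A sketch, assuming fixed-height rows (the row height and overscan values are made up for the example):

```javascript
// Windowing: given the scroll position and viewport height, compute which
// rows of a fixed-height list need real DOM widgets. Rows outside this
// range can have their widgets recycled.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last }; // render only rows first..last
}

// 1,000,000 rows of 20px in a 600px viewport scrolled to 100,000px:
// only a few dozen widgets are needed, not a million.
```

Variable-height rows make the bookkeeping harder, but the principle is the same: the DOM holds only what's on screen plus a small overscan buffer.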
As a practical example, you go to a YouTube channel page and it loads only 2 or 3 rows of videos, and you have to scroll to force more to appear. This means you can't do a Ctrl+F search, and it's also less efficient, because as you scroll, the items at the top are not recycled and reused, so more memory is probably used.
The JSON for all the videos is not huge: some strings with titles and thumbnails, maybe some numbers. The issue is that it's not natively possible to do the best/correct thing; we only recently got lazy loading, for example. Basically, HTML was designed for documents, while frameworks/toolkits designed for apps did the correct thing many years ago... that's an explanation, but no excuse for why things are such a shit show with pagination today.
The argument is that JS-heavy site design indicates worthless content on average. Not that it's easier to maintain for the site owner (which might or might not be the case), or more realistically, creates job opportunities for "web developers".