If you sound like a car ad from Road & Track, I'm going to flag you as a bot.
"No rough handling. No struggles to accelerate. Just pure performance. The new Toyota GT. It's not just a car—it's a revolution."
Most of the tropes listed on this page give text a more "car ad" (or sometimes "movie trailer") quality. I wonder if magazine scans and press releases unduly weighted the training set.
You know how no one ever wrote their own software and then generative AI came along and suddenly we could have app meals home-cooked by barefoot developers? (The use of such cottagecore terminology for a process that requires being an ongoing client of a hundred-gigabuck, planet-burning megacorporation rubs me in many wrong ways.)
If AI finally gets rid of the thing that drove me nuts for years ("leverage" as a verb meaning roughly "to use"), after no amount of human intervention seemed to work, then I shall be over-the-moon happy. I once worked at a place where this particular word was lever... er, used all the damn time, and I'd never encountered anything so NPC-ish. I felt like I was in The Twilight Zone. I could've told you way back then that you sounded like a bot doing that; now people might actually believe me, and thank god.
I will stick by the em dashes, however. And I might just start using arrows too. Compose, -, > gives → (right arrow). Not even difficult.
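For anyone on X11 who wants the same, a minimal ~/.XCompose sketch (assuming your input method reads this file and your locale's default table doesn't already define the sequence):

```
# ~/.XCompose
include "%L"   # keep the locale's default compose sequences

# Compose, -, >  produces a rightwards arrow
<Multi_key> <minus> <greater> : "→"   U2192
```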
> (The use of such cottagecore terminology for a process that requires being an ongoing client of a hundred-gigabuck, planet-burning megacorporation rubs me in many wrong ways.)
I hadn't noticed this - great point. To be fair the "home cooked meal" metaphor comes from 2020, predating genAI coding[1]. But even then, CPUs themselves are so normalised that we just kind of... forget how vertiginously complex the entire supply chain is.
I vaguely remember these, but I more clearly remember the Samsung ad which featured a similar-looking robot in a dress turning letters on a game show, implying that Samsung would still be around even after Vanna White was replaced by a machine. Vanna White sued, claiming a breach of her publicity rights (despite neither her name, the name "Wheel of Fortune", nor her actual likeness being used), and actually prevailed in court, establishing a precedent in the United States that very broadly protects celebrities' rights to control whether and how they are represented.
Claude: No, but if you hum a few bars I can fake it!
Except "faking it" turns out to be good enough, especially if you can fake it at speed and get feedback as to whether it works. You can then just hillclimb your way to an acceptable solution.
I feel the same way. But this is a new economy now, software is cheap, and regarding the skill and fulfillment you derive from writing it yourself, to quote Chris Farley: "that and a nickel will get you a nice hot cup of JACK SQUAT!!!"
Maybe it used to, but with companies like Disney lengthening copyright terms way beyond the original intention, and corporations patenting absurd things, it seems to be more of a way to entrench power than any sort of democratization. I'm glad generative AI seems to be bypassing all this and actually democratizing returns on the creative process, by flagrantly violating the concept of IP.
In the case of BSD-like licenses, IP is applied in a way that discourages plagiarism, while giving all the practical freedoms to the users, including making proprietary products.
In the case of copyleft licenses like GPL, IP is applied in a way to ensure that users have the code.
These things are taken away when the code is laundered through AI.
Again, start talking to people outside the field of programming and ask them how they like it when their labor of passion is "democratized" by AI turning it into unattributable slurry.
I don't really care how they like it because it's not up to them how I use the tools I want to use. It's literally the same argument photographers faced 100 years ago and in another 100 years I guarantee no one will be talking about AI in the terms you are today.
Even today, in 2026, it is possible to use photography in ways that infringe copyright! You literally cannot just snap your shutter over anything whatsoever and call it yours!
I don't see any issues with "appropriating" a work, especially if it's not a one-to-one copy, which AI does not produce (without some pretzel-level prompting), especially with regards to visual media (what even is appropriation in this case? Your example of photographers taking images of paintings is not the same as how AI training occurs). In other words, training is and should be free and fair use.
Of course the AI robber barons would have it be so, but it must not be and should not be.
Training gobbles up works in their entirety, verbatim.
Fair use of the verbatim words of a written work requires the excerpt to be small.
Fair use also usually requires attribution, which is missing.
Transformative works like parodies are also fair use, but the LLM isn't transformative in this sense; it's strawman-transformative, like a meat grinder.
Parodies use the structure of something existing as a vehicle for original thought, which is why they are protected from copyright claims by the authors of whatever is parodied.
Again, IP is an outdated concept in this day and age. In all honesty, there shouldn't even be the notion of fair use; any transformative work should be allowed. There is nothing about LLM training that isn't transformative, just as grinding the meat from a steak into stuffed sausages transforms it.
I'm not even talking about big corporations with proprietary models; in fact, I oppose their not being open source or open weight. I want more open models, not fewer, as that at least democratizes the value of LLMs. The worst case is copyright hawks enabling regulatory capture by big AI corps by pushing regulations about licensing content, which, of course, no open-model company will be able to afford in the future. I find that scenario, where only a few corporations can tell you what to think via their LLMs, infinitely worse than having more lax copyright laws.
Lastly, no one can tell me from first principles why LLM training is bad on the copyright side, other than that it just is, because copyright law dictates it so. Perhaps copyright law is what needs to be abolished, not LLMs.
"Transformative" has a specific meaning under the fair use doctrine. You can't just Rot13 or gzip someone's novel and call that transformative.
> Perhaps copyright law is what needs to be abolished, not LLMs.
Sure, now that it's inconvenient for some billionaires, who themselves have nothing to protect, because everything they offer is a service the user can only access through the network, on a subscription.
I'm talking about the concept of transformation, not the specific legal language, which, again, I said is not worth discussing, because the legal concept of intellectual property is not useful.
No, not just now; since forever. I suppose "Stallman was right all along" is about this concept. And just to be clear, I'm not a supporter of current closed-source AI companies; like I said, I want to see open models succeed.
As I asked above, it really does look like no one can explain why LLM training is bad, besides saying it's bad. Therefore I will continue to reject IP as a concept.
What I keep hearing is that the people who weren't very good at writing software are the ones reluctant to embrace LLMs, because they are too emotionally attached to "coding" as a discipline rather than to design and architecture, which are where the interesting and actually difficult work is done.
Really? To me it seems that quite the opposite is true - people who were never very good at writing code are excited about LLMs because suddenly they can pretend to be architects without understanding what's happening in the codebase.
Same as with AI-art, where people without much drawing skills were excited about being able to make "art".
This is more accurate. I've written enough code in my life to never really want to do it again... but I still love creating (code was merely the way to do it), so LLMs help with my underlying passion.
Corporate speak, as satirized in the Weird Al hit "Mission Statement", actually serves an important social function. It signals "I'm one of you, the business class, I will align my goals with those of the organization."
It's like that phenomenon where you have these British people, Hyacinth Bucket types. They want to be seen as upper class when they're not, so they speak in an overly polite register that they think makes them sound upper class. Actual aristocrats, by contrast, speak rather plainly amongst each other. They know where they are in society, and they know that everyone else who matters also knows.
Similarly, the people who speak of operationalizing new strategies and leveraging core competencies are trying to sound impressive to those below, and like good little do-bees to those above. The people who lead an organization to success speak in terms of the actual problems they encounter and the real things that need to be done to solve them.