On most topics my opinions don't change much, but on coding they have changed a lot. Two years ago I would have dismissed the possibility that an LLM could write even a simple function. It has since proven me wrong.
I don't know if "AI agrees" is reliable enough to count. Do you have a particular scientific article you're basing this on? And what are your examples? Your post is mostly description without concrete examples, so I can't follow it.
An open model that is competitive with commercial models is a big deal if true. I hope someday I can find a way to run such a high-performance model locally on my laptop.
There is an article saying that a DOGE member had write access as well (see [1]), though it was quickly changed back to read-only. So there was a risk; I can only hope nothing happened.
The ImageNet dataset is the main thing AFAIK, but even so I find Dr. Li's contribution big enough. For context, computer vision datasets in her time were mostly small, so neural networks were rarely considered a good method for CV. That only changed after AlexNet won the challenge. I remember many people initially scoffed at ImageNet, arguing that the dataset was flawed and that a "bad" method like a neural network (AlexNet) could only win because of those flaws. Simply saying "paying" is an understatement, because we also need to account for the academic politics of her time.
A little fun fact: even though most research papers nowadays try to propose new datasets, if we take an ImageNet-pretrained backbone, we usually end up with a very strong baseline.
Btw, not sure why you think Karpathy has had a bigger impact than Fei-Fei Li. I can't think of anything he is doing that is actually changing the playing field.
When I was a college student, I attended a conference on IoT. I still remember a developer of face ID software for a safe box who was so sure their technology was secure because Apple had deployed Face ID on the iPhone. Then a few days later, an article was published about fooling the iPhone's Face ID with a printed photo of the victim's face.