I noticed that gameplay speed depends on the window size. I'm assuming a larger canvas takes longer to render. It feels too fast at small window sizes and maybe too slow at 4K; I'm not sure what the intended speed is.
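The usual fix for this is to scale movement by elapsed time instead of moving a fixed amount per frame. A minimal sketch (function names hypothetical, not from the game in question):

```typescript
// Frame-rate-independent update: movement scales with elapsed time,
// so a slow 30 fps loop and a fast 144 fps loop cover the same distance.
function update(position: number, speedPerSec: number, dtMs: number): number {
  return position + speedPerSec * (dtMs / 1000);
}

// Simulate one second of game time at a given frame rate.
function simulate(fps: number): number {
  const dtMs = 1000 / fps;
  let pos = 0;
  for (let i = 0; i < fps; i++) {
    pos = update(pos, 100, dtMs); // 100 units per second
  }
  return pos;
}

const slow = simulate(30);  // e.g. large canvas, slow frames
const fast = simulate(144); // e.g. small canvas, fast frames
console.log(slow.toFixed(2), fast.toFixed(2)); // both ≈ 100.00
```

In a browser game you'd compute `dtMs` from the timestamp that `requestAnimationFrame` passes to the callback, rather than assuming a fixed frame rate.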
That doesn't sound like a ban. You have to disclose yearly the amount of stock you have destroyed, but there is no mention of a penalty or anything like that.
Any decision maker can be cyberbullied, threatened, or bribed into submission; LLMs can even try to create movements of real people to push a narrative. They have unlimited time to produce content, send messages, and really wear the target down.
The only defense is consensus decision making and a deliberate process. Basically, make it too difficult and expensive to influence all or a majority of the decision makers.
Communities also evolve and devolve over time, even without a large external event. Maybe you don't feel the same belonging in the friend group after ten years, or the community grows into something it wasn't in the beginning.
Maybe you have to accept that communities are here and now, but they can dissolve at any time.
Even if you can achieve awesome things with LLMs, you give up control over the tiny details; it's just faster to generate and regenerate until it fits the spec.
But you never quite know how long it takes or how much you have to shave that square peg.
I see that Software as a Service banked too much on the first S, Software. But really customers want the second S, the Service.
When you sell a service, it's opaque; customers don't really care how it is produced. They want things done for them.
AI isn't killing SaaS, it's shifting it to the second S.
Customers don't care how the service is implemented; they care about its quality, availability, price, etc.
Service providers do care about the first S: software makes servicing so much more scalable. You define the service once and then enable it to happen again and again.
They didn't. Don't make the mistake of thinking SaaS companies are just software companies. They are sales companies that happen to sell software. Companies like Dropbox & Atlassian have long been surpassed technically, but they live on because they kept selling even when demand was hard to get. Their moat is sales and networking; the software only has to be good enough. The other part is service: these companies have had some of the best customer service since the early 2010s. You can still get a refund from Uber quite easily, but trying that at a regular old-school company requires a prayer and a couple of business weeks.
Yes, many people don't like SharePoint, but they still use it. It's the tool they have.
Customers don't care whether SharePoint uses an LLM; they just want to share ideas, files, reports, pages, etc. If an LLM makes that easier, great! If some other product makes it easier, great!
It's not about the product, it's about the results.
You're proving the point? SharePoint, Teams: availability + price. Every company has microflows, and SharePoint and Teams are automatically available and part of the price, or priced lower than the competition.
Nah, it's not that at all. Most services are totally fungible and everyone has a short attention span. You need to be in a market that is extremely difficult to disrupt and have a product people are totally dependent on. And those tend to have a rather large cost of entry unless you got in early.
I just don't want to pay $50/user/month for an initially open-source product that was relicensed and then crippled because the group that originally gave it away decided they wanted to make a business of it.
I think we will use more tools to check the programs in the future.
However, I still don't believe in vibecoding full programs. There are too many layers in software systems; even when the program's core is fully verified, the programmer must know about the other layers.
If you are an Android app developer, you need to know what phones people commonly use, what kind of performance they have, how apps are deployed through the Google Play Store, how to manage a wide variety of app versions, and how to handle issues when storage is low, the network is offline, the battery is low, or the CPU is in a low-power state.
LLMs can already handle a lot of these issues without the user having to think about them.
The problem is that whether these issues get resolved (one way or another) or left unresolved - since the user only tests the app on their own device, and that LLM "roll" won't include optimizations for the broad range of other devices - the user is still left pretty much clueless about what has really happened.
Models theoretically inform you about what they did and why (albeit largely in blanket terms and/or phrases unintelligible to the average 'vibe coder'), but I feel like most people ignore that completely, and those who don't wouldn't need an LLM to code an entire app in the first place.
Still, for the very simple projects I work on, just chucking something into Gemini and letting it run is oftentimes faster and more productive than doing it manually. Plus, if the user is interested, it can be a relatively good learning tool.
Skills.md will in time have the same problem as MCP: they will bloat the context. I wonder if we could just have the scripts without the descriptions, with the LLM trained to search for the most useful things in a specific folder.
This seems like a solvable engineering problem. For example, you could have a lightweight subagent with its own context for reading the skills and determining which to use.
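A rough sketch of that idea (all names and the skill store are hypothetical, not a real skills/MCP API): the selector only ever looks at one-line summaries, so the full skill bodies never enter the main context.

```typescript
// Hypothetical skill store: name -> full skill text. In practice these
// might be files in a skills folder; a map keeps the sketch runnable.
const skills = new Map<string, string>([
  ["pdf-tools", "Extract and merge PDF files.\n...long instructions..."],
  ["web-scrape", "Fetch and parse web pages.\n...long instructions..."],
]);

// The subagent sees only the first line of each skill as a summary,
// so the main context is never bloated with every skill's full text.
function summaries(): string[] {
  return Array.from(skills.entries()).map(
    ([name, body]) => `${name}: ${body.split("\n")[0]}`
  );
}

// Naive selector: pick the skill whose summary shares the most words
// with the task. A real subagent would ask a small model instead.
function selectSkill(task: string): string | undefined {
  const words = new Set(task.toLowerCase().split(/\W+/));
  let best: string | undefined;
  let bestScore = 0;
  for (const line of summaries()) {
    const name = line.split(":")[0];
    const score = line
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => w.length > 0 && words.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = name;
    }
  }
  return best;
}

console.log(selectSkill("merge two PDF files into one")); // → "pdf-tools"
```

Only after selection would the winning skill's full body be loaded into the main agent's context, which is the "progressive disclosure" idea in a nutshell.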