Grow up.
One can grow up with time but one cannot mature without experience.
So you go and become more mature.
Programming is an art and using LLMs makes you as much of a hack as a visual artist resorting to Stable Diffusion.
Everything matters. From high-level architecture to little details of implementation.
Last time I checked us programmers had preferences for details as small as tabs vs. spaces, or curly bracket placement.
You think I’m going to let a fucking machine make decisions about implementation for me?
Let alone high-level architectural ones?
Everything about your software should have intent, and it’s impossible for LLMs to actually intend to do something.
So you can fuck right off with that, and while you’re at it, stop poisoning the world’s codebases (and drinking supply).
Trust me, my code ain’t worthy of being “art”.
If yours is, I totally get where you’re coming from; AI-generated code is pretty bad right now, although it does often work.
The way I often do things is have an agent create a rough first implementation according to my own architecture, so I have done all that high-level thinking that it currently struggles with; then I have a dedicated improvement agent come and clear that up substantially, and then I review whatever is left.
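Assuming the agent in question is something like Claude Code, that two-pass workflow can be sketched as a couple of non-interactive runs; the prompts and the ARCHITECTURE.md file are made up for illustration, not a real setup:

```shell
# Hypothetical sketch of the two-pass workflow described above.
# `claude -p` runs Claude Code non-interactively with a single prompt.
claude -p "Implement the module described in ARCHITECTURE.md. Follow the architecture exactly; do not redesign it."
claude -p "Review the changes you just made: simplify, remove duplication, and tighten naming. Do not change behavior."
# The last pass is human: read whatever diff is left yourself before committing.
git diff
```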
Drawing a dick on a foggy mirror is art.
Plenty of people think their art is awful, even when it isn’t, programmers especially so.
It’s not just about respect for your own art, but respect for everyone else in the art form. If you wrote bad code but put in effort, you still put effort into the art; therefore it’s not an insult to the medium itself, or to the community around it.
Furthermore, how do you expect to get better at that art when you have the easel do so much of the painting for you?
What does it mean?
It means the Dev used Claude.
Which, yeah, isn’t great, but I stand with my instance’s viewpoint on generative AI:

In other words, we’re not against Generative AI as a technology, but we are against Generative AI as promoted by capitalism and corporate interests. To put this into perspective, it’s like saying: We’re not against using camera equipment, but we’re against the surveillance industry.

The full wiki page.
Just a lot of people see AI and start screaming “AI slop” from the hilltops.

I can definitely see a lot of the utility LLMs have, but a CLAUDE.md shows that they’re using Claude, a corporate, proprietary model, and Claude Code, which means they’re likely using it to make significant amounts of code.
That’s not true. You can use the Claude Code CLI with local LLM models (via Ollama or llama.cpp).
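For what it’s worth, the way this is usually described is pointing Claude Code at a local Anthropic-compatible endpoint; the proxy setup and placeholder values below are assumptions to verify against current docs, not a definitive recipe:

```shell
# Sketch: redirect Claude Code to a local Anthropic-compatible API
# (e.g. a LiteLLM proxy in front of Ollama or llama.cpp). The env var
# names are the commonly cited ones; URL and token are placeholders.
export ANTHROPIC_BASE_URL="http://localhost:4000"  # local proxy endpoint (assumption)
export ANTHROPIC_AUTH_TOKEN="local"                # dummy value; a local proxy may ignore it
claude                                             # then launch Claude Code as usual
```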
Do you think they are, though? Besides, doesn’t that require some hacks to get working?
Not at all, it’s a Markdown file, plaintext. You just instruct your agent to assimilate that file into its context.
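To make that concrete, a CLAUDE.md is nothing more exotic than ordinary project notes in Markdown; a minimal, made-up example:

```markdown
# CLAUDE.md (illustrative example)

## Build & test
- `make build` compiles everything
- `make test` runs the suite

## Conventions
- Tabs, not spaces; opening braces on the same line.
- Never edit files under vendor/.
```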
I understand that, but wouldn’t other tooling use the generic AGENTS.md instead?