We'll get right on it after we stop people from hacking computers forever.
whateveracct 3 hours ago [-]
"is this library natty?"
registeredcorn 3 hours ago [-]
Not really. The opposite is far, far more desirable in my eyes.
Example:
* Do I care if an LLM was used to determine the volume of my doorbell? Not particularly.
* Do I care if an LLM was used to generate code to unlock my front door remotely? Absolutely!
I need a warning label cautioning me about the risks associated with generative materials. I don't care in the slightest when it isn't present, because the associated risks are inherently lesser.
Batteries, not chicken breasts.
LocalH 3 hours ago [-]
Can we implant an upgraded 10NES chip inside every human at birth so that they can handshake to prove that they're human? /s
skybrian 1 hour ago [-]
Since using AI costs money, some way of contributing AI patches when asked might make sense here? Let the project maintainers decide what’s worth attempting to solve with AI.
Suppose there were a website that helped would-be contributors of AI assistance to match up with projects that want help?
level09 45 minutes ago [-]
I would judge commits by what they do, not by who wrote them.
spicyusername 11 hours ago [-]
On the one hand open source projects are going to be overrun with AI code that no one reviewed.
On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
So many processes are no longer sufficient to manage a world where thousands of lines of working code are easy to conjure out of thin air. Already strained open source review processes are definitely one.
I get wanting to blanket reject AI generated code, but the reality is that no one's going to be able to tell what's what in many cases. Something like a more thorough review process for onboarding trusted contributors, or some other method of cutting down on the volume of review, is probably going to be needed.
xxs 10 hours ago [-]
>reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code
That depends on the 'regular old code', but most stuff I have seen doesn't come close to 'maintainable'. The amount of cruft is substantial.
yarn_ 5 hours ago [-]
Another good example of "the people writing good code with AI are the people who could have done it regardless"
simiones 10 hours ago [-]
A policy like this has two points. One, to give good faith potential contributors a guideline on what the project expects. Two, to help reviewers have a clear policy they can point to to reject AI slop PRs, without feeling bad or getting into conflicts about minutiae of the code.
LLMCodeAuditor 8 hours ago [-]
Right, "good faith" is a key idea that is being ignored. If you want to lie to the lead SDL maintainers and claim your code is 100% human-written, you can probably get away with it. But that is unethical and cynical behavior in pursuit of an astonishingly petty goal. And it's correct for SDL to simply ignore the contribution because it came from a dishonest developer, even if the specific code appears to be very good.
bakugo 11 hours ago [-]
> On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
I have yet to see a single example of this. The way you make AI generated code good and maintainable is by rewriting it yourself.
llmssuck 3 hours ago [-]
I know it's unpopular to say (here), but I see it all the time. I myself sometimes cannot tell what I wrote and what the agent wrote. It's just that I often have a physical memory of typing it, but that's it. (I also saw a lot of garbage, to be fair.)
There is quite a bit of skill to it, however. You cannot just take an AI from blank to "good code" without doing work. Yes, it takes work and quite a bit of it. By this I mean you have to write a good code style guide and a proper explanation of your architectural style(s), your preferences, your goals, plenty of examples, etc. Proper thought has to be put into this.
If you come across bad code, you need to investigate, not castigate: why did this happen? How can we prevent this in the future? Those sorts of processes need to become second nature. They actually should be already, because it's not that much different from managing a bunch of humans.
Humans come with lots of implicit knowledge, and you also select them to match your company's style when you're hiring them. When they sit down at their keyboards, you (and society) have already guided them towards a desirable path. (And even then they often still misfire.)
AI agents operate differently. Their range of expression is completely alien to us. We cannot be both von Neumanns and complete morons. LLMs have no problem there. It takes a good while to get used to that.
bheadmaster 11 hours ago [-]
> On the other hand, code produced with AI and reviewed by humans can be perfectly good and indistinguishable from regular old code.
Obligatory xkcd:
https://xkcd.com/810/
I don't use public repos very often, but I had toyed with the idea of creating a git user specifically for an agent to use for this purpose, so it would not be my user account. Is this not standard practice already? It kinda seems obvious to me: that way people can tell which parts of my public project were commits managed by an agent.
jmalicki 3 hours ago [-]
I do this so that AI can only have limited GitHub permissions. It can't merge, doesn't have admin rights, etc.
This was after I started catching it committing directly to upstream main without PRs, among other things.
throw5 11 hours ago [-]
Why are these projects still on Github? Isn't it better to move away from Github than go through all these shenanigans? This AI slop-spam nonsense isn't going to stop. Github is no longer the "social network" for software dev. It's just a vehicle to shove more and more Copilot stuff.
The userbase is also changing. There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to. They just spin up their favorite LLM and make a PR out of whatever slop comes out.
At this point why not move to something like Codeberg? It's based in Europe. It's run by a non-profit. Good chance it won't suffer the same fate as a greedy corporate-owned platform?
raincole 10 hours ago [-]
> It's based in Europe. It's run by a non-profit
The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.
But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.
embedding-shape 10 hours ago [-]
> But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.
History might prove me wrong on this one, but I really believe that the platforms pushing people to use LLMs as much as possible for everything (Microsoft-GitHub) will be more flooded by AI bots than the platforms focusing on just hosting code (Codeberg).
throw5 10 hours ago [-]
> The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.
I'm not sure how one follows from the other. I am paid by a US for-profit company, but I still think the EU has done some things better. People's beliefs are not determined by the company they work for. It would be a very sad world if people couldn't think outside the bubble of their employers.
kdhaskjdhadjk 6 hours ago [-]
In a "existential war" type situation, people who don't wave the flag and shout the slogans of their "home" country and have known sympathies for other places (any at all) will automatically be suspect, and their names will end up in a database for later use.
You can be assured that the leanings of Valve are always going to be USA, USA, USA, for reasons that will be clear when you follow the chain of ownership to its source.
hurricanepootis 6 hours ago [-]
Pretty sure Gabe's been partying it up in New Zealand ever since he got stuck there because of Covid.
kdhaskjdhadjk 5 hours ago [-]
1) Gabe's a front man. He doesn't run Valve.
2) New Zealand is a favorite place for Western apparatchiks to build their bunkers. They don't move there out of a love for Kiwi culture and desire to integrate with the locals. Much like their interest in Wyoming/Montana also; they see a place they like, and they go take it over and drive out/murder whoever was there before.
hurricanepootis 4 hours ago [-]
Gabe may be the front man, but he's still like the benevolent dictator for life of Valve. Kind of like how Linus Torvalds is the BDFL of Linux.
anymouse123456 10 hours ago [-]
> There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to.
The Eternal September eventually comes for us all.
fuhsnn 10 hours ago [-]
TinyCC's mob branch on repo.or.cz just got trolled with AI commits today. Nowhere is safe it seems.
MiiMe19 5 hours ago [-]
How does something being based in Europe actually help anyone?
embedding-shape 10 hours ago [-]
> Why are these projects still on Github?
At this point, projects are already on GitHub due to inertia, or they're chasing vanity metrics together with everyone else on GitHub doing the same.
Since the advent of the "README profiles" many started using, with badges and metrics, it's been painfully obvious how large this group of people is, where everything is about getting more stars, merging more PRs and having more visits to your website, rather than the code and project itself.
These same people put their project on GitHub because the "value" they want is quite literally "GitHub Stars" and more followers. It's basically a platform they hope to get discovered through.
Besides Codeberg, hosting your own git server (via Forgejo or Gitea) is relatively easy and lets you be as private/public as you want.
duskdozer 9 hours ago [-]
>Besides Codeberg, hosting your own git server (via Forgejo or Gitea) is relatively easy and lets you be as private/public as you want.
As I've seen it, there's a lot of git=GitHub conflation going on. It wasn't clear to me for a while that you don't even need a "git server" and can just use a file path or an SSH location, for example.
juped 11 hours ago [-]
While this is a perfectly fine policy in the space of possible policies (it's probably what I'd pick, for what it's worth) the arguments being given for it leave a bad taste in my mouth.
or_am_i 10 hours ago [-]
Same. Plenty of perfectly valid reasons to outright ban generated PRs, but "Look, I asked ChatGPT to generate a PR which would break SDL, and it did not bother reading AGENTS.md" is a pretty weak take - gotta know thy enemy a little bit better than that.
raincole 9 hours ago [-]
It's not the argument the maintainer gives. I unironically suggest at least using AI to summarize that thread if you can't be bothered to read it before commenting.
duskdozer 9 hours ago [-]
That seemed like just a curiosity after they already decided on the policy.
sph 10 hours ago [-]
Good move, and a good reminder of how much of an echo chamber Hacker News is on AI matters.
In here, and big tech at large, it's touted like the unavoidable future that either you adapt or you die. LLMs are always a few months away from the (u|dys)topia of never having to write code ever again. Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with wanting to cut corners, poor quality, and, to put it simply, slop. Sure, we're now inundated with people with a Claude subscription and a dream hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.
Personally, I have made my existential worries a little better by pivoting away from big tech, where the only metric is lines of code committed per day, and moving towards those fields where human craftsmanship is still king.
fnimick 10 hours ago [-]
And who knows how much of that "unavoidable future" "adapt or die" rhetoric is driven by motivated actors using LLM tools to shape the conversation?
duskdozer 9 hours ago [-]
The incentives are clearly that way. Otherwise, why would random people care if other developers fell hopelessly behind? It would only increase the high status of the AI experts.
LLMCodeAuditor 8 hours ago [-]
FWIW I do think most of it is "grassroots," ordinary rank-and-file STEM workers adopting zero-sum industrialist mindsets. And speaking personally, the psychology works the same way for both sides of the AI debate:
- I have refused to use LLMs since 2023, when I caught ChatGPT stealing 200 lines of my own 2019-era F#. So in 2026 I have some anxiety that I need to practice AI-assisted development or else Be Left Behind. This makes me especially cross and uncharitable when speaking with AI boosters.
- Instead of LLMs I have tripled down on improving my own code quality and CS fundamentals. I imagine a lot of AI boosters are somewhat anxious that LLM skills will become dime-a-dozen in a few years, and people whose organic brains actually understand computers will be highly in demand. So they probably have the same thing going on as me - "nuh uh you're wrong and stupid."
I hope it's clear I'm trying to be charitable!
tkel 10 hours ago [-]
Curious, what have you pivoted towards? A different field?
sph 10 hours ago [-]
Game development, and writing small tools in the game dev space. This week I've been working on an image editing app, mostly to play with dithering algorithms and palettes, using Odin and SDL.
I mean, it's either that or I quit software development completely; it would be a shame to throw away two decades of experience in the field.
ryandvm 8 hours ago [-]
I don't know. For as long as I can remember, game dev has had the reputation of being the most sweat-shoppish of all the software engineering disciplines. I have a hard time believing that game devs aren't also going to find themselves being crushed under the CTO imperative to "use AI or else" like the rest of us.
sph 7 hours ago [-]
Ok I should’ve said indie/solo game dev
quikoa 5 hours ago [-]
I'm interested in tools (or blog posts about this) for image editing apps. Would you mind sharing what you've built?
sph 4 hours ago [-]
Nothing ready to ship just yet; I was thinking of building an image editing app that simply focuses on transformations: imagine Photoshop, without the editing part. Instead of having layers, you have a series of transformations you can tweak visually and then export to be reused and applied in batch later.
The itch I want to scratch is that I'm on Linux, and our native image editing apps are very clunky, or you have to spend a weekend every time reacquainting yourself with ImageMagick.
The other project in the back of my head is a font repository, manager and downloader for Linux. It's an underserved niche, and there is no popular central repository of fonts, even though a large majority of them are released under permissive licenses. I just want to be able to do `font-app install Inter Iosevka "IBM Plex"` and have them appear under ~/.local/share/fonts
quikoa 3 hours ago [-]
Alright, if you do build something I hope you share it here. I'm always looking forward to any image editing/processing apps or techniques.
JKCalhoun 8 hours ago [-]
I'm not sure.
I think it likely that a typical HN'er [1] has actually used an LLM in coding and if they sound like they are proposing that LLMs in coding are inevitable ("the unavoidable future") it may well be from an informed, personal experience.
(Of course there's no reason not to believe that those pushing back against LLM-Assisted-Coding are also doing so from personal experience. Me, I am on "Team-LLMAC".)
[1] Never used that term before, not sure I like it.
PeterStuer 10 hours ago [-]
"AI is synonym of wanting to cut corners, poor quality, and to put it simply, slop"
A craftsman knows how to use his tools. With AI you can produce very complete, polished, maintainable, tested, secure, performant, high-quality code.
It does take planning and lots of work on your part, but there is a high payoff.
So many people just dump a one paragraph brainfart into a prompt and then label the AI "slop".
Slop in, slop out. Play silly games, win stupid prizes. Don't blame your tools. Sometimes, you are 'holding it wrong'.
palmotea 7 hours ago [-]
> Good move, and a good reminder of how much of an echo chamber Hacker News is on AI matters. In here, and big tech at large, it's touted like the unavoidable future that either you adapt or you die.
When you look across all software development, I think this kind of AI contribution ban is probably the exception. Because open source maintainers can have standards and have the ability to decide to enforce them.
Corporate America is enraptured by an even dumber and less thoughtful version of the HN echo chamber.
> Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with wanting to cut corners, poor quality, and, to put it simply, slop. Sure, we're now inundated with people with a Claude subscription and a dream hoping to create the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.
Are you talking about indie games? Because I could see that having a similar dynamic to open source. I would think a big studio would be similar to any other corporate America office.
ratrace 7 hours ago [-]
[dead]
pelasaco 11 hours ago [-]
What's the point? People will just fork it and improve it with AI anyway.
On the other hand, it would be an interesting experiment to watch how the original and the fork diverge over time.
Especially in terms of security discoveries and feature development.
sph 11 hours ago [-]
Go ahead, we're all still waiting for these "AI-improved" projects to appear.
Meanwhile I'll keep using SDL from the official maintainers, who have been working on it for decades.
pelasaco 10 hours ago [-]
> Meanwhile I'll keep using SDL from the official maintainers, who have been working on it for decades.
That's just virtue signaling.
"AI-improved" projects like "rewrite $FOO in Rust" are popping up everywhere. I don't support it (sqlite3 being rewritten in Rust makes me just sad: https://turso.tech/blog/introducing-limbo-a-complete-rewrite...), but this "$PROJECT bans AI" is just ridiculous. Ideally we should try to use it for good, instead of banning it.
xxs 10 hours ago [-]
> "$PROJECT bans AI" is just ridiculous
Why so? If they don't feel like reviewing that code (or ensuring copyright compliance), they are free to reject it.
If you feel strongly about it, go fork it and maintain it on your own.
orwin 10 hours ago [-]
I think you don't understand how tiring it is to review full-LLM code. I think banning it temporarily, until people calm down with AI-generated PRs, is a very sane solution. If it is still the solution in 3 years, maybe you'd have a point then.
I only manage 3 'new' hires, and I am of a mind to ban AI usage myself despite my own heavy usage (the new hires don't level up, which is my main issue now, but the reviewing loops and the shit that got through our reviews are also issues).
ratrace 7 hours ago [-]
[dead]
LLMCodeAuditor 8 hours ago [-]
I am not sad about rewriting sqlite in Rust because this is the third such attempt I've seen, and just like the other two it looks like this project is totally doomed: https://github.com/tursodatabase/turso/
Like, look: https://github.com/tursodatabase/turso/issues/6412 It's stunning considering this project is advertised as a beta. There are hundreds of bugs like this. It's AI slop that gets worse the more AI is thrown at it.
SDL is 100% correct to keep this AI mess as far away from their project as possible.
raincole 10 hours ago [-]
I'm pretty pro-AI, but I find it very amusing that every single time an open source project enacts a no-AI policy, someone will chime in and explain how it will be outcompeted by the yes-AI version, while in reality it never happens.
pelasaco 9 hours ago [-]
> while in reality it never happens.
it never happens in 3 weeks? The AI revolution is just starting... too soon to jump to conclusions, I guess?
ethin 2 hours ago [-]
Huh? I've been seeing the "hopelessly doomed because of AI" trope practically since ChatGPT came out. It wasn't even remotely as bad as it is now, but it's been there all along.
skydhash 8 hours ago [-]
Make it 2 or more years. That's how long I've been seeing comments equating not using AI with a hopelessly doomed project/career.
pelasaco 6 hours ago [-]
I am sure you noticed how fast things started to change since the beginning of 2026, right? In terms of tooling, models, context, pricing, etc.?
thunderfork 4 hours ago [-]
This is also something we've all been hearing for ages. "<Model version>/MCP/agents/yadda yadda are totally unlike anything that's come before!"
pelasaco 4 hours ago [-]
> "<Model version>/MCP/agents/yadda yadda are totally like anything that's come before!"
and they are right. We never saw that before. That's why we all fear it.
ethin 2 hours ago [-]
> and they are right. We never saw that before. That's why we all fear it.
Please, please, please tell me this is sarcasm. Because if you are serious, I think a lot of people have a long list of bridges to sell you.
arnvald 11 hours ago [-]
Will they? Will someone have enough time, skill and dedication to maintain it? I don't think using AI will by itself make a big enough difference; it's still a lot of work to maintain a project.
pelasaco 10 hours ago [-]
> I don't think using AI will by itself make a big enough difference; it's still a lot of work to maintain a project
I think you are wrong. The "lot of work maintaining a project" would be reduced, especially issue investigation, code improvement, and security issue detection and fixes. SDL isn't that relevant a project, but "ban AI-written commits" - which, reading the issue, sounds more like banning "AI usage" - is counterproductive for the project.
skydhash 10 hours ago [-]
> SDL isn't that relevant a project,
SDL is kinda the king of “I want graphics, but not enough to bring in a whole toolkit, or suffer with OpenGL”. I have a small digital audio player (Shanling M0) where the whole interface is built with SDL.
nottorp 10 hours ago [-]
> and improve it with AI anyway
No. My impression is that most AI PRs aren't made to improve anything, but to inflate the requester's reputation as an "AI" expert.
> and feature development
There's also this misconception that more features == better...
pelasaco 10 hours ago [-]
There is no misconception here. Reduced time for bug fixes, issue triage and feature implementation is a thing.
nottorp 10 hours ago [-]
The misconception is that new features are always necessary, not that it would be nice if they were done faster.
ChrisRR 7 hours ago [-]
If people want to fork it and work in their own manner, then that's fine, but that doesn't mean you shouldn't protect the project that you're personally working on.
signa11 11 hours ago [-]
don't mind if you do 'guv, don't mind at all.
democracy 10 hours ago [-]
tbh if the change works and the code is OK, who cares what was used to build it? ChatGPT or a C++ code generator. If the code looks like crap, reject the PR. Why the drama?
orwin 10 hours ago [-]
Because to decide if it's crap, you still have to read it. And because AI respects coding guidelines, you have to actually understand what the code does to detect crap. Also, the sheer volume is unmanageable.
Remember the monkey selfie case, where the photos were held uncopyrightable for lack of a human author? This reasonably means AI contributions where a human has merely guided the AI are not subject to copyright, and thus can't be covered by a project's license.
dtech 10 hours ago [-]
That's quite a stretch, and untested in court.
At least a monkey is an unambiguous autonomous entity. An LLM is a (heck of a complicated) piece of software, and could very well be ruled a tool like any other.
redwall_hp 5 hours ago [-]
Tested all the way up to the Supreme Court, who declined to hear an appeal, so the precedent stands in the context of AI output.
https://www.reuters.com/legal/government/us-supreme-court-de...
It's still early, but this is absolutely going to be precedent used in a software related case, and it's going to lead to fun times with SOX/PCI style compliance issues, where developers will have to attest that merges did not use AI so compliance can ensure repos don't pass a threshold where there's too much LLM code.
tapoxi 10 hours ago [-]
I mean, aren't we all bragging about autonomous agents doing the coding for us? I don't see how that's remotely a stretch.
The legal question was "did a human author the work"?
Sharlin 10 hours ago [-]
From a less self-centered viewpoint there are plenty of reasons to be critical of LLMs and their use.
sscaryterry 11 hours ago [-]
Stopping a flood with a tissue.
sscaryterry 8 hours ago [-]
I don't understand the downvote. Policies like these are not truly enforceable. There are many, many unscrupulous humans out there who are more than willing to make any code they submit look like a human wrote it, even though an LLM created it.
duskdozer 7 hours ago [-]
Maybe, but the fact that a restaurant owner probably can't enforce a rule for the waiters not to spit in the food isn't an argument that they should say it's ok to spit in the food.
cwillu 6 hours ago [-]
Illusion of transparency: you think your analogy was clear, other people found it opaque and dismissive, and expended what they considered to be a similar level of effort to engage with it as was used in creating it.
sscaryterry 2 hours ago [-]
Wow, slow clap.
thunderfork 4 hours ago [-]
The purpose of rules is not limited to enforcement. This seems to be a common misconception in these threads.
ecopoesis 10 hours ago [-]
What's next? Are they going to forbid the use of IntelliSense? Maybe IDEs in general?
Why not just specify all contributions must be written with a steady hand and a strong magnet.
throwawayqqq11 10 hours ago [-]
> What's next
To show you your hyperbole: Allowing monkeys on typewriters.
LLMs are neither IDEs nor random.
I am very sceptical about iterative AI deployment too. People pretend the success threshold is vibing something that gets widely used, but it's more than that. These one-shot solutions are not project maintenance. Answer yourself this: could LLMs do what the Linux kernel community did over the same time span? That would be a good measure of success and, if so, a strong argument to allow generated contributions.
askI12 10 hours ago [-]
What's next? Forbid cribbing from your neighbor in an exam? The audacity!
They simply don't want people like you and lose nothing.
reactordev 11 hours ago [-]
People who can wield AI properly have no use for SDL at all. It’s a library for humans to figure out platform code. AI has no such limitations.
fhd2 10 hours ago [-]
So AI generated code doesn't benefit from stable foundations maintained by third parties? Fascinating take I don't currently agree with. Whether it's AI or hand written, using solid pre-existing components and having as little custom code as possible is my personal approach to keep things maintainable.
miningape 10 hours ago [-]
This is probably the most insane take I've read all year. As though LLMs don't have an increased chance of borking code when they have to write it multiple times for different platforms. Even LLM users benefit from the existence of libraries that handle cross-platform, low-level implementation details and expose high-level APIs.
canelonesdeverd 10 hours ago [-]
10/10 parody, perfectly nailed the delusion.
reactordev 10 hours ago [-]
gotta channel some of that Kai Lentit energy.
LLMCodeAuditor 10 hours ago [-]
“Claude, please purchase a few USB steering wheel controllers from Amazon and make sure they work properly with our custom game engine. Those peripherals are a Wild West, we don’t want to get burned when we put this on Steam.”
>> ………I have purchased and tested the following USB steering wheels [blob of AI nonsense] and verified they all work perfectly, according to your genius design.
“Wow, that was fast! It would take a stoopid human 48 hours just to receive the shipment.”
[I would think Claude would recommend using SDL instead of running some janky homespun thing]
reactordev 10 hours ago [-]
HID and XInput; you don't need SDL for steering wheels.
jhasse 4 hours ago [-]
You absolutely do need SDL; it's full of knowledge gained by humans through trial and error over years of using input devices in the real world.
thunderfork 4 hours ago [-]
XInput is a pretty constrained interface that plenty of novel controllers, including steering wheels, don't/can't adhere to. Good luck getting the PS5 controller's fancy rumble working over XInput, for example.
ramon156 11 hours ago [-]
> Given that the source of code generated by AI is unknown, we can't accept it under the Zlib license.
So what about SO code snippets? I'm not here to take a stance for AI, but this thread is leaning towards bias.
Address the elephant in the room: LLM-assisted PRs have a higher chance of being low quality. People don't feel obligated to review their code; writing it manually, you are more inclined to review what you're submitting.
I don't get why these conversations always target opinions, not facts. I totally agree about the ethics, the fact that it's bound to get monopolized (unless GLM becomes SOTA soon), and that it's harming the environment. That's my opinion though, and it shouldn't interfere with what others do. I don't scoff at people eating meat, let them be.
The issue is real, the solution is not.
johndough 10 hours ago [-]
> So what about SO code snippets?
StackOverflow snippets are mostly licensed under CC BY-SA 3.0 or 4.0, so I'd wager that they are not allowed, either.
The SDL source code makes a few references to stackoverflow.com, but the only place I could find an exact copy was where the author explicitly licensed the code under a more permissive license: https://github.com/libsdl-org/SDL/blob/5bda0ccfb06ea56c1f15a...
Sharlin 10 hours ago [-]
Most SO snippets likely aren't unique or creative enough to count as works. If a hundred programmers would write essentially the same snippet to solve a problem, it's not copyrightable.
And the judge in the Oracle v. Google case famously stated: “I couldn’t have told you the first thing about Java before this problem. I have done, and still do, a significant amount of programming in other languages. I’ve written blocks of code like rangeCheck a hundred times before. I could do it, you could do it. The idea that someone would copy that when they could do it themselves just as fast, it was an accident. There’s no way you could say that was speeding them along to the marketplace. You’re one of the best lawyers in America, how could you even make that kind of argument?”
shevy-java 10 hours ago [-]
I don't think this can be used as a counter-argument.
Most SO contributions are dead simple, often just a link to the documentation or an extended example. I mean, just have a look at it.
Finding an SO entry comparable to the Google versus Oracle example is, in my opinion, much, much harder. I have been using SO a lot for snippets over the last 10 years, and most snippets are low quality. (Some are good though; SO still has use cases, even though it has kind of aged out by now.)
embedding-shape 10 hours ago [-]
> Most SO snippets likely aren't unique or creative enough to count as works.
How is this different from LLM output? An LLM is literally trained on the output of N programmers so it can give you a snippet of code based on what it has seen.
sdJah18 10 hours ago [-]
The "humans do it, too" or "humans have always done it" arguments break down very quickly.
Not only by comparing the scale of infringement, but because direct StackOverflow snippets are very rare. For example, C++ snippets are 95% code-cleverness monstrosities: you can learn a principle from them, but not use the code directly.
I'd say the number of StackOverflow snippets in well-maintained open source projects is practically zero. I've never seen an accepted PR that would even trigger that suspicion.
rzmmm 10 hours ago [-]
[dead]
LLMCodeAuditor 10 hours ago [-]
Most SO snippets that you might actually copy-paste aren’t copyrightable: it is a small snippet of fairly generic code intended to illustrate a general idea. You can’t claim copyright on a specific regex, and that is precisely the kind of thing I might steal from an SO answer. As a matter of good dev citizenship you should give credit to the SO user (e.g. a link in a comment) but it’s almost never a copyright issue. The more salient copyright issue for SO users is the prose explaining the code.
missingdays 10 hours ago [-]
> I don't scoff at people eating meat, let them be.
Why not let the animals be?