Sure, LLMs can write code, but it's no better than the source material they were trained on. And the issue is that nowadays there's already a lot of code out there that was generated by even older versions of LLMs, which will affect newer LLMs.
Sure, I do see uses for LLMs, like figuring out which variables a function needs, or getting a rough idea of how to do something so you don't have to spend half a day figuring it out from some LLM-generated source...
But letting it really code? Nah... we'd need a genuinely working AI first, one that can actually innovate and think through the consequences of its code.
And then there's the licensing issue: the code an LLM was trained on spans a wide range of licenses, so the code it borrows may be under a very different license than the one your program uses, and if some closed-source code got included, well...
If you are an experienced developer, LLM tools speed up coding by roughly 30%, and the resulting code is about as maintainable downstream as human-written code.
https://www.youtube.com/watch?v=b9EbCb5A408
The licensing issue is gross, and there's also the question of consolidating yet more power in US tech companies. Alternatives are available that don't have those issues.

YouTube
We Studied 150 Developers Using AI (Here's What's Actually Changed...)