Will, you are all wet here.
I know you are not a SW dev; you are a journalist. Still, you need to do your homework.
MILLIONS of devs are using LLMs to speed up their work every day (myself included).
It just WORKS. As an AI guy, I am shocked at how well it works.
Does it code correctly out of the box? No. Is that relevant here? No!
What matters is how fast I am with it versus without it.
I do need to understand the code, because, as you say, the LLM really doesn't.
BUT it gets hundreds of details right: API signatures and the like. I just need to make sure the overall approach it is taking is a good one, and occasionally I need to step in and debug when it cannot.
But even there, more than 80% of the time I can just paste the error message back in and ask what is going wrong, and it points me in useful directions and suggests fixes to its own code.
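To make that concrete, here is a small made-up sketch (Python; the task and file names are my own illustration, not from any real session) of the kind of routine, detail-heavy code I hand off: turning a JSON file of records into a CSV. The fiddly parts, like DictWriter wanting fieldnames, open() wanting newline="", and writeheader() coming before writerows(), are exactly the details it reliably nails on the first try.

# A made-up illustration, not output from any real session: the kind of
# routine, detail-heavy code I mean. Stdlib only, so it runs as-is.
import csv
import json
from pathlib import Path

def json_to_csv(json_path: Path, csv_path: Path) -> int:
    """Write the records in json_path (a JSON array of objects) to csv_path."""
    records = json.loads(json_path.read_text(encoding="utf-8"))
    if not records:
        return 0
    fieldnames = list(records[0].keys())
    # The details an LLM tends to get right unprompted: DictWriter wants
    # fieldnames up front, open() wants newline="" for correct CSV output,
    # and writeheader() has to come before writerows().
    with csv_path.open("w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
    return len(records)

if __name__ == "__main__":
    # Tiny throwaway input so the example is self-contained.
    src = Path("records.json")
    src.write_text(json.dumps([{"name": "a", "count": 1}, {"name": "b", "count": 2}]), encoding="utf-8")
    print(json_to_csv(src, Path("records.csv")), "rows written")

None of that is hard, but it is exactly the kind of thing I no longer type by hand, and when it breaks, pasting the traceback back to the model usually gets me a fix.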
The study you reference is irrelevant, and your conclusions are simply at dramatic odds with the experience of many, many programmers who are coding this way right now.