"it no more copies code than a human does" < that's a very big call right there, considering how much verbatim copying has already been documented in Copilot. The primitive understanding Copilot has of what it is generating doesn't even approach that of the most average programmers. It's classic AI: impressive on the surface.
All the "copied code" I've seen is where the person prompts it with a large amount of very unique preamble and then it fills in the exact example they are quoting from.
Try it without doing that.
And it's weird people think it can't understand conceptual relationships. Word2Vec demonstrated that nearly 10 years ago and that's a much weaker model in terms of both size and techniques than this is.
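The conceptual-relationship claim about Word2Vec refers to its well-known analogy arithmetic (e.g. king - man + woman ≈ queen). A minimal sketch of the idea, using tiny hand-invented vectors purely for illustration (real Word2Vec embeddings are learned from large corpora and have hundreds of dimensions):

```python
import math

# Toy 4-dimensional "embeddings", invented for illustration only.
# Real Word2Vec vectors are learned, not hand-written like these.
vecs = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.1, 0.8, 0.0],
    "man":   [0.1, 0.9, 0.1, 0.0],
    "woman": [0.1, 0.1, 0.9, 0.0],
    "apple": [0.0, 0.0, 0.1, 0.9],  # unrelated distractor
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# The classic analogy: vector("king") - vector("man") + vector("woman")
# should land nearest vector("queen").
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]
best = max((w for w in vecs if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```

The point is only that purely statistical training can encode relational structure in the geometry of the vectors; whether that counts as "understanding" is the question debated below.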
> And it's weird people think it can't understand conceptual relationships. Word2Vec demonstrated that nearly 10 years ago and that's a much weaker model in terms of both size and techniques than this is.
Saying that Word2Vec or Copilot "understands" its input requires a redefinition of the word "understanding".