I was unaware that Apple was helping implement WebGPU! I actually love WebGPU, it looks great and pairs very nicely with three.js which is a favourite hobby tool of mine to use on pet projects.
I can tell you have strong opinions on Vulkan. I don't disagree with your general view that it's hard to work with, development-wise, since it's so tied to driver and hardware implementation specifics.
What I can say though, is that I've met several pipeline rendering engineers (think folks who invent render engines for film and write low level game engine code) who seem to love Vulkan. They appreciate being able to really get down to the bare metal of the drivers and eke out the performance and conformity they need for the rest of the game or render engine.
A lot of the frustration with OpenGL/DirectX from these specialists was their inability to "get in there" and force the GPU to do what they really wanted. Vulkan apparently gives them a lot more control. As a result, they are able to accomplish things that were previously impossible.
All that being said, I think WebGPU will be far more popular for 99% of developers. Only a very few folks like getting down into the nitty-gritty of APIs like Vulkan. At the same time, there is huge money to be made knowing how to make a game eke out another 10 FPS or properly render a complex scene for a film group like Pixar that wants to save days on a scene render.
> I was unaware that Apple was helping implement WebGPU!
WebGPU is pretty much a combined Google/Apple effort (of course, with other contributors). If I remember correctly it was Apple engineers who proposed the name "WebGPU" in the first place.
> I can tell you have strong opinions on Vulkan
I really do, and I know that my rhetoric can appear somewhat volatile. It's just that I find this entire situation very frustrating. I was deeply invested in the OpenGL community back in the day, and decades of watching the committees fail at stuff made me a bit bitter when it comes to this topic. We had a great proposal to revitalise open-platform graphics back in 2007(!!!) with OpenGL Longs Peak, but the Khronos Group successfully botched it (we will probably never know why, but my suspicion, having conversed with multiple people involved in the process, is that Nvidia feared losing its competitive advantage if the API were simplified). Then we saw similar things happen to OpenCL (a standard Apple developed and donated to Khronos, btw).
I am not surprised that Apple engineers (who are very passionate about GPUs) don't want anything to do with Khronos anymore after all this.
> What I can say though, is that I've met several pipeline rendering engineers (think folks who invent render engines for film and write low level game engine code) who seem to love Vulkan. They appreciate being able to really get down to the bare metal of the drivers and eke out the performance and conformity they need for the rest of the game or render engine.
But of course they are. OpenGL was a disaster, and it's incredibly frustrating to program a system without having a way to know whether you will be hitting a fast path or a slow path. We badly needed a lower-level GPU API. It's just that one can design a low-level API in different ways. Metal gives you basically the same level of control as Vulkan, but you also have the option of uploading a texture with a single function call and having its lifetime managed by the driver, while in Vulkan you need to write three pages of code that creates a dozen objects and manually moves data from one heap to another. I mean, even C gives you malloc().
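To illustrate what I mean by a single function call, here is roughly the entire texture upload path written against metal-cpp (Apple's C++ headers for Metal). The helper name is mine and error handling is omitted; treat it as a sketch, not production code:

```cpp
#include <Metal/Metal.hpp>   // metal-cpp, Apple's C++ headers for Metal
#include <cstdint>

// Create a 2D RGBA8 texture and copy pixel data into it. One descriptor, one
// allocation call, one copy call; the object is reference-counted and the
// driver handles the staging for you.
MTL::Texture* uploadRGBA8(MTL::Device* device, const void* pixels,
                          uint32_t width, uint32_t height)
{
    MTL::TextureDescriptor* desc = MTL::TextureDescriptor::texture2DDescriptor(
        MTL::PixelFormatRGBA8Unorm, width, height, /*mipmapped*/ false);
    MTL::Texture* texture = device->newTexture(desc);
    texture->replaceRegion(MTL::Region::Make2D(0, 0, width, height),
                           /*mipmapLevel*/ 0, pixels,
                           /*bytesPerRow*/ static_cast<size_t>(width) * 4);
    return texture;   // released via the usual NS::Object retain/release rules
}
```

The Vulkan equivalent is an image, an image view, a staging buffer, two memory allocations, a layout transition and a command buffer submission before the first texel even lands on the GPU.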
Vulkan gives me the impression that it was designed by a group of elite game engine hackers as an exercise in abstracting as much hardware as possible. Take, for example, the new VK_EXT_descriptor_buffer extension. It allows you to put resource descriptors into regular memory buffers, which makes the binding system much more flexible. But the size of a descriptor can differ between platforms, which means you have to do dynamic size and offset calculations to populate these buffers. This really discourages one from using more complex buffer layouts. They could have fixed the descriptor size at, say, 16 bytes and massively simplified the entire thing while still supporting 99% of the hardware out there. Yes, it would waste some space (a few MB for a buffer with one million resource attachment points), and it wouldn't be able to support some mobile GPUs where a data pointer seems to require 64 bytes (64 bytes for a pointer!!! Really? You make an API extremely complicated just because of some niche Qualcomm GPU?). And the best part: most hardware out there does not support standalone descriptors at all; the descriptors are just integer indices into some hidden resource table that is managed by the driver anyway (AMD is the only exception I am aware of).
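For anyone who hasn't used the extension, here is a rough sketch of the size/offset dance I'm talking about. The helper and local names are made up, buffer setup and error handling are omitted, and in a real app vkGetDescriptorEXT has to be loaded via vkGetDeviceProcAddr:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstddef>

// Write the i-th sampled-image descriptor into a mapped descriptor buffer.
// Because descriptor sizes are implementation-defined, every offset has to be
// computed at runtime from queried properties.
void writeSampledImageDescriptor(VkPhysicalDevice physicalDevice, VkDevice device,
                                 VkImageView view, uint8_t* mappedDescriptorBuffer,
                                 size_t baseOffset, uint32_t i)
{
    // Query the per-implementation descriptor sizes (normally done once at startup).
    VkPhysicalDeviceDescriptorBufferPropertiesEXT descProps{};
    descProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_BUFFER_PROPERTIES_EXT;
    VkPhysicalDeviceProperties2 props2{};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &descProps;
    vkGetPhysicalDeviceProperties2(physicalDevice, &props2);

    const size_t descriptorSize = descProps.sampledImageDescriptorSize;  // varies per GPU/driver

    // Describe the resource and ask the driver to emit its opaque descriptor bytes.
    VkDescriptorImageInfo imageInfo{};
    imageInfo.imageView   = view;
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkDescriptorGetInfoEXT getInfo{};
    getInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_GET_INFO_EXT;
    getInfo.type  = VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
    getInfo.data.pSampledImage = &imageInfo;

    // The offset arithmetic I'm complaining about: nothing has a fixed size,
    // so the buffer layout depends on values you only learn at runtime.
    vkGetDescriptorEXT(device, &getInfo, descriptorSize,
                       mappedDescriptorBuffer + baseOffset + i * descriptorSize);
}
```

If the descriptor size were a fixed constant, that offset would be known at compile time and the whole query step would simply disappear.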
In the meantime, structured memory buffers have been the primary way to do resource binding in Metal for years, and all resources are represented as 64-bit pointers. Setting up a complex binding graph is as simple as defining a C struct and setting its fields. Best part: the struct definition is shared between your CPU code and your GPU shader code, with GPU shaders fully supporting pointer arithmetic and all the goodies. Minimal boilerplate, maximal functionality; you can focus on developing the actual functionality of your application instead of playing cumbersome and error-prone data ping-pong. Why Vulkan couldn't pursue a similar approach is beyond me (ah right, I remember, because of those Qualcomm GPUs that absolutely need their 64-byte pointers).
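The pattern looks roughly like this. This is a sketch of the shared-header idea, not Apple's exact sample code; the GPU_PTR macro and the struct names are mine:

```cpp
// scene_types.h - included by both the host C++ code and the .metal shaders.
#ifdef __METAL_VERSION__
  #define GPU_PTR(T) device T*      // in MSL this is a real pointer you can index
#else
  #include <cstdint>
  #define GPU_PTR(T) uint64_t       // on the CPU it's just a 64-bit GPU address
#endif

struct Mesh {
    GPU_PTR(float) positions;       // points straight at the vertex data
    GPU_PTR(float) normals;
    uint32_t       vertexCount;
};

struct Scene {
    GPU_PTR(Mesh)  meshes;          // pointer to an array of Mesh structs
    uint32_t       meshCount;
};
```

On the CPU side you write real GPU addresses (MTLBuffer's gpuAddress, Metal 3 and later) into a plain buffer through contents(), and the shader dereferences exactly the same structs; the only extra step is telling the encoder which referenced resources need to be resident (useResource and friends).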
The thing is, this all works for middleware developers, because these are usually very skilled people who already have to deal with a lot of abstractions, so throwing some API weirdness into the mix can be OK. But it essentially removes access from the end developer (who is passionate but probably less skilled in low-level C), making large middleware the only way to access the GPU for most people. This is just a breeding ground for mediocrity.