I wonder if support for OpenGL, Vulkan, etc will improve now that Apple is partnering with nVidia, Adobe, Autodesk, Microsoft, etc around the OpenUSD rendering/animation/CAD/3D-scene format?
Considering the whole schtick of OpenUSD is "one file format that renders consistently everywhere" (paraphrasing), I would be surprised if Apple doesn't use it as a means to cement more 3D software vendors into macOS land. It's really hard to render consistently if the underlying drivers are all wonky and proprietary.
I am curious to see how this plays out. In my mind, there are two options:
1. Apple conforms to the existing standards of OpenGL and Vulkan we see gaining steam for many film and game production pipelines.
2. Apple tries to throw its weight around and force devs to support their Metal standards even more, ultimately hoping to force the world onto Metal + macOS.
My heart hopes for option 1, but my gut tells me Apple is going to push for option 2 with all the might it can muster. In my experience, Apple doesn't like any standards it doesn't control with an iron fist (not really saying much about Apple here though... nVidia, Autodesk, Adobe, and Microsoft are all the same).
The next couple of years are going to be interesting for sure!
> In my experience, Apple doesn't like any standards it doesn't control with an iron fist
I would add some nuance to this statement: "Apple likes open standards when it is weak."
The iMac and early OS X went big on open standards, and Jobs made a point of highlighting it: USB for the iMac; JPEG, MPEG, MP3, PostScript, etc. for OS X. TCP/IP built in. They even paid the danegeld for .rtf.
Then as they clawed their way back from the precipice, they started "adding value" again.
The iPhone was an HTML device, loudly repudiating the proprietary (and terrible) Flash, not to mention the crappy, mostly stillborn "mobile HTML" attempts.
You still get H.264, Matter/Thread, and other standards they don't control, where they don't have market power.
You might actually expand "Apple likes open standards when it is weak" to "companies like open standards when they are weak."
Generally speaking you get standards consortiums when there is a clear winner that is mopping up the space.
Here's an example that's happening right now: Nvidia-NvLink-Infiniband.
Nvidia owns the high-speed interconnect inside the chassis (HGX), the NICs (Mellanox), the inter-host interconnect (Infiniband), the high-performance inter-host interconnect (switched NvLink), and the Ethernet network (Mellanox has the same 51.2Tbps switch performance that everyone else has now). GPU training is RDMA-heavy, and this is a place where both NvLink and Infiniband shine, Ethernet much less so. Retransmissions are very bad, in global performance terms, for ROCEv2 transfers. Right now Nvidia is just crushing it and there's zero chance anyone is going to catch up by introducing new Infiniband ASICs.
So what happens? You have a consortium spun up by all of the companies in the Ethernet space - Ultra Ethernet Consortium - to try and use "standards" to push back on customers who don't want to make big investments in "non-standard." UEC is pretty vague but seems to be promising Broadcom-style cellized fabrics, the whole point of which is to have an ethernet-like standard that avoids ECMP-induced flow collisions and retransmissions - that is, get Ethernet into the same territory as Infiniband.
If you look back in time in the tech industry, you see this over and over and over and over. Standards are great, they make certain kinds of multi-sided markets and markets that need broad participation to be viable possible - but they are also routinely about the losers joining together to compete.
I'd even bend that quote to say: "When solving a problem would maximize their profit, companies will accept solutions they don't control, if they lack the resources to create competing solutions they would control."
> they are also routinely about the losers joining together to compete.
That is the greatness of it. It reminds me of democracy: The less powerful join together to give everyone an equal vote, rather than having one vote per dollar.
In this case though dollars work just fine - the incumbent who probably invented the market now gets smaller companies banding together against it after a while.
Same for adaptive sync. G-Sync was first to market by a country mile; the adaptive sync standard wasn't even approved until about six months after the first G-Sync hardware showed up, and if Nvidia hadn't gone first it would have taken much longer, if it happened at all.
FreeSync wasn't really market-ready until at least 2015, and the early products were mostly junk; it wasn't until the G-Sync Compatible program that any vendors really cared about LFC or flickering issues.
HPC cluster builds are complex enough due to the presence of multiple networks (2x moderate-scale Infiniband chassis and 2x Ethernet chassis as a minimum) without introducing unknown vendors. At that point, if you're doing IB, why not just go Mellanox, since you will almost certainly buy 200GE ConnectX NICs and not the 100G NICs from these guys.
UEC will - like most standards - _eventually_ work. In the meantime, Nvidia pods are the obvious choice for anyone who really cares about performance, and other vendors (Cisco, Arista) if they don't.
Idk, I personally did it for an HPC cluster two years ago. 2x100GbE + OmniPath was a sensible way to reduce cost, especially as the cluster was very light on GPU power and mostly focused on CPU-bound jobs.
Last I heard everyone there is still very happy with what we built.
Was that before Intel spun it out? I can see people being willing to do that build if Intel was seemingly on board. Today things are pretty different.
Modern HPC mostly means GPU compute and tons of data shuffling, but point taken. CPU bound jobs aren't going to stress the i/o, so you probably could have done 100GE for less. I'm curious what you did for storage but I'm guessing with CPU-bound that is again much less of an issue.
While we’re over generalizing…
What could be said about grand unification vs the march of progress? Can the laws of physics change? Must we reinvent arithmetic?
One of my greatest joys learning to program is reading old (well written) code, and piecing apart the timeless from the legacy.
Or wait for the incumbent to get big, fat, and lazy before beating them using a 10x faster, dumb, "unsuitable" interconnect loaded to only half its theoretical bandwidth.
>The iPhone was an HTML device, loudly repudiating the proprietary (and terrible) Flash, not to mention the crappy, mostly stillborn "mobile HTML" attempts.
Skipping Flash wasn't so much an ideological decision as a practical one.
At the time Steve Jobs listed a ton of reasons that they didn't implement Flash. Listed among them were concerns about it not being an open standard, inferiority to H.264, security and performance issues, etc. However, all of these things could've been ignored or overcome.
The principal problem was that a huge proportion of Flash applications, games, and websites used mouseovers as crucial methods of interaction, and Apple simply had no way to allow users to mouseover an element on a touchscreen.
That could have been overcome. The general crustiness of flash could not have been (from Apple's POV).
Apple used to ship a dev app called "Spin Control" that would log stack traces whenever an app failed to drain its event queue in a timely manner (i.e. beachball). One time I accidentally left this open for an entire week, went about a bunch of assorted business, and when I came back every single stack trace had to do with flash, and there were many. Either flash in a browser or flash in a browser embedded in something else (ads embedded in 404 pages for broken help pages that were never even displayed, lol). At first I thought it had to be a mistake, a filter I had forgotten about or something, so I triggered a spin in Mail.app by asking it to reindex and sure enough that showed up as the first non-Flash entry in Spin Control.
As hard as it was to believe: Flash had been responsible for every single beachball that week. Yikes.
I was mistaken, I found a copy of Jobs' letter re:Flash and he does cite the proprietary nature of Flash and Apple's lack of control over the content served on its platform as the most important reason Flash was kept off the iPhone.
> This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor's platforms
Funny how one could, and maybe should, point the same criticism at Electron.
Even in OS X it was still a bad plugin, one of Snow Leopard's headline features was that plugins were moved to separate processes so that Safari could keep running instead of bringing down the whole browser when a plugin had a problem.
If memory serves me right, that was one of the big raisons d'être for Chrome (released in 2008)... so Snow Leopard (2009) was catching up with it more than solving a Mac-specific issue. But yes, back then the big, big plugin was Flash, and its security and performance left a lot to be desired (on all platforms).
Back in the day there was even a Mac browser plugin called "ClickToFlash" that would prevent Flash from loading in web pages and draining battery like crazy. It made the web livable and more secure, given the Flash garbage ads strewn everywhere.
There were also browser extensions that'd replace YouTube's terrible flash player with an HTML5 h.264 player which was a godsend on single-core PPC G4/G5 machines, where flash would happily keep the CPU pegged for no good reason.
Yeah Flash was slow, but the common alternatives for in-browser animation/games were (and still are) far slower. H.264/5 maybe did video better than Flash, even then idk, Flash YouTube was always way faster on my old iMac than HTML5 YT. Google Hangouts/Meet will lag a high-end Intel MBP thanks to the unaccelerated VP8/9 video, but AIM Flash-based video calls ran fine on an iMac G4. On top of that, there was never a real replacement to the Flash creation software. All those Flash games that people made without really understanding how to code, no longer doable.
I guess Flash had to die because of how outdated, insecure, and proprietary it was. It did seem like a nightmare to support well on mobile. Just wish someone made something strictly better.
Perhaps it was fast due to lack of tight security policies etc. These days, everything has to be containerized or be in a sandbox. That adds layers of overhead. Probably also why Google killed NaCl (Native Clients for Google Chrome). It was loosely in the same space as Flash and ActiveX - anyone remember that?
I've thought about that too. Can only guess about it myself. Old software tends to be faster just because it has to be, and there isn't always a compromise other than time-to-implement.
Not sure how ActiveX's sandboxing worked, but I'll bet it was even less sandboxed than Flash, since it was running actual x86 code.
ActiveX required that the ActiveX code you wanted to run be already installed on your computer. The security mechanism was that users obviously wouldn't install insecure stuff...
Turns out nearly every extension had gaping security holes.
Not quite. ActiveX would download and run unsandboxed native code direct from the web, even if it wasn't installed. The security mechanism was a popup dialog that contained the publisher name as verified by a code signing certificate.
This is how I remember it. I was dealing with early 2000s smart home software that used ActiveX. I'd visit a website in IE, press ok, and run their "web app" that had raw access to my serial and ethernet ports. It was bizarre even for back then.
Not hard to believe. Reliability was one of the points in Steve Jobs' _Thoughts on Flash_:
> We also know first hand that Flash is the number one reason Macs crash. We have been working with Adobe to fix these problems, but they have persisted for several years now. We don’t want to reduce the reliability and security of our iPhones, iPods and iPads by adding Flash.
Part of it sucked, part of it was great. On Mac-side the experience was worse than on Windows, that's for sure.
But Flash also allowed many games to be easily developed and played in the browser. Lots of fun cartoons were made (the Metallica cartoons "fire bad" on Newgrounds come to my mind now).
It's a shame Flash sucked so much on Mac, since the developer behind Flash [0] did create some nice games on the Mac early in his career, namely Airborne! and Dark Castle.
My fuzzy recollection was that the OS X version of Flash was much worse than the Windows version. Given "Thoughts on Flash" and the direction of the iPhone, this ended up being very expensive for Adobe :)
I always thought Flash was unusably slow, until I tried it on Windows. On Windows Flash was fast and fluent, on the Mac it was choppy and unbearably slow.
Since Flash was so ubiquitous on the web, this made Macs a lot slower for many tasks than Windows computers. No matter how much Steve Jobs touted the power of the G4, and boasted about the speed of Final Cut, nobody would believe it when their Mac couldn't even run a space invaders clone in IE4 fluently.
Adobe ignored performance on Apple devices for years. There was no way in hell Apple would allow Adobe to do the same to iOS.
I don't think that was the deal-breaker. Jobs didn't want Flash to become the de facto development environment for the phone.
I'm pretty sure he always knew they'd end up with apps on it, they just didn't pull that together for the first release, thus the HTML song and dance. But if they supported flash, that would reduce a lot of the demand for apps later, and worse - it would be cross platform.
So he used the other (still good) reasons - battery life, security, etc. to obscure the real reason - Apple was not yet ready to compete with it on their own terms, so they banned it.
Apple did find a solution in mobile Safari for touchscreen hover states on the Web. However the Web platform generally offers more affordances for accessibility than Flash ever did, which I'm sure helps.
If I'm not mistaken the solution was to make touching an element trigger the :hover state and the click action unless the :hover state changed the visibility of another element. If the :hover state changed the visibility of another element, then the click action was not triggered until a user tapped again.
This is possible in HTML because it's trivial to determine whether or not a :hover changes display or visibility properties of other elements. As you've supposed, Flash did not afford browsers that sort of ability.
> more affordances for accessibility than Flash ever did
TBF one thing Flash did manage to achieve was the proliferation of web sites and apps that were as hard to use for people without disabilities as for people who had them.
Whether this increased empathy for people with disabilities is an open question
>The principal problem was that a huge proportion of Flash applications, games, and websites used mouseovers as crucial methods of interaction, and Apple simply had no way to allow users to mouseover an element on a touchscreen.
It's more about control and all the other reasons.
Mouseovers are a red herring. They could still have allowed Flash - not to run existing stuff, but new apps that would take iOS into account.
And legacy Flash movies and animations and games that just needed clicks and not mouseovers would still work. To be frank, mouseover interactions weren't really that big in many (most?) Flash games anyway.
Mouseovers were just the tip of the iceberg: fixed screen sizes and other desktop UI conventions, assumptions that you could just leave things constantly running rather than figure out how to do proper event-driven programming, etc. Yes, they could have tried to do a “Flash mobile” but most of the appeal was compatibility with the huge library of existing apps and users wouldn’t have been happy with that, while authors would have bristled at having to give up their favorite hacks. Flash was a fractal of bad programming and UI design, and there was no way to get that community to improve since Adobe was one of the worst offenders and didn’t care about their platform in any discernible way. People wrote Click-to-Flash plugins to keep it from crashing your browser, every browser had to change their architecture to handle Flash crashing, the plugin & authoring tools had tons of performance, stability, and security issues – and Adobe just kept snoozing, waking only enough to cash the royalty checks, and claim the next version would be totally better.
Another poster mentioned the hangs - which is very true - and I can say that on the desktop Macs I supported, well over 90% of the crash logs showed Flash as the culprit, not to mention almost every CPU / battery life issue.
There were some great artists who produced neat work despite it but the only company to blame for Flash’s demise is Adobe. As a thought exercise, ask why Flash on Android consistently sucked – if they were trying to make the case that Apple should reconsider, they could have put at least one intern on making that seem appealing.
> Adobe was one of the worst offenders and didn’t care about their platform in any discernible way.
This is par for the course for Adobe. The other day I had occasion to try to fill out a PDF form using Acrobat Pro. I made it through about a page (painfully slowly) until I unwisely saved my work. Then I cursed for a bit, tried quitting and reloading, and eventually gave up and started over in PDF.js. Superior in every way.
I remember when a major selling point of Acrobat was that you could save a filled out form, whereas third party apps couldn’t. Apparently doing so still breaks the form, and third party apps have gotten it right for many years now.
Adobe seems to pretty much never care about their platform once they have market share.
I remember working on a project in 2010. We hit multiple crashing bugs in Flash (both runtime and IDE) doing fairly basic stuff. I figured we’d paid for support, might as well use it, and reported them to Adobe.
Literally never heard back until a year later when they closed the tickets saying it might be fixed in the next release and we should buy licenses to find out (spoiler: no).
I remember my old Samsung Galaxy Note (or was it the Note 3?) could actually sense when your finger was near the screen but not touching, for mouseover events.
I always wondered why that feature didn't continue - I remember it working quite well, but IIRC it only worked in Chrome browser.
Also, from a practical perspective, even Adobe never had a fully working (feature parity to desktop) way to actually load Flash on the iPhone. Apple kept asking for one: Adobe could never produce something that wasn’t buggy crap.
There were some 3rd party things that sorta worked a bit, but they were not good either.
Flash was bad on touchscreens for sure, but we’d have seen content adapt eventually anyhow, if it had actually ever worked in the first place.
Very true. In case anyone is too young to remember, in the days of the iPhone 3G, it was somewhat popular to buy apps which would use a real browser on a server somewhere to render a browser session, complete with Flash, and stream it to you. It was very handy for those of us who played Flash games and needed to check in on our game on the go. (Think FarmVille, Cityville, etc.)
Everyone does. AMD is the same. The market leader focuses on features, the runners up try to take them down with openness. The competition is good for consumers, but the motivation is one of self-interest, not the common good.
> Then as they clawed their way back from the precipice, they started "adding value" again.
I don't know if I can agree. On the software side, MacOS supports all those things. On the hardware side, it's still (edit: almost) nothing but industry-standard ports.
One thing that I've enjoyed is how for decades Mail.app has been first and foremost a generic IMAP mail client, with features specific to iCloud Mail being few and far between. It works exactly as well with e.g. Fastmail as it does iCloud Mail. Conversely, iCloud Mail is just plain old IMAP and works fine with other generic IMAP clients.
Compare this with Gmail for example, which has a crummy IMAP implementation, and though the Gmail mobile app supports adding non-Google IMAP accounts, it clearly prioritizes being a Gmail client.
I didn't say they necessarily denigrated the open formats, just that they added and preferred their own proprietary image, audio, etc. formats as they gained market strength.
On the hardware side I'm delighted by Apple's USB-C/TB push (and Apple contributed a lot to those standards, especially based on what they had learned with Lightning), but note they revived the proprietary "MagSafe" connector on recent laptops (though you can still use USB-C PD). Apparently enough customers wanted it.
And as others have pointed out, Apple is hardly alone in this
Magsafe is genuinely a nice innovation that users missed. It solves the cord yank problem. But I do like having the option to use USB-C if I don't have the MagSafe cable.
I'm the other way around: I just want to carry a USB-C cable or two. The cable was designed so wear and damage should accrue to the cable, not the connector in your device.
Had lots of phones, always in my pocket, never had that problem.
Actually I just realized I put it in my pocket top-end down. So the usb socket is facing the sky. I suppose that might keep the usb socket further from pocket lint.
The only port I’ve ever experienced this problem with is Lightning.
At least on many iPhone versions, if anything damages one of the delicate lead springs inside the port, Apple service will tell you to replace the entire phone.
This magnetic thing saved my macbook from numerous falls when people tripped on the power cord.
It was ubiquitous, worked across the whole lineup and for several generations. It is hard to forget that until usb-c, it was commonplace for a manufacturer to have a wide range of power adapters, of varying voltage, power, connector, etc. Apple do their own shit, but they do it consistently.
About your side note: That seems physically impossible. Magsafe only has five pins, and is larger than USB-C, which has twenty-four pins. To me, USB-C is a modern miracle. It is so small, reasonably strong, reversible, and has very high pin density.
Some of the new MacBooks require a charging port capable of more than the current USB-C specification limit of 100W; that's likely one reason why they decided against adopting USB-C exclusively for charging.
iChat stored every conversation (if you enabled history) in HTML files the user could find in their Documents folder and read. Messages history is stored in an undocumented SQLite database that you’re not meant to touch. I’m not saying they’re completely proprietary in every way, but you can see the progression from wide open to “keep your hands off.”
For another example, Apple Notes was actually just mail messages stored in IMAP, until they decided to deprecate that in favor of another iCloud-backed black box in order to add more features.
I think you are joking, but if you want to, you can buy an adapter for like €10 that will go from magsafe to USB-C, and then indeed charge your non-Apple laptop with it.
Or even "Apple doesn't like open standards that are weak". OpenGL was just no longer fit for purpose. They were competing against DirectX, needed to do ML acceleration, and at the time Vulkan didn't exist. They had to do something, and especially given their chip strategy that must have been in full swing at the same time, Metal must have been a no-brainer.
> In my experience, Apple doesn't like any standards it doesn't control with an iron fist
Apple supports a number of open standards. I think I’d modify your statement to say Apple doesn’t want to depend on any standards it doesn’t control. And while that may appear nefarious, I get Apple’s implied position there. They have really tight coupling between hardware and software in order to deliver on the UX that they intend (whether or not you like all or some of it). If they’re designing their own software and hardware, I can see why they’d also want to implement standards that they can control to some degree - otherwise their UX is dependent on others. This is also why I think Apple sometimes implements new industry standards, USB-C being one example - if no one else has made an effort yet, they can influence the direction by being first movers.
OpenUSD is a way of bundling and specifying the data to be rendered. It doesn't relate to which API the app uses to accelerate rendering. Adobe, Autodesk, Blender, and most others support different backends per operating system including Metal on macOS already.
While not at the kernel level I think WebGPU is probably better than something like Vulkan for that "I can target this low level access to the GPU resources but expect it to work everywhere" use case. Ignoring the name hint that it was designed for the web (it works fine outside the browser), it's a lot more portable than Vulkan across its various implementations precisely because it needed to work on various devices that could browse the web. It's also already backed and supported by all the big players, including Apple.
WebGPU, unsurprisingly, is much better supported by the web ecosystem right now (just like WASM in the beginning), so you'd have better luck trying to use ClojureScript (as then you can just use the JS interop) for playing around with it. https://codelabs.developers.google.com/your-first-webgpu-app would be easy to get started with and translated to ClojureScript cleanly without issues.
Also, keep in mind that WebGPU has only been around since 2021, and wasn't actually enabled in Chrome until this (2023) spring, so it's relatively new tech; not surprising that the ecosystem still feels half-baked.
I'm a big fan of WebGPU. I use it within my Three.js projects. Really happy to see it gaining support worldwide. I hope it or similar projects continue to gain traction!
Apple is definitely not going back to OpenGL. I can't say I'm particularly sad; OpenGL is very long in the tooth and writing a good driver for it seems like a nightmare, considering how hard it is to even write good, performant application OpenGL code. I wish they had thrown in behind Vulkan instead of creating Metal, but it seems like outside of Linux, Vulkan is a second-class citizen everywhere (although it's a pretty good second-class citizen to target).
In terms of supporting games or steam and such, I think the reality is that a large segment of games now use engines that handle the API stuff for you, and if you do have the resources/time/inclination to write directly to the API, you're probably ok with MoltenVK as long as you're not doing anything too cutting edge.
Seriously though, while I've written OpenGL most of my life to be cross-platform and support the most I can, it's an absolutely TERRIBLE API. Global state everywhere, tons of functions you can-but-shouldn't use, all sorts of traps everywhere, monstrous header files for extensions, and it's incredibly hard to debug. Vulkan is verbose, but in a lot of ways it's actually easier, even if it's advertised as the more hardcore way to do things.
> In my experience, Apple doesn't like any standards it doesn't control with an iron fist
I mean, on one hand, the Metal/Vulkan/OpenGL situation is unfortunate and I don't understand Apple's motivation there.
On the other hand I'm sitting here typing on a Mac with nothing but USB-C ports that is connected to half a dozen peripherals and a big ol' monitor over those standard ports using standard protocols.
In general I feel that Apple prefers open standards when they actually suffice. USB2 couldn't do a lot of the things that Lightning did, so they invented that. Now that USB-C exists, they embraced it immediately on the Mac and iPad but are unfortunately dragging their feet w.r.t. the iPhone.
> I mean, on one hand, the Metal/Vulkan/OpenGL situation is unfortunate and I don't understand Apple's motivation there.
OpenGL is a dead end - while vendors (including Apple) still support their respective GL stacks, there is not really active investment anymore either in those stacks or in the standards.
Metal came out years before Vulkan, and Apple has tight integration between the graphics API and their underlying first-party graphics hardware designs. If Apple did have first party support for Vulkan, it would basically be MoltenVK anyway. Apple tends to push anything which isn't a first party framework to be third-party sourced and bundled as much as they can. They likely think MoltenVK as a third party library is the best scenario for Vulkan support.
"Years" is overstating it a bit; Vulkan's SDK came out a little more than a year and a half after Metal was available on iOS and 8 months after it was available on Macs.
> the Metal/Vulkan/OpenGL situation is unfortunate and I don't understand Apple's motivation there.
I don't think it's hard to understand. Apple wanted an easy-to-use API that could be extended and updated easily. Vulkan is an extremely complex API that forgoes any and all developer ergonomics to facilitate quick driver development on as many hardware targets as possible. Consequently, Vulkan's design choices are driven by the least common denominator. The goals are just too different.
There is at least some evidence that Apple was interested in supporting and shaping Vulkan (they were a member of the initial working group), but I suspect that it very quickly became clear that the committee was going in a direction they were not interested in, so they noped out.
Still, I don't think it's correct that Apple is opposing any kind of standardisation in this domain, they just want something more aligned with their vision. They have been very active with WebGPU, which is shaping up to be a very nice API, and it has inherited a lot of good design decisions from Metal.
Indeed Apple has been very active in shaping WebGPU, and that is why we can't use a bytecode representation for shaders. Instead, we have to repeat OpenGL's mistakes and store shaders as strings.
WebGPU specifically has to be the lowest common denominator among the APIs it supports. And there are several features very useful in GPGPU's which are supported in Vulkan and CUDA, but cannot be included in WebGPU due to the lack of Metal support. One such example is floating point atomics.
I am sure that a bytecode representation of shaders will come in a future revision. It's not a first priority. SPIR-V is a poor choice for portable shaders for reasons outlined by the WebGPU group on multiple occasions. And the WebGPU shading language finally does away with some poor legacy design choices that are still stuck in GLSL and HLSL (such as shader bindings via global variables).
Regarding floating point atomics, I think you got it confused? Metal fully supports floating point atomics on all recent devices, while in Vulkan it's an optional extension. According to the gpuinfo database, only 30% of implementations support float atomics, and only 10% support more advanced operations like min/max. If you are looking for cross-platform float atomics support, Apple is the least of your worries (what they suck at is 64-bit int atomics, though).
>On the other hand I'm sitting here typing on a Mac with nothing but USB-C ports that is connected to half a dozen peripherals and a big ol' monitor over those standard ports using standard protocols.
I have two 4K 60Hz monitors plugged in my work MBP (M1 Pro, or is it Max?) via DP over USB-C (not TB) basically every day. The MagSafe and HDMI ports sit unused, and I wish these were more USB-C ports instead.
My personal Mini M1 can't handle two DP over USB-C displays but can handle one DP over USB-C + one USB-C to HDMI. I also wish the two USB-A ports were USB-C as well.
MST has been a dead letter since day 1 because nobody puts a more expensive monitor controller board in a monitor than it actually needs.
4K60 monitor? cool, it gets DP1.2. So MST means dropping to 1440p or 1080p resolution (splitting DP1.2 across two monitors).
crappy dell office monitor? it gets DP1.1, so MST means dropping to 720p or 540p.
absolutely no company in their right mind is going to haul off and put a DP1.4 in a bottom-shelf monitor or whatever, such that MST actually had extra resolution to play with. So the only places it matters are (a) stocktrader type people who want 8 monitors and don't care about visual quality/running non-native resolution, and (b) office situations and other places where the ability to run a single cable is more important than visual quality.
so de-facto nobody has ever cared about MST, and docks fill this use-case much better. Thunderbolt/USB4 doesn't care about what monitor controller board is behind it. It just cares that you have one DP1.4 stream and the dock can allocate that into as many physical ports as the dock physically allows. Have the bandwidth but need more ports? Cool, just daisychain more docks/adapters.
(and this does work on m1 pro/max btw - this guy for example did eventually find a dock that worked for him.)
the big gotcha with M1 is really that the base M1 is a crossover chip between a high-performance tablet and an ultraportable laptop, so Apple doesn't want to waste the space for multiple PHYs that won't be used. So it gets 1x HDMI PHY (normally used for the internal screen on tablets/laptops) and 1x DP PHY for an external monitor. The Pro/Max support 2 and 3 external displays respectively.
I do agree this is a major limitation on the "just buy macbook air" approach, although you can use DisplayLink (video compression+USB+active adapter) or use an ultrawide monitor (it's a fast connection, you just only get 1 of them). In particular the 15" MBA really needed a "Pro" CPU option like the Mac Mini family, because that's actually a very nice ultrabook other than the single display, and I absolutely think the chassis is big enough to handle it for normal "interactive" use-cases (not bulk video processing/etc).
And of course the 13" MBP doesn't get one either but lol fuck that thing anyway, let the touch bar/old-style chassis die already please
> 1. Apple conforms to the existing standards of OpenGL and Vulkan we see gaining steam for many film and game production pipelines.
Read what gamedevs have to say about this: Metal is more appreciated than Vulkan.
> 2. Apple tries to throw its weight around and force devs to support their Metal standards even more, ultimately hoping to force the world onto Metal + macOS.
Apple was part of the Vulkan working group. Knowing what gamedevs prefer, it now makes sense why they parted ways and created Metal instead.
In retrospect I can only show compassion to Apple, they made the right choice
I think the conversation is mixed here. Baldur’s Gate 3 was just released with first class Vulkan support. Steam is pushing hard with MoltenVK on macOS and native Vulkan drivers on the Steam Deck.
I agree there are likely a lot of game devs who like Metal, but it would appear there are a lot of heavy hitters backing Vulkan.
As well, in film, many render engines prefer Vulkan due to the flexibility to write complicated compute shaders with complex command buffers. I experienced this first hand working with VFX studios in my day job.
I think the story is mixed, there is a big interest in Vulkan still.
Do you mean engine developers who actually use the Metal API, or game developers writing shader code? I know game developers prefer HLSL (Direct3D) over GLSL, but I dunno what people think about MSL.
Nonsense. Feature-wise, they are mostly equivalent (Metal has better support for GPU-driven pipelines and shader authoring). Metal is much simpler and more flexible. The only way Vulkan is "better" is if you measure lines of code.
Metal allows you to present surfaces at exact times. You have access to the display refresh timing information and it's your job to synchronise your drawing rate to (potentially variable) display presentation interval. Vulkan presentation modes are workarounds over the fact that Vulkan provides no fine-grained control over presentation intervals.
There is the VK_GOOGLE_display_timing extension that provides functionality similar to Metal, but it doesn't seem like it's well supported on desktop. The equivalent official extension seems to be stuck in limbo somewhere.
Sounds really fine-grained, but does this mean I have to invent my own "mailbox" every time I want "unlimited refresh rate with minimal input lag, but without tearing"?
I think it should be as easy as not presenting a drawable if you detect that the previous frame is still rendering. Should be solvable by adding a single conditional guard to the command buffer completion handler. Never did that myself as I don't have a use case for it, so I might be underestimating the challenge.
Note that mailbox approach does not really give you unlimited refresh rate, as you are bound by the number of drawables/swapchain images your driver can supply. If your drawing is very fast these resources become the bottleneck. If you truly need unlimited framerate (e.g. for benchmarking) the best approach is probably to render to a texture and then blit the last one to a drawable for presentation. And if your goal is "minimal input lag", then you might as well do it right and decouple your simulation and rendering threads.
Apple has fewer constraints, so their API is more straightforward (fewer abstractions) and less verbose. Vulkan gives you more control, but at the expense of a more convoluted, verbose, and complex API; people like to joke about the amount of code one needs to write in order to render a triangle with Vulkan.
I am not convinced that Vulkan gives you more control. Metal is adaptive in the sense that it can manage some state and configuration for you, but that is strictly opt-out. You still get your manual memory management and low-level event synchronisation.
On the topic of control, Metal has precise control over display intervals and presentation timing (I think Vulkan recently introduced something similar, not sure).
>I wonder if support for OpenGL, Vulkan, etc will improve now that Apple is partnering with nVidia, Adobe, Autodesk, Microsoft, etc around the OpenUSD rendering/animation/CAD/3D-scene format?
I'd say it's a totally orthogonal matter (having a standard 3D scene format vs. which graphics API will render it), and Apple's participation in that will be minimal anyway.
>I would be surprised if Apple doesn't use it as a means to cement more 3D software vendors into macOS land. It's really hard to render consistently if the underlying drivers are all wonky and proprietary.
There's an easy fix Apple can suggest, though: just use the official macOS engine.
Why would they even opt for (1)? To burden themselves with supporting different 3D engines? They already support and maintain their own.
I don’t think that’s the shtick behind OpenUSD. It’s not a transmission format like glTF, so the intent is not to get consistent rendering but rather to standardize intermediate graphics representations so that software that works on 3D scenes (unreal, Maya etc) can represent all of their workflow in USD and get consistent interop.
They even did a big demo shot showing the same frame being rendered in multiple different editors all creating the same consistent result and matching. All of it was said to be due to OpenUSD standardizing how a scene is defined, animated, and rendered.
Just because they used them doesn't mean they like using them, just that they don't have the sway to move people to something of their own.
FireWire was developed by Apple and some other companies, as a competitor to USB, and lost out to USB.
Apple had their own video container and codec formats in quicktime, and those also lost out to others.
They definitely prefer to roll their own, they just don't always succeed in gaining enough market adoption (in the past), or they're told to stop pushing it to the detriment of their users (as recently with USB-C).
> FireWire was developed by Apple and some other companies, as a competitor to USB, and lost out to USB.
Apple was part of the patent pool for FireWire and is also part of the patent pool for USB C and was early to be onboard with Thunderbolt along with Intel.
Apple went all in on USB with the iMac in 1998, well before PCs were completely onboard.
> Apple had their own video container and codec formats in quicktime, and those also lost out to others
Apple’s QuickTime container is part of the standard
> Apple went all in on USB with the iMac in 1998, well before PCs were completely onboard.
"PCs" were using either parallel or serial ports, in addition to the PS/2 ports for mice and keyboards. None of them were proprietary or if they were, they were widely used so basically standard. USB ports were added easily as expansion cards on those PCs (TBH I don't recall if it was the case already in 1997, don't remember owning any USB peripheral back then)
My point is that PCs already had perfectly fine standards for cheap peripheral communication, so there was less pressure to upgrade to USB. I remember the "PC2000" slogan that aimed at having USB-only PCs by 2000; it probably took 3-4 extra years.
Apple is listed first as the designer, then second the IEEE1394 working group. Indeed, there's some indication that Apple's development started in the 80's and it wasn't until later it was presented as a standard.[1] Funnily enough, they wanted it to replace SCSI, another technology you noted as a counter to Apple not liking standards they don't control.
> is also part of the patent pool for USB C
Being part of a patent pool doesn't really mean anything to me, given how companies use patents strategically and trade them. Do you have details on what patents may be shared? (I ask because I looked and it wasn't obvious from some light googling on my part).
> and was early to be onboard with Thunderbolt along with Intel.
They weren't early to onboard, they developed it with Intel (even if Intel held most of the patents and may have done the lion's share of the work, I'm not sure on that point).[2]
> Apple went all in on USB with the iMac in 1997 well before PCs were completely onboard.
Being able to control the hardware completely allows them to make shifts like that, because there was no one "PC" to be completely onboard. That said, they made moves away from it where they could, for protocols they had some level of control and/or steering of (FireWire, Thunderbolt, etc).
> Apple’s QuickTime container is part of the standard
Apple's QTFF was donated to be the container for MP4, but for a decade or more prior to that it was proprietary (though it may have been open to implementation by third parties, I'm not sure). The main problem was that Apple licensed and defaulted to using Sorenson video codecs in their QuickTime framework and shipped it along with their video players, locking down the playing of the format to people willing to purchase the Sorenson codec individually or to those that used their player.
I admit this one is less about using a standard of their own and more just an early example of the platform control and lock-in they're known for now.
> And Apple is in the patent pool for H.264
Again, being in a patent pool for a large company doesn't by itself signal anything to me, given how strategically large orgs use patents. I would need some more info to view this one way or another.
What exactly is your complaint? That Apple only uses standards that it contributes to? What other computer maker was going to move technology forward?
Should Apple have used the PS/2 connector instead?
> That said, they made moves away from it where they could, for protocols they had some level of control and/or steering of (FireWire, Thunderbolt, etc).
What were they going to use instead of FireWire? USB 1 was painfully slow. Again what other “standard” should they have used?
There was never a Mac that didn’t have USB after the iMac.
You can go back even further: NuBus was licensed from Texas Instruments (used in the Mac II in 1987) and they moved to PCI with the second-generation Power Macs in 1996 (?)
My complaint? I'm just calling into question the counterpoint examples you supplied when someone stated that Apple only likes standards they control. Whether it's warranted, or they have good reason in some cases, is somewhat beside the point: they have a long history of developing their own standards, sometimes because they are addressing a problem that isn't solved by another technology, and sometimes just because they would rather have something they control, whether or not the market segmentation and user confusion it causes is best for the customer.
> Apple had their own video container and codec formats in quicktime
You know the MP4 standard is based on Quicktime, right?
> MPEG-4 Part 14 is an instance of the more general ISO/IEC 14496-12:2004 which is directly based upon the QuickTime File Format which was published in 2001.
> FireWire was developed by Apple and some other companies, as a competitor to USB, and lost out to USB.
Firewire existed for years before USB: it was designed in the late 80's, roughly 10 years before USB. Development was mostly Apple and Sony, but numerous others were involved in the IEEE-hosted process.
As USB became more capable (FireWire vs USB 1 was no contest), USB gradually began to replace it. But ultimately, Thunderbolt was FireWire's real replacement.
Apple deliberately broke DSC 1.4 to support the Pro Display XDR. Thousands of people happily using 4K HDR10 high refresh monitors under Catalina all of a sudden couldn’t under Big Sur with the release of the Pro Display, and people wondering how they were managing the resolution.
And demonstrably so - “downgrading” to DSC 1.2 actually improves those other users refresh rates and HDR support.
This year, definitely, because of the EU mandate. It will probably still be nerfed like the low-end iPad that has USB-C but still transfers at USB 2 speeds.
I honestly don’t get why people are so furious about this. I asked my two siblings, two friends from my uni days and two friends from work and none of them have used the Lightning (soon to be USB-C) port for anything but charging and music.
None of them even remember connecting it to a computer past the iPhone 5S.
It's pretty easy to move files on and off the iPhone using a USB portable SSD and the Files app; the hardest part is that you need a USB-to-Lightning OTG cable, which is somewhat uncommon.
It’s the kind of thing you don’t really do if you follow the apple flow though. You’d either stream the video from whatever service, or you sync it with apple photos and it will be available on your phone.
When I plugged in the iPhone via cable to my Windows PC, I could only extract pictures which were taken recently, god knows why.
Apple officially recommends to install iCloud on the PC and download the files from there, but they didn't let me disable the upload of the files on my PC, so I uninstalled iCloud again.
Then the recommendation was to just download it from iCloud Web. Which I did. But for some reason iCloud downloads default to a lower resolution (720p video in my case) instead of the full resolution. To get the full resolution I had to click on a small button, which then gave me the option to download my own files in full res.
Of course I only noticed that I'd downloaded a lower res after editing a video for 5 hours. All in all, an extremely subpar experience. Every Android phone ever can just transfer files over cable to any PC, for some reason just iPhones have to be complicated...
As long as it has DP alt mode for HDMI, I'm happy. Unlike Lightning, which doesn't do that, so they package a full H.264 decoder into that HDMI adapter.
The USB 2 thing is probably like the Raspberry Pi 4: the SoC only supports 2.0. Older iPads and the Pi4 have a full USB 3.0 controller external to the SoC. Apple likes to re-use the previous year's SoC and no point doing USB 3.0 before. I could see the pro models doing 3.0 since it'll probably be a new SoC
> It will probably still be nerfed like the low end iPad that has USB C. But still transfers at USB 2 speeds
Other than one exception, the A-series SoCs have not shipped with USB 3.x or USB4 support. The 10th gen iPad uses an A-series chip, so it is pretty close to being a "lightning to USB dongle" inside the case.
So it isn't a software or manufacturing nerf - the part does not support USB 3.
Fine for me, I never transferred anything over a wire between my phone and another device. It’s way more annoying to have to deal with a different charger.
Apple's strategy for cross-platform GPU is WebGPU, which they are actively involved in. OpenGL has been obsolete for a while and Apple has no interest in supporting Vulkan (for multiple reasons). The core philosophy of Vulkan — which is designed as a least common denominator abstraction to facilitate fast driver development, with no regards to end developer convenience — is at odds with what Apple wanted (a compact, easy to use API with progressive manual control knobs).
Current Metal is essentially a more streamlined and user-friendly DX12 Ultimate, plus a mix of some console-like functionality and Apple-specific features, plus Nvidia OptiX, plus a subset of CUDA features (such as cooperative matrix multiplication). Plus they have a very nice shading language and some neat tools to generate and specialise shaders. I expect them to continue gaining feature parity with CUDA going forward (things like shared virtual memory might need new hardware). They simply couldn't offer such a comprehensive feature set in a reasonable fashion if they went with Vulkan, even if we leave the issue of usability aside for a moment (and we really shouldn't, as Vulkan is horrible to use).
I was unaware that Apple was helping implement WebGPU! I actually love WebGPU, it looks great and pairs very nicely with three.js which is a favourite hobby tool of mine to use on pet projects.
I can tell you have strong opinions on Vulkan. I don't disagree with your general view that it's hard to work with development wise as it's very tied down to driver and hardware implementation specifics.
What I can say, though, is that I've met several pipeline rendering engineers (think folks who invent render engines for film and write low-level game engine code) who seem to love Vulkan. They appreciate being able to really get down to the bare metal of the drivers and eke out the performance and conformity they need for the rest of the game or render engine.
A lot of the frustration with OpenGL/DirectX from these specialists was their inability to "get in there" and force the GPU to do what they really wanted. Vulkan apparently gives them a lot more control. As a result, they are able to accomplish things that were previously impossible.
All that being said, I think WebGPU will be far more popular for 99% of developers. Only a very few folks like getting down into the nitty-gritty of libraries like Vulkan. At the same time, there is huge money to be made knowing how to eke another 10 FPS out of a game or properly render a complex scene for a film group like Pixar that wants to save days on a scene render.
> I was unaware that Apple was helping implement WebGPU!
WebGPU is pretty much a combined Google/Apple effort (of course, with other contributors). If I remember correctly it was Apple engineers who proposed the name "WebGPU" in the first place.
> I can tell you have strong opinions on Vulkan
I really do, and I know that my rhetoric can appear somewhat volatile. It's just that I find this entire situation very frustrating. I was deeply invested in the OpenGL community back in the day, and decades of watching the committees fail at stuff made me a bit bitter when it comes to this topic. We had a great proposal to revitalise open platform graphics back in 2007(!!!) with OpenGL Longs Peak, but the Khronos Group successfully botched it (we will probably never know why, but my suspicion, having conversed with multiple people involved in the process, is that Nvidia feared losing their competitive advantage if the API were simplified). Then we saw similar things happen to OpenCL (a standard Apple developed and donated to Khronos, btw).
I am not surprised that Apple engineers (who are very passionate about GPUs) don't want anything to do with Khronos anymore after all this.
> What I can say though, is that I've met several pipeline rendering engineers (think folks who invent render engines for film and write low level game engine code) who seem to love Vulkan. They appreciate being able to really get down to the bare metal of the drivers and eke out the performance and conformity they need for the rest of the game or render engine.
But of course they are. OpenGL was a disaster, and it's incredibly frustrating to program a system without having a way to know whether you will be hitting a fast path or a slow path. We bitterly needed a lower-level GPU API. It's just that one can design a low-level API in different ways. Metal gives you basically the same level of control as Vulkan, but you also have the option of uploading a texture with a single function call and having its lifetime managed by the driver, while in Vulkan you need to write three pages of code that creates a dozen objects and manually moves data from one heap to another. I mean, even C gives you malloc().
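For a sense of scale, here's roughly what that single-call path looks like in Swift (an illustrative helper, assuming tightly packed RGBA8 pixel data):

```swift
import Metal

// Metal creates the texture and manages its lifetime; the upload itself
// is one replace() call on shared/managed storage.
func makeTexture(device: MTLDevice, pixels: [UInt8],
                 width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm, width: width, height: height,
        mipmapped: false)
    guard let texture = device.makeTexture(descriptor: descriptor) else {
        return nil
    }
    pixels.withUnsafeBytes { buffer in
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: buffer.baseAddress!,
                        bytesPerRow: width * 4)
    }
    return texture
}
```

The Vulkan equivalent involves a staging buffer, an image, explicit memory allocation and binding, a layout transition, and a queue submission before the data is even usable.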
Vulkan gives me the impression that it was designed by a group of elite game engine hackers as an exercise in abstracting as much hardware as possible. Take, for example, the new VK_EXT_descriptor_buffer extension. This allows you to put resource descriptors into regular memory buffers, which makes the binding system much more flexible. But the size of descriptors can be different on different platforms, which means you have to do dynamic size and offset calculation to populate these buffers. This really discourages one from using more complex buffer layouts. They could have fixed the descriptor size to, say, 16 bytes, and massively simplified the entire thing while still supporting 99% of hardware out there. Yes, it would waste some space (like a few MB for a buffer with one million resource attachment points), and it won't be able to support some mobile GPUs where a data pointer seems to require 64 bytes (64 bytes for a pointer!!! really? You make an API extremely complicated just because of some niche Qualcomm GPU?) And the best part: most hardware out there does not support standalone descriptors at all; these descriptors are just integer indices into some hidden resource table that is managed by the driver anyway (AMD is the only exception I am aware of).
In the meantime, structured memory buffers have been the primary way to do resource binding in Metal for years, and all resources are represented as 64-bit pointers. Setting up a complex binding graph is as simple as defining a C struct and setting its fields. Best part: the struct definition is shared between your CPU code and the GPU shader code, with GPU shaders fully supporting pointer arithmetic and all the goodies. Minimal boilerplate, maximal functionality; you can focus on developing the actual functionality of your application instead of playing cumbersome and error-prone data ping-pong. Why Vulkan couldn't pursue a similar approach is beyond me (ah right, I remember, because of Qualcomm GPUs that absolutely need their 64-byte pointers).
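A rough Swift-side sketch of that style using Metal 3 bindless resource handles (the SceneBindings struct and all names here are made up for illustration; in a real project the struct would live in a header shared with the MSL source so both sides agree on layout):

```swift
import Metal
import simd

// Made-up binding struct: the shader reads the same bytes as a
// `device SceneBindings *` and follows the embedded resource handles.
struct SceneBindings {
    var baseColor: MTLResourceID   // from texture.gpuResourceID
    var vertexData: UInt64         // from buffer.gpuAddress
    var modelMatrix: simd_float4x4
}

func encodeBindings(device: MTLDevice, texture: MTLTexture,
                    vertices: MTLBuffer,
                    matrix: simd_float4x4) -> MTLBuffer? {
    var bindings = SceneBindings(baseColor: texture.gpuResourceID,
                                 vertexData: vertices.gpuAddress,
                                 modelMatrix: matrix)
    // The whole "binding graph" is just bytes in an ordinary buffer.
    return device.makeBuffer(bytes: &bindings,
                             length: MemoryLayout<SceneBindings>.stride,
                             options: .storageModeShared)
}
```

At encode time you still mark the texture and vertex buffer resident with useResource(_:usage:), but the binding model itself stays plain structs and pointers rather than descriptor-set machinery.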
The thing is, this all works for a middleware developer, because these are usually very skilled people who already have to deal with a lot of abstractions, so throwing some API weirdness in the mix can be ok. But it essentially removes access from the end developer (who is passionate but probably less skilled in low-level C), making large middlewares the only way to access the GPU for most. This is just a breeding ground for mediocrity.
> In my experience, Apple doesn't like any standards it doesn't control with an iron fist
what about MacOS being a Unix?
I'd suggest a deeper diagnosis is that Apple doesn't like standards incapable of showing off or leveraging custom hardware prowess, which is a key competitive advantage.
USD is a file format. It doesn't have anything to do with the underlying graphics APIs. In fact, tailoring your file format to the underlying graphics APIs is pretty dumb. (like glTF specifying OpenGL constants like 9729, 9987, etc.)
I didn't mean to say they should tailor the file format to the graphics API. I more meant, when a scene format becomes popular, usually you see multiple engines/pipelines/libraries support the underlying scene descriptors like material files, physically based rendering profiles, animation keying, etc.
I wouldn't want the file format dictated by the graphics API, but I would like consistent rendering output in multiple places for the same file. That'd be cool.
In case you're wondering where this "conformance" idea came from, check nVidia's 2023 SIGGRAPH talk. Jensen will buzzword OpenUSD and conformity across products until your ears bleed.
Consistent rendering is only possible when the material models used by the various engines are similar enough. Physically-based renderers produce pretty similar results for basic diffuse-metallic-clearcoat, etc. materials. Where things become hairy are more advanced effects like refraction, subsurface scattering, ambient occlusion etc., where different engines use different techniques with different tradeoffs, because there's no easy one-size-fits-all implementation. The UsdPreviewSurface part of the USD spec doesn't even support many of these advanced effects. If your scene uses these effects a lot, then consistent rendering is less likely.