> In short, my brain has crossed a Rubicon and now feels like experiences constrained to small, rectangular screens are lesser experiences.
I left Apple 1.5 years ago but was working on the Vision Pro while I was there. I have spent many hours working in the headset and I know exactly the feeling he's describing! Leaving felt like going back in time to using clunky technology and I've been waiting for the outside world to catch up (and will still be waiting until it comes out at least).
For the past week I've been trying to explain to people how certain I am that doing work in headsets will become mainstream but am (understandably) met with doubt, and I think you'd need to try it on to fully understand.
I don’t want to be nosey but given that it sounds like you feel like it’s a paradigm shifting device, and you’ve actually worked on and with the device, I’d be interested to read about your motivations for moving on from Apple/the project when you did if you ever feel like writing about it.
So many questions I’d love to ask you about this, but again, none of my business. Just wanted to let you know I’d upvote that blog post to the moon should you ever decide to write it.
Thanks! My motivations were mostly comp based (I'm in HFT now), I loved the product and the team I was on!
Feel free to ask any questions here and I'll answer anything I can (e.g. no unreleased info, but maybe go into more detail about the kind of stuff that was shown to devs at WWDC). No blog right now but maybe in the future :)
A major limitation for using VR devices as "virtual monitors" has been the screen resolution. Vision Pro appears to have significantly higher resolution than anything available on the market right now; in your experience, was the resolution high enough for the pixels to disappear and not be a distraction, especially when e.g. reading text? Have you used other VR headsets to compare it to?
I've used other VR headsets and it's significantly better than everything I've tried. You'll be able to put up virtual monitors at the same density as real life and read stuff fine, I couldn't really discern pixels at all. Reading physical monitors through passthrough is a bit harder (I increased text size a few points when I needed to do this) but isn't really something you'd do for actual work.
Lol this comment made a fun image pop into my head. Being in a work meeting with my Vision Pro on looking at the presentation through passthrough while browsing HN on my virtual screen. :)
The idea of looking at real screens in passthrough didn't even occur to me.
Lol, I expect people will figure out tells for whether you're slacking off irl in a headset just like being distracted in a zoom meeting. Reprojected eyes jumping around too much? Holding your hands below the table to pinch out of sight?
Yep, I'll be sound asleep and the visor will show my eyes focused intently. It'll blink at an appropriate rate and maybe raise eyebrows during interesting conversation.
Just purely from a PPD perspective a 5K Retina display will beat the Vision Pro, but from a readability standpoint I think they're equal. Like at "normal" text editor scales you can read everything just fine but a 5K display might be crisper.
Sorry, I haven't done any color critical work and am unsure of the specs.
If you're wondering about passthrough it's pretty good for most things but is definitely missing the dynamic range of the human eye, which no video camera or display can really match yet. Like a super bright light might just show up as white and you can look directly at it no problem. Basically the same as what you get when you record a video and watch it back on your phone.
It's more comfortable than other VR headsets I've tried, but still something strapped to your head that's significantly heavier than a pair of glasses. I think I could spend all day in it with no issues but I'm not sure how sick of it I'd get doing that 5x a week, every week.
I'm sure as soon as it's released people will be doing that and reporting back though!
Have you tried the XReal glasses? Virtual monitors are my biggest use case too and I'm close to taking the plunge on XReal, but would love to hear from people who have tried them.
Resolution aside most headsets kinda feel like wearing a scuba mask because of a narrow field of view. How was the vision pro? I assume this should be public info since it was shown to the press and devs.
To be honest I don't know the actual FOV number. It feels better than some VR headsets I've tried and on par with others. The lenses are definitely a more exotic shape than the ones on my Vive so they're able to get closer to your eyes and have better quality in all areas of the FOV. I feel like for work stuff and entertainment it's definitely good enough, though you might struggle living in it full time haha.
1. You can drink stuff but have to be careful. Hand-eye coordination gets a bit wonky the closer you are to your face. I've done it and it works though!
2. Never tried sleeping with it... I don't see why it would be any worse than other headsets though.
3. I've never used it outside, but that was for secrecy and not technical reasons.
4. Honestly not sure, maybe an hour without taking it off at all but I've definitely been in it for the majority of a few hour spans many times. At the time the main blocker was the beta OS and not comfort or battery (I would keep the battery pack plugged into the charger most of the time).
5. Nope! We were all super careful with them because prototypes are expensive, much more so than the consumer product. It's not something you could just casually drop while using like your phone though.
4. So the battery can be charged while plugged into the headset. What happens when you pull the battery from the headset? I am guessing insta black. Does it have some power-saving mode where only the R1 is feeding images from cameras to displays without any computing possibilities?
> (I would keep the battery pack plugged into the charger most of the time).
This sounds like the battery can be charging while using the headset, right? Which imo makes the 2 hour battery life much more understandable – if you're stationary in the device most of the time then you only have to rely on the battery when you move. If you can plug in to charge when you're back at your desk/couch, it's not really a limiting factor (for the use cases Apple is pursuing).
That can't be right. The human FoV is huge, more than 180 degrees. To cover that range without visible pixels requires much more than "4k" type resolution.
Either one of those statements can be right. Not both at the same time.
It's time someone with real measuring equipment looks at one of these and gives a more technical review than just "wow Apple magic".
> That can't be right. The human FoV is huge, more than 180 degrees. To cover that range without visible pixels requires much more than "4k" type resolution.
I'm not so sure. Human vision is only sharp in a coin-sized area at any time. If you fix your eyes on a single word in your comment you can't actually read the entire comment, for example.
In other words, you don't need 4K in your entire FOV; you just need to ensure that most of the pixels are spent in the middle of the viewing area.
I believe that it would be possible to have the screens in the headset generate a very distorted image, where the edges are compressed to a small area of the actual screen and therefore low-res, while the lenses stretch this image to fill the viewing FOV. Kind of like anamorphic movie lenses, or even how wideangle lenses distort the edges more than the center.
I have no idea if Vision Pro does this but it seems theoretically possible at least.
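To put made-up numbers on that intuition (none of these are Vision Pro specs, just illustrative assumptions), here's a back-of-the-envelope sketch comparing a uniform pixel budget across the field of view against one that tapers off toward the edges:

```swift
// Illustrative assumptions only, not actual Vision Pro figures.
let fov = 100.0        // field of view per eye, in degrees
let peakPPD = 60.0     // "retina-ish" pixels per degree at the gaze center
let edgePPD = 15.0     // much lower density tolerated in the periphery

// Uniform density across the whole field:
let uniformPixels = (fov * peakPPD) * (fov * peakPPD)   // ≈ 36M pixels per eye

// Tapered: density falls off linearly with angular distance from the center.
func density(at eccentricity: Double) -> Double {
    let t = min(eccentricity / (fov / 2), 1.0)
    return peakPPD + (edgePPD - peakPPD) * t
}

// Numerically sum pixels over the field in 0.5°-square patches.
var taperedPixels = 0.0
let step = 0.5
for x in stride(from: -fov / 2, to: fov / 2, by: step) {
    for y in stride(from: -fov / 2, to: fov / 2, by: step) {
        let d = density(at: (x * x + y * y).squareRoot())
        taperedPixels += d * d * step * step
    }
}

print("uniform: \(Int(uniformPixels)) px, tapered: \(Int(taperedPixels)) px")
```

With these assumptions the tapered budget comes out a few times smaller than the uniform one, which is the whole argument for spending pixels (optically or in rendering) where the eye actually points.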
Each eye is not getting "4k resolution"; they are claimed to each be capable of rendering a 4k screen at natural resolution.
meaning a couple 4k screens can be rendered in the visible area without a noticeable difference in quality to real screens at a similar "apparent" distance
Not sure why people are still repeating this erroneous line of thinking. Humans can’t look in two different places at once like a chameleon. The eyes focus on the same point and the stereo images overlap almost entirely - the difference in point-of-view between your two eyes is tiny, they are next to each other! (Try alternatively closing one eye). If you’re emulating a pixel from a screen in VR you’re going to have to draw that same pixel in both eyes. You do not get 2x the pixels.
The exception to this is at the edges of your vision, where each eye does see a unique portion of the field, but by definition that’s not where you’re looking.
> meaning a couple 4k screens can be rendered in the visible area without a noticeable difference in quality to real screens at a similar "apparent" distance
This is impossible at the resolution stated. The headset would have to be 8K or more per eye in order to achieve this, which it most definitely is not.
With Apple's marketing power, I really would expect them to claim a higher resolution number if it was that high. After all, why not? If it's actually 5K they would shout it from the rooftops. All they've said is "more pixels than a 4K screen", also we know this type of lens distortion causes some waste.
We also know 23 million pixels for the whole system. So, 11.5 million for one display. A 4K display is about 8.3 million. So it's not a whole lot more than 4K, and lower than their own 5K display (which has about 14.7 million), which matches what they claim ("more than a 4K display"). That's not enough to cover the full field of human vision and still have pixels so small they can't be seen.
With corner waste it's just barely enough for one 4K display this way (and stretched to the full limits of vision it will be pretty unwatchable so close). You can't display two as you mention, because AR/VR projection works by using the 2 screens to display the same content just from a slightly different position (parallax) causing the 3D effect. The more overlap between the eyes the better and more comfortable the 3D effect (some headsets try to get an ultrawide FoV this way but shoot themselves in the foot with low overlap).
If it has a really wide FoV its sharpness will be pretty much on-par with a quest 3/Pico 4, if it's got the same FoV it will be a lot sharper. I expect the truth to be somewhere in between. Wider than a Quest and also sharper, but pixels visible if you look well and not quite full human FoV.
What I expect Apple will have done is sacrifice a bit of vertical FoV for horizontal FoV. Vertical FoV is important in VR (especially 'roomscale') because of orientation issues, and not quite as important in AR. Also most of their marketing material promotes a seated position so moving around is not an issue. Most VR headsets have an almost-square resolution per eye, but I expect this one to be closer to 16:10 (maybe not quite that wide). I think it will come out at around 4300x2600 pixels per eye which is slightly over 11 million pixels.
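Spelling out the arithmetic behind the last few paragraphs (the 23 million figure is Apple's published total; the 4300x2600 panel is only my guess above, not a spec):

```swift
let totalPixels  = 23_000_000.0          // Apple's "23 million pixels", both eyes combined
let perEye       = totalPixels / 2        // ≈ 11.5M per display
let uhd4K        = 3840.0 * 2160.0        // ≈ 8.3M
let apple5K      = 5120.0 * 2880.0        // ≈ 14.7M
let guessedPanel = 4300.0 * 2600.0        // ≈ 11.2M, my 16:10-ish guess, not a spec

print(perEye / uhd4K)     // ≈ 1.39, a bit more than one 4K panel per eye
print(perEye / apple5K)   // ≈ 0.78, noticeably fewer pixels than a 5K panel
```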
Still very impressive (which it must be at that price point obviously!). But real-world and not magic. Makes sense because Apple's engineers, as good as they are, are bound by the same laws of physics we all are. Their marketeers would love us to believe otherwise though.
It's really time for some real-world specs on this thing instead of marketing blah. But I think Apple specifically doesn't want this, it's no wonder they let only the most loyal media outlets (like Gruber) get as much as a short hands-on with this thing. And even those experiences are white-gloved in detail (they even prepared custom adaptive lenses). They just want to keep the marketing buzz on as long as possible.
Norm from Tested, a highly credible source for VR, actually mentions the FOV is akin to Valve Index's: https://youtu.be/f0HBzePUmZ0?t=825
This is actually a bit disappointing since I would rather not have to move my entire head to look at a virtual side monitor. It seems like the technology is there now and many companies are looking into it, hopefully other headsets will be released with higher FOVs: https://youtu.be/y054OEP3qck?t=283
Another noteworthy point from that video: Apple has bought Limbak, an optics specialist that was formerly tied with Lynx R1's optics. This means Lynx can no longer use Limbak's future optics that, admittedly, fall short in scaling to higher resolutions. Now, Lynx has shifted its gaze to Hypervision optics, intent on preventing a similar acquisition by another tech behemoth like Apple.
I just read Gruber's review. When he says "field of view" he's not talking about total degrees of arc, he's talking about what one might think of as "zoom," in this instance. He says things don't appear larger or smaller when you remove the headset.
The actual quote from Gruber was "There is no border in the field of vision — your field of view through Vision Pro exactly matches what you see through your eyes without it."
I’m still curious how things will look in low light. All the demos were in an optimally lit room, but what about when you dim the lights and the camera has trouble picking up the room?
> Vision Pro and VisionOS feel like they’ve been pulled forward in time from the future. I haven’t had that feeling about a new product since the original iPhone in 2007.
LLMs are not so impressive when you understand how they work approximately. This new Apple thingy is very impressive even though it’s much easier to understand. Apparently even if you worked on it from the beginning.
What’s the best way to develop for Vision Pro until the headset is available to buy? Use ARKit/RealityKit/Unity on an iPhone to develop some of the basics and hope the code translates well enough?
Yep, the simulator (will come with the SDK when it's released) plus an iPhone/iPad is the best you're gonna get. Most code will definitely translate! Would recommend ARKit+RealityKit over Unity for the best OS integration unless you've already started with Unity or don't need OS integration (i.e. a videogame).
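For a sense of how little code it takes to get started on an iPhone in the meantime, here's a minimal RealityKit/ARKit sketch (plain iOS APIs; nothing here is specific to the Vision Pro SDK):

```swift
import UIKit
import RealityKit

// Minimal iOS AR view: anchor a small box to the first horizontal plane ARKit finds.
class MinimalARViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: view.bounds)           // ARView runs its own AR session by default
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)

        let anchor = AnchorEntity(plane: .horizontal)     // tracks a detected horizontal surface
        let box = ModelEntity(
            mesh: .generateBox(size: 0.1),
            materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)]
        )
        box.position.y = 0.05                             // rest the box on top of the plane
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
    }
}
```

The entity/anchor/material concepts carry over directly; it's mostly the windowing and input layers that change on the headset.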
Do you feel there are any large leaps left for this type of device or will it be incremental improvements to price, speed, battery life, etc from here on out?
Not exactly sure at all, I wouldn't have been able to predict this large of a leap before finding out about it myself.
I do think a bunch of incremental improvements will eventually allow different use cases though. E.g. you'd never take it on a run or a bike ride in its current state but with enough weight and battery improvements you could have a HUD for exercise stats + navigation when biking with stuff like a videogame-esque ghost of your PR to race against. Just a random idea, there's a lot of stuff you can dream up.
I do think we'll see a fitness related product in the future but holy crap the early adopters trying it out should be super careful! It might be easy to forget that your eyes are completely covered with a screen.
Imagine riding a bike at 30kmph and accidentally unplugging the headset. Boom. Black. Instantly. That's a scary place to find yourself.
I'm curious if Apple will utilize the gyroscope to try and detect movement faster than a certain speed and show a warning. Take the risk if you want, but at least people should be aware of the consequences!
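Pure speculation, but a third-party app could approximate that today with standard iOS APIs, using GPS speed rather than the gyroscope since that's the simpler signal (no idea what Apple will actually do or expose on the headset):

```swift
import CoreLocation

// Hypothetical sketch: warn the wearer if they appear to be moving at bike/vehicle speed.
final class MotionWarningMonitor: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let warningSpeed: CLLocationSpeed = 4.0   // ~14 km/h, in metres per second (arbitrary)

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let speed = locations.last?.speed, speed > warningSpeed else { return }
        // A real app would surface a prominent, dismissible warning in the UI instead.
        print("Moving at \(speed) m/s, are you sure you want the headset on?")
    }
}
```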
If you wear this thing in traffic you have a deathwish, and if you hit someone else while wearing it in traffic I hope you'll be sued into poverty to the point where you never ever will be able to buy another one.
If you wear Vision Pro to watch movies, can the field of view achieve a cinema-like effect? Or can we have a movie theater viewing experience on Vision Pro?
> That shift is fundamental. The interface for Vision Pro felt like it was reading my thoughts rather than responding to my inputs. Its infinite, pixel perfect canvas also felt inherently different. I wasn’t constrained by my physical setup, instead my setup was whatever I thought would be most productive for me.
That DOES seem like a paradigm shift in the offing to me. Sure it might be iPhone 1 expensive and uncertain, but it's easy to imagine how incredible a lighter, more affordable version will be to MANY people within 5 years or so.
> I've been trying to explain to people how certain I am that doing work in headsets will become mainstream but am (understandably) met with doubt
Tech enthusiasts underestimate how much the general public hates having gear strapped to their faces and over their hair, especially more than half of women who put considerable effort into their face makeup and hair.
I think this 1.0 is problematic and it will take several more leaps to get to where we need to be. Something more like swimming goggles (without the suction), as visualized in the Blade Runner sequel on the face of Luv when she's calling in artillery on K in the junkyard.
Every headset I have seen looks really unfashionable. Even the Vision Pro looks kind of bad. It is too bulky for the relaxing atmosphere Apple was going for in its adverts. And too rounded to appear more edgy. It is not committing to anything, kind of like the new MacBook Pro design. This stands in contrast to AirPods (small but bold shape) and the Apple Watch (practical and elegant).
I must be one of the only people in existence who absolutely hates the aesthetic look of AirPods, for lack of a better word, they're abnormally "long" looking when they're hanging from one's ears.
Yeah people weren't exactly thrilled about the look of AirPods when they first released. Everyone thought they looked so unflattering. I do agree that the AVP does look too clunky for the general public, but it's not hard to imagine that this will be improved upon with every future generation.
I think it's one of the things they got right with the Quest Pro. It doesn't touch your face and is carefully designed so that all the support is via the forehead. It causes discomfort for some people but the upside is you put it on and off and it doesn't disturb your face or hair.
This raises an interesting use-case, though: a person who is working remotely or otherwise engaging virtually could rely on the 3D-model version of themselves, generated when they were professionally attired, to avoid having to go through that whole rigamarole every day.
I'm definitely not the general public, but the biggest factor for me has always been screen resolution, not size or weight. Until HMDs can achieve 8K+ per eye and match the fidelity of a sub-$100 4K monitor, I'm not going to be interested.
I'm not talking about "4K fidelity", I'm talking about "the fidelity of viewing a 4K monitor", which is different. When I'm looking at my monitor in front of me, it fills a small fraction of my visual field (around 15–25%?). So for a headset that covers nearly the entire visual field, the resolution would need to be much higher in order to reproduce the pixel density of the monitor.
This is, for most people, pointless, as the rendering costs will be much, much higher (>8x for a quadrupling in resolution per eye), they probably won't care if text is not quite as sharp as a real 4K display, and they might not even be able to tell if their eyes haven't been trained on them.
Unfortunately though, I require a 4K display for sharp text (I get really weird visual effects from low-resolution displays), and my current HMD (HP Reverb G2 at 2160x2160 per eye) is uncomfortably blurry, so I don't imagine the Vision Pro at an estimated ~3400 pixels across per eye, going by their marketing figure (`sqrt(23000000 / 2)`), can do much better.
I know it's revolutionary and this is about at the current limits of HMD technology, I'm not at all claiming that it's not impressive for the industry. It's just part of the reason why I don't use HMDs often is because the tech hasn't yet advanced enough to be comfortable for me.
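To make that concrete with rough numbers (the monitor geometry and headset FOV are my assumptions; only the 23-million-pixel figure comes from Apple's marketing):

```swift
// Rough, assumption-laden pixels-per-degree (PPD) comparison.
let monitorWidthPixels = 3840.0
let monitorHorizontalFOV = 35.0          // assumed: a 4K monitor at a normal desk distance
let monitorPPD = monitorWidthPixels / monitorHorizontalFOV       // ≈ 110 PPD

let headsetHorizontalFOV = 100.0         // assumed headset FOV, not a Vision Pro spec
let pixelsNeededAcross = monitorPPD * headsetHorizontalFOV       // ≈ 11,000 px, i.e. "8K+" territory

let visionProAcross = (23_000_000.0 / 2).squareRoot()            // ≈ 3,400 px if the panel were square
print(monitorPPD, pixelsNeededAcross, visionProAcross)
```

That gap between ~11,000 and ~3,400 pixels across is why I don't expect any current headset to match the sharpness of a desk monitor, even a revolutionary one.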
> Because it seems to be another Apple device separating producers and consumers just like iPad and iPhone.
The persistent framing of i-devices this way on HN, after all these years, is baffling to me. Tons of creation happens on them, or using them as a tool in service of creation, much of which would be harder or impossible with a traditional computer. Just not a lot of software creation.
Because with the exception of very few tasks like digital art the limitations placed on iOS make it worse. Some tasks are just straight up impossible (e.g compiling code), others are just hampered by things like file management and sandboxing or lack of support for hardware/accessories. Sometimes they can be worked around but often it's an exercise in frustration to do something that would be brain-dead on a desktop OS but requires a whole host of taps and trips through the share menu or whatever to get the same result.
Even in digital art where the iPad excels, doing something like having a reference photo open while you work is compromised, and the only good way to do it is to hope the app you're using supports it in a reasonable way or else throw away half your screen real estate.
All I can figure is the difference in perspective is because such a large part of the potential of i-devices (or phones and tablets more broadly) for creation involves using them as a tool to get something done in the real world, and/or for the kinds of creative tasks that get overlooked by the hackerspace crowd (running an Instagram, say, involves lots of creation; tradespeople and home DIYers routinely use phones and tablets to help get things done in situations where a laptop wouldn't be a useful alternative; that kind of thing). Desktop computers are mainly good, outside of industrial uses, for creating more stuff for computers. I-devices shine when most of your task's not on a computer, and the computer's just one of many tools at your disposal. If "creation" is making stuff for computers, they seem bad. If "creation" happens in the real world and the computer's just a tool to help with that, and nothing more, they're pretty appealing. A smart slab of glass packed with sensors that can become different tools. Super handy, and a "real" computer couldn't replace it.
For most creation-oriented tasks I do that aren't computer-centric, I'd much rather have an iPhone or iPad than a laptop. They're great tools for creation & work, just not so much for creating stuff for computers, certain exceptions aside, and for those they're mostly just about as good as a laptop, or even not-very-good but usable in a pinch, not better.
Desktop computers have some advantages that the so-called i-devices lack even for tasks that are not computer-centric.
The main one for me is the longevity of your data. It is connected to the control over your data you have in a desktop OS. Good luck finding what you did five years ago in an app on some device and reproducing it now. You are at the mercy of some proprietary format the app creator chose, the availability of the app at a later stage, whether you saved your work in a cloud service, whether you kept your old device running, the operating system updates you installed, and so on.
With a desktop you can archive your work, whatever it might be, in sequences of bits that reside in your disk. Open source software and virtual machines go a long way for helping reproducibility.
Not all tasks benefit from reproducibility and longevity, but I'd argue those that do include a lot more than computer programming.
This adds a lot of great color to your perspective, but the summary becomes essentially: an iPhone makes a great metronome, but a terrible recording studio replacement. I'm not sure that's saying too much, because the latter is so much more ambitious than the former? That's really the dilemma for those of us disappointed in iOS creation: we want it to be more ambitious, and competitive with the things laptops can do.
iOS shines when you create something for which someone has already created software to create it with.
Desktop shines when you create something novel.
It's as simple as that -- there are tons of things people have created really polished iOS "happy path" software to create on, but they're not general purpose creators.
Specifically because anything general purpose threatens Apple's App Store toll gate, and thus cannot be allowed to exist.
GP was talking about desktop, which Android is not, but to give a serious answer: I can run a full Linux terminal on Android, run vscode-server (has to be in a VM, but still) and edit and compile my code. All without needing a connection to some remote "cloud".
If Apple would just get over themselves and allow sideloading (or, gosh, a terminal!), it would go a long way to improving iOS usability for creatives.
> Because with the exception of very few tasks like digital art the limitations placed on iOS make it worse. Some tasks are just straight up impossible (e.g compiling code)…
Compiling code isn't impossible; every webpage that contains JavaScript uses the Just In Time compiler to execute JavaScript. The ability for 3rd party apps to compile code is limited due to security reasons—Apple's Lockdown Mode [1] specifically disables the JIT.
iPad users have been able to write code using Swift and submit apps to the App Store since late 2021. [2]
I have no doubt you'll be able to create apps using the Vision Pro in the future… there's probably a version of Swift Playgrounds [3] running on a prototype Apple Vision Pro right now.
The Mac is a "real computer" because it can be used to create new types of software and platforms, such as the Vision Pro.
The iPad is not, because it cannot be used to create the Vision Pro. It may still have advantages where it's better for some use cases, but that doesn't make it a general purpose computing device, in the same way that a TV is better for watching videos than a MacBook but a TV is not a "real computer".
They're not great for making software. That doesn't mean they're not a decent-to-great tool for a whole bunch of productive or creative activities, often in ways that a "real computer" cannot be reasonably used as a substitute—that is, if they vanished tomorrow, in many roles they wouldn't be replaced by a laptop, but by paper, a stack of single-purpose tools, and more human running-about or time-wasting.
I think the real/not-real computer thing's not especially illuminating, but I don't take issue with it the same way I do dismissing phones and tablets as "consumption-only", a perspective I find simply baffling, because I see them used for creation and productive purposes all the time in a ton of contexts by ordinary people.
I think the disconnect is in the type of creation: the higher-end the production quality you want, the rarer iOS devices are. And the corollary is, if you only care about the highest-end of creation, they essentially become invisible.
> I think the disconnect is in the type of creation: the higher-end the production quality you want, the rarer iOS devices are. And the corollary is, if you only care about the highest-end of creation, they essentially become invisible.
For producing stuff for computers or for industrial design (e.g. CAD or architectural drafting), yes. In other areas, tablets and phones are ubiquitous tools, and if they vanished tomorrow they wouldn't be replaced with e.g. laptops—they'd be replaced with paper, with various single-purpose devices, et c.
I suspect the downvotes are because the keynote literally featured someone using their Mac inside the headset. And another video (session? State of the union?) showed developing a Vision Pro app on the Mac and testing it right next to the Mac from inside the Vision Pro.
Right, it sends the message that the developer is better served by using a Mac, or that the Vision Pro is not capable. Maybe that was honest/intentional?
It’s got a full M2 and (reportedly) 16 GB of RAM. In theory it’s capable.
That doesn’t mean Apple will expose that. Maybe just not today. The original iPhone had a full OS capable of multitasking but it wasn’t a feature given to the user due to memory and battery constraints. We’ve now had it for years.
This was the first we’ve seen of it, and I suspect the on device development story was not one they wanted to spend limited time on. Plus knowing Apple it seems likely they’d want to make Xcode work better for the interface than just plopping the Mac app in.
Time will tell. The iPad can do development. Apple has proven it with Swift Playgrounds. But they still haven’t given us Xcode. So who knows.
> Time will tell. The iPad can do development. Apple has proven it with Swift Playgrounds. But they still haven’t given us Xcode. So who knows.
Apple announced just before WWDC the iPadOS versions of Final Cut Pro and Logic Pro, their professional media creation tools. We've all seen shows and listened to music (and podcasts) created with these tools. While most of the core functionality is the same between the Mac and iPadOS versions, the iPad versions take advantage of multitouch and other iPadOS features, as they should.
> Plus knowing Apple it seems likely they’d want to make Xcode work better for the interface than just plopping the Mac app in.
Yes; that's how Apple does things.
As I mentioned elsewhere in this thread, iPad users can already use Swift Playgrounds to code and submit an app to the App Store [1].
It's just a matter of time before there will be a version of Xcode for the iPad.
Final Cut Pro has significant limitations over what can be done on the Mac, which limits their ability to do project sharing between the two platforms.
That's nothing compared to the bash build stages and custom permission setups, third party libraries, bespoke source code management scripts, Ruby-based project initialization and preprocessing scripts I've seen amongst Xcode-based projects.
When those fail, you are troubleshooting through logs and at the filesystem level.
If the idea is a version of Xcode without those things but is otherwise fully featured, I think that is actually speaking about some future version of Swift Playgrounds.
Otherwise, "Xcode for iPad" is basically a jailed Mac VM.
Apple is so determined to prevent any serious code execution that even SSH apps require janky workarounds just to provide terminal emulation, and you still need a second device to connect to just to do any real work.
From the ad, it looked to me like you get an iPad-esque computing device when standalone, but that it tethers and can act as a display for macOS. Just speculative of course, but in the ad they said something like, “connect to your MacBook to extend all your MacBook apps onto the Vision Pro.”
I suspect that, like the iPad, there will be ways to force it to function as a standalone device, but that the happy path will be heavily optimized towards you buying as much Apple hardware as possible.
This seems more than a little likely, as Vision Pro has an M2 chip for processing, and the R1 for vision processing. If your work needs a fully-capable GPU, that's probably a different story, although most graphics work (like photo editing/video editing) should run well on the M2.
Yes, it's the difference between the headset being a bold new computer form factor, or just another peripheral.
Basically, is it a laptop (fully independent), an iPad (excellent secondary device that can do focused work on its own), or a smart monitor (peripheral with some built-in usability)?
This isn't correct. The AVP can run apps side by side, so you can have a window for Xcode open alongside your app. With the iPad you only get one app at a time, which is a bummer. (Even if you had multiple windows open, only one app can have the full screen experience at once.)
Beyond that, even though an iPad can act as another monitor, it's not a big monitor. It doesn't really achieve the ideal of a workstation replacement.
Can it? The ad showed a single virtual window that acted as a mirror for the MacBook's screen. It never showed 2 Mac apps side-by-side, each in their own independent virtual window.
This is my question as well. Is this just going to show one screen with the entire macOS in it? Or will we be able to individually project different macOS apps onto the canvas?
But with the Vision Pro there is zero latency and the possibility of having many big screens, whilst when remote-desktoping into a Mac from an iPad you gain nothing and the high latency makes it worse.
Because it's not remote, if you are streaming from your Mac, it's right there, at 2m at most. It's much different than remote connecting from an iPad 100 km away.
It wouldn't, but why would you use an iPad as your main screen if you are near a Mac? It's just a screen, it doesn't bring anything special like the Vision Pro would.
They demoed this in the State of the Union I think. Vision Pro was on with Xcode + a live build of the app that was being developed side by side. I'm not sure if Xcode was running on device or on a Mac though.
I fear Apple’s effort will be limited to an iPad-like content consumption experience without the platform opening up to a Mac-like degree. And that no other company has the chops to “cross the Rubicon” in such a comprehensive way, dooming AR/VR to a niche of limited usefulness.
This is the main issue for me. At a minimum, the screens and virtual computers need to be completely open. I can’t live 100% in the Apple world for productivity.
> For the past week I've been trying to explain to people how certain I am that doing work in headsets will become mainstream
This gets at my main question about this: does it feel to you like it will be great for "doing work", which generally happens at discrete places, or something which will apply to life generally, which is the kind of (some would say dystopian) ideal of "augmented reality"?
Or put another way, I felt a huge increase in immersion and interest going from the original Oculus's three degrees of freedom to the "room scale" six degrees of freedom in Quest.
I know this is "spatial computing" and you can put monitors in fixed positions in the environment, but is "desk scale" where you can kind of look around and pivot your head, or "room scale" where you can get up and move around the room, or "life scale" where you can leave the room entirely to check on the screens you left in the kitchen before coming back to sit down in front of the ones in your office?
I think it'll be great for "doing work", but I'm not entirely sure if it'll immediately outclass my in-office setup that my employer has already spent thousands on. However, I don't have a wfh setup in my small apartment because I didn't want to dedicate space for that purpose, so when I occasionally need to wfh I just use a laptop on my couch or dining table and I would love it to replace that. I think if an employer were speccing out a new office it would definitely be a viable option to have a small laptop + Vision Pro instead of desktop (or bigger laptop) + multiple external monitors.
I also don't expect it to be super dystopian or replace real life in general, but it could definitely replace the parts where you're already looking at a screen regardless.
It is technically capable of being "life scale" in the way that you describe: it tries to keep a consistent coordinate system across multiple reboots and from room to room and real life objects can occlude virtual objects, but to be honest I have no clue what the average use case will be or how the OS will enforce it.
It's interesting to hear it will replace the parts where you are already looking at a screen. My consumer-oriented view is that it will first replace the moments where we absolutely cannot be looking at a screen but need to see the real world in front of us, perhaps even for safety reasons.
To me it's a bit like tap to pay vs magnetic credit cards. Eventually tap to pay replaces all other ways of paying, much like AR might replace all other screen interactions. But FIRST it should replace something critical. In the payment space this was subway entrances, where people needed to pay quick and get through.
The reason it takes so long for full adoption is because it's very challenging to replace something like the magnetic credit card that “works pretty well”. From my point of view, the iPhone “works pretty well” and it's going to be towards the END of the innovation cycle when other screen usage gets folded into these spatial platforms.
> or "life scale" where you can leave the room entirely to check on the screens you left in the kitchen before coming back to sit down in front of the ones in your office?
According to the videos they show, people move around a lot in a room with these things on and grab things (e.g. their phone, cup of coffee). And that makes sense: if you see basically what you have in front of you, then there are no "boundaries" you have to stay in to avoid bumping into things.
I'm wondering the same thing too. I have tried multiple monitor setups dozens of times now, but always end up coming back to a single display. I can't imagine having a tonne of widgets floating around my work space in 3D is going to do anything positive for my productivity.
I see the productivity use case more useful than the entertainment ones, and without the social stigma of hanging around on your sofa weirding out spouse/kids.
However, most of the previews I read do point to weight/comfort issues that I think play against the productivity use case.
Basically my work compute needs to be ergonomic enough to use for 8-12 hours/day.
I have a hard time believing we can live in this headset for anything north of 2-3 hours at best.
I don't know, but I don't think this will really allow you to watch movies in bed.
I have trouble even using small in-ear headphones in any position other than lying on my back, because just the pillow pressure acting on the headphone piece is so annoying. This headset has a strap around the head that will add unfamiliar pressure from basically every direction you would rest your head on.
There are now definitive studies that sleeping with a blackout mask greatly improves sleep quality. I just got my spouse onboard with wearing one and she says it is night and day difference for her.
I have to wear a cpap so I haven’t found an eyemask that works with my cpap headgear.
For reference I was already pretty used to gaming in VR on inferior headsets before working on the Vision Pro, so I might be more forgiving of issues than your average person.
I'd say that eye strain wasn't really an issue at all for me. When you take off the headset after a while inside of it you kinda get a jarring transition back to normal vision instead of passthrough, but passthrough doesn't have any noticeable issues when you're inside of it. It's almost like waking up from a dream where stuff feels different and you can't exactly place why.
As for physical weight, it's lighter than other VR headsets I'm used to but obviously more than a pair of ski goggles. I'll echo MKBHD's thoughts that the most noticeable strain was on my nose because my lightseal (like the WWDC demos) wasn't as personalized as the version you'd get as a customer, so I fully expect that to be a non-issue.
How is it in terms of heat & condensation build-up? Is there a certain amount of time after which it starts getting uncomfortable?
Also, is passthrough good enough for things like typing, so you don't get an uncanny valley sense of the keys being just slightly off visually compared to your proprioception & touch?
Heat and condensation build up is a complete non-issue due to the built in fans! I've never once felt hot or stuffy inside the Vision Pro, and I totally do on other VR headsets.
I don't really look at my keyboard while typing so that's also a non-issue. For stuff where you really need hand-eye coordination it's totally fine at normal arms-length but gets worse as you get closer to your face. Finding a key on the keyboard with your eyes is not a big deal but if you try to drink from a glass of water you'll likely spill it on yourself the first time (and then eventually get used to it).
To be honest I'm not entirely sure what reprojection is like on the final device or even if it exists at all, so my experience might be off. There may have been things that changed since I left and I'm not sure what the final plans were.
I didn’t necessarily mean reprojection, but whatever ‘trick’ is done to hide that the cameras aren’t in the same physical location the user's eyes are without the headset on.
So you can pair bluetooth peripherals to it... that's good to know, I assumed it but maybe missed that in the initial preso (I was watching while in a meeting). That's one thing I hate about my Index. It's impossible to drink a soda while wearing it without a straw; the geometry just isn't compatible.
I have ADHD and can definitely say that I experience "time blindness" with technology, and being on the internet.
I'm not sure how common this is for neurotypicals with VR headsets. With Vision Pro have you experienced losing track of time? Or does the AR passthrough seem to help with this?
I have experienced what you're describing while gaming on my personal VR headset, but never with Vision Pro.
I'm not sure if that's due to the passthrough or the fact that I was only ever using it to actively do work (and I usually mean work on stuff for the Vision Pro itself, not just "normal" dev work).
I actually would not mind a ceiling mounted design where the headset is counterweighted so it is essentially "floating" and you pull it down to put on your head. I know it will take away from the mobility of the headset, but I am okay with being stuck to a 6ft radius while working. It could also charge through the ceiling.
Or a bump helmet and hook the whole thing to the Night Vision Goggle mount on the front (like the new IVAS design), and put counterweights on the back to prevent it from shifting down. That could take a lot of pressure off your face and make the whole headborne system more balanced.
Counterweights are already a pretty common thing for existing VR headsets. Some people will strap a battery pack to the back strap of their headsets, and there are enough tethered players looking for plain counterweights that there's a small cottage industry around custom-made heavy metal counterweights.
Based on your experience with the device, do you feel like the first version will be good enough to replace a multi-monitor setup for software developers, and allow them instead to fully switch to using the Vision Pro instead? Or is that monitor-less future for developers still a few iterations away?
Depends what you mean by "fully switch". I think it's good enough to replace a few monitors but at the same time there are use cases for monitors that it can't replace, like showing something on your screen to someone next to you.
I'm sure some developers will switch over on the first generation, some will wait until it gets lighter/cheaper/more immersive (disclosure: I have no clue what's coming in the future), and some might never switch if they just plain don't like having something on their head.
What was the upper limit on number of virtual monitors you could stream from a Mac when you were working on it? The demo videos only ever show one which doesn't seem all that useful. Optimally you'd be able to just drag the actual MacOS windows out into space and the cursor would just hop to them when you look at them.
You sound like you're measuring this leap as the Vision Pro vs. not using a headset at all. How much of a leap is it if you're already using e.g. an XREAL Air for display mirroring — and you're planning to mostly use the Vision Pro for that same use-case?
I haven't used the XREAL AIR personally but I've used similar devices and it's pretty night-and-day I think:
- Sensors: It looks like the XREAL AIR doesn't have any sensors so just displays the monitor in a fixed position on screen, which is pretty different than loading up monitors in a fixed point in space and being able to look away at other stuff. Also eye tracking is a pretty killer feature that totally changes the game in my opinion, I wouldn't personally buy a headset without it after having experienced it.
- VR vs AR display: As I'm sure you're aware, with AR sunglasses you can't really have anything darker than the actual light passing through so you're limited to only rendering stuff brighter than the surroundings. Maybe for text it's fine but not for images or video in my opinion. I can't find any footage taken through the XREAL AIR lens (and their website doesn't faithfully show this effect) so idk exactly how big the difference will be for you.
- FOV: I'm not actually sure what the FOV is of the Vision Pro but it's significantly higher than 46°. And because usable area increases with the square, you'd be able to fit a few times more displays and other stuff in.
- Resolution: XREAL AIR looks to be 1080p, Vision Pro is way better than that (per-degree too), so you can have better looking and smaller text.
Maybe for Apple development, but given that it’s positioned as an iOS-type device, the public will never get a native terminal on this thing. It would be unprecedented.
Yea no way. Unless you jailbreak. I would love it, though. The iPhone, iPad and now this thing would be great if we could just opt to jailbreak without risks. I don’t jailbreak my phone because it’s risky and I don’t mind having it stock. The iPad would be great if I could run anything properly on it. The Vision Pro would be phenomenal. I’m thinking of all the game ports that could happen if the OS wasn’t locked down. I see their rationale in doing it, to retain control over the quality of the user-experience. But for those users who want to experiment, it just hampers their experience instead.
I've had a question that I haven't seen anyone else ask that you might be able to shed some light on: is there an internal battery to facilitate switching power sources without an instant loss of continuity?
I'm not very skeptical on the "doing work" side of the question. (Though I do wonder about the price point.) But I am skeptical on the "doing life" side.
Yeah I don't think I'll be using one at dinner or hanging out in-person with friends any time soon, but there are a lot of non-work things I'd happily use it for.
Gaming, TV, casual internet browsing, stationary exercise, socializing with people remotely (e.g. VRChat instead of Zoom/FaceTime), passing time on trains+planes, etc.
Thank you for answering all these little questions, hope I'm not too late :)
1. Could I take a disconnected cable keyboard and "anchor" the virtual keyboard to overlay the real one to get haptic feedback when typing on the virtual keyboard?
2. I'm very excited about "hanging out in VR" but initial impressions of Persona avatars seem to be mixed. Do you think people will eventually get used to Personas after using them for longer or are we not quite there yet?
3. What is the focal distance for the eyes?
4. What are you most excited to use the Vision Pro for again once it comes out? :)
I definitely can't wait to upgrade from my original HTC Vive!
1. I don't really know of any feature like this but can't say for certain. I'd highly recommend investing in a bluetooth keyboard for work in the headset (or keep your cable keyboard plugged into your computer and mirror the screen into the headset).
2. Honestly no clue how the public will react to Apple's avatars, but keep in mind that third party apps are free to create experiences like that and I'm sure you'll see a dozen VRChat clones taking off.
3. Not sure! I never ran into any issues but I also don't need glasses.
4. I personally am a VR gaming enthusiast and despite gaming not being the core use case of the Vision Pro, I'm very excited to see what people do with it. If you've played Half Life Alyx you know how fun gravity gloves are, so imagine having games that have graphics and physics that can seamlessly blend with real life. Use your hands to toss around virtual objects that bounce off walls in the real environment! The cool thing is that this is all super easy to develop for now: all the scene understanding, rendering, hand tracking, physics, etc. is all handled by the device so you can probably whip up a gravity gloves prototype in a few lines of code.
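As a very rough sketch of what that might look like with the APIs shown publicly at WWDC (hedged: this is an illustration written from the public material, not code I shipped, and names like `RealityView`, `InputTargetComponent`, and the targeted gestures are from the developer sessions):

```swift
import SwiftUI
import RealityKit

// Illustrative visionOS-style view: a ball you can look at and pinch-drag around the room.
struct TossableBallView: View {
    var body: some View {
        RealityView { content in
            let ball = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .orange, isMetallic: false)]
            )
            ball.position = [0, 1.2, -0.5]                 // roughly eye height, half a metre out
            ball.generateCollisionShapes(recursive: true)  // needed for gesture hit-testing
            ball.components.set(InputTargetComponent())    // lets the entity receive look-and-pinch input
            content.add(ball)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Follow the pinch; a "gravity gloves" feel would add velocity and physics on release.
                    value.entity.position = value.convert(
                        value.location3D, from: .local, to: value.entity.parent!
                    )
                }
        )
    }
}
```

Everything else (tracking, passthrough compositing, occlusion against the real room) is handled by the OS, which is what makes the "few lines of code" claim plausible.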
Awesome, thanks for answering! And yes, question one was also partly motivated by gaming :) I understand it doesn't ship with controllers, but I was wondering if the cameras/depth sensors/software are good enough to recognize and track for example a ping pong paddle.
Apart from that, I really hope Apple will work with more game engines in the future. I rented an HTC Vive Pro 2 last year just to play through Half Life Alyx and everything about this experience was so, so good (especially Jeff). Would be incredible if it ran on the Vision Pro eventually but I guess Source 2 is probably not on the engine list :)
One last question I actually had if you don't mind. How do you think gestures will evolve once the headset releases? I understand there are a few basic gestures defined already, like clicking, dragging, zooming, but what about cmd + Z or deleting something. I'm sure eventually people will expect the same gestures for those actions across apps but I didn't see any mention on them so far. Any thoughts on how those might emerge?
I was developing stuff to be shown inside the headset that wouldn't have worked in the simulator, so I was frequently using the headset regardless. It was often easier to just leave it on and develop than constantly put it on and take it off to test things out.
As for development of non-VR stuff I personally didn't because I already had a fully-featured desk setup and the OS at the time was buggy enough to not want to deal with it (as every OS is years before release, not a ding on Vision Pro at all). With a good OS (which it will be upon release) and more native apps (like Termius on iPad) I definitely would though!
Do you think it’ll allow Xcode for development? When you say you were wearing it while doing dev work, were you pairing it with a MacBook? From what I understand the OS is more similar to iPadOS (no Xcode, no ability to compile binaries) vs macOS.
I honestly don't know if that will be supported on the final version, but what I was referring to was developing on a Mac through the Vision Pro and deploying to the Vision Pro.
FWIW the iPad supports dumbed down app development through Swift Playgrounds and presumably the Vision Pro will at the very least support that if it supports all iPad apps, but that's pure speculation.
Unfortunately I haven't tried that one! $6,500 is a bit out of my personal price range haha and I didn't get a chance to try all the competitors while I was at Apple.
I'm not saying 3500 is "alright", it's definitely still expensive!
I would say 6500 is too much for a better version of something I already have that doesn't really enable any additional usecases. The Vision Pro would be used pretty differently than my Vive (which is just the occasional gaming session right now). I'd think of it as more competing with iPads, external monitors, and laptops than a Varjo right now.
I feel like I’ve been spamming this everywhere and any time I get the chance, but I really want people to join in and help define this experience with me for developers.
AR VR for iOS and macOS. Millions of glyphs. Instant control. There’s magic here. If this excites you, work with me and help make this a reality! I don’t have all of it in me.
I wish I did. I don’t. I don’t have all the time and energy. But there are people here that if they spent just a little time to work on this, we would be in the future of a 3D code space in days, and not weeks or months.
I owe a new readme for the project. If any of this makes you feel any feelies, get in contact with me, star it, make noise, whatever!
Lotta love to yall. Thanks for letting me vomit words.
I've been wanting to make this kind of thing for a while now. Basically canvas and 3D view for code. This looks super cool, but I was more thinking of a general platform like VSCode, rather than something limited to the macOS ecosystem. What's the best way to get in touch with you?
Totally feel ya there. If I had a different sorta history, I would have like to have started with something like d3 or even Unity… alas, poor me, heh. I’d love to chat! My email’s fairly public, so feel free to ping me at thelugos @pm.me (no space).
Hi, curious about the foveated rendering on this headset. Theoretically, it should offer a 10x+ performance boost, but headsets like Quest Pro and PSVR haven’t achieved much with it. How does this compare?
would you recommend buying 2 and replacing your TVs with it? I'm legit thinking about this, because the home theater setup I'm looking at will cost nearly the same, but two of these + a great audio system would be amazing.
The real question is, can you make two devices sync to each other simultaneously.
The problem is that it puts a hard cap on how many people can enjoy your home theater, so that might be an issue. If you already have a space carved out for a home theater and were thinking of entertaining >2 people then I'm not entirely sure what the value add of the headset would be. In my opinion the benefit comes in getting a home-theater-like experience that you can bring anywhere and not have to dedicate space in your home for.
Third party developers will definitely be capable of adding a sync feature like you're describing but I have no clue if they will.
Shared space interaction was notably missing from the keynote. I'm very curious what the vision for the platform for it will be. Meta has put a lot of focus on it, so there are a bunch of Quest apps and games where people inhabit the same physical space and all the virtual objects are shared. I feel like it's essential for a platform like this to ultimately succeed, and it's curious that Apple appears behind Meta on this front.
Whether it has capabilities here or not, it seems like it would be challenging at a $3500 price point to say "and here's features you get if you buy one for each member of your family".
Within an app, the challenge would be identifying they occupy the same space and mapping into the same coordinate system. Easier for VR mode than AR mode, where your mapping can be pretty arbitrary to the available bounds.
> Within an app, the challenge would be identifying they occupy the same space and mapping into the same coordinate system. Easier for VR mode than AR mode, where your mapping can be pretty arbitrary to the available bounds
Absolutely. It's impressive work on Meta's side that the full ecosystem of headsets natively support this in AR mode already [0]. It will be interesting to see if Apple has this figured out, or if they can ship in time for release. The fact they didn't demo it suggests to me that they are still in catchup mode here. But then, there are a lot of things they didn't demo that came out in the technical talks later.
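For reference, the way today's iOS ARKit handles the shared-coordinate-system problem is collaborative sessions, where devices exchange map data until they converge on one space. Whether visionOS exposes anything equivalent is an open question; this is just a minimal sketch of the existing iPhone/iPad API:

```swift
import ARKit

// Sketch of an iOS ARKit collaborative session: two devices converge on one coordinate space.
final class SharedSpaceSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true      // devices exchange world-map data and anchors
        session.delegate = self
        session.run(config)
    }

    // ARKit periodically emits collaboration data; the app ships it to peers
    // over whatever transport it likes (MultipeerConnectivity is the usual choice).
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        // sendToPeers(data)  // hypothetical transport hook, not part of ARKit
    }

    // Data received from a peer is fed back into the local session.
    func receive(_ data: ARSession.CollaborationData) {
        session.update(with: data)
    }
}
```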
I hope it kickstarts more development in surround sound headphones. I live in a small condo and have a big 4K projector set up. But I can't reasonably do surround sound or turn up the volume. Apple Vision Pro + surround headphones would be awesome, or projector + many pairs of surround headphones for group experiences.
I guess the big question I have is... what's the point of an extremely expensive headset when it looks like the Quest 3, at a seventh of the price, is going to deliver a roughly similar experience, quality-wise?
My gut feeling is the Quest 3 will be more than good enough for most people/use cases, and people will be more than happy to pay a seventh of the price for what is - at the end of the day - basically the same experience.
Vision Pro looks like a better version of HoloLens of yesteryear. Doesn’t really seem all that groundbreaking to me, just more refined.
The main difference is that Vision has a laptop-grade processor (Apple M chip), LiDAR, and another separate processor to manage the LiDAR, a separate depth sensor, and all the other 12 sensors and cameras.
The Quest only has a smartphone processor. The Pro doesn’t even have a depth sensor. While the Q3 will have a depth sensor, it will not have eye tracking or face tracking, in addition to only running yet another smartphone processor and only ~5 sensors and cameras.
For what scenario? Gaming? What about use as a virtual screen? What about social interaction? What about some of the other scenarios that were depicted in the WWDC demo?
A lot of information has come out about the Quest 3 already.
Full color passthrough from cameras and a depth sensor (better than Quest Pro), 120Hz, slightly higher resolution than Quest 2, 40% smaller (whatever that means), Snapdragon XR2 Gen2 (they claim more than twice as fast as Quest 2), pancake lenses, controllers without the tracking rings, $500.
Compared to Apple's, the main downsides are lower resolution display (also not OLED), slower, no eye tracking.
Main advantages are price, support for games/controllers (you can still use hand tracking if you just want to watch movies), and no external battery pack.
I don't think so, not at launch, but maybe down the road.
This product is inherently experimental. They are not going to be operating at the type of scale they do when it comes to iphones, macbooks, ipads, etc ...
So I do imagine that if this thing takes off and they can start putting out hundreds of thousands or millions of units they will gain more savings through scale.
I'd love to know what their initial order for launch is. It will be interesting to see how far the preorder precedes delivery as well.
any opinion on why Apple isn't dogfooding this by allowing their software and non-hardware teams to work remotely? Seems like that would be one way to prove it is legit
They only just announced it and had many details well hidden prior to launch. I think over the next few months, they’ll definitely have Apple staff using it around a controlled office and then later from home.
At this point, I doubt they’d be letting them off campus.
I'm confident I'll buy a version of the headset at some point, but I don't know yet if I'll buy it right at release. I'm not saying that because I have any additional knowledge of future products (I don't), just that, like the rest of you, I want to see the ecosystem get fleshed out and not have to pay the early adopter tax. I think it will be great for work but who knows when a great remote code editor and terminal will show up on the App Store, right?
It's also not novel to me because I've spent a while using it already, so I'm not exactly in a rush to test it out just to see all the features.
> That intuition is developed by following a platform’s development from its early stages. You have to have seen and experienced all the attempts and missteps along the way to know where the next logical step is. Waiting until a platform is mature and then starting to work on it then will let you skip all the messy parts in the middle, but also leave you with only answers to the “what” questions, not so much the “why” questions.
I definitely think there's a lot of truth in this, not just for the Vision Pro but for many (if not most) platforms/frameworks/what have you. This could be an entire blog post in itself.
It doesn’t surprise me that Apple would want to give a demo to someone who has consistently made high quality apps for their platforms, often on day 1 of a new platform.
It’s a developer conference. Yes there are media there, but it’s also to court developers.
Marco Arment was invited to WWDC as he is both a developer and runs a Mac-focused blog. He is frequently very critical of Apple’s tools and actions. I don’t think Apple just invites ’yes men’.
You could further summarize it with one word: excitement.
Excitement is good. It's not a revolutionary take on anything, but it can be interesting to know why a seasoned iOS developer finds this new platform worthwhile to invest in.
From the previews people have had so far, I feel like they decided their target was "could plausibly and comfortably replace a computer day 1 even for someone who had never used a VR device before", then went backwards from the hardware needed for that to figure out their price point.
Apple has an app developer story for iPad with Swift Playgrounds. It would not surprise me if their long-term strategy is to grow that into the default app dev experience and have it be on all platforms, including visionOS.
As for developing from day one, web development should be OK, as things like VS Code in the browser have really opened up options.
Yeah, I was a little afraid of an iPhone or iPad mini level of compute, so I was happy to see that this thing can run untethered with some actual muscle under the hood.
I was kind of waiting for the day where a touchscreen based app would be more productive than a traditional laptop with keyboard.
Yet, after many years all we got is fruit ninja and a concession from apple in the form of attachable keyboard for the iPad.
So I’m equally sceptical that the investment in headset based interface is actually worth it.
I agree the keyboard will remain as the best text input device but I'm pretty confident that eye tracking will be categorically better than a mouse/trackpad. I'll expect that eye tracking solutions for normal monitors will also become more mainstream once people get used to it in the Vision Pro and want to use it elsewhere.
Eye tracking will probably lose the productivity metric, though. When you remove the ability to look at one thing and interact with another, you get farther from the way that humans tend to interact with their immediate physical environment.
I can cut vegetables without looking at them. I can use that dynamic to offset the planning and acting phases of my thought process. Falling short of that efficiency will feel limiting.
I often am looking at another element before interacting with it with my mouse. With eye tracking, a menu popping up over an element the instant I look at it becomes an annoyance.
Yea, I don't think a 1-to-1 translation of mouse movements to eye tracking is the correct answer here. I do think it's probable that eye tracking + {X} can be a 1-to-1 replacement for clicking for 80%+ of clicks.
Maybe X is a button on the keyboard. Maybe X is a gesture.
I can think of some Portal puzzles in particular where timing is important, and you need to hold your aim but wait to click until something happens somewhere else on the screen (so the place you're clicking is not the same as the place you're looking).
I think the same thing applies to e.g. recording network activity in Chrome dev tools. My eyes are on the page to see when the thing I'm interested in finishes loading; my mouse cursor is on the button to stop recording.
It's not a super common pattern, but probably common enough that it would be annoying not to be able to do it.
Yea, I agree with this for gaming. Eye tracking as a gaming interface probably requires rethinking a lot about games. In VR, for example, movement speed is much slower; in normal video games the character moves at constantly superhuman speeds, which is jarring in VR.
I am speaking mostly about the desktop interactions. In your Chrome Dev situation, I would look at the cursor before clicking on the stop recording button. I think I might be able to trust the MBP trackpad to do a primed click without looking at the cursor, but I wouldn't trust a traditional desktop mouse to have stayed steady enough.
Interesting, I just tried to use my pointer with my peripheral vision, and while possible, it was indeed more difficult and harder to focus on. Something I hadn't tried before.
I'm not sure what the SOTA is currently, but the Vision Pro has much better eye tracking than other systems I've used in the past. Idk if the sensors are actually better or if it's just the native integration into the OS and apps, but it feels more seamless than a mouse and I've never "misclicked" on the wrong thing with my eyes. It kinda feels like the mouse cursor instantly jumping to the right place every time, with no travel time or physical movement required.
I've used a 3D mouse for CAD but am not sure where else it would be helpful?
Tobii https://gaming.tobii.com/product/eye-tracker-5/ is standard in consumer eye tracking, they even have integration with some popular video games! There's just no value in it. I tried an older version in a store, and it was basically perfect, tracking my eye's movements exactly, with zero delay.
Nobody wants it for day to day computer interaction. Most people using eye tracking for computer interaction are disabled, because it's a terrible experience.
I primarily use i3wm with 4-40 terminal windows + Firefox open. I'd LOVE it if the terminal that I'm looking at was the `active` one. I can't count the number of times I'd look up some error code, type in more commands, hit enter; only to discover that I had been typing away in the wrong window.
This will be no problem at all in the SDK. Hell, I saw someone do this at some point on the Mac using the MacBook camera, popping windows to the front based on eye dwell... I think they started with cursor control, essentially an eye-mouse.
As we use an iPhone so close to our face, I would love to see an eye tracking system work there too. But maybe it really needs the incredibly close proximity, two cameras per eye, and IR illumination of the Vision Pro to work.
I definitely want to see a monitor with Vision Pro-style eye tracking and hand tracking. I'd love for eyesight and subtle gestures to fully replace pointer devices.
The pinch gesture on Vision Pro is pretty smooth and easy, about as much physical exertion as a mouse click. Right click is harder, I'd expect most app developers to do something similar to iOS and treat long clicks as right clicks but that's pure speculation.
I'd love to have an eye tracking setup though on a normal desktop computer where I could devote a keyboard key to clicking and I'd never have my hands leave the keyboard.
There's also apparently other gestures and voice commands that can be selected in the settings, though I haven't seen any real detail on it other than "they're there".
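Sketching out the keyboard-key-as-click idea from above: on macOS you could do the clicking half today with a global key monitor and a synthesized mouse event; the gaze half is the part that needs hardware, so GazeSource below is a hypothetical stand-in for whatever eye-tracker SDK you'd use, not a real API. A minimal sketch, assuming the tracker can hand you a screen-space gaze point:

    import AppKit

    // Hypothetical gaze provider: returns the current gaze point in global
    // display coordinates (top-left origin, as CGEvent expects).
    protocol GazeSource {
        var currentGazePoint: CGPoint { get }
    }

    final class GazeClicker {
        private let gaze: GazeSource
        private var monitor: Any?

        init(gaze: GazeSource) {
            self.gaze = gaze
            // Watch for a dedicated key (F13, keycode 105) system-wide.
            // Global monitoring and event posting require Accessibility permission.
            monitor = NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { [weak self] event in
                guard let self = self, event.keyCode == 105 else { return }
                self.click(at: self.gaze.currentGazePoint)
            }
        }

        private func click(at point: CGPoint) {
            // Synthesize a left mouse down/up at wherever the eyes are pointing.
            let down = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown,
                               mouseCursorPosition: point, mouseButton: .left)
            let up = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp,
                             mouseCursorPosition: point, mouseButton: .left)
            down?.post(tap: .cghidEventTap)
            up?.post(tap: .cghidEventTap)
        }
    }

That would keep both hands on the keyboard the whole time; the open question is whether a desktop tracker plus this kind of bolt-on clicking would feel as seamless as the native OS integration people describe on the Vision Pro.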
We can see this with voice-to-text, which despite in theory being so much faster than typing things out, tends not to be in practice because of these details (processing lag, the clunkiness of handling different forms such as whether a word is part of a command or should be added to the text).
Same will happen here. People will get over the hype period and then realize hey this isn't actually faster or more efficient than a traditional tool. Apple knows this as well, it's why they're marketing it as a media consumption device first and foremost, where a lot of these problems can be safely ignored.
I think some artists use an ipad with a stylus rather than tablet input to a laptop/desktop, but other than this I can't see how anything else could compete with a keyboard in terms of productivity. Maybe if someone makes a really, really good voice to text interface.
I could see walking around the room thinking and doing clicky-stuff and looking at things, then sitting in front of a keyboard to type. Basically, what I do when I'm on a call and have wireless headphones on.
I had to use it for a while when I was unable to touch a keyboard or mouse while recovering from RSI and I was surprised by how quickly I was able to get to about 80% of my previous productivity using just my voice. I still use it sometimes even though my RSI is fully healed.
As an early developer (Day 1 of App Store launch in 2008 - iRetroPhone) there is some excitement to being first on a new platform. Back in 2008 there was very little documentation on UIKit, but it was fun to develop for. The difference I see this time is that we had access to the iPhone (released 2007), and this time[1] it looks like you need to either ask Apple to test the app for you, go to one of the labs (not in NYC, strangely), or be one of the lucky ones selected for a developer kit.
One more thing I found strange was that the current Xcode 15 beta does not even come with the visionOS SDK (coming later this month), so we can't even follow along with the WWDC session videos in the simulator.
>> The difference I see this time is that we had access to the iPhone (released 2007), and this time[1] it looks like you need to either ask Apple to test the app for you, go to one of the labs (not in NYC, strangely), or be one of the lucky ones selected for a developer kit.
I'm pretty sure iPads weren't available in advance of the launch day. And it launched first in the US. I had an iPad app available on launch day and wasn't able to get my hands on an iPad for a month or two. Of course, developing only in the simulator for iPhone/iPad isn't a huge deal... visionOS is probably a lot harder to simulate, particularly when it comes to understanding user interaction.
> "I have been a “day one” developer for three of Apple’s platforms"
> "I’m going to be a “day one” developer for the Vision Pro."
> "The Economics of “Day One”"
> "Look for Widgetsmith for visionOS from “day one”."
It looks like a guy who is flexing his "day one" access and using it to advertise some vaporware, and the rest of the words in the blog post are fluff that GPT could have written.
It doesn't take much to get "day one" access like he's describing. I was some random kid with a couple years of (completely self-taught, solo, using free tools) Mac OS X dev experience in 2008, and I had beta access to the SDK and could have had an app out on day one if I had been more disciplined. Same goes for both the iPad and the Apple Watch. The SDKs for both were easily available before the devices shipped.
Salty much, FTX bro? This is David „underscore“ Smith, developer of Widgetsmith and a ton of other successful iOS apps. Genuinely one of the nicest human beings. If anyone has earned the right to flex, it's him.
As a VR hobbyist dev I'm divided on how I feel about the device. The price is already well discussed; for me it's really a question of whether I should invest time in the ecosystem.
Without a doubt, the Vision Pro hardware is a marvel. Having the 3D capture camera in a consumer device would feel like living in the future. The headset design is top notch and seems comfortable. I do have a slight worry about how these devices will look after a year of use.
The holographic eye-reconstruction screen is a bit unsettling and a strange design idea. We're messing around with people's eyes to make the headset friendlier?
The frosted glass user interface is classy, aesthetically pleasing, and very well put together. Lighting and soft shadow interaction with your environment is amazingly well done. I felt the current user experience seems rather lackluster, though, and limited to a 2D format. The first iPhone also had an old-fashioned interface, so my guess is they are building the compatibility bridge for the first release. Maybe the next iteration will be actually spatial and not just panels floating in space?
Moreover, the over-reliance on swipe and tap gestures is a point of concern. This design decision is focused on media consumption and limits the overall versatility of the interface. Feels like a very low-bandwidth control method.
A significant drawback, for my use at least, is the lack of control over the hardware. Developers are restricted to building Unity applications within a sandboxed environment, which is a privacy feature intended to prevent apps from accessing the eye gaze point. Good idea, better than what Meta's doing, but too restrictive and frustrating for a developer wanting to run their own apps on the device they bought.
Apple is once again marketing an expensive device-platform that caters mainly to larger corporations looking to offer their own 'immersive' experiences. Everything about this screams 'corporate'. I would like to be proven wrong with some creative apps, but those will probably come through the web browser or streamed from a Mac, not as approved native apps running in a gimped Unity engine.
Nevertheless, the Vision Pro sets a new benchmark for future headsets. Maybe AR/VR is slowly entering the mainstream?
I play VR games like Skyrim VR quite often and my biggest disappointment, outside of the ridiculous price - is that it's completely gesture controlled. Are there going to be controllers at all? How would you play a game with guns? Make a pew-pew gun with your fingers?
VR gaming is already covered by multiple VR headsets... and not a lot of people are buying them. The people I know who did buy them found the novelty wore off quickly. Positioning the Vision Pro as a computer rather than a games console seems like a smart and very deliberate choice.
Computer gamers are the only ones that still think that general use computers are also great gaming devices. I think it makes more sense to have dedicated gaming devices for those experiences.
Apple has done very well while ignoring the gamer (as opposed to people that play games) customer. I don’t anticipate Apple making big strides towards prioritizing the so called AAA gaming experience now.
I don't think Apple really likes guns, so it would probably have to be third party, but a training pistol with a real belt-mounted holster, plus an AR upper and lower without the FCG or barrel, both with some tracking markers the Apple goggles can pick up, would probably work well. Shooting a rifle with VR controllers does not really work that well, because you can't brace them at all (the two standard controllers are not attached to each other) and there is no stock. Fix that with a hundred dollars of airsoft knockoff AR-15 parts.
The promotional video on Apple's site shows a user picking up and using a PS5 controller. iPhones and iPads can pair with third-party controllers today as well as keyboards and mice, so there's really no reason to think that this wouldn't work with those as well, particularly since Apple showed it happening in their video.
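On Apple's other platforms that pairing goes through the GameController framework, and presumably something similar will apply here; that's an assumption on my part, since Apple hasn't spelled out the visionOS specifics. A minimal sketch of the standard pattern for noticing a connected controller and reading a button:

    import GameController

    // Observe controller connections and react to button presses.
    // This is the usual GameController pattern on iOS/iPadOS/tvOS/macOS; whether
    // visionOS exposes exactly the same API is assumed, not confirmed by Apple here.
    final class ControllerWatcher {
        init() {
            NotificationCenter.default.addObserver(
                forName: .GCControllerDidConnect,
                object: nil,
                queue: .main
            ) { note in
                guard let controller = note.object as? GCController else { return }
                print("Connected: \(controller.vendorName ?? "unknown controller")")

                // The extended gamepad profile covers DualSense/Xbox-style pads.
                controller.extendedGamepad?.buttonA.pressedChangedHandler = { _, _, pressed in
                    if pressed { print("A / Cross pressed") }
                }
            }
            // Start discovery for wireless controllers already in pairing mode.
            GCController.startWirelessControllerDiscovery {}
        }
    }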
They did show off using a standard PlayStation controller, but nothing like an Oculus Quest or Valve Index VR controller. While a PlayStation controller would be fine for something like Tetris Effect, for titles that get more out of VR it does seem like it could be a problem.
They're clearly not trying to sell it as a gaming product. But assuming the device is successful enough, there's nothing stopping Apple from selling controllers as add-ons in the future (for a hefty price of course).
If Apple allows it then I fully expect 3rd parties to make controllers. Now that Meta has demonstrated fully self tracked controllers, you don't need much OS integration to make that work.
It is difficult to develop any asset heavy application as an independent developer. Pretty much period. You have to find a way to contract out assets, or you will drown.
Unless you mean the programming of 3d transformations and such. That is, of course, still hard. Most of the platforms have done a ton of that work for you, though.
If you can 3D model in any capacity, I reckon you can make VR apps; I've seen plenty of them done at game jams near here, including some that won. You don't need movie-quality textures or whatever; a cartoony low-poly look works well. Don't let the player move, that's a can of worms.
Speaking of 3D modelling, I taught myself this year in my spare time over a few weeks, and feel I'm pretty good at it now. I've made sci-fi wargaming terrain I'm happy with and am almost considering selling some. Blender is free and good, and there are literally endless high-quality tutorials for almost any specific object you might want to make; after ten or so you will be able to generalise to anything simple you choose. Of course human bodies are more complicated and I was intimidated, but I found sculpting caricatures fun, despite not being an artist in the slightest, after following an excellent tutorial.
> When I sat down to write this article I was having trouble context shifting back from WWDC mode and wished that I could have gone up to a virtual cabin in the woods, opened a text editor and written it there
I do this all the time with the Quest Pro. Its resolution isn't quite there, but in most other respects it enables the same type of transportation out of your current context to somewhere else. Given you can get these now for 1/4 the cost of a Vision Pro, I think devs interested in the Vision Pro should pick up a Quest Pro now; then you have both ecosystems to play with / compare and ultimately, if you are building something where it works, you can ship it on both.
I'm interested to try it, but there's a few things holding me back:
1. I tried working in my Oculus Quest for about a week (two years ago) and gave up because the resolution is way too poor, and it's too heavy.
2. The price tag.
It looks like the Vision Pro is not a monitor/headset, but rather a computer + monitor all in one. Is that accurate to say?
-
I am tempted by this future, but I don't see it becoming adoptable for any reasonable number of users for another 5 years minimum. Plus, for developers to truly adopt it, we'd need to be reasonably sure our workflows (e.g. keyboard/mouse sync and ergonomics, portability, etc.) actually improve in speed and comfort before we'd ever consider adopting such a device.
Yes, it's not just a headset/screen. In their words, a spatial computer. This coming with a new OS also signals how much they want people to view it as a standalone product, rather than something like a Valve Index, which requires a PC to use.
I'm curious how the economics work out. I don't think it's unrealistic to believe that the customer base that is going to be buying visionOS applications will be very limited compared to everything else. Very few people are going to shell out $3,500 for a niche product like this.
HoloLens found its niche in commercial sectors like manufacturing and logistics, so there's a potential path to making a profitable software business in that direction.
However I'm curious what makes this play exciting for indie developers who might not have enough cash for a "long play" here. For iOS it wasn't difficult to see what the benefit would be and iPad only expanded the audience (and breadth of applications having more power under the hood). This is quite a bit different: the technology has been here for a while but the audience seems to be missing (or not as big as you'd think).
What's the play here for small businesses? Just use a portion of your budget to throw spaghetti at the wall?
> HoloLens found its niche in commercial sectors like manufacturing and logistics, so there's a potential path to making a profitable software business in that direction.
I mean… that may have been a niche they targeted, whether they found success is debatable. Seems like it was mostly kept afloat by bloated military contracts. And even there it pretty much flopped.
I suspect the only play would be: develop developer tools for those developers who do have the budget for a long play.
If Apple wants AR in their normal non-gaming, non-niche segment to work out, they are going to have to come out with a sleek $1000 device in a couple years. They’d have to be very stupid to not realize that, and it couldn’t possibly be the case that you get a trillion dollars by being stupid, right?
Don’t forget the iPhone wasn’t an instant hit. It got a lot of discussion but it took a few years/versions before it became THE phone. Same with the Apple Watch. And even the iPod didn’t take off until it was on Windows after a few years.
This will likely be similar.
When the iPhone came out, not only was it limited, it was $600 with no subsidy at a time when most people were getting free camera phones.
It took a while (and a price drop) for people to see enough value to pay the price.
I don't expect the Vision devices to really drop much in price in the coming years. To lower costs, they'll have to go with lower quality displays, or cheaper cameras/sensors and less battery life.
If you look at the iPhone, it hasn't gone down in price much if at all.
I think we're at least 5 years away in terms of tech capability to provide the level of experience Apple is looking for at an "SE iPhone" type price point.
Some will come from mass production lowering costs for components. Some from the components being less cutting edge (and thus cheaper) too.
But I agree it’s not getting cheap soon. Maybe to $2k for the non-pro version (similar to the announcement Vision Pro) in a few years. Maybe even $1500.
You want to pay $500? You’re going to wait a long time.
When Apple releases one of these for $1999, that's the point where they'll be taking the general consumer market seriously. The price of a MacBook Pro.
I think you could get there pretty quickly by cutting out all the AR stuff and making a VR-only "spatial computing" device, but I dunno if that's something they want to do.
I think that would be a mistake and definitely not what they want to do (from their focus on AR in general the last few years).
I think at $1999 for a "base" model, they'll sell millions. I just think that the tech they need to improve (optics and batteries) are hard to miniaturize. Chips and sensors they're the best in the world (in terms of consumer electronics). Batteries are just tough, and despite how good the iPhone is as a camera, it still is easily trounced by dedicated digicams.
Right, I think they will eventually come out with a non-Pro device eventually... and price is only one factor in adoption.
They could be signalling that they only want a small number of users for these things.
> it couldn’t possibly be the case that you get a trillion dollars by being stupid, right?
... I mean, the fossil fuel industry likes to think it's smart but what it's doing is incredibly stupid. I'm not sure Apple got to a trillion dollars on merit alone.
This is their first iteration. I think, like the first Apple Watch, this is the version where they're just going to see where it goes. I mean, if anyone can make this successful, it's probably Apple.
> When I sat down to write this article I was having trouble context shifting back from WWDC mode and wished that I could have gone up to a virtual cabin in the woods, opened a text editor and written it there. Or similarly while I was watching WWDC session videos in my hotel room on my 13” MacBook Pro I found myself wishing for a larger display where I could have the video, notes, documentation and Xcode open all at once.
This reinforces the idea that Vision Pro is for a portable, fully immersive work experience. Which dovetails nicely with the post-Covid notion of working in various locations as convenient/affordable. Bring a Vision Pro with you to "the cabin" or hotel or remote working location, to have a more productive experience than just a small laptop. Or college dorm, where you don't have your own dedicated work space.
None of the existing headsets I've tried has worked out in terms of working setup replacement. If it can replace a desk/monitor and lead to better ergonomics and flexibility, I would pay upwards of 5k for it. I really want Apple to succeed.
I would buy this if it can replace my current monitor(s) for development/coding. Price is not an issue for many people as long as it makes their daily work easier.
I’m a “hybrid digital nomad”. My wife and I travel six months out of the year and we stay in our “Condotel”[1] that we own the other six months. We can’t buy anything that we can’t carry in our four suitcases since we can’t keep anything in our unit when it’s rented out.
I might buy it just for the multi monitor setup and to get rid of my extra USB-C video/usb-c portable monitor and my iPad for a 3 monitor setup in smaller spaces.
[1] a Condotel is where you own the entire condo unit. But it gets rented out as a hotel room when you aren’t there and you get the income minus the property management fees
It could work, with a Mac (which is unfortunate, given the Vision Pro's HW is equivalent to the Mac you "need" just to run the right software). It's an unfortunate limitation that you can only run one 4k "virtual monitor" from a Mac, but presumably your IDE & terminal would live in the virtual Mac screen and all of your documentation, etc. would be "native" in the headset
For the vast majority of my work vscode stuffed into a browser-like window using SSH remotes would be fine, and would remove the "single 4k screen from a mac" restriction.
I tried this with the iPad for a while but it ended up just being too buggy (weird cursor and scrolling errors, in particular). I wonder if it'll be any better using eye tracking as a cursor
I still haven't read a single account of a developer actually using it to write code in VSCode/Vim/Emacs. I need to know if the screens look crisp and are actually a good replacement for a monitor for work.
I don't mean "powerpoint/drawing" work. I mean wall of text, code work.
It's going to mess up your hair, your makeup, add weight on your face, and collect face oil like a sponge. Take the keys off your keyboard and roll your face around in that.
Will wait for a substantial price reduction on later versions with the bugs worked out... hopefully on the order of two-thirds less before I'd even consider buying it.
Btw, how do any of these work for people with glasses? My vision is too bad to survive without them and I'm sure half the city dwellers around me are too.
You have to purchase (rumored to be expensive) magnetic clip-in prescription lenses, or use contacts. The device is too close to your face to fit glasses
> In short, my brain has crossed a Rubicon and now feels like experiences constrained to small, rectangular screens are lesser experiences.
The funny thing about this statement is that Vision Pro is in fact the smallest rectangular screen of any device he's written code for.
He's hyped up. That's normal. In time he'll understand Vision Pro doesn't provide any better UX for common activities. In fact it's worse in many ways.
Where Vision Pro may shine is tasks where you need to perceive and manipulate complex three-dimensional objects, as they would be in physical space. I see great uses in engineering, design, art. It'd be great to preview interior design, design cars, architecture, create machinery and so on.
It'll also be great for previewing products, so online stores become a lot more viable than they are now, as you get a sense of size and style for an item in Vision Pro.
It may also be great for education, training, simulations.
It has many great uses. But basic apps isn't it. And most people won't care. This thing sucks to wear for more than 20 minutes. It's heavy and uncomfortable. You can't share your experience with others, either. It costs a lot. And you can't multitask with it. I can walk to a place and do something on my phone.
The input model also sucks. To code, for example, you need to hook a bluetooth keyboard and mouse. Looking at symbols one by one to fingertap would be comically slow. At which point, you may as well just get 2-3 screens and work on a normal workstation. For less money.
Have you worn this? You're pretty explicit with your complaints about its weight and comfort. Your penultimate paragraph is really unfounded and inaccurate. You can multitask with it (in a computing sense). You can share with apps that use the appropriate APIs. And I completely disagree that "basic apps" (whatever that means) will not be a great use case. Hell, I would buy this today so I could have a monitor replacement. And I'm just a lowly sysadmin who is at home with vim in a terminal.
It will absolutely rule for wargames (i.e. board-based simulations). High resolution, zooming into "counter" stacks, limited intelligence (yes!), calling up rules, procedure checklists, odds calculation & combat resolution, the list goes on and on methinks.
Grognards with sufficient disposable income - rejoice! At last you can (for example) play Operation Barbarossa at regiment level - and retain your sanity!
These are just gen1 issues. I expect it will become lighter and lighter until it will probably be like some heavier/bulkier glasses.
For the common user it will be amazing for cooking (it tells you what to grab next and from where, and what order to mix things in, all with arrows on the screen). Or say you want to leave your home: it knows that you forgot your keys and tells you where they are, with directions on the screen like an objective marker in a game.
It will know where things are in your home even if you don't pay attention to them it will have object recognition in place and you will be able to say "Hey Siri where did I leave my glasses?" and it will point you to them.
“Bulkier glasses” is a deal breaker for me — LASIK was one of the best quality of life upgrades and I can’t imagine rushing back to that experience for many reasons.
Ambient computing of the style you describe I do think is a common use case and I look forward to less invasive form factors to tackle it.
Do you recall the wave of people breaking their TVs with the Nintendo Wii controllers?
I'd expect a similar wave of people breaking their expensive Vision Pros if they try cooking with one. It's a terrible idea. First, most kitchens are cramped, full of low-hanging cabinets to break your Vision's fragile glass into.
And then, keeping those open vents around vapors full of fat and tasty food bits is a great way to cover the circuits with grease.
We already have a solution for something telling you what to do next, and it's called an iPad with a stand. A phone also does the job and has much lower chance of incidents than a headset, despite yes, you may need to wash your hands from time to time to scroll down. Or... you can simply use assistive features and voice for that. Siri is going to get a lot smarter thanks to LLM, much sooner than Vision Pro will become light and pragmatic for such purposes.
Regarding this "it'll know where things are in your home", let's use basic logic here. It can learn the layout of your rooms and where your immovable furniture is. But no, it can't know where everything that moves is, because this means you literally can't move anything unless you have the headset on to track its location. Or slap expensive AirTags on every single jar and utensil maybe. All solutions would be hilariously impractical. And... we'll end up with where I started: a broken Vision Pro glass as you slam it in a cupboard while trying to fish out a jar of condiments.
I don't know what it is about VR that makes people pull out the fantasy scenarios. It's simply a (bulky) screen with passthrough. It's not a wizard. It can't know things unless there's a way for it to find them.
We can imagine a super-thin model that you can keep on your face 24/7 and sleep with it too, so it tracks your entire life forever and knows you better than you know yourself. And it synchronizes with your spouse and children who also wear their own headsets 24/7. And it's unbreakable. And the battery never runs out. We can imagine many things. They don't exist, and won't exist any time soon. "Not on the horizon" as Steve Jobs used to say.
You see, the rules of formal languages that encode formal rules of system constraints pre-date computers by centuries. Think of math proofs, for example. Sure, we can encode symbols as emojis, or geometric figures or whatever. But in the end, it's sequences of symbols, that's the nature of it. And tapping symbols one by one with a headset will suck, no matter how programming looks.
The rules of formal languages that encode formal rules of system constraints pre-date in fact our species too. Think about what DNA is. Oh yeah, spooky, isn't it. A sequence of symbols (GTCA) encoding a sequence of more complex symbols (proteins). Spooky! But yes, DNA is our code. And it works the same as our programming code.
Now I know where you're going. LLMs. Let's assume an LLM writes the code for you. You still have to read it, which you can do fine with a headset (if it's not as encapsulating and heavy, and with short battery life as Vision Pro v1). But if you spot something's off, you need to adjust it. Go directly for the kill, and make that surgical series of edits. You know? Or... maybe you can spend the rest of the day hopelessly trying to explain to Siri 2030 year edition what you want to do, instead of going in and doing it, for that "last mile".
Because if AI can do the last mile itself, to the point you don't need to even verify it... first, that's the fast way to AI shipping code we don't understand and basically giving up our entire civilization to it. And second... we don't need to code, but we also won't need to exist, and therefore not need headsets.
So in the worldlines where we DO exist... Vision Pro sucks for coding, because it's a shitty human interface to editing code.
And in the worldlines where we DO NOT exist... Vision Pro sucks for coding, because AI doesn't need headsets.
I realize yes. And do you realize if you'll be using keyboard and mouse you may as well not literally wear a computer *on your head*? Are you aware of displays? They can support themselves. On desks. Or wall mounts. Compared to Vision Pro it feels like magic. Self-supporting displays. It's the future. Everything is about to change when people learn about it.
And do you realize that by taking the displays off of your desk you can have more of them? Oh, and at any size and any location you want. You can continue using the input devices you like, but now you have as many displays as you want and you can take it all with you easily.
You assume you will hate it, maybe you will. But maybe the future won’t involve dedicated furniture to put things on and cables connecting them. Maybe you’ll be able to work wherever you want and with the same amount of productivity. Or maybe even better productivity!
But you’re probably right, the future will never get any better, this device is pointless and will never lead to better versions of itself or point to other ways of working. Thank you, there’s no telling where we might end up without true believers of the status quo like yourself.
We've banned this account for repeatedly breaking the site guidelines and ignoring our request to stop. Not cool.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
I admire the commitment to your opinions lol. Never really sure what motivates someone to tell the world what they aren't interested in using. My snark was aimed at highlighting the weirdness of how important you thought it was for us to know what you hate.
It's almost as though you're explaining why current and previous VR goggles haven't done very well. David Smith, among others, have had some experience with the device and are really excited. That excitement also includes several VR specific reporters.
I have yet to hear any complaints about screen sharpness. The most upvoted comment is from someone that worked on it, and he also thinks the screen res isn't a problem. Foveated rendering is probably the reason it looks good. That same guy also said that the active cooling prevented it from getting hot on the face. We'll see how it works IRL.
Some have complained about the weight. I'm not all that concerned since this is the first version. I'm excited by the potential, especially since developers are excited about it.
You keep illustrating that you didn't view the keynote, or any of the followup videos and are just judging based on your priors.
Clearly you haven't seen the VP in person, or you wouldn't be making such assumptions. I'm not, I'm trusting people who have used the device in person, people who have said the text is very sharp and easy to read. People with much better credentials than a random on HN.
And if you had done even the most minimal research, you'd know that the VP has fans that eliminate heat build up and fogging.
But hey, you've repeated ad nauseam that you don't like the VP and think it won't succeed. Everyone's entitled to their own opinions, but not their own facts. You remind me of how ESR was continually predicting the failure of the iPhone until he went quiet on it a few years ago. Same circular reasoning about how it won't succeed because reasons... Yet the people who have seen and used the VP differ greatly with you.
Now the VP may not be for everyone, and some elements of Apple's marketing are a bit cringe, but people said the same thing about AirPods.
> But if you spot something's off, you need to adjust it. Go directly for the kill, and make that surgical series of edits.
LLMs are still a really immature technology.
The hype is about where it could go in future, not necessarily where it is now.
Think about when compilers were immature technology, and the science of parsing/etc and optimizations were not well understood. You could make the exact same argument you have made now about the need of editing assembly or machine code by hand when the compiler doesn't get it right.
It was indeed common practice to do this well into the 1980s. That, and inline assembly is increasingly unnecessary now.
None of what I said is restricted to the current state of LLMs. I was speaking very broadly about the nature of AI in our world, and going back to the creation of DNA...