Apple Vision Pro review (wsj.com)
681 points by fortran77 on Jan 30, 2024 | 1066 comments



Also: Apple Vision Pro review: magic, until it's not - https://news.ycombinator.com/item?id=39190506 - Jan 2024 (226 comments)



Lots of surprises on the downside from all the reviews. Passthrough is much more limited in quality than expected, with motion blur, pixelation, distortions, and limited color and dynamic range. The eye-tracking-driven input method, which was seen as the holy grail, turns out to be annoying after a while because people don't naturally always look at what they want to click on. Personas straight up aren't ready. The lack of AR features is the biggest surprise. They tried hard to avoid it being a VR device, but all the actual high-quality experiences, especially the ones people are impressed by, are the VR ones.

For me the biggest issue though is that it can't fulfil its primary use cases:

Want it for productivity? It can't run macOS applications, and if you want to use your actual Mac, it can't do multiple monitors.

Want it for entertainment? People want to enjoy photos, videos, and movies with other people, and it can't include them. Even if they have a Vision Pro, I haven't yet seen any sign of multiple people being able to do these things together.

All up, it seems far more immature and dev-kit stage than I was expecting.


The biggest thing for me, from the Verge review, was the limited field of view. Apple sold it as filling the user's entire field of view, and it sounds like that isn't the case. When I first saw the reality of the Microsoft HoloLens, that was my disappointment there as well… Google Glass as well. I don't want to feel like I'm wearing an AR device, as, so far, that's what everything has looked like. Limits of the technology, sure, but until that is solved I'll have a lot of trouble throwing money at it. I still plan to head to an Apple Store at some point to try it out for myself.

As far as viewing with other people, this doesn't seem like an insurmountable challenge. They have the theaters, they have Personas, they have spatial audio, and other Apple devices have features for watching content together with friends. Put them all together and it seems like if several friends each had a Vision Pro they could feel like they were sitting in a theater room together while watching a movie. I'm not saying this will be easy, but it seems like all the building blocks are there. The Personas are probably the big weak point, especially looking at someone next to you, but with the focus on the movie, I think that's probably the least important part.


The end goal here - multiple people who are isolated by the technology they're wearing experiencing a simulacrum of human interaction mediated by that technology - is not only unbelievably depressing but also honestly just really boring. We've already built this. It was called the "metaverse" and it sucked. Everyone left.


The idea of wearing a VR headset so I can "enjoy photos with friends" is absolutely hilarious and so deeply disconnected from what human beings want and need as a species.


Every time I see the spatial photos and spatial videos, I always view it as someone putting on their VR headset to relive a memory of someone who died. Then they take the headset off and realize they are alone, and the loss broke them. I have a lot of trouble not seeing those videos from a very depressing place.


People are totally allowed to watch videos of their children without their children being dead.


This is true. I think it's the fade effect they add to the edge of it, it seems like an effect that would be used in a movie to show a memory of someone who died.


It's all about tradeoffs. If you get at the same time:

- gigantic (virtual) screens showing your photos/videos in a spectacular way

AND

- an incredibly small, light, comfortable, and almost unnoticeable (e.g. glasses) headset

then, yes, people will absolutely use that instead of actual photos printed on paper.

Obviously we're not there yet.


The people could also be isolated by being a few time zones away from each other.


I mean, I work full time remotely at home (company is in a different state, family is here where I live). If you watch the FaceTime section of MKBHD’s review, that would be a phenomenal improvement over Teams chat with web cams.

While I wouldn't move back just to go into the office, I absolutely miss the ability to look at a specific person while talking in meetings, and for others to have the context about who you're looking at via head turning or gestures etc…

In a Teams group chat, I'm always talking to the whole wall of faces, essentially. Only by context of the conversation can you figure out who someone is talking to, and even then their webcam might be off-center or on a separate monitor from where they're looking, and you don't even pretend to make eye contact. There is no way to break out into an isolated side conversation or direct your gaze, and there's a lot of value there.

There’s also the concept of sharing a virtual room with a person I’m collaborating with. Drawing on a whiteboard, walking around pointing at things on the monitor and such. The feeling of a shared physical space is powerful for collaboration.

Anyways, what I’m trying to say is this: The thought of replacing all human interaction with tech is of course depressing, but I don’t view this like that. This is technology that can meet people where they are and improve an existing experience. I’m not moving away from my family, esp. as my parents get older, and I will remain working remotely for the company because I very much enjoy what I work on, my team, and the way the company treats me.

This tech is a step towards making the shortcomings of remote work less impactful. It’s stupid expensive and the first iteration so it’ll have problems, but I’m much more excited by Apple’s approach to this than Facebook’s nonsense. Excited to see where it develops and how it’s adopted as it becomes cheaper.


Maybe not everyone shares this viewpoint.


That was the surprise to me too. Lower FOV than a Quest 3? Inexcusable.


Agreed. Eight years out from the original Oculus Rift, which had 110° FOV, and we're still somehow staring through the AR/VR equivalent of a submarine periscope.


a whole eight years???


I still have one of the Oculus dev kits at home from their initial campaign. It's more than 10 years old now. I've noticed the biggest improvements to VR over the last decade were in usability - not the screen. Once they figured out low latency by switching the screen to black after every frame, every device upgrade felt like a tiny improvement in resolution at best. Size 12 font text on a 2m screen wasn't really readable yet, but I still don't use VR for text work today (even though I could) so it also doesn't matter. Right now it's still a question of usability rather than the screen.


Even with that FOV and the highest-resolution display they could get, it still can only emulate a 15" display a comfy 70cm from your eye at a whopping resolution of 940x540.

It's a gimmick priced as 2.5 monitor stands.


How did you get those numbers? They seem quite low, considering that they do 4K mirroring and all.


They don't do 4K mirroring. A 4K monitor has 8 million pixels while the goggles have about 11 million pixels per eye. True 4K is only possible if you use 85% of vertical and 85% of horizontal FoV, like when staring at a 24" monitor from 15cm.
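
A quick sanity check of that 85% figure in Python (assuming ~8.3M pixels for a 4K monitor and ~11.5M per eye, which is roughly what the numbers above imply; exact panel counts vary by source):

    import math

    four_k = 3840 * 2160        # ~8.29M pixels on a 4K monitor
    per_eye = 11.5e6            # approximate pixels per eye

    # The pixel ratio is an *area* ratio, so the linear fraction of the
    # panel's FoV a true-4K virtual screen must span is its square root:
    linear = math.sqrt(four_k / per_eye)
    print(round(linear, 2))     # ~0.85, i.e. 85% of each dimension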


Even if they were double the resolution per eye, you aren't going to have "native" 4k reproduction because you are looking at a projection of a screen rather than an actual screen. What is the value of achieving one "accurate" pixel if it is surrounded by eight bilinear-interpolated ones?

But then, you usually don't buy a 4k screen because you can perceive the individual pixels either, unless you are creating content that requires that.

I suspect it aligns more easily with the quality concerns of reality than with virtual components - if I can use a real monitor through the headset with no noticeable loss of quality or additional eye strain, there's no reason to think a virtualized monitor won't be at least as good.


I don't think it's an apples-to-apples comparison (heh). I think at this point 'good enough' is going to be better than anything my eyes can resolve. On a Quest 3 with Zenni lenses, pixel visibility as a concern is way down on my list. It works REALLY well.


8/11 = 73%, not 85%. So it should be a lot more comfortable distance than 15cm/5.9 inches.


Nope.

0.85*0.85 => 0.73 (the 85% is per linear dimension; squared, it gives the ~73% area/pixel ratio)


Ah, gotcha. But what if you factor in the limited FOV of the display itself? This 11-million-pixel calculation doesn't apply to the full FOV of the eye, after all.


I would expect most people to pick a natural size for their windows, perhaps 24-30" half a meter away?


It's a VR headset, so angular resolution is fixed at 3800 px / 90 deg = 42 px/deg. For a 27.5" virtual display half a meter away: let chord length = 700mm and radius ≈ 600mm; the distance to the virtual display will be 488mm, and the diagonal angle it occupies within view will be 71.4°. With 42 px/deg, resolution is ~3000 px diagonal, and from (16x)^2 + (9x)^2 = (3000)^2, x = 163.4, so the resolution for that virtual monitor will be 2608x1470.

Other combinations of distances and virtual sizes can be calculated in the same manner.
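
The same arithmetic as a Python snippet, for plugging in other sizes and distances (reproduces the numbers above up to rounding):

    import math

    ppd = 3800 / 90                      # panel pixels per degree, ~42
    chord = 27.5 * 25.4                  # 27.5" diagonal in mm, ~700
    r = 600                              # viewing radius, mm

    half = math.asin(chord / (2 * r))    # half-angle subtended by the chord
    dist = r * math.cos(half)            # ~488mm to the display plane
    diag_deg = math.degrees(2 * half)    # ~71.4 degrees
    diag_px = ppd * diag_deg             # ~3000 px across the diagonal

    # Solve (16x)^2 + (9x)^2 = diag_px^2 for a 16:9 panel:
    x = diag_px / math.hypot(16, 9)
    print(round(16 * x), 'x', round(9 * x))   # ~2620 x 1474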


How much of that “natural size” has been dictated by common monitor sizes, the limits of desk size/setup, and aesthetics?

My work setup has a 43" monitor set about 1 arm's length away. I can just touch it with my fingertip if I stretch slightly. In a virtual space, I'd probably go slightly larger, maybe 45-48", pushed a little further back.


I don't think I would want a monitor that big, since I think I would have trouble covering all of it without moving my head.


So then it's not "4k mirroring".


Not when text is vastly sharper than Quest 3.


That's a double-edged sword though. Were they to make it 180°, we would critique it for spreading the precious pixels too thin. "With those 4K your virtual monitor is 1080p at most!"


They certainly could do something clever, like having a screen with gradually decreasing pixel density and quickly moving it around so its center is always where you're looking.


Apple used to excel at making the right trade off here. Choosing what feels right over numbers.


I remember when the first Android phones with 4-inch screens hit the market, I saw someone comment online that if a 4-inch screen was better Apple would have figured that out during R&D for the iPhone and made the original iPhone 4 inches.


I still think the original iPhone was the best size. I use the iPhone mini today, and I wouldn't mind it being a bit smaller still.

It really lends itself to the phone being a tool, rather than a device for endless entertainment. It also suggests it is an accessory, with the desktop/laptop being the home base, rather than the phone being a primary device.

It really bothers me that the phone is now at the center of the ecosystem. It seems like the hub should be something that isn't as vulnerable (to theft, loss, or drops) as a phone.


I hope somebody at the time replied with a picture of the original iMac's hockey puck mouse.

Apple can do amazing things, but they do also fairly regularly make terrible things.


I mean, the current Apple Magic Mouse which has its charging port on the underside is interesting as well.


Sure, but wasn't cockroach charging introduced in 2015, vs. c. 2010 for 4-inch Android phones?


What's 'cockroach charging'?



Ah, I see. I've never seen a phone which charges from the underside, though.


There's a pretty common use case that's basically this: get in the car, plug in the phone, and drop it in a cup holder with the charging port pointing up, so all the antennas that would point skyward during handheld use are now pointing at the ground.


Steve used to excel at it.


They were and are good at this. But the decision to ship this in this state, before it met their typical quality bar, was Tim Apple's. He reportedly overrode the design team and demanded it ship now to enter the market and iterate in public, even though he's otherwise hands-off with the product itself. This was all widely reported but has been missing from recent reporting now that it's out.


It was a good call. It needs to get to early adopters and establish the existence of an alternative to Facebook's metaverse vision.


The thing is, when one has a user base the size of Apple's what "feels" right actually covers a very large range of possibilities. Even when Apple "excels" and covers most of that range, there's still a ton of people left out. Which leads to every Apple thread on HN filled with users bemoaning Apple's poor choices.

Wash. Rinse. Repeat.


And they probably have


Idgi. What feels right has always been very high numbers.


There is a fundamental tradeoff between FOV and image quality at any given pixel density (and there are other tradeoffs between pixel density and cost and compute power needed).


yeah. But a high number of pixels and a high number of minutes of battery life work against each other here.


> field of view...Limits of the technology

It really isn't, though, at least not for long; as of mid-2023 there are publicly showcased compact, lightweight prototypes with 240° FoV.


Well, there's FOV and there's pixel density, which are both too low right now and antagonistic to one another feature-wise. There's also display brightness, which is an issue, and I'm not sure how that fits into the FOV/density spectrum. And then more pixels means more compute… That prototypes exist doesn't necessarily mean much in a space that is full of prototypes showing off one particular feature. The very hard thing is to combine all of the desired features into one consumer-ready headset.


>Well there’s FOV and there’s pixel density which are both too low right now.

Exactly. You know what is the best "goggle" type display I ever saw (and I tried quite a few)? Recent FPV goggles (HDZero) that have 1920x1080 OLED displays at approximately 46 degrees FOV. I'm very glad the manufacturer decided to use this low FOV instead of increasing it to 55 degrees and more like others. The picture is insanely crisp and looks better than looking at a 4K display. Another huge benefit is that the entire picture fits within your "focus cone", so there is no need to gaze around. It is not a VR display, its purpose is different, but it shows us what visual quality is possible.

I'd love it if manufacturers, if they can't make 16k displays that fill the entire FOV, created variable pixel density displays. Best quality in the center, deteriorating towards the edges. That would be much cheaper, but then for a good illusion one would need eye tracking and motorised optics, which would probably be more expensive in the end...

Oh well, I'm pretty happy with my FPV goggles (for FPV). I just wish there was a way for them to display a different picture for each eye. They already have head tracking; I wonder what VR would be like with these huge-PPI, narrow-FOV goggles. Would it be more or less immersive?


For those who don't know: FPV = first person view. Used for drone racing and things like that.


I'm pretty sure one of the Varjo headsets had displays like you describe.


Motorized displays also sound HEAVY, which is a big issue for a device intended to be used for long periods of time.


They are paying the price for their obsession with super high resolution. I think it's a mistake; from my perception the Quest 3 at 25ppd is nearly good enough, and its panels are nearly half the resolution. They should have traded 20% resolution for 10 degrees of FOV on either side. In fact, with appropriate optics the effective resolution sacrifice could be even less than that.


How is it to read text on the Quest 3? How would you compare it to a high-density display (such as what is on your phone)?


Well, it's no Retina screen. I'd say it's a bit like reading text on a 1080p CRT at 96 dpi.

It's perfectly doable, but there's still a hint of fuzziness and we're still a couple generations away from crisp LCD text.

I care about my eyes, so I use a 4k screen at 2x scaling for coding, and will not use the Quest 3 for work, unless that involves playing games and watching videos.


Is there any evidence of pixelated text damaging eyes?



They only need hi-res where the user is looking, don't they?

We can't see what's in our peripheral vision clearly, so they should be able to get away with most of the FoV being blocky af, shouldn't they?


The goggle FoV is fixed right in front of you, but your eyes turn.


I've been telling people this for a while: the biggest issue I have with the available VR headsets is the limited FoV. It causes unnecessary neck strain, having to move your head as much as you do to look at things.


> As far as viewing with other people, this doesn’t seem like an insurmountable challenge.

This can also be achieved with 3rd party apps. On Quest, this is already a reality with Bigscreen.


I wonder who would actually do that. The only reason I watch TV is to socialize with my family. I think if we were just sitting on the couch all wearing VR goggles, it would ruin the entire use case of watching stuff together. Surely some geeks will, once or twice.


The use case is watching something with people who are half a world away I think, not with people who are physically present with you.


I wonder if that "theatre experience" would be a significant improvement for most people. I already do this with friends remotely by jumping on Discord and hitting play on whatever service we're using at the same time. There are more automatically in-sync solutions out there for sure, but I can't see doing this in VR being more than a novelty that wears off after a few sessions.


You should actually try it before coming to that conclusion.

The main difference between discord, zoom or any other flat viewing experience vs VR is immersion. Words don’t do it justice, you have to experience it yourself.


In some ways even the reviews seem to be placed by Apple (same as they provided the exact photos of the device the media printed). All of them seem to present a full view with floating widgets, when there's really no way to photograph them.


Surely the device just has a screenshot function?


Two interesting and unrelated facts:

A screenshot would be blurry because of the foveated rendering

The device doesn't even have a power button


Given Apple have complete control with their OS of how the dynamic foveated rendering system works, if they particularly wanted a screenshot feature to be included - reasonably likely for marketing purposes, it seems to me - then surely it wouldn't be too hard to code it such that it renders a second version of the GPU's output to generate the screenshot, when asked to by user, which has even pixel density. (Or perhaps screenshots getting some sort of "bokeh" for the first time, highlighting whatever the user was looking at when taking the screenshot, will turn out to be a desirable feature.)

Not having buttons obviously doesn't prevent there being a screenshot function, just as lack of screenshot button on iPhone hasn't prevented it from taking screenshots, and just as the VP's lack of buttons doesn't prevent users from interacting with it at all...


> The eye tracking driven input method which was seen as holy grail turns out to be annoying after a while because people don't naturally always look at what they want to click on.

This has been known for at least 30 years in the eye tracking business, and it even has a name - the Midas Touch problem.


> This has been known for at least 30 years in the eye tracking business, and it even has a name - the Midas Touch problem.

I wanted to see how that is possible, but sure enough, I found a paper from 1995 that cited even older research about this.

https://www.cs.tufts.edu/~jacob/papers/barfield.pdf


From the paper, the definition of the Midas Touch problem is:

> They expect to be able to look at an item without having the look cause an action to occur.

So this doesn't seem to be a problem that Vision Pro has?


"At first it is helpful to be able simply to look at what you want and have it occur without further action; soon, though, it becomes like the Midas Touch. Everywhere you look, another command is activated; you cannot look anywhere without issuing a command."

Yeah, Apple Vision doesn't have this problem, because eye tracking is used just for pointing; not for clicking on items.


Whether it has this problem or not seems to really depend on how they use “OnEyesOver” events in practice.


Vision Pro pinch-touch is essentially the same as this paper's

> some form of ‘‘clutch’’ to engage and disengage the monitoring

The paper does talk about other challenges with look-to-select, even if it is biased by the thinking of back in the day:

> Unlike a mouse, it is relatively difficult to control eye position consciously and precisely at all times.

You have to remember that the historical setting for much of this research was to help paralyzed people to communicate, and pushing a button or the modern pinch-touch was not really always an option.


It doesn't seem like a 'problem'. For new tech, the emphasis should always be on pro users first (even if they don't initially adopt it because of the long lead times for those industries). So if you're designing an oil rig with these, a pro user would probably want to be able to independently interact with an element while looking for the next element since that's more time-efficient. Seems like a better term might be the 'Midas Touch Axiom'.


Seems like you could just implement a simple delay to solve this.

Let's say I want to click on the "reply" button below this text box. If I'm perfectly honest, I DO look at the button for a moment, then I move the mouse pointer over to it. But then right before clicking, my eyes switch back to the content I've created to observe that my click is having the desired effect on it.

I'm not actually looking at the button at the moment I click on it, but I DID look at it just a few milliseconds prior to the click. Why can't the UI just keep track of what I looked at a few milliseconds ago, to figure out that I actually wanted to click on the button, and not in the center of some text box?

One issue could be: maybe I thought for a moment about replying but then changed my mind and decided to edit the content some more. But the UI has decided that I meant to click the "reply" button, and so now it's been submitted prematurely. Yeah, I can see the problem now. The position of the mouse cursor is meaningful when clicking, and visionOS doesn't have a cursor. Cursors are important.
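
For what it's worth, the "remember what I looked at" idea could be narrowed to remembering only interactive targets, which sidesteps the text-box ambiguity. A toy sketch in Python (all names hypothetical; this is not how visionOS actually works):

    import time
    from collections import deque

    class GazeHistory:
        """Short buffer of (timestamp, target) gaze samples."""
        def __init__(self, window_ms=300):
            self.window = window_ms / 1000.0
            self.samples = deque()

        def record(self, target):
            # 'target' is whatever UI element the gaze ray hits (or None)
            now = time.monotonic()
            self.samples.append((now, target))
            while self.samples and now - self.samples[0][0] > self.window:
                self.samples.popleft()

        def resolve_pinch(self):
            # Walk back through recent fixations and return the most
            # recently viewed *clickable* element, skipping text areas.
            for _, target in reversed(self.samples):
                if target is not None and getattr(target, "clickable", False):
                    return target
            return None

The failure mode described above still exists if two clickable things were glanced at in quick succession, so the window would have to stay very short.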


But decoupling hand gestures from eye tracking should not be that hard: the external cameras could just follow the hand and put a pointer on the screen.


> if you want to use your actual Mac it can't do multiple monitors

lol, literally the only use case I could convince myself to spend this money on (hey, it's cheaper than an XDR).

Even then I was having trouble convincing myself, since all indications were you wouldn't want to wear this more than a few hours at a time, so ultimately you still need the real physical monitors for the other 80% of your workday.


Instead of having multiple virtual screens I would prefer a lot more if I could simply spread individual Mac windows around my virtual space. In mixed reality the whole concept of a virtual monitor makes no sense, it's just an unnecessarily limiting abstraction you could easily do without. The whole room is your "desktop".

It's basically what the SimulaVR guys are aiming at, and I'm surprised Apple didn't go this way with their Mac integration. Especially because the native visionOS apps do seem to behave just that way.


I imagine it's mostly a CPU/battery/bandwidth concern at this point — having one wireless 4.5k/60 (or is it 90?) stream from your Mac is difficult enough; having a potentially unbounded number of them (for an arbitrary number of Mac windows open) is a different problem altogether.

But there are 3rd party solutions that will let you do just that: https://github.com/saagarjha/Ensemble


This is it exactly. I’m surprised the HN crowd isn’t immediately picking up the bandwidth issues with streaming unlimited 4k screens from your MacBook.


AVP knows exactly which monitor you are looking at. The mac doesn't need to render all monitors at full resolution, just the one being looked at.


That sort of seamless render switching doesn’t sound technologically doable if you include that the render is happening on a different machine. I could be wrong.


Note that this would be a non-issue if you could just connect a cable between your Mac and the Vision Pro. You already have the huge battery pack dangling off the side. This is an Apple issue, not a technical one.


Not really?

Do the math of how much bandwidth like three windows/screens (because with this model each window is basically its own screen) at 4K/100hz/10bit color each would take.

You're at limits of TB4 _very quickly_.

You can compress the image, try to do something smart with foveated rendering (only stream the windows the user is looking at; but that breaks if you want to keep a window with logs in your peripheral vision), use chroma subsampling, etc.; but those are all trade-offs against image quality.
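
The back-of-the-envelope math, in Python:

    # Uncompressed bandwidth for one 4K/100Hz/10-bit window
    w, h = 3840, 2160
    hz = 100
    bpp = 3 * 10                 # 10 bits per channel, RGB
    gbps = w * h * hz * bpp / 1e9
    print(round(gbps, 1))        # ~24.9 Gbit/s per window
    print(round(3 * gbps, 1))    # ~74.6 Gbit/s for three windows

    # Thunderbolt 4 carries 40 Gbit/s total (less in usable payload),
    # so even two such streams don't fit without compression.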


> You're at limits of TB4 _very quickly_.

You don't need to send every pixel for every frame uncompressed.

It would be more like VLC, not sending pixels that aren't changing. And you don't really need 100Hz either. You can't read that fast anyway, so the content could refresh lower than that as long as the actual window is moved at 100Hz to avoid nausea.

I really doubt the Mac display functionality as-is is refreshed at 100Hz with full pixel fidelity without compression. WiFi can't handle those kinds of speeds reliably.


Foveated rendering doesn’t stop streaming if the user isn’t looking at the window. Just streams in lower quality.

If you just trivially streamed all windows to the Vision Pro, of course it would at some point hit the limit of current technology. But I would assume a company like Apple has the means to push the state of transmission media (rather than just using a now four-year-old standard) and to think of something smarter than "just stream all windows in maximum quality even if the user doesn't look at them".


I may be wrong but it honestly doesn’t seem realistic to expect seamless, latency free foveated rendering switching at those speeds while coordinating between two devices.


Since foveated rendering would only send the resolution required for what the user could perceive then even logs in the peripheral space would be ok since they would be sent in much lower resolution. I think the challenge with some smart foveated rendering would likely be latency.

Another option would be handling rendering on the Vision Pro rather than the MacBook so pixels don't need to be streamed at all.


Exactly. Current systems that work with large channel counts of high res deep colour real-time compositing are also around 6U 19" rack mount units and pull half a kilowatt of power. Not exactly ergonomic for strapping to one's face.


Why would foveated rendering break down? It doesn't stop rendering where you are not looking, it just lowers the resolution.


It'd break if you wanted to do the dumbest thing and just not stream the windows users aren't looking at. Lowering the stream resolution on the fly could work, but that involves more complexity (on both sides, to communicate when to adjust the resolution), and because it's not handled entirely on-device it breaks the illusion of it being invisible.

I can also imagine it having some weird privacy implications; like Mac apps somehow monitoring this and tracking whether you're actually looking at them, etc.


If you're OK with rendering everything at high resolution (and then choosing what quality to send over) then you shouldn't have any privacy issues, assuming that this part is done by the OS.


Foveated rendering does work, but not the way described in the GP; human eyes are too fast for current latency and frame intervals for that to work.


I think Apple sees a device that doesn't have any cables in the near future; they just weren't able to pull it off this time.


It's the product vision. But progress is reportedly slow because of physical limits and the current state of the tech. Their product vision could take another 10-30 years.


No, it’s a processor and graphics card issue as well. You have to actually render a potentially unlimited number of 4K screens (at least the ones in your direct view) on the Mac Pro. It was never built to do that.


AVP already has foveated rendering, so if macOS had some awareness of what is being rendered, it could potentially render unlimited windows, since it would only need to render and stream enough to handle what is being looked at.

The primary problem I would guess here would be latency though so maybe not feasible. The other possibility is if the actual rendering happens on device instead of streaming pixels. It would dramatically decrease bandwidth required.

I think there's a solution somewhere for this.


Because I don't care about the technical limitations. I care about if it's a useful gadget for me or not. It doesn't matter why a feature isn't there, that just becomes a "you're holding it wrong" thing.


I don't think Apple wants you to use Mac apps. They want developers to make Vision Pro apps. That's why the Mac integration is the minimum they could get away with.


Good luck with that. Devs made the iPhone a hit and Apple demonized them over it. Can’t imagine too many are eager to make the same mistake twice.


There are enough devs that love Apple and have a burning loyalty toward them, that this is not something I would worry about. Even if it started to become a problem, Apple has enough cash to make a lot of apps, and could subsidize this for an eternity if necessary to make it work. Apps will not be a problem.


Depends on what they want it to be.

For sure there will be some hardcore indies that just love developing for vr.

There will probably be some game studios ready to go.

TikTok will probably be there.

But why would Netflix, Spotify, or every productivity tool… go out of their way to work on this thing at a time when most are diving deeper into PWAs?

For spatial computing, they're going to need to improve the dev situation or accept that Safari is the star of the show.


Do you think Netflix and Spotify are really needed (genuinely wondering)? Netflix maybe because of exclusives, but I would think most people with Vision Pro are also going to have Apple Music. With their TV and Music offerings, it might actually be better not to have Netflix/Spotify etc.


Obviously many people use Spotify over Apple Music and want to use it.


How do you know? Where do you get the numbers? Vision Pro isn't even out yet so I'm curious how you could know this. Do you have inside-Apple insight? Such as how many of the people who ordered Vision Pro are on Spotify vs Apple Music?


Yep, I’d say if this thing can’t sell beyond the Apple services subscribers, it won’t exist in 5 years.


Have you been paying attention to everything Apple has been doing in, say, the last week?


[flagged]


Sorry, I misread your previous comment as saying the opposite.


visionOS is pretty limited right now, and it seems like a much larger leap to justify than iPad vs iOS was.


Hope they are not really betting on that. I think it's clear to everyone that OS fatigue hit when they had 3 OSes, and lots of companies stopped bothering after that didn't pan out as worth it. Now this would make 6 OSes...


But what is your actual physical work environment like then? Are you standing in the middle of the room and typing into air? I always imagined productivity minded people would still at least use a physical keyboard which limits mobility.


No you’d be sitting at a desk with a mouse and keyboard, like today


But that isn't 'The whole room is your "desktop".' If you are restricted to having your desk in front of you, you might as well be looking at virtual screen(s) on your desk.


Of course you could hang the displays freely into space as you can now with the existing Vision Pro apps. You'd hang the productivity ones close to your desk but some of them you could hang in other places and use a virtual keyboard.


I’d be sitting on my couch with my wireless Glove80 split keyboard at my sides. Or lying down with the keyboard flanking me. No desk. And I’d move the keyboard to my counter to stand when I feel like it.


I was thinking something similar. Split ergonomic keyboard wherever you are. Could possibly do something to stick them to your legs wherever it's comfortable for standing.

I had not seen the Glove80 before, it does look pretty nice. Though I would prefer a bit smaller. I don't really use F keys, and use layers for things like arrow keys.


Glove80 is smaller than the only alternative that suits me, Kinesis Advantage. Agree even smaller would be nice. There are many options if you don’t require the key well shape or if more DIY kits/production quality are up your alley

Glove80 has hot-swappable tripod mounts, so I attach it to my chair, or I use clamps to attach it to a table, or to short tripods standing on the floor, or to very tiny tripods that can give the keyboards an extreme near-vertical angle on a desktop. Very easy to get an ergo setup going when traveling this way.


This is how Microsoft's Remote Desktop Protocol (RDP) works, but Apple never invested in an implementation of something similar, relying on VNC to just send images of the entire screen.

The way RDP works, the commands to actually render a window or desktop get sent over the network and are ONLY rendered on the remote client.

This is not only more efficient but also allows for fun things like rendering remote apps directly on your local desktop, mixed in with your normal local windows.
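
A toy illustration of the idea in Python (nothing like RDP's actual wire format): ship a compact description of what to draw and let the client rasterize it at whatever resolution and DPI it likes.

    import json

    # "Server" side: describe the UI as drawing commands, not pixels.
    commands = [
        {"op": "fill_rect", "x": 0, "y": 0, "w": 800, "h": 600, "color": "#fff"},
        {"op": "draw_text", "x": 20, "y": 40, "text": "Hello", "font": "14px sans"},
    ]
    payload = json.dumps(commands).encode()
    print(len(payload))       # a couple hundred bytes...

    # ...versus one uncompressed 800x600 24-bit frame:
    print(800 * 600 * 3)      # 1,440,000 bytes, every frame

    # "Client" side: rasterize locally, mixed in with native windows.
    for cmd in json.loads(payload):
        pass  # dispatch cmd["op"] to the local renderer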


I know the implementation leaves a lot to be desired, but just in case you were not aware, the Oculus Rift shipped with this feature and it still exists via Link on Quest AFAIK. You pick the last icon on the UI bar that appears and it lets you select any window to pop out separately.

Here's a screenshot of 4 windows plus the desktop.

https://pasteboard.co/hPKLrKgp5vjy.jpg

This feature has been there for nearly 9 years?

(note: In my view there was passthrough and I could see my room but the screenshot function didn't include that).

It seems limited to desktop + 4 popout windows but I know it used to support more because when I first got the Rift I tried to see if I could get 6 videos running + desktop all at once. It worked.

There are issues though

(1) It's just showing the desktop's image, which means it can't adjust the resolution of the window. Ideally you'd pick a devicePixelRatio and a resolution and it would re-render, but no desktop OS is designed to do that.

(2) It's Microsoft Windows, not an OS that Facebook controls, which means they're at Microsoft's mercy and whatever features Windows supports. There's no easy way to add better VR controls. For example, IIRC, you can't size a window directly. Instead you need to go to the window showing the full desktop and size it there; that will then be reflected in the popout window. If that's not clear, there are 2 sizes: the size in VR and the resolution of the window. If the window is a tiny 320x240 pixel window on your desktop and you make it 5x3 meters in your virtual space, all Oculus can do is draw those 320x240 pixels. You can go back to your desktop, size the window to 1237x841, and now Oculus can draw those 1237x841 pixels to your 5x3 meter VR pane. But, since Oculus doesn't control Windows (or maybe they're lazy), there is no way to change from 320x240 to 1237x841 by directly manipulating the pane in VR. Instead you need to do that in the desktop pane in VR.

(3) IIRC it's got some weird issues with the active window. Since it's really just grabbing the textures of the Windows OS compositor and showing them in different spaces, that's separate from Windows' own concept of which window is the foreground window. So, as you select windows, you can see on the floating window of the "desktop" that the front window keeps changing.

These are all really just a function of (2), that Facebook doesn't own the desktop OS.

The same problem would exist with the Mac. Apple, since they control the Mac, could maybe make it work, but it would be a huge change to macOS. They'd effectively need each app to run on its own screen, and each screen would need a different resolution and device pixel ratio. Plus they'd need to map in all the other features for macOS to be usable in AR.


Huh cool, thank you! I had no idea this existed. And I actually have had a Rift since the start (I was an original Oculus Kickstarter backer so I got one for free). But to be honest I never really used it for productivity as the Rift's resolution was too low anyway. Even with the Quest 2 I didn't think it was high enough to use comfortably. But now with the Quest 3 it might be worth it.

But I'll give it a try, thanks!

And yeah Apple would need to do some changes to macOS to make this work well, but as you say they control it. If they really view this as the future of computing they should really make it work.

The idea of SimulaVR is that it replaces your window manager so they avoid some of these issues by taking direct control. But that's something that only works on Linux.


For 2, does Microsoft not have accessibility APIs to control window size and layout? On Apple platforms at least if you were doing this "right" you would probably create a new software virtual display and move all windows bridged into VR to that, and move and resize them as appropriate. I believe the actual implementation turns off the Mac screen (through a "curtain" I think, so idk if it's actually moving the windows around below that or what) but like there are many things you can try doing here to make it better.


I think people know about that but aren't interested because the resolution is obviously too low on current headsets.


Yeah that's why I never used the whole desktop either, only if I need to change a setting during VR gaming like the audio output device that doesn't always auto-switch to the Oculus Audio output. I never noticed you could show apps separately like this, cool!


Too low and all the surrounding UI is an impedance mismatch.


Keep an eye on Immersed.com; they have an app that does this with a slew of VR devices, and I expect they will support Vision Pro if they can. They also have the Visor (visor.com), which is supposed to release later this year.


> all indications were you wouldn't want to wear this more than a few hours at a time

I think all the wear comes if you’re sitting up wearing it, though. For passive consumption (or thinking between moments of work), you can just lean back or even lay down, which will take the pressure off your head/face.


>The eye tracking driven input method which was seen as holy grail turns out to be annoying after a while because people don't naturally always look at what they want to click on.

This has always been the case and this technology has been around for a while. I'm surprised Apple would have chosen to use it for user input.


I was curious about this so I tried surfing the web and seeing how I click on things. I am literally unable to click on buttons or text inputs without actually looking at them. If I try to only use a corner of my eye, I miss the buttons 90% of the time. How do people not look at what they are clicking?


My guess is that people actually do look in order to aim their mouse but by the time they click, their eyes have already moved onto wherever they’re looking next.


yep this is it

One reviewer said they kept hitting "No" and "Cancel" when they meant "Yes", because their natural progression was to look at the buttons in order, pick out the one they want to press, but actually click after they had finished scanning all of them, with their "mouse" paused on the one they wanted to hit. It's kind of fascinating that the Vision Pro's input method works so well when it does work that it tricks people into doing this, because they really do just expect it to work like magic.


Yep, focused on the play button on a YouTube video to get my mouse there and then my eyes immediately went back to the center of the video a fraction before I clicked the mouse button.


Maybe they could "remember" where your eyes were looking a few milliseconds ago and click on that, but then they wouldn't know the difference between that and you actually meaning to click in the center of the content area.

The problem is that visionOS has no cursor. Cursor position is critical. Your eyes move around a lot more than your cursor moves, but in visionOS, cursor position and eye position are the same. Very annoying. Maybe they could put a little cursor in the center of the screen that you could move around with another gesture.


That's if you want to stick with the old mouse paradigm. I rather think people are going to get used to looking at what they want to click on for slightly longer very quickly.


From when I've used them in the past, the issue is the difference between looking in the general direction of something and using your eye like a joystick that jumps around if you happen to glance a little to the left or right.


Just now I moved the mouse to hover over "Reply" while I was still reading the comment. It was close enough to where my eyes were focusing that I could put the pointer over it, but I never looked at it directly. I've noticed this kind of cueing behavior in screen recordings many, many times over the years. I'm sure most people do it without thinking.


Wow, I remember when it was announced, the press materials gave the impression that it would be closely integrable with macOS. I was picturing using VS Code from within it. Which I guess I could do if I set up VNC... but I'm not paying thousands for that.

Is an AR productivity tool really so hard? Apple owns the whole stack here. Nintendo can do Mario Kart AR in your living room with an RC car, but I can't get unlimited AR desktops for software development etc?


It does AirPlay from a Mac to the headset. You look at the Mac and then tap the "connect" button that appears in mid-air above it. Apparently it's very seamless and has good latency.

What you don't get is Mac windows intermingled with AVP windows, it's just your Mac screen as one window that you can move about. It sounds good though, and there has been a fair amount of movement in screen sharing over the last few macOS releases which suggests more could be coming here (like multi-screen support).


Yeah, that's not great. I guess that's slightly useful, but what I'm looking for is more like virtual desktops, with the ability to place each desktop into physical space. Being able to do that with individual windows would be even better, but I'd be fine with the virtual desktops. If Vision Pro can do AirPlay, then I don't know why Apple decided not to do an integration with virtual desktops from macOS. Maybe they will, but not having that on day one is puzzling to me. Without that, it seems we've got yet another AR/VR device that is a glorified tech demo. I really hope that it gets better (and less expensive) than what we are currently seeing.


I've been keeping an eye on the Simula One for some time; it does what I think you're looking for[1]. I have to assume that outside of the Apple ecosystem this type of thing is already possible with existing VR headsets, but the closest I've seen is Virtual Desktop[2].

I guess for the Apple ecosystem we've gotta wait for some software updates to make it more useful. Not that I can justify paying my own money for the Vision Pro or Simula One.

[1] https://youtu.be/a3uN7d51Cco?t=16 [2] https://www.vrdesktop.net/


I'm sure this will be possible in future Vision Pro devices; the main issue right now is probably performance. MacBook Pros can already only push 2-3 monitors (albeit at higher resolution), and the Vision Pro is also running its own apps simultaneously alongside the Mac window (curious whether it is the Mac or the Vision Pro doing compression and other processing work for streaming to the headset). Apple, being obsessive about frame rates and performance, is likely erring on the side of caution before allowing too much. For example, the recent downgrade from 1080p to 720p when it comes to streaming the Vision Pro screen to other devices.


Yeah I’d love to use one for PCB design at a cafe, but I don’t even own a Mac. I’m not going to drop $5k for an experience I can already do with my old laptop just a little bit fancier.


I'm surprised we still don't have an X-like "render on the drawing device" model. Rather than sending pixels, you send a higher-level abstraction from which the UI can be drawn.


Absolutely. This is how RDP works as well. It's one of the few areas where I think the Windows ecosystem has a better approach.


I suspect that as hardware encoding gets better and wireless bandwidth gets greater, there's less and less merit in this.


You can theoretically serialize draw calls, but that's significantly more complicated than sending over a video stream.


Then you lose the power of a second computing device if all view rendering happens on the AVP.


We still had this (--NSDisplay ?) in the first Developer Previews of MacOSX.


> but I can't get unlimited AR desktops for software development etc?

You can't because it's computationally impossible. There is simply no computing device that can render unlimited high-res desktops at 60Hz per eye.

Not to mention the need to stream all that data to the headset - since you're not going to put a high-end graphics card needed to even attempt this in anything approaching a wearable form factor. Good luck getting multiple 4k or even just HD streams between your laptop and your AR headset over Wi-Fi.


To clarify, I should have said virtual "desktops" instead of monitors. I don't even need all desktops to be active at once, just available in physical space for me to access in a way that is more intuitive and customizable than pressing the F2 button on my MacBook keyboard and then hovering my cursor over the desktops bar. That would totally suffice. Also, somehow that bar can show what's on each virtual desktop, and in real time, so I know that's at least possible.


> Also, somehow that bar can show what's on each virtual desktop, and in real time, so I know that's at least possible.

That bar shows a small lower-res preview of each of the desktops; it is not rendering all the desktops at full 4K res. Essentially, the smaller a window appears on the screen, the less effort is needed to draw it.


AVP knows exactly which monitor you are looking at. The mac doesn't need to render all monitors at full resolution, just the one being looked at.


True, but apart from video/games/possibly a browser they absolutely don’t need to refresh at 60hz. I’m surprised they couldn’t somehow work in a “one nice screen, the rest refresh when needed” solution - though maybe that’s just too clunky for Apple.


The rest of the VR industry already has systems that only really put the rendering hardware to use where you're looking; it's called foveated rendering, because essentially your eyes can't even see the detail at the edges of your vision even if it were rendered there. It handles this all very fast and seamlessly.

It's not a stretch that the same thing could be applied to a virtual desktop environment, especially when you have eye tracking.
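
As a sketch of what that could look like for streamed windows, in Python (made-up numbers and names, purely illustrative):

    # Pick a per-window stream resolution scale from the angular distance
    # between the gaze direction and each window's center.
    def stream_scale(gaze_deg, window_deg, fovea_deg=10, falloff_deg=40):
        d = abs(gaze_deg - window_deg)
        if d <= fovea_deg:
            return 1.0          # full resolution where the user looks
        if d >= falloff_deg:
            return 0.25         # floor, so peripheral windows stay legible
        # linear ramp between fovea and periphery
        return 1.0 - 0.75 * (d - fovea_deg) / (falloff_deg - fovea_deg)

    for w in (0, 15, 30, 60):   # window centers, degrees from gaze
        print(w, round(stream_scale(0, w), 2))   # 1.0, 0.88, 0.5, 0.25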


I can’t get over how goofy and stupid it looks, and how awkward those gestures are.

There’s a video circulating of someone cooking while wearing it and gingerly pinching a virtual timer and placing it on a pot of boiling pasta.

It looks so stupid that I couldn’t help but laugh out loud.

Maybe younger people might think differently, but for me this stuff is dead on arrival simply because of how uncool and stupid it makes you look when you use it.


I'm not really an Apple fan nor am I going to buy this thing, but this seems like a criticism that goes away as soon as everyone does it.

We all look stupid staring at our black rectangles, with notches at the top, with little headphone stems sticking out of our ears. It looks stupid at first and then you get over it


People said this about voice assistants as well, we would get used to talking to them in public after a little while. I've still not seen anyone speaking to Siri or Google in public. The only thing most people use them for is setting timers when cooking ...


> We all look stupid staring at our black rectangles

Do we really though? Reading a newspaper or book would be stupid as well


Yeah, new things will look weird until culturally accepted. Two things seem to differentiate AR/VR stuff from previous tech:

1. They are more "worn" than "used"

2. They conceal your face while wearing

So maybe cultural acceptance might take a while...

EDIT: typos


The black rectangle form factor is the same as a notepad. Nothing about it screams "I'm completely out of touch with my reality"


Yeah, saw the WSJ review with that video, but in all fairness, I think only someone who doesn't cook would think that cooking with this thing on your head might be a good idea.


Maybe, but if you had an app that allows you to define your kitchen + appliances, you could easily make a game/app that tells you what to do. Take stuff out of the fridge, cut it up... ping, your virtual pot needs stirring; go back to cutting; ping, turn off the oven... and so on.

Sort of virtual assistant.

That could be useful to people to avoid burning or forgetting stuff.


> It looks so stupid that I couldn’t help but laugh out loud.

I'm so old I remember the N-Gage 1st gen being ridiculed for the "sidetalking" feature.

Now we have millionaires on TV talking to their phones like it's a piece of bread they are about to take a bite of, and nobody bats an eye.


I think that's the author of the WSJ article.


We just got the 'larger iPod' version of Spatial Computing. If your primary interface is a screen, it's still screen computing, not spatial computing. They literally had everything teed up to do some wild proximity things (AirTags, HomePods, etc.) - and they gave us a strap-on iPad.

Whatever, it at least gives a startup an opportunity to build something unique - it's just sad to see your old friend start going senile.


“ok zoomer” - your old “friend” next month as you pay for your subscriptions through their payment processing.


Reminds me of iPhone 1.

Everything you've said is reminiscent of the reviews of the first iPhone.


This iPhone trope has gotta die. I worked at Motorola when the iPhone came out. Every single engineer knew this thing would blow everything else out of the water. It was one of the largest leaps in consumer tech devices ever. I assure you the Vision Pro is nowhere close to that.


The better analogy is probably something like the Apple Watch.

Apple certainly wasn't the first smartwatch, but anyone who owned one before that was obviously a geek (said lovingly). Apple made the first mainstream acceptable smartwatch by smoothing over a lot of the complaints about their competitors, while adding some of their own in the process, just like the Vision Pro. It took a few iterations, but today people from all walks of life wear smartwatches. Certainly not as ubiquitous as smartphones, but Apple made smartwatches a standard piece of tech that millions of people own and they made plenty of money along the way.

The Vision Pro will probably be similar. For example, anyone wearing a VR/AR headset on a plane today would likely get stares. I bet a few years from now there will be several people on every plane wearing one of these. That doesn't mean Apple will make the best VR/AR headset or that VR/AR headsets will be a piece of tech that everyone owns, but Apple is capable of mainstreaming a piece of technology in ways that the Facebooks and Googles of the world aren't even if that is due to their marketing prowess and the strength of their brand just as much as their technical expertise. And in that sense, the thing that is more important than any of these reviews dropping today is the Super Bowl commercial Apple has almost assuredly bought to show this thing off in two weeks.


> Apple made the first mainstream acceptable smartwatch by smoothing over a lot of the complaints about their competitors,

Really? Why do Apple fanboys make these kinds of claims... same as wireless Bluetooth pods, or fingerprint readers, or Face ID. There are ample examples of these done well on the hardware side prior. The main advantage Apple has is its seamless integration with software, which of course pairs well with iOS because nothing else is allowed to.


"Done well" was the Nokia motto. They did solid phones with a ton of features. Look what happened to them?

Their problem was that none of the features was _usable_. It was like they released the first MVP the engineering team got done and forgot that people needed to use it too. But it gave them a bonus and another line on spec sheet, so all was good.

For example, Nokia had Copy & Paste years before Apple. But it was shit. They _had_ it, but you could only copy very specific text bits to other very specific locations. Even Android had the same issue: you could copy some bits, not others.

Apple isn't innovating, they haven't for a long time. They rarely come up with something "new" that _nobody_ has done yet.

What they are pretty much the best at is getting the tech everyone else has tried and packaging it to a usable form factor for the normal non-Hackernews consumer.

Wireless BT headphones existed before the Airpods, but they made it so seamless even my mom could do it and hasn't needed any help with them. Open box, insert in ear, done.


You mean how a mole from Microsoft got in and used the feud between the old-school Symbian team and the promising Maemo/MeeGo project to burn the whole mobile division down via a nonsensical switch to Windows Phone?


> For example Nokia had Copy & Paste years before Apple. But it was shit

To be fair iOS copy and paste is still shit today, selecting and copying/pasting is really one of the worst experiences on iOS.


You can use the space key to drag around the selection cursor.

My point was more about the fact that you can copy an image in most apps and paste it to pretty much any field anywhere. It'll just work. Same with other rich data.

You couldn't do that with any previous C&P implementations, there were hard limits on what you could copy and where it could be pasted.


You clearly missed my point entirely. I'm not a fanboy saying Apple's products are the best. I even specifically said their success is "due to their marketing prowess and the strength of their brand just as much as their technical expertise."

They weren't the first smartwatch, but Apple is the company most responsible for changing arbitrary societal metrics of "mainstream acceptance", like the percentage of people who would wear a smartwatch on a first date. That seems like an obvious observation and a "win", even if smartwatches aren't as ubiquitous as smartphones. I think the Vision Pro will follow a similar trajectory of success, in that it will take years before anyone uses the word "success", but a few years from now you'll get on a plane and notice more than a few people wearing headsets, and that will be because of Apple.


I agree with you, but not about the Vision Pro. I could see the potential and use cases for the iPhone, iPad, AirPods, Apple Watch, Apple TV. This headset though? It's a gimmick. I can see some niche use cases for it in very specific industries and in gaming. But I don't see that "normal" people would want to spend significant money on this. Not even if the price dropped to $999.


I would consider my use case (desire) of comfortably working from, say, a coffee shop without having to bring my 24" screen pretty "normal" and non-niche.


Maybe, but you would look like a dork. Most people don't want to look like dorks, so I doubt we'll see widespread use of this product in public.


People used to look like dorks walking down the street staring at a smartphone, or pulling an entire computer out of a bag at a coffee shop.

We'll adjust.


> The better analogy is probably something like the Apple Watch.

The first Apple Watch was a failure and was immediately fixed by the Watch 2.

>Apple made the first mainstream acceptable smartwatch by smoothing over a lot of the complaints about their competitors,

If anything, Apple set back smartwatch development. The real groundbreaker was Pebble, but thanks to Apple the smartwatch market is a stagnant, perpetual compromise that justifies low battery life with overpowered chips and bright screens, when all we really want is week-long batteries, e-ink always-on displays, and physical buttons that work well.

I say this as an owner of at least 2 Apple Watches over the years. Pebble never had a chance, but Android's watch software has always suffered from trying to play Apple's game instead of finding a true advantage.


$3,500, an external battery, and still "looking weird": I don't think you are going to see too many of them on planes, unless it's some die-hard Apple fanboy. If they manage to make the next iteration slimmer (like, half the size) and with a battery in it, this might start to happen. But the market will be smaller than the smartwatch one anyway.


I still think it's weird how many people's eyes are glued to their phone screen in everyday situations and social settings. I think it's realistic that these headsets become acceptable and normal.


In general, people have always sought a way to not have to look at each other in public. Before phones, it was newspapers and magazines serving as the social scourge of anti-social behavior.


Prices will come down and there'll be a non-Pro line. You think the market will be smaller, but you forget it replaces displays, so people with laptops will migrate to using this, and then Apple will come out with headless laptops.


Until the ergonomics radically improve, it will be a niche device for enthusiasts.

IF they manage to produce an AR device that 1) you almost don't notice you are wearing and 2) passes real light through, making it true AR rather than VR mimicking AR, THEN it can get mass adoption, at least for office workers or as a replacement for big TV screens.


Correlation does not imply causation. I think smartwatches (and step trackers) were a thing, independent of apple.

I remember a friend talking about load balancers when they first came on the market 20 years ago. Cisco had this thing called "LocalDirector" which I believe couldn't handle load in the first place, while competitors did load balancing in hardware.

I was puzzled why people bought them.

My friend said, "Look, people buy $1M of cisco equipment, and they can just add a line item for one or 10 of these with no friction"

So, I think Apple made their watch a "line item". People buy a phone, and they need cables and the watch is sitting there, and they say "ok!" and try one.

(aside, I love my garmin watch. I just put it on my wrist. I haven't hooked it to my phone or connected it to the internet. It is great with battery life. I track my sleep, which seems to be when most people put their apple watch on a charger. I put my watch on the charger during my shower, which is all it needs)


> I put my watch on the charger during my shower, which is all it needs

Same as the Apple Watch.


Every day, though. I charge my Garmin once a week, when it gets down to 50%. The Garmin is a fitness watch with a few basic smartwatch features, though; the Apple Watch is a smartwatch (with a lot of fitness features). The two aren't really comparable, I don't think.


> which is all it needs

...to top it up to 7 days of charge


Vision Pro is not earth shaking or category defining.

It's entering a crowded market that isn't even that big. As the premium option.

Climbing that hill is going to be a very tall order.

Apple's brand will not be a moat, either.

Zuck's initial response to Vision Pro [1] was the correct one:

> From what I’ve seen initially, I’d say the good news is that there’s no kind of magical solutions that they have to any of the constraints on laws of physics that our teams haven’t already explored and thought of. They went with a higher resolution display, and between that and all the technology they put in there to power it, it costs seven times more and now requires so much energy that now you need a battery and a wire attached to it to use it. They made that design trade-off and it might make sense for the cases that they’re going for.

> But look, I think that their announcement really showcases the difference in the values and the vision that our companies bring to this in a way that I think is really important. We innovate to make sure that our products are as accessible and affordable to everyone as possible, and that is a core part of what we do. And we have sold tens of millions of Quests.

[1] https://www.roadtovr.com/apple-vision-pro-zuckerberg-reactio...


In retrospect, that was a very negative post, and I wanted to add that I'd like to see {V,A,X,*}R succeed as a sector. In fact, I'd like to see all of the players do well, including Apple. I really want to see a transportive vision of the future pan out.

I don't think this will be an easy market for anyone. It's a low-attachment space with few critical apps.

I want to believe, though.


> This iPhone trope has gotta die

It’s not a trope if it’s true. Most hard-core nerds didn’t get it until they had tried it hands-on (including myself; I panned the device hard until I tried it). Then it was the exorbitant price point ($650 at a time when nobody really paid for phones). Then it was the lack of a hardware keyboard. No “real” apps. No copy and paste (even the older Symbian S60 devices I had at the time had that). The list goes on and on.

I get it, if you were at another phone manufacturer, you might’ve been scared, but the reality is the iPhone didn’t really pick up steam in the market until 3G or 3GS.


People thought Apple making a phone was odd. No one thought mobile phones, or even smart phones were odd. Mobile phones were common when the iPhone came out, and even smart phones weren't uncommon. And once the iPhone did come out, there was immediate interest.

AR though, that's something the public hasn't shown much interest in yet. These products are looking more like the Segway (which was once supposedly going to revolutionize transportation) - cool, popular in a few niche markets, but not the revolution that people imagine them to be.


Nobody thought Apple making a phone was odd. Even prior to the announcement of the original iPhone and prior to any rumors about actually making a phone, the internet was full of amateur 3D mockups of what an "iPhone" could be like, including looking like an iPod with a click-wheel. They had conquered the personal media player market, and now people wanted a phone with the design of an iPod to carry less stuff in their pockets.


Everyone forgets that the nerds had Windows CE phones before Apple hit the market. It was the 3GS and third-party-developed arcade games that converted me. Apple's challenge now is how to convince people who hate them to go to work for them.


I was one of those nerds! I equally loved it and loathed it. Some very cool software available, but as a cellphone-like experience, it was neither one thing nor the other. Most of the time I had a feature-phone as a daily-driver alongside the clunky early Windows stuff. Wasn't a bad compromise as I didn't want or need the power of Pocket PC/Windows Mobile 24/7, but it was however a fairly expensive one!

People also forget that using a stylus with a touchscreen made for a pretty crappy phone, and a lot of the larger PDA-like ones weren't ergonomic to hold up to your face. The Windows devices with sliding keyboards were a pretty decent compromise in terms of size and features, though. But boy were they expensive in the UK at the time, as they were generally imports from the USA.


I loved my HTC with the slide-out keyboard! Sure, it would reboot occasionally when you slid the keyboard out, and it took 45 minutes to send an email on EDGE, but that was Back to the Future stuff back then.


Yup. The App Store didn't come until the iPhone 3G. iPhone 1 had no non-native apps (the App Store was later backported).


Windows CE or Nokia N95 or Nokia 9500.


To be clear, not disagreeing with your core point, but as a reference comparison, the Meta Quest 2 sold more units in its first 2-2.5 years than Apple sold iPhones (and iPhone 3G) in the equivalent time period.


Yep. People forget that the original iPhone, which could only run built-in apps, really was a failure; it only exploded when they opened up the App Store (perhaps aided by this Cartmanland marketing strategy).


Nobody thought Apple making a phone was odd. Phone + MP3 players were already in everybody's pockets. Telcos were making money selling ringtones for $2.99 a pop. Apple already had a digital music store. It all lined up. And that was before they tested the waters with the Moto ROKR in 2005.

Back in 2010 there were rumours of Apple building TVs and cars. Those would be weird because they're in areas where Apple had no experience and no software content to provide.


These AR/VR devices are definitely doomed unless they're somehow given away basically for free. IMHO, success for any new technology usually comes down to:

1. Does it make you look more attractive? No.

2. Does it make you money? No, except for the YouTubers who will surely be pandering about how meme-worthy they are. How many streamers use VR? Oh yeah...

3. Does it make me more valuable to others? No. I can't see a mass-adoption area where having a computer strapped to your face enhances your productivity. This may be the angle most open to attack by some great use cases, but I don't see them today (once again, mass market in a way that the Apples of the world would care about).


I remember being angry that my Moto Razr V3 had the power to run awesome software but as a teenager I didn't have an easy way to boot stuff up on it. IMHO it was a perfect phone, except there was no way to run what I wanted. The best I could do was program text messaging services and use those. They discontinued the phone rather than give consumers the freedom to just use the hardware. I thought the iPhone was dumb, but when the App Store came out that was game over. I really missed the convenience of having buttons and being able to text with my phone in my pocket, but at least it was consumer programmable... even if you had to pay a $100 premium to become a "Developer" in order to do so.

Eventually the Moto X came along. I thought it was the perfect phone. Its voice assist features worked better than most voice assist features even today. You could easily do everything you wanted to do with the phone in your pocket and your earbuds in.

It had the perfect size screen. It had a great ambient-on watch-face screen that looked nice sitting on your desk among clutter. The dimple in the back was a really nice touch, it made it like a worry-stone[0] in your pocket. It had a lower resolution, but I kinda liked that about it. I think Motorola was bought or something, but whatever the reason, the next phone in the series ditched every single thing that made the Moto X special.

Those two devices were both my favorite mobile computing devices, and probably the closest I've come to getting fan-angry about a company screwing up their own magic.

More to the point: you might have been able to see it at Motorola, but even looking back, I don't understand why Razr couldn't have won against the iPhone. The Razr was a surprisingly capable little machine!

[0] - https://en.m.wikipedia.org/wiki/Worry_stone


You’re gonna love this — Motorola was bought by Google in an attempt to jumpstart their hardware business but it turned out to be a failed acquisition. They ended up grabbing the patent portfolio and selling off the rest of the biz fairly quickly.


> You’re gonna love this — Motorola was bought by Google in an attempt to jumpstart their hardware business but it turned out to be a failed acquisition.

I wouldn't call Motorola a failed acquisition - Google bought Motorola as a shield in an increasingly litigious environment: this was the age of Apple going "thermonuclear", when Microsoft and patent trolls were wantonly shaking down Android vendors and beginning to circle Google itself.

Motorola (under Google) had the best value-for-money smartphones - their midrange was solid, and reasonably priced while everyone else was continuously shifting to flagships, with each release priced higher than the last.

From the outside looking in, Google appeared to dispose of Motorola to make Samsung happy - Samsung had been complaining loudly and widely about the Motorola acquisition, and openly flirted with other mobile platforms as a hedge.

Motorola shareholders got paid, Google got the patents it wanted, Samsung remained the 600 lb gorilla in Androidland, Lenovo got a good brand and keeps making ok phones to this day. So, not a failed acquisition by any reasonable measure


Soon after the acquisition, Google, really Google X, started working with Motorola on a new watch, prototyping it on existing hardware (Motoactv?). Within a few weeks, the whole thing got cancelled. Not sure if that was because of the Samsungs of this world complaining or because of government agencies.

Source: I was supposed to run the dogfood program in the NYC office. I never got the watches, but somewhere I still have the USB extension cords and the (then quite fancy) chargers with dual USB ports, one for your phone and one for the watch.


Yeah, I generally agree with that framing. That’s a good detail about Samsung that I was not aware of. But I will say that, from an ex-insider perspective, what you describe is mostly a failure.

Google was searching desperately for revenue diversity and was acquiring pretty hard at the time. I think the intent with Moto was to acquire a hardware shop and establish a market leading brand to compete directly with Apple. They eventually arrived at Pixel by building it entirely in-house. That Moto got raided for IP and spun out to Lenovo did not meet those (high) expectations. There was no need for anyone to take a loss but I think Google wanted to make a whole lot of money and did not.


I'll defer to you as an insider on the hardware efforts, but the timing of the Motorola acquisition in the immediate aftermath of Google's failed bid[1] on Nortel's patent portfolio made it seem like a patent play more than a hardware acquisition. I recall Motorola's then-CEO even threatened to sue Google over Android patent infringement just before the acquisition, so it most certainly wasn't just about building a hardware business.

1. It was a crazy time. The winning consortium - which included Apple and Microsoft - invited Google to join their $4.5B bid, which would have made the patents entirely useless as a defense for Google against Apple or Microsoft. Google wisely declined, but bought Motorola less than a year later for $12B. https://www.theguardian.com/technology/2011/jul/02/google-pi...


> Motorola (under Google) had the best value-for-money smartphones - their midrange was solid, and reasonably priced while everyone else was continuously shifting to flagships, with each release priced higher than the last.

And they still do under Lenovo. Multiple-day battery, almost stock Android with very useful enhancements.


Loads of phones could side-load J2ME apps. I never owned a RAZR V3, but I imagine it would have supported J2ME apps, as I ran them on even crappier phones. The first things I'd install on my dumbphones back in 2005 were Google Maps and Opera Mini. I'd grab all kinds of J2ME games off Zedge and other sites and copy them over back in the day.


IIRC, the Moto X was the first phone made by the Google-owned Motorola. The devices were made and assembled in the US.

I was hopeful for the post-acquisition Motorola but it didn't quite pan out. The IP salvaged from the purchase was always a big part of the deal so it wasn't really a loss, just not really much of a win either.


Moto X (2013) was the device I intended to describe. You are right on both counts. I think I conflated the second gen closure of the US-based plant with the acquisition in my memory. And I'm now remembering that the real reason I was so disappointed is that the Moto X (2013) was a phone that you could operate the entirety of the screen one handed. With the Moto X (2014) you could not.

I have an iPhone mini now, but it's still not small enough to 100% operate one-handed, at least not without grip adjustments. The Moto X wasn't quite a flagship phone, but it came close enough. I can't even find a properly one-handed phone anymore


Moto X was peak phone.


How can you assure us that Vision Pro is nowhere close to that? Do you have one?

As someone who bought the original iPhone: it was extremely impressive, but had many, many flaws. The browser was practically unusable over 2G, and the whole pinch-to-zoom-the-New-York-Times-desktop-site thing was never actually practical.

I think the parallels are clear.

I also bought the original MacBook Air. Now that truly was terrible and stupidly overpriced. More expensive than the Vision Pro when adjusted for inflation, and with major functional problems. Today it’s the world’s most popular laptop.


The iPhone was made from pretty much what was available to other manufacturers plus some secret software sauce, and was priced like a Blackberry of the time. There were immediately plenty of use cases that competitors kinda covered, but not so well.

These goggles are as bespoke as it gets, are priced at 7x the competition and more than a flagship laptop, steer clear of the most popular existing use case, which is games, and offer... what exactly again?

That's quite some difference.


The iPhone used a capacitive touchscreen whereas the competitors used resistive touchscreens in their smartphones, which instantly made the iPhone more usable than the Nokia N95/N97, which was already a fully featured pocket computer in a phone case and likely much more powerful than the original iPhone. Apple also pulled a dirty logistics trick on the other smartphone manufacturers by buying up the production of capacitive touchscreen factories and similar key components 3 years ahead, leaving them unable to respond.



> Iphone was made from pretty much what was available to other manufacturers plus some secret software sauce, and was priced like a Blackberry of the time.

Neither of these statements is remotely true.


Samsung CPU, Samsung OLED display, a Balda touchscreen analogous to what LG had used previously, Marvell Wi-Fi, Skyworks cellular, various Intel and Infineon aux chips. $499 vs BlackBerry's 8320 at $449.

Not even remotely.


The iPhone was also subsidized by Cingular/AT&T at launch. The $449 was the retail cost of the BlackBerry.


Samsung OLED? iPhone X was first to feature OLED display. Surely you meant LCD?


Except when the iPhone came out, all the reviewers were like "holy shit, this is mind-blowing", while with this one everyone is like "it's a shittier Oculus Quest with some Apple polish".


No one is saying shittier Oculus Quest. It has a much higher resolution, and I presume people will get used to letting their eyes linger a bit longer on what they want to click. We’re so used to a mouse paradigm that we try to immediately apply it here.
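
(For what it's worth, the classic way gaze-driven UIs handle the "linger" idea is dwell selection: fire once the eyes have rested on the same target past a threshold. A toy Swift sketch of that concept; note that visionOS actually pairs gaze with a pinch gesture rather than dwell, and `DwellSelector` is my own made-up name:)

    import Foundation

    // Toy dwell-based gaze selection: trigger a "click" once the gaze has
    // stayed on one target for long enough.
    final class DwellSelector {
        private var currentTarget: String?
        private var dwellStart: Date?
        let threshold: TimeInterval = 0.6   // seconds of steady gaze

        // Call every frame with whatever the eye tracker reports under the gaze.
        func update(target: String?, onSelect: (String) -> Void) {
            guard let target = target else {
                currentTarget = nil; dwellStart = nil; return
            }
            if target != currentTarget {
                currentTarget = target          // gaze moved: restart the timer
                dwellStart = Date()
            } else if let start = dwellStart,
                      Date().timeIntervalSince(start) >= threshold {
                onSelect(target)
                dwellStart = nil                // won't fire again until gaze leaves
            }
        }
    }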


Almost every outlet is calling it a better Oculus Quest but one that is fantastically expensive.


I mean it's been over ten years since the new generation of VR headsets (I'm thinking of the Oculus Rift) came out; if at this point in modern VR development it wouldn't be better than existing offerings, I'd be deeply disappointed in Apple's R&D.

Anyway, I bring that up because when the iphone came out, it really did do something different than the locked-in feature phones of the time; I did have to look it up to refresh my memory (https://www.cnet.com/pictures/original-apple-iphone-competit...), but its competition in that year was a lot of Blackberry-style physical keyboard and resistive touch screens running Windows Mobile. I do want to highlight the LG Prada, the first capacitive touch screen smartphone - came out in the same year the iPhone was announced, and it along with the HTC Touch on that page had a similar screen focused form factor.


I think it's fair to say that having sharp text without screen door alone is doing something different than the existing headsets, and is very important for the more serious uses Apple is imagining.


Definitely looks better for productivity, but for gaming and social?

Nothing on Vision Pro is comparable to things like Beatsaber, VR Chat, Pavlov.

Hard to tell if the hand tracking could handle those sort of experiences currently.


Hard to tell if Apple cares about Beatsaber.


The Verge is probably the most critical. Here’s what they say:

“marvelous display, great hand and eye tracking, and works seamlessly in the ecosystem, … The Apple Vision Pro is the best consumer headset anyone's ever made”

Yes, they also list a bunch of flaws. But the people trying to make out that the reviews are saying it's a shittier Oculus Quest are not being honest.


I do think the iPhone is often made more mythical than it was. Sure, it was good, and we could finally use our sausage fingers to navigate it. But I recently heard someone say on the radio that the iPhone was the first phone with a touch screen. Meanwhile, me and the guys (yeah, all guys) were rocking Sony Ericsson P800s and the like in 2003, '04, '05.

I had an HTC Touch when the iPhone came out, and I was most envious that you could do two-finger zooming in Google Maps instead of tapping zoom buttons.


Many Apple-related things are very often about applying existing technologies in a novel place. Many forget that the touch screens at the time were resistive [1], which required pressing (lightly, but still pressing) on the screen and were mostly usable with styluses. As far as I recall, the novelty was applying a capacitive touch screen [2] to navigation: it does not require physical force, and that's where fingers shined.

[1] https://en.wikipedia.org/wiki/Resistive_touchscreen

[2] https://en.wikipedia.org/wiki/Touchscreen#Capacitive_touchsc...


Sure. But one can argue that capacitive touchscreens were about to happen anyway (like ARM laptops...). And HTC, with their TouchFLO, were moving into sausage-friendly UIs.

But I agree, Apple is absolutely good at taking all this tech at the right time and making a compelling product. Steve insisted on glass/capacitive, and that was just the best choice. Also, the UI didn’t feel like a layer (that you could easily get out of) as with TouchFLO. I’m also an iPhone user atm. I switched from Android about 3 years ago. The whole experience still feels higher quality to me now. Although I have friends that would argue against that.


Peak Blackberry was 2011. It's easy to forget that the iPhone takeover took years and many releases.


I kind of find that hard to believe but maybe there were government/enterprise purchases boosting it or something? As far as I'm concerned Blackberry died with the Blackberry Storm 2 in 2009, which was a huge piece of junk.

I used to go back and forth to Shenzhen all the time trading refurbs and by 2010 I straight up would not buy anything other than Android devices because 1) iPhone was actually more expensive in mainland China due to it being a grey market item back then and 2) demand for everything else fell off a cliff.


I worked at Nokia and we certainly did not have that opinion, especially since we already had a couple of touch-based Symbian devices.

If we weren't busy with Symbian vs Linux internal feuds, and the Microsoft deal, things would have turned out much differently.


I love how people just rewrite history on the internet lol.

iPhone 1 was a colossal PoC. Slow, and most of the web didn't work. Its only appeal was the full touchscreen, which of course sucked to type on but looked cool (which is mostly the reason people bought it). Everyone that needed mobile compute functionality was still on Blackberry and some other devices.

There was a time during the early 2010s when the iPhone was better than everything else due to native hardware, in-house software, and updated functionality. However, by 2016 Android caught up, and since the first Pixel came out it has pretty much been ahead ever since.


Not sure what you are going on about, claiming that the original iPhone was a PoC.

Given what was available at that time (I was using a Windows Mobile O2 XDA AND a Blackberry), the iPhone was simply magical. The ability to browse the full web on the go, plus a proper mail client, was amazing.

It was worth the money to travel from Singapore to San Francisco just to get one (and the cost of the AT&T SIM masker to spoof it onto the local Singapore telco network).


Again, no. You've somehow completely forgotten tech in the late 2000s lol.

The internet on anything mobile was pretty painful when the iPhone launched. Websites weren't optimized for mobile, and mobile data was unusably slow. Most people who wanted portability were using things like netbooks, which you could actually multitask on.

Blackberry was the go-to for an actual phone because it was much easier to type on thanks to the best keyboard at the time, plus well-developed software for things like email, a basic browser, etc.

I'll leave you with this staple piece of internet history: https://maddox.xmission.com/c.cgi?u=iphone/


I am so glad his website is still up and running.


I wouldn't bother responding to the grandparent. Literally every time Apple releases a new product, there's a bunch of people collectively shrugging off whatever the product claims to be bringing, and along come the "the iPhone v1 was crap too and look how that turned out" apologists. Not worth the discourse.


I think you have rose-tinted glasses. I had one too, and the browser was garbage over 2G. You forget how much time was spent looking at that checkerboard pattern.

As for email: proper email client? It was POP3-only, and you had to manually tap to fetch new messages.


You're right about the email client. I had IMAP email clients on mobile for a while before the iPhone supported it. Email on the OG iPhone was terrible.


> Its only appeal was the full touchscreen, which of course sucked to type on, but looked cool

and motion controls. Between the Nintendo Wii and the first iPhone people were obsessed with motion controls for some reason.


> Its only appeal was the full touchscreen, which of course sucked to type on

I could've sworn that "it's easy to type on" was the one weird trick it did right? Though perhaps I'm just misremembering the media; my first iOS device of any kind was the iPod touch with retina display.

Something about Apple having a temporary monopoly (or possibly monopsony) on capacitive touch screens, where everyone else was stuck with resistive ones?


I don't think Apple had a monopoly on capacitive screens. There were a couple other mobile devices that used them and came out around the same time. Maybe they tied up all/most the available production capacity for a bit?

Typing on the original iphone wasn't perfect, but it was generally better than tiny physical keyboards in many cases


Physical keyboards were better than any touch screen one until the swipe typing became standard. You could type on them faster, and had more features like arrow keys, which were useful for smaller screens.

There was a whole era of autocorrect and the memes that came with it due to how much it was being used with touchscreen keyboards.

Of course the advantage of a full screen for things like web and media was more important, and making fullscreen phones was cheaper, so physical keyboards died out. The size of the screen increased as well.


I think if you talked to someone working on Meta Quest they would say this is going to blow the competitors away. If you talk to a reviewer for a tech magazine they’re going to complain about any detail they can find in the 1.0 launch of a new product line as if it’s a colossal failure (aka PoC).


I think the difference is everyone knew market penetration on cell phones would be close to 90%. This may be better than the Quest but is it going to take AR/VR mainstream? Seems iffy. In which case drawbacks may never get ironed out.


Quest and Vive weren't great, but they actually had working VR that set the stage for subsequent development.

iPhone 1 set the stage for tech jewelry.


That's the history I remember tho.

I remember the first time I saw an iPhone. It was 2am at a house party with a bunch of 19-25 year olds. Pretty much everyone stopped drinking or dancing and played with this dude's phone for three hours.


I had the first Oculus when it came out. It was a big hit at my workplace. Anything flashy is going to get attention. The problem with Apple is that they have prioritized, do prioritize, and will continue to prioritize flashiness over usability. It's actually pathetic that you can't install Linux on Apple silicon (and no, the reverse-engineered, hacked-together Asahi Linux does not count).


This was my first experience with an iPhone too. A rich guy (friend of a friend) had one, my friend and I spent the rest of the night trying it.


The internet is the new storytelling at the campfire, turning humble actions into historical myths for eternity.


Can you elaborate on this?

The first iPhone, yeah, it had some detractors, but I don't think the kinds of criticisms the parent poster gave ever really applied to the iPhone. To succeed, the iPhone didn't have to be this utopian product; it just had to be more useful than its main competitor, which was dumbphones. People who complained that it was missing features the Blackberry had were working from an unstated major premise that the iPhone was initially targeted at enterprise users, and I think that everyone who wasn't too busy being a pundit to see how the world works could see that that quite transparently wasn't the case. There was even a time period where I had both an iPhone for personal use and a Blackberry for work.

And I think the criticism about entertainment is spot-on. By contrast, despite being extraordinarily limited compared to even the very next model, the first iPhone was fantastic for entertainment, precisely because it was good for fostering shared experiences. It didn't take long after the device came out before you'd see groups of people clustered around an iPhone, looking at photos together on that big, vibrant, gorgeous screen. That was something that none of its competitors could do. And you better bet that people saw that happening and started wanting to have one of their own so they could have fun, too.

I do think we're still in the "wait and see" phase for this product, but, unlike some of the original iPhone criticisms or cmdrtaco's original dismissal of the iPod, the criticisms this article points out feel really personally relevant to me.


I checked a couple of those old reviews, and a few takes that seemed common with the iPhone definitely aren't holding now:

- incredible amounts of hype

- loving the design

- loving the touchscreen and input (directly contrasting folks worrying about the eye tracking now)

- a sense (at least from CNET and PCMag) that it's really just an overgrown iPod, so they keep comparing it to an iPod (compared to the Vision, where folks get that it's a new category for Apple and have good comps outside Apple anyway)

There are definitely similarities in terms of complaining about missing features that Apple's probably going to add soon anyway (keyboard showing up in portrait and stuff). Lots of complaints about not supporting Flash, but we know how that went. Also, apparently the headphone jack position was annoying.

What I'm not seeing in the current Vision reviews - and maybe it's impossible to see this in real time - is some feature that has the chance to change literally everything, that people aren't able to comprehend just yet. Those reviews being relatively dismissive of this web-browsing-on-your-phone thing is absolutely hilarious in hindsight. The only similar thing on the Vision is - the passthrough eye thing, maybe? Nothing else seems particularly baffling.

I'm glad I read some of those reviews. The vibe I'm getting is: the iPhone was doing something fundamentally weird with this whole smartphone thing that reviewers just didn't get, so they kept reviewing it as an iPod with really bad voice calling and a browser, and being confused by all the hype. The Vision though? It's a VR/mixed-reality headset, we know what those are like, and Apple didn't throw any real curveballs.


An addendum - I'm surprised at how spot-on so many of these iPhone 1 reviews are, outside of not getting the browser thing (and even there, folks complaining about how Web 2.0 is taking off and there's no JS/Java/Flash support? Good point!). Keyboard should work in portrait mode, voice calling is bad, 3G support is needed, no multimedia messaging. No replaceable battery and not user-serviceable enough. Camera is OK but needs dramatic improvement. Typing URLs is very hard. And of course, no games, no 3rd-party apps, no App Store. After hearing all these memes for however long about how the media just didn't get the iPhone or something - yeah, no, they did pretty well.


Before 3G, the web browser thing wasn't really that useful. If you have to be on wifi to get reasonable speeds, it's easier to just use your laptop vs a tiny screen with a slow processor.


Yeah (though apparently the wifi support was good, idk), and honestly, from the reviewers' perspective, I think they'd be doing their jobs wrong if they were 'right' about the iPhone. What were they supposed to say/forecast? "Oh, and while the browser is pretty good, it's obviously incomplete, doesn't support Flash, can make websites hard to read, etc. However, this won't be a problem if the iPhone ends up catapulting Apple to becoming one of the most powerful companies in the world, at which point they'll kill Flash. Also, while all websites are designed for viewing on desktop screens, that's OK, because in the future the iPhone is going to be so damn impactful civilizationally that basically the entire human species will start accessing the internet through small mobile computers in their pockets (and start accessing it a lot!), which means that society itself will reshape to prioritize making sure websites work on this phone."

Like, seriously, idk what people want from those reviewers. If they were 'accurately' predicting an outlier product like the iPhone, I doubt their wild fantasies would be accurately predicting much else.


The first gen iPhone had a much better UX than other phones, so missing a few features like copy and paste was palatable.

I do think future generations of AVP will do well. Iterating and applying learnings and customer feedback will make this a good product.


You could say that about literally any product. The most important factor that will determine success is the starting point. Initial conditions matter. I actually believe this is a developer tool from start to finish. It will be end-of-lifed a lot quicker than we expect. The real product is going to be lightweight glasses we wear all day. But when? How many iterations of the AVP will it take to meet the developer needs for seeding this future product that might be 10+ years away?


> The most important factor that will determine success is the starting point. Initial conditions matter.

So, I don't know if I agree with this considering Apple has such a large cash cushion. They can easily make missteps the first few generations until they figure it out.

Early Apple Watches focused on luxury and personal communication with loved ones. But later iterations de-emphasized all that and pivoted to being a health and fitness band and some subset of iPhone features like cellular phone calls and texts (instead of the weird heartbeat sharing stuff we saw at launch of the 1st gen device)


The iPhone also came with a great data deal for the time, as I recall.


Not really. I had a cheaper unlimited AT&T data plan than the original iPhone data plan, and that included 3G data!

The trick was to just buy the smartphone outside of the plan. Then you get unlimited data at half the price. I'd pull gigs of data a month on my "dumbphone".


The iPhone absolutely did not have a better UX. It looked nice, but it had no more functionality than other devices at the time.

In general, UX design is the argument people used to (and sometimes still do) run to in order to "prove" that the device was better when it was clearly not. Fancy icons don't make a good UX; functionality does. You don't say copy and paste is good UX, you say it's a feature.


Functionality does not make good UX. Good UX makes good UX.

Not sure how people don't remember what a revelation the capacitive screen was. It was miles better than most Nokia phones, which mostly used resistive screens (not saying that Apple invented capacitive screens, but they most certainly made them popular), and the navigation, with the simple home button and everything else giving instant feedback via on-screen buttons, was better than anything else on the market from what I remember. The keyboard especially was incredible, with that light tapping sound and instant keystrokes appearing. While it wasn't functionally better than most phones (famously less functional than a Blackberry), it was very much the leader of the pack in the details that ACTUALLY contribute to good UX. Just as the iPod and the click wheel were ahead with the instant feedback and usability of rotating the wheel to scroll at high speed through hundreds of songs.


I honestly think that people like you experienced smartphones in the mid-2010s and are just extrapolating back to what they think they were like in the late 2000s.

When the iPhone came out, mobile internet was so shit and the screen so low-res that the benefit of having a full touchscreen was minimal (mostly being able to select things directly, but that was a very small advantage). The mobile web wasn't a thing. You had to have wifi for any real speed. And if you had wifi, you were likely in a building where you could sit down, and there were things like netbooks and mobile PCs that were just better than the iPhone for doing web stuff. Even people with iPhones still mainly used them as iPods and phones before 3G cellular.

The keyboard on Blackberries was in fact better UX because it was easier to use and faster to type on, without any delay in appearance. The on-screen keyboards were all shit until swipe typing became the de facto standard.

And then you look at the drawbacks: no removable battery, no microSD expansion, shit proprietary cables that would break, no copy/paste, etc. All of them absolutely made the UX horrible.


And before that, the iPod. CmdrTaco wrote: "No wireless. Less space than a Nomad. Lame." A couple of years later, the Nomad was gone.


If anyone does not understand the reference, "CmdrTaco" was the editor of Slashdot many years ago. Ref: https://en.wikipedia.org/wiki/Rob_Malda


If anyone does not understand the reference, "Slashdot" is a link-sharing site oriented at tech news, that was popular many years ago as HN is today.

https://slashdot.org


Ah, memories from 20 years ago. How time flies.

I wonder what new site, and what new euphemism, will replace the "hug of death" and "the orange site" in 2044?

(Hopefully not robots that go around actually hugging people to death…)


Was it not a mailing list before a site?

I'll let the next guy explain what a mailing list was.


Most of the issues listed by @zmmmmm are the same issues plaguing other VR headsets for a decade now: motion blur, pixelation, distortions, fake looking colors.

Apple didn’t fix any of them.


I think this is the main thing — plenty of other headsets showed how these aspects are problematic. I very much appreciate just how hard a problem things like hand tracking, distortion, etc. are to solve, but I was hoping (perhaps unreasonably) we’d see a break through in at least one of them. It also feels like Apple design choices got in the way somewhat. Two things that surprised me most are the FoV and limited DCI color space coverage — now I get why they didn’t readily share that spec earlier.


> fake looking colors

I never heard this complaint before. Can you explain more?


See the video in the OP at timestamp 5:00 to 5:20. The review from The Verge touched on it as well; essentially, at the end of the day, it's still a bunch of displays showing you camera feeds of the real world. And both displays and cameras have a lot of flaws—low field of view, motion blur, pixelation in low lighting, and a much more limited set of colors compared to our actual eyes.

VR avoids this since they just make up their own designed world instead, while most AR glasses avoid this by having actual transparent glasses and reflecting images off them instead. The Vision Pro is more ambitious and tries to pull off both AR and VR, resulting in these compromises.


The iPhone was subsidized by mobile carriers and/or interest free payment plans. I don't see that same path for this VR device, but maybe I am missing something.


Apple was paid a monthly fee by Cingular per user for the iPhone 1, so it was just subsidized weirdly.


The original iPhone wasn’t subsidized… it was bought outright. It was only after Apple proved the potential of the iPhone did the carriers get on board (and even then, it was limited support for quite a while).


> The original iPhone wasn’t subsidized… it was bought outright. It was only after Apple proved the potential of the iPhone did the carriers get on board (and even then, it was limited support for quite a while).

Completely untrue. The original iPhone required a 2-year contract with AT&T/Cingular. https://en.wikipedia.org/wiki/IPhone_(1st_generation)#Releas...


The contract was a requirement, but it wasn’t subsidized like you’re thinking. Other phones at the time were cheap and subsidized as part of the contract. The iPhone, by comparison, was freakishly expensive and I don’t think Cingular was subsidizing it. And I don’t remember there being any penalties with cancelling the contract. But I already had an AT&T/Cingular account, so I’m not sure about the contract info.

The contract issue had more to do with how the phone interacted with the network. IIRC, AT&T was an exclusive provider because of the backend requirements (visual voicemail notification maybe?). I assume the contract was in part because they wanted to recoup those expenses.

Here’s a news article from the time that says that AT&T didn’t actually start subsidizing the phone until the 3G arrived.

https://usatoday30.usatoday.com/tech/wireless/phones/2008-07...

Also, the subsidy might have been from Apple, if anyone… Apple got a kickback of $10/month per iPhone user. They might have used that to keep the price lower, but that wasn’t from ATT’s side of the account until the 3G followup.

https://www.wired.com/2008/01/ff-iphone/


Your response seems a bit confused/confusing, but the linked articles explain the situation fairly well. The crucial point is that AT&T/Cingular had a 5-year exclusivity deal from the very beginning. They were "on board", and indeed no other carrier could get on board. The initial terms of the deal were that AT&T gave Apple $10 per month for every iPhone user. Then the deal was changed to have AT&T subsidize $300 per iPhone, thereby lowering the iPhone price. In either case, every iPhone sold required an AT&T contract and was locked to the AT&T network.


My only point is that the initial iPhone was an expensive phone that didn’t have typical carrier subsidies. It was successful in spite of this.

The original parent post claimed that it did and implied that this was the reason why it was so successful. They also implied that the new Vision Pro would need similar subsidies to be successful.

I’m not quite sure the killer feature is there yet for VR headsets. But if the usability is better for the Vision Pro than the Quest, et al., it could still be successful, regardless of the cost.


It did have subsidies, though. And they were “typical”. AT&T was paying for part of the cost so that people could buy it for $499 initially. Most other phones were that price at retail and unlocked.


> Completely untrue. The original iPhone required a 2-year contract with AT&T/Cingular.

That's completely untrue. The original iPhone DID NOT require a 2 year contract, you could absolutely buy it on a prepaid plan. Yeah, you had to "fail" the credit check to get offered the prepaid plans, but all you had to do was put "999-99-9999" as your SSN in the activation screens to get them.


That is not true at all. The original iPhone required a 2 year contract and you could only buy a maximum of 2 phones per account.


It was sold with a subsidy in the UK in a weird way. IIRC you had to buy it from Apple, but O2 subsidised it, as they were the only ones with EDGE (so it was assumed they would also sell you a contract). It worked out at £50-100, which was mad, as O2 couldn't actually make you get a contract... quite a few people I knew bought a bunch of them and gave them away as gifts.

Also, it was a totally stand-out, appealing device, accessible to everyone, with immediate value to everyday people.


Pretty sure AT&T had exclusivity there for some time and didn’t get that for free..


Or the first Newton.


And the Touch Bar, and the Magic Mouse, and Ping, and MobileMe, and the iPod Hi-Fi.

They're just a consumer goods company. Sometimes they make good things people don't appreciate at first. Sometimes they make bad things that people don't appreciate ever.

Their track record is better than the mean, but comparing every criticism of a first-gen Apple product to the iPod/iPhone launches is unserious. Of course some people panned any given new thing on Earth.

And this isn't even in response to someone predicting that AVP would fail, but just that the 1st-gen AVP is an immature product. The 1st-gen iPhone was an immature product! It's delusional (and discrediting to Apple!) to think the iPod, iPhone, OS X, Intel Macs, M1 Macs, etc. were as mature at launch as the later iterations we associate those technologies with now.


Does "AVP" mean Affordable Viable Product?


Apple Vision Pro (assuming you were genuinely asking).


*facepalm* Thank you for correcting me. Yes, I was asking sincerely!


Reminds me of Lisa.



Fools and their money are soon parted.


iPhone 1 launched without cut/copy/paste


the price is quite different though


iPhone 1 was effectively the first mass-market smartphone

This however is coming after a decade of existing AR/VR consumer electronics and still misses the mark


first mass-market smartphone with mass appeal maybe, but it wasn't the first mass-market smartphone by a long shot.


> iPhone 1 was the effectively the first mass market smartphone

... no?

Windows Mobile, BlackBerry, Palm, Nokia, Symbian, Maemo?


I wonder, given all this, what the expectations at Apple are from a higher-up/board/executive standpoint.

Most of Apple offerings are good:

Watch

iPad

Mac

iPhone

services

Are they really expecting this to just be a hard problem initially that they get better at over time? When was the last time they launched a "so-so" product?


Apple Maps falls in that category. Maps was bad when it came out but after years of effort (and a lot of money), it's pretty good now. That was no small feat given how good Google Maps already was when Apple Maps started.


In my area (Long Island), Apple Maps works great (better than Google Maps). I hear that it falls down in rural areas, though.

[edit]

Looks like I pissed in someone’s cereal. Didn’t mean to, but different strokes, and all that…

I was just recounting actual personal experience, which, I suppose, isn’t popular, hereabouts.

Just for context, I have been writing Apple programs, since 1986. I am totally committed to their products, and their vision (although I have no opinion, –yet– on their Vision), and have every right to be critical. Those of us that have been on the ride for that long, have seen a very bumpy road.


I just checked my area in Apple Maps out of interest and my own house, which I've lived in for longer than Apple Maps has existed, is listed as belonging to a business I've never heard of. The only actually existing business I can find with that name is at the opposite side of the country.

Not a great first impression. Also while I can view Apple Maps via a third party like DuckDuckGo, you can only report a map issue through an Apple device, so I guess it's staying wrong.


It's possible a previous tenant used that address or was attached to a business with that name. It happens to offices all the time as well. We had like 8 businesses attached to one of our offices, where only 1 ever existed. They were all tied to the owner and original filings used the address temporarily.

Still weird either way.


Apple Maps in New Zealand is really, really good. Travel estimates are nearly perfect.

In smaller countries though, it's total rubbish. Samoa, travel guidance is nearly non-existent. Tahiti, very patchy. In those countries, Google Maps was perfect.


It really is location dependent. I find that Apple Maps has much better driving directions and a better UI that doesn't get in the way. Google Maps on the other hand typically has much more up-to-date information on businesses and restaurants. I am in the habit of using both if I am trying to find an unfamiliar business address.


I generally find the quality of Apple Maps to be quite good here in Tokyo, although unfortunately Google does still seem to have them beat pretty consistently. The recently completed Azabudai Hills complex, for instance, is correctly rendered on GM, while AM still shows the pre-development layout.

However, one bug that is a showstopper in AM is the dreaded “can’t reach the server now”, which I get probably more than half the time I try to use it, and with apparently no solution other than wait.

As a result, I just removed it from my phone.


I gave Apple Maps a second chance after hearing good things about it recently.

But it gave me a route where I needed to turn left right after taking a right-turn ramp, except that the left-turn lanes are separated from the ramp exit by road dividers. So I would have needed to either go the wrong direction and do a 180, or bulldoze the dividers. As a comparison, Google Maps never gave me that route in the past.

This happened in a moderately sized town, so I guess I will still stick to Google Maps for now.


In my particular rural area, Apple Maps knows the street I live on exists. Google Maps still needs a call to delivery people to explain to them that their map is lying to them.


Apple Maps could be improved with classic hard work on the software, etc. The Apple Vision Pro is up against the same problems of physics and usability that other VR and AR headsets are up against. I’m leaning towards thinking they will never be “solved”, because using a monitor and keyboard, or a TV and controller, works really well.


Portable monitors do not work as well. And speaking as a programmer, monitors in general could always be bigger.


I live in the UK in a small village next to a large city. Just had a quick look and there are tons of issues within the 1 mile radius, that don't have issues on Google Maps.

* A pub that doesn't exist - there is one about 15 miles away with that name, but there's never been one called that here.

* The local church is in the wrong place

* The opening hours of the local cafe are wrong (11:30-3:30 instead of 10:00-3:00)

* Business names aren't correct


Apple Maps has sent me to the middle of cornfields.

Generally, if you are in a city it is fine, but don't use it if you are rural.


There’s a clear strategic reason for that — reduce dependence on Google for more vertical integration.

I don’t see the visionOS play here and wonder if it’ll be Tim Apple’s first major fail at the helm. There’s no clear set of competitors or established industry for Apple to just polish like they have with most products. There’s not an obviously large TAM for VR headsets, and I’m skeptical optical AR is close. I don’t get it, but I’m sure Apple internally and the board view it as a strong thesis.

Still glad it’s this and not a car.

Looking forward to how it plays out.


DuckDuckGo uses Apple Maps and it's completely unusable. Might be a DuckDuckGo issue, but last I tried, a few weeks ago, I couldn't even pan in Firefox without it randomly snapping back to the pin. When I wanted context, I'd work around it by just zooming out and being done with it. I still have to turn to the Devil if I want a decent map experience.

Ridiculous.


Apple Maps still sucks though. Even if it’s better now, recent use has still been ridiculously erroneous.


> That was no small feat given how good Google Maps already was when Apple Maps started.

Gmaps also helped by enshittifying itself.


The first iPhone was pretty terrible. I couldn't get my Marketing peers to take it seriously (which they came to regret).

The first Watch was awful. I love my Series 8.

Don't get me started on the first Mac...


The first iPhone was terribly limited compared to what came after, but it was not in any way terrible for the time. The demo Steve Jobs did was so good that a lot of people just refused to believe it was possible, only to be proven wrong when the thing went on sale.

You have to put it in context with what passed for a smart phone back then. No web browser (they sort of existed but were extremely cut down and not really usable), small screens with limited software, very limited connectivity that was very expensive.

A valid criticism was the lack of a physical keyboard, which phone jockeys found a deal breaker, but in all other ways a phone with decent apps, a large screen, and a real web browser, designed around and provided with unlimited internet connectivity, was always going to be a hit.

It wasn't even that expensive compared to other phones.


> You have to put it in context with what passed for a smart phone back then. No web browser (they sort of existed but were extremely cut down and not really usable)

I based my purchase of phones in the early to mid 00s on whether they came with the full Opera Mobile browser.

I could use just about everything online the same as on desktop, as long as it didn't rely on Flash of course. The screen size was small, but surprisingly that wasn't a big issue, as everything else was a massive improvement.

Oh Nokia 3660, how I loved thee. I actually chose the 3650 and then the 3660 because they seemed to be the best smartphones at a standard size. I just couldn't justify QWERTY.


We all forgot how awesome inertial scrolling was. Scrolling through emails or text messages was absolutely bonkers - it actually allowed us to use the full informational capacity of the screen.
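
(The core mechanic behind it is surprisingly simple, too. A minimal toy sketch of momentum scrolling, not Apple's actual implementation: seed a velocity from the flick, then integrate and decay it every frame.)

    import CoreGraphics

    // Toy momentum scroller: glide after a flick by decaying the velocity.
    struct MomentumScroller {
        var offset: CGFloat = 0      // current scroll position, in points
        var velocity: CGFloat = 0    // points/second, seeded by the flick gesture
        let decay: CGFloat = 0.95    // per-frame damping; closer to 1 = longer glide

        mutating func step(dt: CGFloat) {
            offset += velocity * dt
            velocity *= decay
            if abs(velocity) < 1 { velocity = 0 }   // settle once imperceptible
        }
    }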


Exactly this. I vividly remember watching that first public demo and thinking "sure, this is OK but touch screens are really annoying to use". When I saw Jobs flick through a list of contacts (or whatever it was) I was blown away. Finally somebody had made touch screens useful.

I am sure someone will quibble that some other product technically had something similar first, and they are probably right. But the iPhone demo was the first time I had seen anything quite like it.


Yup. This was what they were supposed to do for 'spatial computing' https://www.youtube.com/watch?v=c40cxE-dfPg

Turning on a light bulb, ordinary light in the house, meh, ok, then swipe your hands out over the counter and the recipe pulls up. That's the 'oh wow' moment. Then grab and throw a song to a Homepod connection. You know what I mean...

There's a gap someone can shoot, maybe 3 years tops. Honestly though, the developer ecosystem is just so tight, continuity so powerful, doing this solo requires more than what the whole MFi program offers.


> The first iPhone was terribly limited compared to what came after, but it was not in any way terrible for the time.

At the time, my feature phone had much better MMS, video calling, and J2ME apps. Looked at through that lens, the first iPhone was a joke by comparison, and detractors were generally looking at the fact that it was this hobbled 2G device missing features that were common in other, cheaper phones. Oh, and the camera was absolute garbage, too.


You are not wrong but Apple predicted that none of those things mattered and they were proven right.

MMS worked well enough but was/is pretty limited. Video calling existed, but postage-stamp video at 8 frames per second was not something that anyone wanted to actually use. And don't get me started on J2ME, an ecosystem so vast that Google didn't even bother to include it in Android even though they are both based on Java. Although other phones had better cameras, none took what we would call acceptable photos today.

What people did use all the time was the browser and only the iPhone had a decent one. That was the only thing that mattered. Heck, the iPhone wasn't even a very good _phone_ and even that didn't matter.


Please don't remind me of J2ME. I felt relieved getting back to developing EJBs with XDoclet after a short stint working on a J2ME project.


> The first iPhone was pretty terrible.

I definitely do not think that opinion was widespread. If anything I think that the biggest discussed shortcoming of the first iPhone was that it was only available on AT&T.


For example:

It didn't multitask, even though other pocket computers for a Long Time had at least the appearance of multitasking. It didn't have the ability to install any additional software, even though other pocket computers for a Long Time handled third-party software just fine. It didn't even have a copy/paste function, even though [WTF? Srsly, Apple?].

All of these things were eventually corrected by Apple, but it was pretty awful until they were corrected: For quite some time, the iPhone was just a rather fancy touchscreen music player with telephone and SMS programs tacked on.

(More damning: All of these things were corrected very quickly by third parties via jailbreaks. Some of us were having a ball with first-gen IOS devices very early on in the game, but Apple wasn't any help in getting that accomplished.)


I guess I (and to be honest, reviewers at large at the time) just have a very different definition of "pretty terrible". Sure, it was easy to see how it wasn't "fully complete", but I think that is true of literally every brand new product.

And to take two of your examples, multitasking and 3rd party software, yeah, other pocket computers had them, and they generally sucked hard (see, Windows Mobile). Even the lack of copy/paste - other phones had them, but there was (and honestly still is) considerable debate over how it should be implemented given multitouch was new.


Yeah, but the first iPod touch was great. It came out only 3 months after the iPhone.

It seems like the Vision Pro 1 has the M2 so that when the Vision, or Vision Air comes out, it will be at least as powerful as the first Pro. And the Pro 2 can have the M3.


First iPod Touch was even worse: It had all of the limitations of the iPhone, plus it additionally lacked cellular connectivity, messaging, voice, Bluetooth, GPS, a speaker, and a camera.


The first iPhone didn't have GPS either. No iPods had speakers back then so that wasn't missed, you just used your earbuds. It was great for browsing the web and media consumption, since you either had WiFi or you didn't (no slow Edge network to tease you). Once the App store came out you had Instapaper for offline reading, and Google Voice for messaging/calls.


You're right. The first iPhone did not have GPS. But it did have the connectivity to make other geolocation services useful, at least in a "Where the fuck am I at?" sense. (And the iPod Touch also had wifi-based geolocation services that were spooky-good, if it was online somehow.)

But from various PalmOS devices to whatever Android device is in my pocket right now, carrying wired headphones has never been a thing for me for whatever reason.

So as a Google Voice user since it was still called GrandCentral, having Google Voice available on a Wifi-connected touchscreen pocket music player called an iPod Touch was simply never very useful to me: The OG iPod Touch was lousy as a telephone, since it lacked all of the basic parts (like a microphone or an earpiece) that made telephones useful, and SMS was not yet in its heyday back then either.

I got much better use of the service with my dumb phone with T9 text input and transcription of voicemails to SMS, and my dumb phone worked anywhere instead of just where I could find a Wifi network.

Jailbroken, the iPod was an amazing pocket computer with a brilliant display, thin profile, and exceptional responsiveness to touch input, especially with third-party apps and improvements installed. It was fun having a real *nix userland installed on it, and it sure seemed novel to SSH from it.

By default, though? Almost useless except as a music player -- a task that previous iPods did better.

(I mostly used my OG iPod Touch to take offline notes. It did OK at this, but previously-used PalmOS devices did better in terms of input speed and portability of those notes.

Even relatively high-end aftermarket car stereos at the time were afraid of it: "Oh, why sure I can play music with your iPod!

Except.. is that an iPod Touch?"

"shun shun shun")


>For quite some time, the iPhone was just a rather fancy touchscreen music player with telephone and SMS programs tacked on.

It had the only truly usable mobile web browser. Windows Mobile's IE was trash, and the browsers on Palm OS, Blackberry and Symbian were laughable. This was the killer app (especially on Wifi, EDGE was not great).


We should bear in mind that at the time it launched, many early adopters were on the fence and kept a second phone around (most people around me did until basically the iPhone 4). We look back at it as a fun era, but that's in part because we accepted it was flawed and limited.

When trying to use the iPhone seriously, at some point it wouldn't receive calls anymore (it would just silently crash the daemon receiving them; you'd never know unless told by the caller afterwards), it would drop active calls when the battery was too low (single digits), call quality could be horrible, and stuff would crash from time to time.

Memory would leak like a sieve from everywhere, so rebooting the phone every now and then was good hygiene. It was only after the 3GS that it stabilized; otherwise antennagate would have been just another Tuesday IMO.

It was still a stellar device, but not a fully reliable device in any way shape or form.


It was usable on any other network; I had one and it wasn't AT&T-locked

Compared to all the smartphones of the time, it was a slow toy with no apps and a 2G connection that was completely worthless

It looked nice though, and eventually it got an app store (unofficially first), and the overall UX was a clear winner (nobody believed touch screens would ever replace BlackBerry-style devices!)


https://appleinsider.com/articles/10/05/10/apple_att_origina...

https://en.wikipedia.org/wiki/History_of_the_iPhone

> When Apple announced the iPhone on January 9, 2007,[38] it was sold only with AT&T (formerly Cingular) contracts in the United States.[32] After 18 months of negotiations, Steve Jobs reached an agreement with the wireless division of AT&T[39] to be the iPhone's exclusive carrier. Consumers were unable to use any other carrier without unlocking their device.


A few people had it in Japan connected to docomo as a GSM phone. I suppose there were workarounds for other carriers as well.

I don't know how they did it, but "life finds a way" ?


There were apps and hacks to sim unlock the original iPhone. I was using mine on T-Mobile after using it for a couple of months on prepaid AT&T GoPhone.


I definitely did not use AT&T :) (in fact I didn’t even use my iPhone in the US)


> on any other network

> I didn’t even use my iPhone in the US

So you can't really describe the experience for the several different US carriers which it didn't work on, and thus can't really say it worked on any other network. Verizon was a network it didn't work on, and thus fails the "any other network" standard you set earlier.


Pedantic much?

It worked on any GSM network. T-Mobile, for instance. Verizon and Sprint were the exceptions in the US, since they weren't GSM.


It is not a minor detail, Verizon and Sprint were two absolutely massive carriers and it was truly impossible to run the iPhone on those networks. And running it on T-Mobile meant voiding your AT&T contract and paying some big fees if buying it first-hand. Practically speaking, to most consumers it was only AT&T.

It is not being pedantic to point out someone is saying something massively untrue.

Your comment makes it sound like anyone could just easily take an OG iPhone and run it on any network, but in reality, to US consumers it was only a few smaller networks and only after breaking an expensive contract. See how that's more than just a pedantic difference?


No copy/paste & no apps were the main criticisms, given that Windows, Palm, & other "less-powerful" devices from that era had those features.


The lack of Flash support was mentioned a lot of times.


Bugginess was to me the main criticism.

Albeit not from the computer nerds who knew what they were getting, but from the general public who thought they were buying a phone on par with Motorola, Nokia or Panasonic phones in terms of reliability.


I remember what I was doing when I first saw it. Eating lunch with a work friend who bought it. I remember what I was eating.

It was incredible and obviously a game changer.


That opinion was so widespread that it was still common for people to dismiss the second iPhone version.


My memory of it was that when it first came out, it was worse in all ways compared to the best of what already existed, save one, and that was capacitive multi-touch. That was a big enough improvement in itself that nothing else mattered.


In the case of my company, the camera was bad.

Also, it took over a year to be able to write apps for it.

JS Web apps were not “apps.”


> The first iPhone was pretty terrible

Before Steve stood on that stage in 2007 and demoed the first iPhone, most people had never even seen 'swipe to scroll'. The first iPhone absolutely kicked the arse of everything else. So what if it was 2G? That was how most phones were back then. The only objection anyone could think of (e.g. Ballmer) was that it was expensive.


Agreed. It had limitations, but at the time nobody was saying "this phone sucks", because what it did have was fucking amazeballs and clearly the first glimpse at the future. We were witnessing a paradigm shift and were aware of it while it was happening.


For 5 year-old me, the first Mac was pretty amazing.


Tim Apple reportedly overrode the design team, launching it prematurely relative to their typical standards in order to enter the market and begin iterating rather than waiting too long. It's the first new product category made under his leadership and he's eyeing retirement, as context.

Btw, don’t forget visionOS 2.0 is just 18 weeks away. (Source: WWDC is every June and every platform gets a version bump, as we saw with watchOS launching in April then getting a 2.0 immediately after at WWDC.)


> It’s the first new product category made under his leadership and he’s eyeing retirement, as context

I've been wondering how much this is part of the context here. He may feel some pressure that he hasn't really launched a new major product category from scratch in all his time as CEO and if this has been running 10 years as a project now, that it would be a blemish on his legacy to not get it out the door before he leaves. Perhaps without him there it would even be binned which would be even more pressure to deliver it.

Contrary to all that he really seems a bit ambivalent about the device himself, having never allowed himself to be seen publicly using it.


This all feels like so much projection.

As far as I can tell Tim Cook has never pretended to be a product guy. He seems perfectly comfortable being what he is: a ruthless operations guru bent on efficiency. In every profile I’ve ever read he’s not really the one making hard product decisions, and seems content to leave that to others better suited for it.


The decision to ship Vision Pro before the design team considered it ready/good enough was his decision. That's more a product decision than an operations one. But yeah otherwise he is reportedly hands-off and disengaged from internal product demos including of the Vision Pro.


Apple Watch, HomePod, AppleTV+, Apple News+, AirPods.

And I would argue the M-series CPU are pretty major as well.


Apple Watch began before he became CEO, it wasn't under his direction. He just continued the existing initiative.

Wikipedia's citation that the Watch began in 2011 after Steve's death is incorrect. It began in 2010 under Steve's direction after the acquisition of Bob Messerschmidt's Rare Light, which brought heart rate monitoring tech to Apple. Messerschmidt was assigned a team by Steve. Don't blindly trust Wikipedia, folks...

As for the others, I just meant platform products; none of those have their own OS until homeOS later this year. None of them defined new product categories for Apple - for instance the AirPods were an iteration on EarPods, not a new product category. Apple was already shipping a home speaker product before HomePod, which was an iterative product without its own software platform for developers. M-series is iterative tech, not a new product category. The Motorola ROKR wasn't an Apple product. But sure, HomePod is perhaps Tim Apple's one original new product category before Vision, unless the 2006 Apple Hi-Fi counts as a distinct product category.


Discounting genuinely new products because Apple made them nearly 20 years ago seems a bit ridiculous.

No one would call the iPhone an existing product just because Apple shipped the iTunes collaboration phone with Motorola.


The watch is a new product category under his leadership isn't it?


The Watch was still under Steve; put another way, it's the last product under his supervision.


Doesn’t look like it.

> Ive began dreaming about an Apple watch just after CEO Steve Jobs’ death in October 2011. He soon brought the idea to Dye and a small group of others in the design studio.

https://www.wired.com/2015/04/the-apple-watch/


That is incorrect, or rather, misleading - maybe Ive didn't join the project until 2011, sure, but it began in 2010. Ive didn't lead all design work yet in 2011, he was still just a VP.


AirPods, too.


AirPods weren't a new product category, they were an iteration of the Apple EarPods which looked the same, had the same remote features, etc.


I highly disagree about that, especially for the Pros with ANC and spatial audio. EarPods were nice wired earphones. AirPods seem like way more of a leap than just EarPods-but-wireless.


If those features defined new product categories, they wouldn't have replaced the existing AirPods with "3rd generation" (non-Pro) AirPods that have spatial audio, with ANC coming in the upcoming 4th generation. They're iterative features on an existing product category.


> don’t forget visionOS 2.0 is just 18 weeks away

Source?


Its speculation, but probably reasonable speculation. At WWDC, I think almost every OS version gets an announcement of a major version bump. They will announce the next macOS major version, iOS major version, etc. So it is totally reasonable to suspect they will announce the next major version of visionOS.

Edit: But since it is still so new, it could be that visionOS 2.0 isn't announced until WWDC 2025.


watchOS 1.0 launched in April and got a 2.0 in June that same year

it'll be a beta


The first two iWatches were borderline pointless/bad.

The first two iPhones weren't as innovative as they make them out to be, just more polished than other Symbian phones with cameras and internet; it really took off with apps in the third iteration.

I think the Vision Pro has lots of opportunities in the next iterations; early users will provide feedback this gen.


> The first two iPhones weren't as innovative as they make them out to be

Yes they were. Multi-touch in particular was a revelation. Making a big screen with one physical button is a simple idea, but making it work well was the hard part that nobody else had figured out.

Those first iPhones were dog slow, sure, but they absolutely defined how smartphones work ever since.


don't forget pinch to zoom/expand on actual webpages in an actual web browser.


Do you remember the first phone that allowed you to play music? No. The first one to take photos? No. The first with internet? No.

Yet we're supposed to consider the iPhone a revelation because of multi-touch. I had the iPhone 3G. It was a more polished experience than other phones, but that revelation part was just not there. Apps made it big, later.

I swear people treat iPhones like a cult.


I think if you find a good, objective history of the first iPhone, you'll find that it was groundbreaking in many meaningful ways.

It was a full Unix OS, not some mobile-specific trash like the others (Symbian, Windows Mobile, BlackBerry). Capacitive touch wasn't an invention, but a premium form of touchscreen (most were resistive).

Having nearly 60fps, high-quality, 3D-accelerated animations on a phone also contributed to a heavily premium feel for the software. It enabled things like inertial scrolling, which, along with other things (like the first-class typography afforded by using OS X, and a fully standards-compliant web browser), weren't "innovations" in the sense of "we came up with this first", but exceptionally good engineering in service of a great experience.
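As a toy illustration of why that scrolling felt so "physical" - all constants here are made up, and this is Swift-flavored pseudocode of the basic idea, not Apple's actual implementation:

    // Hedged sketch of the core inertial-scroll loop (hypothetical constants)
    var velocity = 2400.0           // points/sec at the moment the finger lifts
    var offset = 0.0
    let friction = 0.95             // per-frame decay at 60 fps
    while abs(velocity) > 1 {
        offset += velocity / 60.0   // advance the content by one frame's travel
        velocity *= friction        // exponential decay is what reads as "physical"
    }

The real thing surely adds touch-velocity estimation and rubber-banding, but even this naive decay loop beats the stepwise scrolling other phones had.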

In almost every way (apart from the "traditional phone metrics" like 2G ironically), the iPhone was a huuuuuge leap forward from the status quo.

It might be true that anyone else could've done it. But nobody else did. It wasn't just luck that Apple did.


I had phones before the iPhone that took pictures and played music (and a Nokia tablet with wifi internet browsing).

The iPhone was the first one that I actually used to take pictures and play music. It wasn’t the first to have those features (and the edge network was super slow), but that polished experience is what made it work. And made it clear to everyone how we would interact with phones for decades to come.

It was a technological inflection point. That's why it is looked back on the way it is.


Multitouch was a huge deal because it allowed a usable Web browser to be offered. That was something no other phone came within a light year of providing.

There was a "cult," all right. It consisted of people like Steve Ballmer plugging their ears, shutting their eyes, and denying the obvious.


> Those first iPhones were dog slow, sure, but they absolutely defined how smartphones work ever since.

I don’t think this was my experience at all. The first iPhone was always snappy until they started adding random crap everywhere (e.g. by the release of the iPhone 4 the original was near worthless without really old firmware)


Yep... I'm generally one to downplay Apple's "innovations", but I will say, there was huge scepticism that you could have a phone without a physical keyboard, and Apple turned that around overnight. That was a real contribution, IMHO much bigger than anything in visionOS so far.


To each their own perspective! As for me, as a user at the time of fairly cutting-edge cell phones from other manufacturers, my first hands-on with the first iPhone was one of the most memorable technological experiences of my life. Just using the Maps application with multi-touch was magical. Yes, it was slow, limited, etc., but it felt like an entirely different class of device, and that proved to be true.


Isn’t that what people are saying about the Vision Pro too?

Clunky and sorta useless, but “a new class of device”… eventually


Are current watches good? I genuinely don't know and am interested. I hadn't gotten one because I was an early iPhone adopter (Gen 1), but haven't been willing to be an early adopter since then. But I would like a watch, if they are good now.


I'm sure the later ones are also great, but as of the Apple Watch Series 4 they are quite compelling, depending on what your needs are. From a fitness / voice-assistant-button-on-the-wrist / notifications-without-pulling-your-phone-out standpoint, the Series 4 and later are all fantastic.


I find them exceptional for a somewhat narrow range of activities (I love going on a run or hike navigating by my Apple Watch while streaming whatever music I want in the world to my AirPods - no phone, nothing to carry), and only slightly useful for a broad range of other things (notifications, weather, etc - conveniences).


What I love best with my Apple Watch is that I can respond to Duo/Okta/BankId directly from my watch and don't have to look for my phone or pull it out of my pocket.


Yep. They've been great since v7 or so. My S0 watch was... something. Apps took forever to load. Battery life sucked. The UI was still clearly highly experimental. Now even complex apps like OmniFocus launch instantly, and I charge my Ultra every other day. In short, it's a mature product now and works like it's supposed to, without qualifications like "...eventually".


I think Apple was smart to focus on Fitness and Health monitoring because that's about the only thing I've found immensely useful with my watch.

Apart from that, yesterday I played pickleball with a guy who used his Apple Watch to keep score. It was a little odd at times but seemed like a cool use.


If you want a very small cell phone on your wrist, they are good at that. I'm not sure what they're good for, but they can be that.


> When is the last time they launched a "so so" product?

All of them! The first version of the iPhone, iPad, iPod, watch, AirPods...

They all had similar reviews. "Seems like a tech preview, not really ready for general use, too expensive", etc.


Yeah - the first one is always at the edge of what's possible with the hardware capabilities at the time of launch, at the quality level Apple demands. Then usually they refine it and end up defining the category.

The idea of 'spatial computing' or a heads-up display that augments your vision and gives you eye tracking input to manipulate UIs in your visual field seems clearly superior to looking at little glass displays (Meta agrees). If you can project out a future where that hardware becomes better and better, it's a pretty powerful interface paradigm.

Eye tracking as an interface for this can get pretty close to telepathy if done well.

I ordered one - it's the sort of thing you really have to play with yourself imo.


First iPod and iPad were fine.


The first iPod's wheel would break and skip all the time. And so would the hard drive. And the OS was really simple, and the syncing was terrible.

The first iPad was ok only because it truly was "just a bigger iPhone". But they didn't really make features that were unique to the iPad and took advantage of its bigger screen for a few versions afterwards.


The OS was simple; it played music, that's it?


It did, but finding your song wasn't easy when you had 1000s on the device.

I would say the main function of an iPod OS is finding what you want to play, not actually playing it.


Way back then it also competed against actual cassette and CD Walkmans. So just playing was a big part of it.


> It did, but finding your song wasn't easy when you had 1000s on the device.

It was extremely easy if you tagged your mp3s properly.


The first iPod had a wheel that physically moved and would get dirt and stuff stuck in it/have issues. The HDD would also freeze up. It was better than alternatives at the time, but it wasn't really until the click wheel it got really refined.

The first iPad was similarly expensive, thick and underpowered. It wasn't until a couple generations in that it was good.


They were both significantly better than the alternatives, and didn't cost 7x as much as this does vs. the Meta Quest 3.


The watch first 1-3 generations were clearly "so so".


So-so in what way? This product is clearly a toy. Which is to say that it is genuinely new. Maybe like how the Apple II was when it first came out. PCs were quite expensive back then too, if I remember correctly. This really will take time. All the important technological things were toys before they became tools.


The Newton, and to a much, much lesser extent the first iPod and iPhone. But really, the Newton - while a product that I love and a super profitable line - was the last time Apple created something that had so many rough edges.


Watch was cute but fussy and a little pointless when it first came out, and only gradually became really compelling.


> I haven't yet seen any sign of ability for multiple people to do these things together.

The reviews haven't mentioned it, but SharePlay [1] is OS-level functionality and the press releases mention using it with movies, music, and games.

[1]: https://developer.apple.com/videos/play/wwdc2023/10087/
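For the curious, adopting it is a small amount of code via the GroupActivities framework. A minimal sketch - "WatchTogether" and its metadata values are hypothetical, just to show the shape of the API:

    import GroupActivities

    // Hypothetical activity type; GroupActivity and GroupActivityMetadata are
    // the real SharePlay primitives that apps conform to.
    struct WatchTogether: GroupActivity {
        var metadata: GroupActivityMetadata {
            var meta = GroupActivityMetadata()
            meta.title = "Movie Night"
            meta.type = .watchTogether  // hints media-playback sync to the system
            return meta
        }
    }

    // Kicking off a shared session (today this requires an active FaceTime call):
    // let activity = WatchTogether()
    // if case .activationPreferred = await activity.prepareForActivation() {
    //     _ = try await activity.activate()
    // }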


"Want it for entertainment? people want to enjoy photos, videos, movies with other people and it can't include them. Even if they have a Vision Pro, I haven't yet seen any sign of ability for multiple people to do these things together."

This is not accurate; FaceTime has SharePlay. Any app that leverages it can build a synced entertainment experience. Examples out of the box: Apple TV, Freeform, Apple Music.

I wish the SDK for SharePlay was not tied to FaceTime, since it limits you to people whose iCloud email or phone number you have. Big Screen on Quest was a great app that leveraged the idea of multiple users in the same VR session, grouped by interest; the Quest just lacked the quality.

https://support.apple.com/guide/iphone/shareplay-watch-liste...


A note on the sharing part: the share of people living alone is in the double digits percentage-wise [0] in the markets Apple cares about.

That might not be their primary goal, but the device could appeal to that demographic either way if the other features are appealing enough.

[0] https://thehill.com/policy/healthcare/4085828-a-record-share...


I think OP meant you can't include other people _in the same room_. If I want to watch Netflix (well, Apple TV, since Netflix doesn't have an app yet), my partner is completely out of luck if I do it on an Apple Vision. Unless he too has the money for one and we FaceTime each other to sync up our watching experience.


I once had a roommate who stopped playing video games a decade ago. We bought a PS5, and he kept trying to find games like 'Gears of War' and 'Lord of the Rings' with a multiplayer split-screen campaign in the same room. Aside from classic 1v1 fighting and sports games, the new games being released did not offer that. They were all built for high-quality solo gaming or optimized online gaming. Expecting a VisionOS experience with people in the same room reminds me of that. I'm not saying there won't be apps for it, but it would be niche.


1. This problem has already been solved on the Meta Quest by a 3rd party app called Big Screen. I would imagine that they or someone else will attempt to do the same for VisionOS. Otherwise, I can see Apple fixing this later.

2. 52% of media viewing in the US is done alone.


1. The problem isn't solved, because nobody has the patience or money to set up and sync multiple headsets when you already have a perfectly good TV.

I know zero people who have eliminated their TVs for a headset. Apple isn't going to change that.

2. And so the vision pro isn't even viable for 48% of watching. That's a huge percentage, and shows that nobody is getting rid of their TVs for this.


1. You can use a TV if you want to watch with people physically near you. Using a headset for that is nonsensical. I was talking about watching with people you don’t live with

2. Setting it up is as trivial as setting up Zoom

3. You’re still ignoring the fact that the majority of media consumption is done alone. It’s not all or nothing either. People both watch media alone AND in groups depending on the context.

Besides, how would you know the problem isn’t solved if you don’t use a headset yourself?


That seems like an odd requirement/desire for a head-mounted display?


That's the point. Watching TV with other people is a thing that most folks do. At least in my house, it's nice to walk into the room to see what someone else is watching and join. Unless they become comfortable and universally ubiquitous, watching media isn't something Apple Vision is going to be particularly good for.


> Want it for entertainment? people want to enjoy photos, videos, movies with other people and it can't include them.

Not on a plane they don’t. Not in a hotel room on a business trip.

IMHO, the Vision Pro is for being somewhere when you're nowhere; not for being somewhere when you're already somewhere.


So the use case is porn for people on business trips?


I want to watch stuff or collaborate on projects with my wife on a plane or in a hotel, you just haven’t seen these use cases from Apple yet because the product lacks it in 1.0


I think you're conflating two things.

The hypothetical use-case that the GP is talking about here is AR-facilitated collaboration — people in your "see-through" view of the world being able to interact with the AR objects in your field of view, or vice-versa. Being able to AirDrop something to someone's iPhone by dragging a file from a window over to their body that you can see through the lens. You and your SO on the same flight, next to each other and both wearing headsets, able to see the same set of shared/synced AR objects, and therefore watch the same movie. That kind of thing.

While this is an intended use-case for the Vision "platform" as a whole — it's an implicit promise of the whole "spatial computing paradigm" phrasing — I don't think this kind of AR-facilitated collaboration was ever intended to ship with the Vision Pro.

Why? Because, nobody in the Vision Pro's target market — at least as Apple would have it — wants or cares about AR-facilitated collaboration. I'll get to why in a moment, but in short — it's because it's the Vision Pro. And like Apple's other Pro products, it's mostly intended to be something possessed by an individual but owned and managed by their employer.

Now, Apple clearly intends to build VR-facilitated collaboration — but hasn't yet. That's what all the reviews mean when they say "the collaboration" is half-baked in the Vision Pro. The people in Apple's target market for this thing expected VR collaboration, and it's not there. But that's an entirely different thing from AR-facilitated collaboration.

The Vision Pro as a specific product offering almost certainly came out of Apple employees doing pandemic WFH, and realizing that all current "remoting" solutions sucked. Especially if you're a hardware engineer trying to get a good look at a 3D real-world prototype of a thing. The people who are actually there in the office could just come in and stare at a real 3D-printed prototype. But the WFH people had to settle for looking at a 3D model on a screen... or maybe setting up a Windows machine, connected to an Oculus Quest, and using special compatible CAD software, in order to be able to do a walkaround. And that CAD software definitely didn't let them just paw at the 3D object with their hands.

The Vision Pro is clearly aimed at the "high-fidelity sensory feedback" half of the remoting problem. It's thus the complement to the companies who build telepresence robots — those robots solve the high-fidelity presence and interaction half of the remoting problem. (And you could really do some clever things by combining them!)

But note what else Apple released during the pandemic: Freeform, a piece of collaboration software. In fact, Apple added collaboration features to many of their first-party software offerings during the pandemic.

Apple never thought they'd extract their best work from their WFH (or now, travelling) employees by having them "show up to the office" with a telepresence robot. They rather expect that the best platform for all their engineers — in-office or otherwise — will be a digital office: one made of collaborative apps. And specifically, "level-of-immersion responsive" collaborative apps — apps that can switch between (or be compiled for) both lower-fidelity 2D screen-based UX, and higher-fidelity 3D spatial UX, on the same codebase.

It's this sort of 3D spatial collaboration-app UI experiences that the Vision Pro lacks in 1.0 but will clearly add later. (Which is why Apple cancelled WFH as soon as they could: their "solution to the remoting problem" isn't baked enough for them to usefully dogfood yet!)

But this is effectively VR-facilitated collaboration — working with other people who could be in arbitrary places, in VR or just interacting through a screen, where you see their VR avatars (personas) instead of seeing them. It's "Metaverse stuff" — but don't let Tim Cook catch you calling it that.

But AR-facilitated collaboration — i.e. having other people who are part of your work, present in the room with you, with some or all people in the room wearing Vision headsets with the see-through turned up, and where people wearing the headsets are interacting both with shared AR objects and with others present in physical space (rather than their VR simulacra in VR space)... this all has zero relevance to the remoting use-case. If you're in the office, then you're not going to be wearing a Vision Pro... because you can collaborate by just getting people in a room, and using various screens (AirPlay on an AppleTV display for shared viewing; your Macbooks for running collaboration tools; etc.)

Now, AR-facilitated collaboration is likely a use-case for the "Vision platform" as a whole. (Otherwise, why bother with the AR OS features?) AR-facilitated collaboration, is what a self-employed / freelance creative professional — someone who isn't remoting into some [idea of an] office, but who does have people [e.g. clients] physically present around them who they need to interact with them and with their work — would want. AR-facilitated collaboration would therefore be the defining use-case for a later "Vision Studio": a "prosumer" model targeted at such creative professionals — the same sort of people who buy themselves a Mac Studio. It would match the Vision Pro's targeting (which, if you haven't considered it, will almost certainly end up squarely on "businesses buying these for their remoting employees" — even if the early adopters are individual tech nerds.)


Hmm a lot to disagree with there


a hotel room on a business trip is pretty much "nowhere", so absolutely yes.


What do you mean, it can't run multiple monitors? I thought it lets you pop out windows free standing, no concept of monitor at all.


This tool evidently overcomes the display limitation: https://github.com/saagarjha/Ensemble

"Ensemble (formerly MacCast, before the lawyers had something to say about it) bridges windows from your Mac directly into visionOS, letting you move, resize, and interact with them just like you would with any other native app. It's wireless, like Mac Virtual Display, but without the limitations of resolution or working in a flat plane."


It does for apps running on the goggles themselves, but the Mac integration feature just shows the Mac's screen as one window, kinda like Remote Desktop on Windows.

And this is quite a hard limitation, since the Mac has to actually render those windows and then stream them to the goggles over radio. So, without quite a bit of magic, there's a limited number of pixels the Mac can draw and send.


My launch day Apple Watch was unbelievably bad. Not even good for telling time as the raise-to-wake feature was so flaky. So by the sound of it, the Vision Pro might be starting in a better position than the watch.


Watch v1 was just awful. Those laptops with butterfly keyboards and the first gen Air were also completely broken in their own ways. If Apple manages to keep iterating on this device, they might eventually have a winner

Or maybe it’ll be another homepod


My job at the time gave me a brand new Series 2 and it was annoying to interact with... when it was new. Dropped frames, missed touches, etc. There was still a little bit of lingering hope there would be "killer apps" outside notifications and exercise.

But nope, nothing more interesting came, and OS updates ruined performance so badly that I happily returned it to my employer 2 years later and opted not to buy my own until 2021 (for exercise and notifications only).


As a counterpoint, I had a Series 2 for years and it worked really well. It was slow, so third party apps were not really usable, but the core functionality of the watch was perfectly fine. That is, until an unfortunate incident broke its screen while in the sea.


I had the watch they launched with, which Apple pretends didn't exist as they named the next model the Series 1 and this model never got a name. The watch improved considerably in the next generations, so you can do the math back from your Series 2 to get a sense of how truly bad it was.


> The eye tracking driven input method which was seen as holy grail turns out to be annoying after a while because people don't naturally always look at what they want to click on.

I wrote a long comment [1] months ago, when the Vision was first announced, expressing my skepticism about the use of eye tracking based on my personal experience with the tech. At the end I said, "maybe I'm wrong." Turns out I wasn't.

1. https://news.ycombinator.com/item?id=36220097


I generally agree with the sentiment of this post; it does appear to be a beta/dev kit. I will say that the productivity criticism is a BIT unfair. It may be the case that you can only have one macOS display, but you can have many non-macOS apps running right alongside that one macOS display. You could have your macOS display doing things that only macOS can do, and then run the Vision Pro version of Discord or Teams or Safari or whatever else you use that has an iPad/Vision version as floating windows separate from the macOS display.


iOS/visionOS lacks good window management tools for this, though, and since macOS, Apple has not demonstrated they can build them for a new platform. Maybe visionOS will motivate them to actually get it right, but that hasn't been shown off yet.

I think about all the apps I'm running and switching between on my computer now, using shortcuts and toolbars and docks to arrange, hide, and switch between them. Everyone using multiple apps on visionOS just looks chaotic.

I once had a second portrait monitor next to my ultrawide. I had to get rid of it because it was just too tiring to be constantly turning my head so far to look at it. It didn't work out.

I cannot imagine how uncomfortable it would be if each app needed to be in a different physical space that required turning my head to use. Painful.


Yes, true - a lot depends on the integration in that scenario though. Can I seamlessly copy and paste rich content between them, drag and drop, does the mouse seamlessly move from my Mac desktop to the Safari window next to it, etc.

At a deeper level it depends a lot on the question of whether Apple wants this. If they do, then all of these will be solved over time. But if they actually see macOS as a legacy integration, then they simply aren't going to invest in encouraging people to use it. I'm waiting to see indications of which way they are going to play it.


Expecting the same, this is going to be a MASSIVE flop. VR is not for the masses until the goggles go away.


It's not gonna flop, for the same reason that Apple desktops don't flop. Anyone educated in basic modern technology can easily see that for the price, you can build a custom desktop that blows any Mac out of the water, but people still buy them because of two things: styling and ecosystem.

Vision has both of those. People will conveniently ignore all the downsides of it, like they do with current Apple products.


I think this product will be a slow burn. They are getting developers engaged now, and subsequent generations will bring broader appeal while the software gets more refined and apps expand in availability. I don't think it will be a flop; it will just take a while to get going. And they must absolutely know this given the current pricing.


It’s my understanding that eye tracking isn’t great as an input method, it should be used more for stuff like rendering or NPC interactions.


I think that eye tracking perhaps could be used to enhance gesture-based input methods though. It could provide a hint at which object a gesture is directed at in cases where that would be ambiguous.

I have tried eye tracking as primary input method some years ago in another setting, and I very much did not like the experience.


> I think that eye tracking perhaps could be used to enhance gesture-based input methods

That's how it works on the Vision Pro. I didn't really feel annoyed by it, but make sure the device is calibrated to your eyes, or the results might not be that great.
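Right - apps never see raw gaze data; the system fuses gaze with the pinch and only hands you the resolved target. A rough sketch of the developer-facing side in SwiftUI/RealityKit (details approximate, and entities need the right components to be hittable):

    import SwiftUI
    import RealityKit

    struct TapDemo: View {
        var body: some View {
            RealityView { content in
                // Entities added here need CollisionComponent and
                // InputTargetComponent to be targetable by gaze + pinch.
            }
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        // The system resolved gaze at the instant of the pinch;
                        // the app only receives the already-chosen entity.
                        print("Pinched while looking at:", value.entity.name)
                    }
            )
        }
    }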


It's going to be a great, sneaky way of installing an ad blocker onto wetware - users will train themselves to not even gaze at anything resembling an ad, lest they look at it long enough for the headset to register it as a click.


Apps are not able to register where you’re looking. Only the headset itself does.


Sure, but the proof of looking is in the clicking. The threat scenario I'm describing here is "looked at the ad long enough for the headset to interpret it as wanting to click on it".


Does iOS not rely on hovering/mouseover for alt text or similar? I'm mostly on Windows, but I wonder if they'll eventually concede a second gesture for "put cursor here"


On visionOS, the APIs essentially ask all UI elements for "alt text" and the system displays it when appropriate.
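Roughly speaking, apps just declare labels and let the system decide when to surface them. A hedged SwiftUI sketch of the developer side (the exact surfacing behavior is the system's call):

    import SwiftUI

    struct ShareButton: View {
        var body: some View {
            Button {
                // perform the share action here
            } label: {
                Image(systemName: "square.and.arrow.up")
            }
            .accessibilityLabel("Share")     // the "alt text" the system can query
            .help("Share this document")     // tooltip-style text, where supported
            .hoverEffect(.highlight)         // system-drawn gaze highlight
        }
    }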


To me, 3D home recording playback is a huge use case.

Why would you want presence in an action movie? Cool, in an exciting sort of way.

But presence in recording of your family? That's powerful stuff, for everyone!

That feels like the long-term hook. iCloud to handle obscene storage amounts as a service. iPhone to generate new recordings. Vision to play back recordings.

And the dastardly brilliant part is... the more 3D video you record... the more valuable a Vision is to you.

>> Capturing -- One of the more remarkable things to watch? Your own home 3-D movies. Apple introduced “spatial video” for the iPhone 15 Pro a few months ago, and I started recording my sons with it. Watching the videos in 3-D in the headset now is almost like reliving the moment. The Vision Pro also captures these videos and photos—you just hold down a button on the top left.


This was always going to happen. The human eye has a field of view and dynamic range no display technology can hope to match anytime soon. The future of AR is not reprojecting the outside world on a screen; it is screens which can become transparent.


For that to happen we would need a transparent display that can also block light. Seeing content on an additive display will always look somewhat transparent, with no hope of displaying blacks. The augmentations on the Vision Pro look so much better than on a HoloLens 2; it's like looking at 3D-printed objects.


My impression is that this is known to be possible and it’s mainly an engineering challenge to get good yields cheaply.


Dynamic transparency is the path to the future, and it's physically perfectly doable. Any news from the Ray-Ban smart glasses?


AR passthrough is the big v1.0 feature for this generation of headsets... I'm sure Apple and Meta were developing in tandem without knowing what the competition was doing. It's a really neat addition that brings significant improvements... and I could see Apple developing it as 'this is streets ahead' where Meta was just improving tech they already had.

At the end of the day, this is Apple testing the waters and trying to get a positive cash flow to help offset significant R&D...what they're showing is pretty impressive in a number of ways, even as it's lacking in others.


>Want it for productivity? it can't run MacOS applications and if you want to use your actual Mac it can't do multiple monitors.

Do you mean a Mac's external monitor visible through the Vision Pro AR view?


> Want it for productivity? it can't run MacOS applications and if you want to use your actual Mac it can't do multiple monitors.

I don’t know why, but while I feel multiple monitors helps my productivity a lot in Windows and Linux I find myself not caring as much in MacOS as long as the screen is big enough. I think it has to do with my habits around how I use the windowing in each. I tend to teasselate and arrange them in MacOS while I tend to maximize or lock to screed edges in Windows.


Unpopular opinion: multiple monitors are a meme for most uses, and almost everyone is better off having a single screen and using their fingers to move the viewport across virtual desktop spaces.


Not just an unpopular opinion, it’s also not supported by data. There are a decent number of studies that show that multiple monitors increases productivity.

IIRC though, there are diminishing returns fairly quickly beyond dual monitors.

For me personally, the sweet spot is three screens total.


Yes. I'd rather press a button than turn my neck.


> people want to enjoy photos, videos, movies with other people and it can't include them

Maybe Apple expects every person to buy one. Didn't Facebook recently try something similar?


No surprise at all i’d say - as Nilay Patel on the Verge review put it correctly:

> cameras are still cameras, and displays are still displays.

Anyone remotely familiar with the state of development in those areas would be aware that “even Apple” can’t cheat Reality (punintentionally).

Those left still raving about and/or hoping for a game changer will be greatly disappointed - or are only in it for the line going up.

The whole concept will be a niche product for many years to come and will stay an isolating experience.


> Even if they have a Vision Pro, I haven't yet seen any sign of ability for multiple people to do these things together.

No idea if it does this, but the obvious use case is for people who aren't physically present - but letting them somehow share a physical space. It could potentially be awesome for friends/partners who live far apart.


The eye-tracking thing doesn't surprise me at all, but I am surprised anyone thought this was a holy-grail sort of interface, in particular that Apple didn't rule it out themselves fairly quickly. Eye-tracking data is always a gigantic mess - it's why it's presented as gaze averages rather than direct replays.


Every hit from Apple had lots of initial (VALID) gripes, but the experience was worth it to advance the core product and eventually eliminate or accept those limitations.

iPod was the size of a beefy wallet, but was good enough.

iPhone was glorified plastic and websites looked like crap, no app store. But hey, it worked well enough.

That said... this isn't accessibly priced, and what's the hook? If this had launched at the same time Pokémon Go or WoW was taking off, it'd have gotten the social momentum all the other options had.

Also, it's better than the competition in key ways, but are those differentiating ways...? AR/VR could very well take off, but it's not this year.


Eye tracking wouldn't be so bad if you could wink one eye to lock on and manipulate a certain section of the UI while ignoring the rest.


It’s first generation

First Gen is usually awful

This is not awful and maybe even closer to 2nd gen

Everything starts somewhere

Most things are ready for the masses by the 3rd or 4th gen.


> Want it for entertainment? people want to enjoy photos, videos, movies with other people and it can't include them

This is a disingenuous argument. Your other points are much more valid than this one. You don’t have a VR headset to interact with other people in the same room. If you want to watch a movie with other people around you, there are many other (cheaper) ways to do that (and Apple can sell you a nice AppleTV to do it).


And yet you literally have early access reviewers regurgitating talking points about how this will redefine the television-watching and movie-going experience.


What does disingenuous mean?


Genuine curiosity: how often are you clicking something without looking at it, at least for a moment?

Or, how often do we believe other people are clicking something without looking at it?

I'm examining this for myself... it's hard to feel organic while I'm actively focusing on it, but I at least glance at my mouse pointer target while traversing the pointer towards the target across my screens.


A lot.

I also started thinking about it reading the reviews, and the main cases to me are:

- checking something before committing to an action: for instance, rereading the product name before pushing the purchase button. The pointer is already on the button, I keep it there while checking the order, so I just need to click.

- focus switch: pushing another window to the forefront doesn't need a super accurate click. I assume most people eyeball it like me and will click on an emptyish part of the window from the corner of their eye. Same for moving the focus away.

- scroll-and-type situations: mostly when using a document on the side while taking notes. My eyes and focus will be primarily on one side (with quick glances at the other), while the mouse/trackpad movement will be on another.

I think we'll discover a lot more instances of this.


Your first example is pretty solid; I agree there will be a micro-tedium related to re-focusing on a confirmation button in that type of scenario,

assuming that, in the AVP UX, the user can't leave hand-gesture focus lingering on the button and then immediately click after verifying whatever info they're reading, since they'll have to move their eyes up (to read, lol) and then back down to focus the hand-gesture control onto the button again.


https://youtu.be/dtp6b76pMak?si=WIfkMdrsffMXwEvh&t=543

timestamp where reviewer discusses this idea


Oftentimes I'll look to position the mouse and then look at something else when I actually hit the button.


> people want to enjoy photos, videos, movies with other people and it can't include them

Eh... I prefer empty cinemas

No loud babies, no popcorn sounds, no people explaining the plot to people not paying attention, just me and the world of the film. Bliss.


One of the aspects of the device that hasn't been widely realized is that when mirroring your desktop/laptop display to the AVP, you can't break its applications out into different areas. You can't pull them away from the desktop window.

This is one of those things that Apple never claimed was supported, and yet there's something about that behavior that feels like such a natural intuitive implication to the technology that a lot of people feel alarmed or even cheated when they realize it's not possible (yet). It's been funny to watch the various discussion threads as people pop up talking about their shocked realization and disappointed feelings.

Update: I did realize when watching the WSJ video that the "mirrored" display actually appeared to have greater "resolution" (more pixels in height and width) than what she had on her laptop. So that's something.


It seems very reasonable that this will be a future feature. I've long suspected iPadOS's Stage Manager feature shipping so half-baked was really more about getting the platform ready to support multiple apps and easier manipulation (from a developer perspective) of the double-buffered "window" textures - given the Vision Pro's OS is based on iPadOS.

With Stage Manager on macOS now, it feels like they have all the primitives in place to "transpose" macOS Stage Manager window textures to visionOS / the iPadOS foundation.

Though this will be tricky to get right for all apps. It will be interesting to see if it's a macOS App Store-only feature/API, opt-in, or some other option.


They can already do this with the desktop composition software they use today. All the windows are virtualized onto backing layers that you can draw anywhere and add effects to. It’s how window shadows work, and how certain window effects are done.

They just haven’t done it.


Yes?

The first iphone didn’t have copy/paste.

Apple will always prioritize critical scenarios over nice-to-haves. None of these things are technically difficult; it's just time. I'm willing to believe they released too early, but at some point you have to start learning from real users.


I'm aware. I've worked in the space. It isn't as simple as you're making it out.


Sorry, but almost 10 years ago I could do this on Xorg, where the worst problem was that compositors cannot redirect input (so you had to kludge new events from scratch). I cannot imagine it would take more than _half an hour_ for someone with macOS display compositor experience to implement it.


> I cannot imagine it would take more than _half an hour_ for someone with macOS display compositor experience to implement it.

What fools Apple engineering management must be, then!


It probably actually is a quick initial implementation, but like everything else, it needs to be planned, ticketed, developed, tested, and slated for release.

Compositor work isn't tremendously hard when the graphics primitives are already done.

UI state handling and input is a majority of the work from there. I've implemented this work before. A large portion of my background is in compositors and UI primitives.

What's really cool about having window backing layer handles, is you can do all sorts of crazy fun stuff in the office and show off to your coworkers or tech demo that basically will never make it to production because it's totally impractical.

The best example of that which actually did end up in a final product in my opinion was Windows Flip 3D. Totally fun implementation, I'm sure.


And the reason Xorg is now dying in favor of a system that doesn’t have this capability is because the architecture that enabled it, while cool at the time, severely limited the graphics performance and capability of the applications.


What? Doesn't Wayland work this way by design?


Don't be letting actual experience get in the way of shitting on Xorg, now! Wayland will never win with that attitude!


Indeed it does not. It’s local only.


I see. Do you mean like the Xorg fast path for local 3d rendering that became the basis for Wayland?


Exactly that. A local only path that provides high performance.

Remember the original comment was the claim that Xorg could easily do remote windows years ago. This is only true using the low performance path that modern operating systems have universally rejected.


No, the original comment was that by writing an Xorg compositor I could project separate windows onto separate surfaces at arbitrary positions in the 3D world. There's no need for network transparency, and you can very well do it with an HDMI cable (actually, I was using an HDMI cable -- this was 10 years ago).


Network transparency is exactly what people want. If you're going to say just tether it to your computer with a wire then you just aren't understanding the product.


Network transparency is not required for wireless video, either.


I doubt making the windows draw is what’s taking them time to get right.


You nailed it, down to how you'd do it without any help from Cupertino: https://github.com/saagarjha/Ensemble


Except that doesn’t actually do it. It is just a proof of concept.

Try it with a full suite of Mac Apps and you’ll find it falls apart because they aren’t all well behaved.


It falls apart how exactly? There's some app for which the popup menus are not shown at the right coordinates?

You are making this sound much more complicated than it really is.


Amongst many possible edge cases. Not to mention shearing with things that have rapid animations etc.

The trivial case is simple. You are imagining it's all the trivial case.


But they have already fixed shearing, otherwise there would be no MacBook display mirroring feature at all, multiple surfaces or not.


I’m talking about this open source project, not Apple’s implementation.


It's probably more down to getting the UI right on Apple's end.


Unfortunately I happen to live in Cupertino


Yes, it is.

In fact, there's already a project for it on GitHub. https://github.com/saagarjha/Ensemble

480 lines total, including comments, headers, whole shebang.


> Ensemble is currently "pre-alpha": it's really more of a demo at this point. There's a lot of things I need to work on and until then it is unlikely I will be taking any code contributions.... The code is definitely not designed for general-purpose use yet, so don't expect much of it :)


Tech demos are often easy to put together.

It's all of the edge cases and UX refinements that takes time.


Yeah, sure. I'm gonna go ahead and say Apple probably could have found a way to ship this over the visionOS dev cycle.


Some of the code is factored out into individual libraries. For example the networking and serialization code is separate and it is likely that screen recording will get pulled out too at some point.


> They just haven’t done it.

Literally nobody has done it. It's beyond ridiculous that you can't already show or duplicate an application window on any display you want and allow it to be controlled from anywhere it is visible.

Searching for ways to do this leads one into extremely niche software ecosystems. Please, is there any collaboration app out there that makes it seamless to toss windows around like everyone actually wants?


Isn't this how ordinary monitor spanning works? It might be slightly awkward with AR goggles, since the relative orientation of the displays will be constantly changing as your head moves; and what happens to a window you have half on and half off of the MacBook's screen when you look away? Or do you want the application to jump between devices, like appearing on your fridge when you go for a drink? With the old X11 protocol and a daemon in the middle this was possible, but the use cases were extremely limited and the security issues made it a pain in the ass to actually use. With distros moving away from X11 this is only going to get harder, and you have to ask yourself how much you really want it.

This would mean the goggles would be basically just a dumb display for the Mac. It would also be weird to try to move an AR app onto your Mac.


I feel like multiple people did it back in the original X11 days, and almost certainly when Compiz was the new hotness.


Nearly 20 years ago there was an OS X screensaver that would capture all the window buffers and then float them around the screen as they rotated various ways. Another app would save a snapshot of all windows as Photoshop layers with appropriate transparency (before blur was added). X11 was always network transparent, though I don’t think windows could easily be moved from one client to another.

Even classic Mac OS, which wasn't designed to be "rootless", could be run that way and have its windows mixed with OS X windows in the Classic days.

As others have pointed out, there is already a PoC app doing it, so it seems that, if Apple wants to, it's completely within their power. However, does this match their vision? (pun intended) Time will tell.


So Apple could have done this, but did not. Why? (Speculation and leaks welcome)

- [Profit on basic innovation] Did they want to wait and see how their customers would adopt VisionOS's native free-floating windows, so as to avoid cognitive overload by commingling with MacOS windows?

- [Benevolence to fellow competitors] Did they not want to take over the existing market of virtualized VR desktops?


I'm reasonably certain it's a combination of bandwidth and tech issues.

The Vision Pro is effectively using AirPlay to mirror the whole screen. If you used AirPlay to mirror each window as a whole screen, you'd run out of bandwidth pretty quickly.

The windowing system in macOS, Quartz Compositor, also isn't built to stream window information. Right now it has a big built-in assumption that any windows it's displaying are on a screen it also controls. It was probably too big a lift across teams to also rewrite the macOS graphics stack for the launch of the Vision Pro. Hopefully they get it working in the future, but neither of these problems is easy to solve.
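
That said, the per-window capture primitive does exist: since macOS 12.3, ScreenCaptureKit can capture a single window independently of whatever overlaps it on screen, which is presumably what the Ensemble project linked elsewhere in the thread builds on. A rough sketch of just the capture side, in Swift (the encoder and transport, i.e. the actual hard parts, are left out):

    import Foundation
    import CoreMedia
    import ScreenCaptureKit

    // Receives each captured frame of the chosen window as a CMSampleBuffer.
    final class FrameReceiver: NSObject, SCStreamOutput {
        func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                    of type: SCStreamOutputType) {
            guard type == .screen, let pixels = sampleBuffer.imageBuffer else { return }
            // Hand `pixels` to a VideoToolbox HEVC encoder and ship the packets
            // to the headset client -- this is where the real work would live.
            _ = pixels
        }
    }

    let receiver = FrameReceiver() // kept alive for the life of the stream

    func captureWindow(titled title: String) async throws -> SCStream {
        // Enumerate everything on screen that we're allowed to capture.
        let content = try await SCShareableContent.excludingDesktopWindows(
            false, onScreenWindowsOnly: true)
        guard let window = content.windows.first(where: { $0.title == title }) else {
            throw CocoaError(.fileNoSuchFile) // placeholder error for the sketch
        }

        // Capture just this window, independent of whatever overlaps it.
        let filter = SCContentFilter(desktopIndependentWindow: window)
        let config = SCStreamConfiguration()
        config.width = Int(window.frame.width) * 2  // 2x for the Retina backing store
        config.height = Int(window.frame.height) * 2
        config.minimumFrameInterval = CMTime(value: 1, timescale: 60) // cap at 60 fps

        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(receiver, type: .screen, sampleHandlerQueue: .main)
        try await stream.startCapture()
        return stream
    }

Everything after this point is the hard part: encoding N such streams in real time, pacing them over Wi-Fi, and compositing them on the headset with acceptable latency.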


Makes a lot of sense; the future will solve it someday.


Most likely because it wasn't polished: ship it now in a polished but limited state, and you have a fancy update to tout when it's done.


They shipped iPad Stage Manager half-baked, to get iPad developers ready for double-buffered windows, so they could eventually ship the visionOS macOS integration half-baked? Doesn't sound right at all to my ears, even though I'm stoked for my order!

EDIT: -5* doesn't make sense; this is the most polite way you can point out that getting macOS apps windowed on visionOS has ~0 to do with double-buffered windows on iPadOS. n.b. I didn't use "half-baked", OP did.


I don't think it's half-baked. I think it's lightly toasted. :-)

I use iPad Pro as a kind of sidecar daily driver, in the magnetic dock magic keyboard w/ trackpad.

As I type this, the screen shows a traditional macOS-style dock across the bottom, four Stage Manager window clusters I can tap with a thumb on the left, and Safari plus Messages taking 2/3 and 1/2 of the screen respectively.

There's more app and pixel real estate than most Windows laptops, and bringing screen sets to the foreground or swapping them back to the side is so natural I almost feel like giving up that space on my Mac as well.

The big thing I saw happen with apps over the past two versions of iOS is app devs realizing their windows will not always be full-screen or half-screen size, but arbitrary sizes.

By now, most iPad apps of any serious nature are effectively window size independent, making them play well with others in stage manager. It's easy to see how that would make them play well with the headset one day.


No, I'm saying they shipped iPad Stage Manager half-baked for their own uses, to refine it for the AVP. I'm positing that a major reason for macOS Stage Manager's existence is as a transport layer / "texture formatter".


On another note, I use Stage Manager every day on Mac & iPad and it's pretty neat. I actually forgot I was using it until you mentioned it.


Same here, I actually really like stage manager on my Mac.


On Windows of all places (95ish to MEish) there was a remote tool called radmin, and it had something that I wish companies had embraced: it hooked into (maybe even before?) the window-rendering functions and sent the changes over the network. It's hard to explain exactly what I mean because everyone is so used to streaming pictures of the screen over the network (if they even use remote access at all), but you could have less than 20ms latency while controlling a machine over the internet, all while using tiny amounts of data (50kbps? 100? not sure, but somewhere around there).

OSX had the opportunity to follow that path before settling on the “render windows, capture the screen, compress the image, send it over the network to be decompressed” VNC-style remote access that’s bog-standard today, and if they had Vision Pro would be set up to be an absolute mind-blowing macOS experience.


This is how Windows Remote Desktop used to work - it would forward GDI instructions to be rendered remotely.

It fell apart as UIs got richer, browsers in particular: they're entirely composited in-app and not via GDI, because GDI isn't an expressive enough interface. So you end up shipping a lot of bitmaps, and to optimize you need to compress them. At that point you might as well compress the whole screen.

https://www.anandtech.com/show/3972/nvidia-gtc-2010-wrapup/3


I wonder how this works exactly. RDP lets me connect to a single-monitor Win 11 host and display it on my client's three monitors. Everything is super smooth, including browsers (I am connecting via Ethernet). Is the host managing the three screens, or does the Remote Desktop client do it on the client's side?


Note the "used to work": modern RDP will negotiate some form of image (or video) compression for transferring data[1]. You can even share an X11 desktop over RDP using freerdp-shadow-cli.

1: e.g. https://learn.microsoft.com/en-us/openspecs/windows_protocol...


> On Windows of all places (95ish to MEish) there was a remote tool called radmin, and it had something that I wish companies had embraced: it hooked into (maybe even before?) the window-rendering functions and sent the changes over the network. It's hard to explain exactly what I mean because everyone is so used to streaming pictures of the screen over the network (if they even use remote access at all), but you could have less than 20ms latency while controlling a machine over the internet, all while using tiny amounts of data (50kbps? 100? not sure, but somewhere around there).

This has been done many times before (see e.g. X Windows) and has known downsides. Off the top of my head:

- You need the same fonts installed on both sides for native font rendering to work

- Applications that don't use native drawing functions will tend to be very chatty, making the total amount of data larger than VNC/rdesktop/&c. style "send compressed pictures" (see the sketch after this list)

- Detaching and re-attaching to an application is hard to get right, so it's either disallowed or buggy.
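
To make the "chatty" point concrete, here's a toy sketch (in Swift; the protocol is entirely made up) of why command forwarding only wins when apps actually draw with structured primitives:

    // A made-up draw-command wire protocol, just to show the size asymmetry.
    enum DrawCommand {
        case fillRect(x: Int32, y: Int32, w: Int32, h: Int32, argb: UInt32)
        case text(x: Int32, y: Int32, fontID: UInt16, utf8: [UInt8]) // needs the font on both ends
        case blit(x: Int32, y: Int32, w: Int32, h: Int32, rgba: [UInt8]) // raw pixels
    }

    // Approximate bytes on the wire: one tag byte plus the payload.
    func wireSize(_ cmd: DrawCommand) -> Int {
        switch cmd {
        case .fillRect: return 1 + 4 * 4 + 4
        case .text(_, _, _, let utf8): return 1 + 4 * 2 + 2 + utf8.count
        case .blit(_, _, _, _, let rgba): return 1 + 4 * 4 + rgba.count
        }
    }

    let solid = DrawCommand.fillRect(x: 0, y: 0, w: 800, h: 600, argb: 0xFFFF_FFFF)
    print(wireSize(solid)) // 21 bytes; the same region as raw pixels is 800*600*4 = 1,920,000

    // An app that composites everything itself (a browser, a skinned media
    // player) can only ever send blits, so the "command" stream degenerates
    // into uncompressed video -- at which point compressed pictures win.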


Isn't that how Xorg remoting used to work as well? The display server and client are separate, so whether the pipe was local or remote didn't matter. In principle, Wayland could do it too, I think, if there were a way to synchronize texture handles (the Wayland protocol is also message-based, but it IPCs GPU handles around instead of copying bitmaps).

I guess one downfall is that your pipe has to be lossless, and there's no way to recover from a broken pipe (unless you keep a shadow copy of the window state, have a protocol for resynchronizing from that, and have a way to ensure you don't get out of sync).


Yeah, you can do this with x forwarding on Linux. Not sure if there’s a modern Wayland equivalent.


Wayland clients don't draw things the way old-school X clients do (neither do modern X clients), so it doesn't make sense at the Wayland level. KDE or GTK could potentially implement something like this though.


This only works under two major assumptions, neither of which holds for the Vision Pro:

1. The receiving side has to have at least as much rendering power as the originating side, since it will be the one actually rendering things on screen. With any kind of glasses the opposite is always going to be true, since you'll want to put in as little compute as possible for weight and warmth reasons.

2. Each application actually has to send draw instructions instead of displaying photos or directly taking control of the graphics hardware itself. No, or very few, modern applications work like this for any significant part of their UI.


IIRC many Windows apps at that time were using MFC or otherwise composing a UI out of rects, lines, buttons, etc. Then came Winamp and the fad of drawing crazy bitmaps as part of the UI. If everyone does that, shipping draw commands is less useful and shipping pixels makes a lot more sense.


This can only work until it doesn't, and it won't work in many situations because, e.g., (1) apps aren't going to bother being compatible with it, and (2) compositing has surprising performance and memory costs, and in this case the destination is more constrained than the source.


Windows has done this very well for at least a few years now. When connected via Remote Desktop, any native application will get the behavior you describe, so the UI gets updated with almost no latency.

Applications which bypass the native APIs to render their window contents, in particular video players or games, get a compressed streamed video which has very decent performance. The video quality seems to be dynamic as well, so if there's a scene with very few changes you can see the quality progressively improve.

All of this is done per window, so a small VLC window playing a video in a corner gets the video treatment, while everything else still works like native UI.


The X Window System basically does that, IIRC, and I remember the magic you speak of.


And yet in practice most X traffic these days is bitmaps.


Can you give some more details on this? The Google front page has lots of results, but it's not clear they're the same thing you mentioned.

How did you know it uses Windows hooks? Was it some sort of binary serialization?


Yeah, I don't think "mirroring" is quite the right term. It's effectively a 4K monitor for the laptop, with the laptop screen going black. Most (all?) Mac laptops don't have a 4K screen, so you get more screen real estate than "mirroring" would suggest.

But this is sufficient for many use cases (or at least, mine). I pre-ordered one with the idea that my main work will be on the 4K monitor, with most of my superfluous apps floating around as native visionOS apps. That's mail, a web browser, and Zoom, which all have apps now, plus Slack, which I could just use Safari for but which may have a native app in the future.


The screen real-estate is the same as for a 1440p screen. From The Verge’s review:

“There is a lot of very complicated display scaling going on behind the scenes here, but the easiest way to think about it is that you’re basically getting a 27-inch Retina display, like you’d find on an iMac or Studio Display. Your Mac thinks it’s connected to a 5K display with a resolution of 5120 x 2880, and it runs macOS at a 2:1 logical resolution of 2560 x 1440, just like a 5K display. (You can pick other resolutions, but the device warns you that they’ll be lower quality.) That virtual display is then streamed as a 4K 3560 x 2880 video to the Vision Pro, where you can just make it as big as you want. The upshot of all of this is that 4K content runs at a native 4K resolution — it has all the pixels to do it, just like an iMac — but you have a grand total of 2560 x 1440 to place windows in, regardless of how big you make the Mac display in space, and you’re not seeing a pixel-perfect 5K image.”


> 4K monitor

It's more like a 1080p monitor. The virtual monitor only covers a small part of the Vision Pro's display. You can compensate a bit for the lack of resolution by making the virtual screen bigger or by leaning in, but none of that gives you a 4K display.

To really take proper advantage of the VR environment you need the ability to pull apps out into their own windows, as then you can move lesser-used apps into your peripheral vision and leave only the important stuff right in front of you. You also miss out on the verticality that VR offers when you are stuck with a virtual 16:9 screen.


It is a 1440p display.

Which is the resolution that the majority of PC users are likely using.


4K is important because of perspective, rotation, and aliasing. Just sending 1080p would look terrible.
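
To put rough numbers on that (every figure below is an assumption, just to show the shape of the argument):

    // Rough angular-resolution math for a virtual monitor in a headset.
    let panelPPD = 34.0        // assumed pixels per degree for AVP-class optics
    let monitorSpanDeg = 50.0  // assumed width of the virtual monitor in your view
    let headsetPixelsAcross = panelPPD * monitorSpanDeg // ~1700 headset pixels
    print(headsetPixelsAcross)
    // Resampling a 2560-pixel-wide source onto ~1700 slanted, rotated panel
    // pixels is exactly where extra source resolution fights aliasing; a
    // 1080p source leaves almost no margin once the window is warped.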



Slack's approach of being a glorified webview everywhere is really paying off (a bit of sarcasm here, but I see it as net positive)

We're pretty close to "Write once, (rewrite a bit,) Run everywhere"


lol yeah, the UX is great across all the devices I have tried the app on.

Even the big recent UI update has completely overwritten how the old app used to look in my memory.

The only feature I have tried and not really cared for is the "Canvas" feature.


Slack is largely UIKit- and Swift-based on iOS. I suspect they are bringing over their iOS app rather than a visionOS-focused one, though.


Is it mirrored as some HEVC video stream from the laptop, or are UI elements actually rendered on the headset itself?


It's streamed from the laptop. Technology-wise it's very similar to a screen sharing session but initiated from Apple Vision Pro instead of a Mac.


I agree that this should be considered long term. However, you are able to snap visionOS/iPadOS apps anywhere around your MacBook view AND you are able to control those very apps with your MacBook trackpad.

So even though you have a sequestered Mac output alongside Vision apps, you can use the same controls for all of them simultaneously. This should help in the interim.


Yeah when I found this out, it resolved my concerns. Most of my apps will have a native Vision release (email, web browser, slack, etc.) and my actual monitor screen will only need more professional software (e.g. Photoshop, Illustrator, InDesign).


Discontent over this implementation detail shows users are fully sold on the basic idea. Like, if the main complaint about the first Fords was the colour range.


I think it's a more important feature than just a cosmetic color. Imagine if you bought a truck to haul cargo, but were then told it can only transport one type of cargo at a time. That would suck.


I meant it's a superficial rather than fundamental design flaw, easily rectified.


It looks like someone is working on a Mac app that does exactly this, and they seem to have a functional prototype: https://x.com/TheOriginaliTE/status/1751251567641346340?s=20


Unclear if Apple will allow this in the store

edit: Yes I know you can build apps before they're in the store


It is permitted on the App Store. The developer had a thread on the fediverse several days ago.


The visionOS side of this, which is the one that requires App Review, is a glorified VNC app. They will probably review it thoroughly but it doesn't seem like it breaks any rules.


As long as it's open source, you can sideload onto any iOS device by building it yourself.


Iff you pay an annual $100 dev fee.

Yes I know you can technically do this without a paid dev account, but it's practically useless because it has to be re-done every 7 days.


Surely, considering that you are streaming your Mac to the Apple Vision Pro, you should be able to easily re-sign it on that very device? :)


I don't know about you, but there's nothing easy about having to boot up an IDE every 7 days to sign and reinstall an app you depend on. It's just one more thing to worry about and mentally keep on top of: the opposite of Apple's convenience and smooth-UX ethos.


Further, I wish they added support to make multiple virtual monitors from macOS Workspaces, like what happens today when you attach another monitor. Switching workspaces can be bound to keys in the Keyboard Settings. Moving windows to other workspaces is easy to do with third-party apps like Amethyst.

It feels like the Vision Pro would definitely be a great replacement for people who (want to) buy multiple expensive monitors, but it doesn't fully reach that potential today, mostly because of software? Although rendering 3 or 4 virtual workspaces over ad-hoc Wi-Fi at 4K 60fps+ with low latency would certainly be a huge challenge.


I do this sometimes on my meta quest. Go into desktop VR and pull up a couple desktop views so I can see things happen in real time on different “screens”.


You can put VisionOS apps next to the Mac desktop, so it isn't as much of a problem as it seems.


I wouldn't be surprised if this came in a visionOS update. On a shorter timescale it could also come in the form of third-party apps, because there are no technical limitations preventing a server app from cutting out windows on a desktop OS and sending them over a wire to a visionOS client.


> On a shorter timescale it could also come in the form of third-party apps

If Apple approves it, of course. This is one of my major concerns; there's a lot of potentially useful functionality that could be implemented, but you have to jump through the app store hoops and hope that Apple doesn't decide that it conflicts with their idea of what you should be allowed to do.


Functionally speaking these apps would be scarcely distinguishable from the plethora of screen streaming apps that exist on the App Store already, like Screens and Moonlight. Of course Apple could reject these apps anyway but it seems unlikely.


There are third-party apps that do this already on the Quest. I believe they can replicate Mac screens; they definitely can replicate Windows PC screens into the VR space. If Apple doesn't provide a first-party solution, I suspect someone else will soon.


Appears possible in theory: https://github.com/saagarjha/Ensemble


The inability to break out Mac windows curbed a lot of my enthusiasm for the AVP. I hope Apple will eventually add it, but I'm not going to spend $3500 on that hope.


It will be less of an issue for me if we start seeing native builds of popular IDEs (Xcode, IntelliJ/GoLand, etc.) for Vision Pro, and other apps for other people, say Photoshop. I think of the "projected screen" feature more as a compatibility layer, like Rosetta 2: you use it until you get a native build, and then it stops being a thing you bother with.


That will never happen. Nobody will make real software for iOS because nobody wants to pay a 30% fee for the privilege of randomly having their whole business shut down when an app reviewer is in a bad mood.


What world do you live in? The iOS App Store is probably the most full fledged and populated app ecosystem there is.


And yet none of the software GP mentioned is available on it.


It’s been 1 day.


No, iPads have been around for more than a decade.


Are you referring to IDEs? The iPad is not a productivity machine; it's for consumption. visionOS is for productivity, which is very different from anything on iOS, so I imagine it's going to be available on it.


My point exactly: I'm going to be hard pressed to drop $3.5k on an iPad strapped to my face. If it can't replace my MacBook, why would I bother?


It has the exact same app review guidelines as iOS and iPadOS. If you can't put it on those, you can't put it on VisionOS.


I don't see why you couldn't port an IDE to iPadOS... the filesystem is a bit tricky with the way iCloud files work, but otherwise it's just a thing.

I think people mostly don't (for high-end IDEs) because those expect a keyboard and a pointing device that isn't a touchscreen, and those are fairly rare per iPads sold.

The iPad Pro certainly is seeing some beefier productivity apps, it seems.


That's not how Apple advertises iPad Pro.


IntelliJ, GoLand, etc. are Java apps.

Windows, BeOS and Commodore 64 apps also don't run natively on the iPad or Vision Pro.


Without a tethered connection the bandwidth simply isn't there.


As long as you keep your expectations modest, you can probably do an acceptable job for casual use cases.


The fact that pretty much everyone who owns both a Vision Pro and a Mac would want that feature means it's probably going to happen.


The largest portable MacBook Pro, the 16.2-inch, has a 3456-by-2234 native resolution at 254 pixels per inch, which by default is halved. So I don't know what she means exactly about 4K, but there are enough pixels to do a portable 4K display.


Streaming an arbitrary collection of windows instead of a single finished, composited framebuffer increases the bandwidth requirements by at least an order of magnitude. That's never going to work well over WiFi.


As long as the total number of pixels is less, I don't see why that has to be true, at least bandwidth-wise. Compute-wise, the Vision Pro might have to do slightly more to separate the buffers and composite them into the AR view in different places, but the bandwidth should be directly proportional to the number/size of the windows. If I can fit all the windows on a 4K screen, then I don't see why the software can't split that up and lay them out separately in my view instead of in a single rectangle.


Some of the windows will be obscured by others. If you stream 100 windows to visionOS, it's possible to lay them out so that none of them cover each other, and then you have to render them all. On a flat screen there is a limit to how many pixels you need to paint.


It seems pretty obvious that nobody wants to be limited to the total number of pixels being merely equivalent to tiling across a 4k screen.


Just compress the stream. Total pixels increase the VRAM needed on the device, but popping out a static window shouldn't take more than a trivial amount of streaming bandwidth.


Everything is already compressed. Uncompressed 1080p is 3 Gbit/s, which is already well beyond what's actually achievable over WiFi. Allowing an unbounded number of window surfaces to be streamed invites people to use it in ways where current technology simply cannot provide the kind of slick experience that Apple needs from this product.
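
(The 3 Gbit/s figure is just raw pixel math; the same back-of-envelope for the other numbers in this thread:)

    // Raw, pre-compression video bandwidth in Gbit/s.
    func rawGbps(width: Int, height: Int, bitsPerPixel: Int, fps: Int) -> Double {
        Double(width * height * bitsPerPixel * fps) / 1e9
    }

    print(rawGbps(width: 1920, height: 1080, bitsPerPixel: 24, fps: 60))
    // ~2.99 -- the 1080p figure above
    print(rawGbps(width: 3560, height: 2880, bitsPerPixel: 24, fps: 60))
    // ~14.8 -- the single 3560x2880 Mac stream quoted from The Verge, before HEVC

HEVC claws back roughly two orders of magnitude, but every additional independently streamed surface pays its own encode, transmit, and decode cost.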


Seeing as you have the eye tracking, you could probably greatly reduce the FPS/bitrate of windows that are out of gaze. This kind of foveated rendering already seems to be used in their ecosystem, so I assume it's sufficiently slick for Apple. Apps will sleep when not being gazed at, and such.

This allows you to restrict full bitrate to a single window with a maximum resolution and keep the user experience high and astonishment low.
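
A toy sketch of what that allocation might look like (the falloff curve and every number are invented for illustration):

    import Foundation

    // Split a fixed bitrate budget across window streams, weighting each one
    // by angular proximity to the current gaze point.
    struct WindowStream { let id: Int; let angularDistanceDeg: Double }

    func allocateBitrates(streams: [WindowStream],
                          totalKbps: Double,
                          floorKbps: Double = 200) -> [Int: Double] {
        // Weight falls off with angular distance from gaze; the fixated
        // window (distance ~0) takes most of the budget.
        let weights = streams.map { 1.0 / (1.0 + pow($0.angularDistanceDeg / 5.0, 2)) }
        let total = weights.reduce(0, +)
        var result: [Int: Double] = [:]
        for (s, w) in zip(streams, weights) {
            // The floor keeps peripheral windows from freezing entirely
            // (and means this toy can slightly overshoot the budget).
            result[s.id] = max(floorKbps, totalKbps * w / total)
        }
        return result
    }

    // Example: the gazed-at window and two peripheral ones share 20 Mbit/s.
    let alloc = allocateBitrates(
        streams: [.init(id: 1, angularDistanceDeg: 0),
                  .init(id: 2, angularDistanceDeg: 30),
                  .init(id: 3, angularDistanceDeg: 60)],
        totalKbps: 20_000)
    print(alloc) // window 1 takes ~19.3 of the 20 Mbit/s; 2 and 3 drop to a trickle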


Keep in mind that you are popping out a static window 60 times a second or so.


Use the touted eye tracking feature and compress each application's stream inversely to how much the user is focusing on it within the Vision Pro.


Does foveated rendering work with the extra latency of a round trip over WiFi? I can certainly see this working fine on a head tracking scale, but I'm less sure about eye tracking. I also wonder if existing video compression hardware can actually make use of such outside feedback to adapt bitrate and quality with the necessary flexibility.


I guess there is something to the Mac only having to handle the output for one screen at a time. If it has to render 4K outputs for 10 different screens simultaneously, performance is going to suffer.


That's really weird, because even the HoloLens has this feature. Multiple windows, multiple desktops is how we want to work.


The iPhone debuted without copy/paste. They'll get to it, but maybe not immediately.


I really like Joanna Stern, and how she approaches reviews like this. I’ve watched her review, The Verge’s, and MKBHD’s unboxing video.

However, the best review I’ve found that actually transmits what is possible and what it is like to use is Brian Tong’s 55 minute review video: https://youtu.be/GkPw6ScHyb4

I’m not familiar with him, but unlike other reviews I’ve seen, he spends less time evaluating or summarizing, and more time trying to actually use the device. I didn’t even realize that you can seamlessly use your Mac to control your visionOS apps, for example.


Good review. Most interesting part was at 43:00 discussing the ergonomics and weight, which is the real question for everyone hoping to make this a daily driver.

He said he could wear it 45 mins before needing to take it off, that it was overstimulating so you need to slow down how quickly you use apps and move things on screen, and that gestures also were fatiguing. You could tell he was trying to be fair but positive.

Headsets just haven't cracked this nut yet, and though the tech may advance somewhat, there may be limitations inherent to the form factor. Even if it gets really lightweight, the issues of overstimulation, headaches, and the amount of neck movement implied may keep these products in a niche. (I say this as someone super excited about the AVP.)

For everyone used to using their computers all day long wanting to do it in a headset, don’t throw your macbooks away just yet.


I've regularly done 2+ hours of light activity (e.g. mini golf, social hangouts) with my Quest 3 without issues, though I will note this is with a third-party head strap specifically designed to be way more ergonomic and comfortable than anything first-party from Meta or Apple [1].

[1]: https://www.bobovr.com/products/bobovr-m3-pro

A lot of the physical downsides here are basically self-inflicted by companies trying really hard to hide the "nerd factor" necessary for comfort, to the detriment of the actual user experience.


Yeah industrial design and ergonomics tend not to have the same goals. Personally I was able to use a Quest 2 for ~1hr without too much issue, but it's not something I'd want to do on the regular.

The big product-marketing question is what niche headsets fit in, and thus what's the ideal single-session and daily usage goal for a headset?

If it's about replacing laptops or another high usage scenario, that's a pretty high bar, definitely too high for the next 1-2yrs. I imagine some people at Apple wore dummy see-through goggle ergo tester units of varying weights around all day to get at these numbers :) Wonder what they came back with. Even still, that only gets at weight vs the perceptual ergonomics, skin-feel, etc.

The issue I see with headsets is that there may not be a lot of improvement possible without compromising durability or other factors necessary when going to market. E.g. what if they can't get it below ~400g (making the AVP ~40% lighter), but making the headset comfortable for most people, for the usage scenario that makes it mass market (e.g. 2h+ sessions daily), requires ~250-300g?


It's not really self-inflicted when it literally determines how many sales you will have. Plenty of people are put off of VR because of the form factor. The Quest looks like a nerdy toy, which does indeed influence who buys and/or uses it.


I think the "overstimulation" thing is a bit of a sleeper issue.

At first people think "wow it's so awesome I can be sitting on the moon while I browse the web". But after a bit of time you just get tired, and I think it's precisely because your whole brain is working in overdrive to understand the unnatural environment you are in. None of this manifests explicitly but at the end of it, when people are faced with the choice of putting the headset on or not, they just "feel" like it's a lot of effort.

I say all this as someone who does regularly spend 1-2 hours working in Immersed with multiple giant screens up. And I love it as a break, a way to focus, or just a way to relieve the boredom of working in the same space day in, day out. But even I find it tiring and am not keen to do it for 8 hours a day. And the minute you say that, you've lost the use case of this being your "only" computer / replacing your laptop, which is actually kind of crucial to its central justification as a replacement for a computer or a "new kind" of computer.


Everyone I know, even those who are into VR and even those who WORK in VR, has zero interest in working with a headset of any kind on.

And those who work in VR report that their coworkers and just about everyone they talk to customer-wise feel the same way.

I know there are people who really want to, or think they do, but most would rather just use screens, until maybe such a time as the form factor becomes a pair of eyeglasses.


Is the problem "working in VR" or is it "with a headset on", though?

I think it's completely reasonable to say that this stuff is only going to continue to get lighter and more comfortable every generation. What's that balance of interest going to look like when a headset's eventually got the same weight and form factor as, for example, big ski goggles?


> Is the problem "working in VR" or is it "with a headset on", though?

IMHO both. As for form factor, lots of people already opt for contacts over glasses for comfort. I think ski goggles are right out for anything that's not recreation or industry/application specific.

As for being in VR, people really do like the real world lol. Going in and out of VR is a bit of an ordeal. Anecdotally, I like taking small breaks from looking at my screen during the work day. Frequently. I look outside at the trees, at my cats, at stuff on my desk, etc. when I'm thinking. If the choice is between "going in and out of VR" to do that or just not being in VR, I'm going to just not be in VR.

For it to catch on mainstream for productivity and day-to-day use, it's gonna have to be like eyeglasses or sunglasses with seamless AR, via something like retinal projection perhaps. I feel like Google was onto all this, hence Google Glass.


This is headset, app and person dependent. I've done 8 hours in Elite: Dangerous (sat down, can't remember the headset) and well over 4 hours in Fallout 4 VR (stood up and moving around, Valve Index).

Having demoed VR at my old office I can tell you that the range of reactions varies from an immediate "nope" and having to take the headset off to being able to stay in it for a significant amount of time with no discomfort.


The fatigue will morph into buyer's fatigue.


I started watching the video, and at 0:40 he asked Siri to "close all my apps". At that point my own iPhone's Siri enthusiastically explained to me how I can close all the apps on my phone.


It is interesting how many people here are excited about this for productive computer work. It's also what Apple advertises.

But what is the account situation like?

For years I’ve been complaining that I can’t easily use my private iPad with my company Mac because they have separate Apple IDs. Things like sidecar for a quick virtual whiteboard are basically impossible.

AirPods have gotten better over the years where today I can freely switch between devices belonging to different Apple IDs with the same AirPods.

But is the Vision Pro like that as well? It would seem weird to exclude the not-so-small group of people who work from home but with company MacBooks.


> But is the Vision Pro like that as well?

It's actually far worse. There's a single user and a "guest mode", but for AR/VR to work there's a calibration step, which means a guest has to go through that step every single time they want to use the device. It might be fine for a real guest using it once, but it makes it basically impossible to share the device with someone else. Having to set up the device every single time you use it sounds absolutely terrible.


The only reason the guest mode exists is to entice the "guest" to also purchase an AVP after having experienced it.


Of course. They want to sell them locked to a user so that every employee or family member needs their own, and can't use the same one at work and at home.


I don't think the question was about multiple people sharing a VP device, but about the same person using a single VP device with multiple Apple devices on multiple Apple accounts - can you easily switch between viewing your personal Mac's screen and then switching to your work Mac?


Man, I've felt this for YEARS, and I feel like I'm taking crazy pills with all the YouTubers and influencers proclaiming Apple's connected ecosystem as such a productivity advantage.

Either you can't sign in with your personal Apple account, or you shouldn't (because MDM). So the only way to access anything associated with iCloud is what's available on the iCloud web portal, which is a horrible experience. You can't do Sidecar. You can't do AirDrop, copy-paste, Continuity Camera, nothing.

I've only ever used Macs in a professional environment. I've also always had a Mac and iPhone as personal devices. But I've never made the jump toward saying "OK, I'm actually using iCloud seriously now" for this single reason. The best Google cloud experience is available in a web browser, which I can be signed in to on everything. Google Drive is everywhere. The list goes on.

It's such a crystalline example of how Apple's walled garden actually hurts Apple itself.


I thought I was the only one bothered by that! I'd love to use my private iPad with my work MacBook. And at least in my case, preventing that definitely won't increase iPad sales: my company won't provide me a work iPad, and even if it did, it wouldn't work, as there are no iCloud accounts attached to our work MacBooks.

Locking your customers into your ecosystem? Fine, whatever. But restricting usage in such a way even within the ecosystem!?

It's been said for years, but the iPad could be so much more than a mere media consumption device if it weren't for short-term-profit-driven design decisions.

Maybe they do better with the Vision Pro.


Basically the business and education group is about selling to businesses and schools, so they give them the tools they say they need. This means you wind up having configuration options which sound good to operations, but which break ecosystem support - and on BYOD break personal usage.

Literally the only cloud drive product I know of which doesn't work on my corporate laptop is iCloud Drive, because the EMM gave a checkbox to set a flag. As a result, a huge portion of built-in collaborative features and apps just don't work. I have paid seats in other products only to regain functionality lost by that checkbox.


> For years I’ve been complaining that I can’t easily use my private iPad with my company Mac because they have separate Apple IDs.

I have a similar complaint with my Apple Watch and my corporate-issued laptop. When I am using my own computer (a Mac mini), I love how easy it is to use my watch to log in, approve actions, etc. However, when it comes to my company laptop, I have to type my password in repeatedly. It would be awesome if the watch could be linked to both IDs to make this much more seamless.


Apple’s solution is that your corporate should buy you a second watch.


On the upside, once somebody figures out how to use smartwatches for VR haptic feedback, nerds will have a reason to wear six watches.


Four limbs, two nipples. Math checks out.


No security-conscious corporation is going to allow you to approve any actions with security implications using an Apple Watch secured by a four-digit passcode, rather than an alphanumeric password on a Mac.


MDM is supported by the watch. My organization requires a 10-digit code for watches, plus requires biometrics to be resolved on the primary device before the watch can control that device or interact with apps on it.


> For years I’ve been complaining that I can’t easily use my private iPad with my company Mac because they have separate Apple IDs. Things like sidecar for a quick virtual whiteboard are basically impossible.

This is kinda what Managed Apple IDs are for - the work 'owns' the Apple ID it puts into its management profile and can set policy. Apps write into a separate storage container which the company could remote wipe, without affecting the rest of your personal data. If they want to disable things like sidecar, they can do it.. for the corporate apps/accounts/web domains.

I'd generally assume the multi-user aspect is worse (because of face shields and prescription inserts), so generalized multi-account support is pretty low on the priority list.


> Things like sidecar for a quick virtual whiteboard are basically impossible.

Zoom has a good AirPlay sharing feature that works well in this situation.

But I get what GP means -- I do have a corporate profile, and I made my own @corporation.com Apple ID, but what do I do to use Sidecar? Either log out of my personal iCloud on the iPad (gross) or log in to my personal iCloud on my work computer (grosser).


I suspect multi-account support is not "down the priority list" but purposefully not implemented, at least when it comes to iOS. Why make it easier for customers to share iPads at home when they can buy multiple iPads?

I use my iPad so sporadically that it could easily be the house iPad, but I'm signed in with my email, so it can't be.


The entire screen sharing setup they demo'd in the original Vision Pro demo reels always made me laugh. They've had years to get Sidecar right, and have failed miserably every time. How am I going to believe that they'll get wireless display transmission to work perfectly for this thing?


I haven't used the Vision Pro, so I can't say how well it works in practice... but with macOS 14 this year they redid their screen sharing app to, presumably, use whatever technology is underlying the Vision Pro display-sharing. It's really good. Vast improvement over the previous tech (presumably VNC?).

Assuming the Vision Pro screen sharing works using the same stuff, I have high hopes.


It does use the same stack; I assume it was redone precisely to support this feature.


I haven't used it in quite a while so I'm wondering what the current issues with Sidecar are?