SwiftUI, RealityKit, ARKit, Xcode, Reality Composer Pro, Unity ... give me a break :)
These device-specific environments and technologies are cumbersome to learn, exclude everybody with a different device, usually change rapidly, which makes code maintenance a nightmare, and at some point simply die, like Flash.
I will start developing for 3D devices when we have a standard similar to HTML or when 3D elements are part of the HTML standard.
A text-based way to describe how elements are positioned in space and what attributes they have, with JavaScript support for interaction.
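Something like this, purely as a sketch of the idea (the <x-scene>/<x-box> element names are made up and not part of any standard; only the DOM calls are real):

    // Hypothetical sketch: declarative 3D elements plus JS interaction.
    // <x-scene> and <x-box> are invented element names, not a real standard.
    const box = document.createElement('x-box');
    box.setAttribute('position', '123 40 8');       // x, y, z in scene units
    box.setAttribute('color', '#44aa88');
    box.addEventListener('click', () => box.setAttribute('color', '#ff0000'));
    document.querySelector('x-scene')?.appendChild(box);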
Maybe you're talking about Universal Scene Description? It's a format Pixar put together and opened up. There's a WWDC talk on Friday titled "Explore the USD ecosystem" about it; I'm hoping this means Reality Composer Pro/visionOS is using an existing open standard for this stuff.
For a tiny while that was so futuristic: the Web, in 3D. To ~10-year-old me it was like seeing the prototype of a flying car: of course everyone will use this in the future!
One would just need a browser in visionOS that supports displaying these elements in actual 3D rather than projecting them onto the 2D surface of the webpage...
It's still missing some important things in the areas of shading and animation for a full transfer of all the qualities you find in a modern 3D creation app like Blender.
That seems to be a way to define 3D objects and render them in 2D HTML.
This approach would be useful if 3D devices let you render a view for each eye and report head and hand movements as events. One would then build a whole 3D engine with the current browser APIs.
The approach I was thinking about was to tell the 3D device "this object is located at x,y,z position 123,40,8" and let the device do the rendering. It's probably much faster, as the device likely has a lot of hardware optimized for 3D rendering. Let alone things like AR, where you have to analyze a given video input, figure out that there is a real table at a certain position in space that you can put 3D objects on, calculate the physical interactions and shadows, etc.
Not sure which approach is better. Time will tell.
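To make the contrast concrete, here's a rough sketch of the "just tell the runtime where the object is" style, written against three.js (real API; the scene itself is only an illustration). With WebXR enabled, the browser/device owns the per-eye cameras and frame timing:

    import * as THREE from 'three';

    // Declare where things are; let the runtime handle stereo rendering.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 1000);
    const box = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshStandardMaterial({ color: 0x44aa88 })
    );
    box.position.set(123, 40, 8);   // "this object is located at x,y,z"
    scene.add(box, new THREE.HemisphereLight(0xffffff, 0x444444, 1));

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.xr.enabled = true;     // hand per-eye rendering and timing to WebXR
    // (session start, e.g. via three's VRButton helper, omitted for brevity)
    document.body.appendChild(renderer.domElement);
    renderer.setAnimationLoop(() => renderer.render(scene, camera));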
The first approach doesn't work because it'll give the user motion sickness. The device must be able to interpolate between "frames" faster than you can update them, and providing the info needed for you to do that is a privacy issue.
WebXR is the way to do this, and a reasonable, better-supported equivalent to react-vr is probably @react-three/xr, where the stack is React --> three.js --> WebXR --> WebGL.
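A minimal sketch of that stack (based on the @react-three/xr v5-era API from memory; exact exports and component names vary between versions):

    import { Canvas } from '@react-three/fiber';
    import { VRButton, XR, Controllers } from '@react-three/xr';

    // React -> three.js -> WebXR; WebGL sits underneath via three.js.
    export function App() {
      return (
        <>
          <VRButton />
          <Canvas>
            <XR>
              <Controllers />
              <mesh position={[0, 1.5, -1]}>
                <boxGeometry args={[0.3, 0.3, 0.3]} />
                <meshStandardMaterial color="orange" />
              </mesh>
            </XR>
          </Canvas>
        </>
      );
    }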
Apple announced support for WebXR on visionOS as well.
I think it will be a lot more interesting once WebGPU hits too, as it will be closer to native-level GPU programming but portable between both native and web contexts.
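For a sense of what "closer to native-level GPU programming" means in practice, a minimal WebGPU bootstrap looks roughly like this (standard navigator.gpu API; error handling kept to the basics, types via @webgpu/types):

    // Feature-detect and initialise WebGPU for a canvas.
    async function initWebGPU(canvas: HTMLCanvasElement) {
      const gpu = (navigator as any).gpu;
      if (!gpu) throw new Error('WebGPU not available in this browser');
      const adapter = await gpu.requestAdapter();
      if (!adapter) throw new Error('No suitable GPU adapter found');
      const device = await adapter.requestDevice();
      const context = canvas.getContext('webgpu') as GPUCanvasContext;
      context.configure({ device, format: gpu.getPreferredCanvasFormat() });
      return { device, context };   // from here you build pipelines and command encoders
    }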
I actually see this sort of commentary a lot, and I suppose it's apt for HN and other tech fora where people want open standards, but for the life of me I don't understand why we shouldn't develop with technology designed _with_ the hardware. This is Apple's forte. I've done a lot of generic development in the past, as well as multi-cloud computing, and the only takeaway I have is that those generic approaches always provide inferior experiences for the sake of some nebulous goal.
The point of Apple Vision Pro is to run Apple hardware with Apple software, and people _want_ that.
Because the alternative is a walled garden, for better and worse. Why don't most games work on Macs? One big reason is closed APIs, especially as hardware improves.
Why did HTML become a ubiquitous API? Because it's free and open.
Yeah, no wonder the world is chock full of slow frameworks built on top of unnecessary cruft (à la Chrome), and no one seems to understand the underlying tech anymore.
I wonder if all of this will turn out to be another watchOS. It's comprehensive, and frankly cool. But will that be enough to tip the scales in favor of a killer app?
Every new platform needs a killer app. Xbox had Halo. iPhone had Maps. watchOS didn't really have one. Hopefully visionOS will.
The killer app is your current workstation. This thing is worth it alone just by virtue of being able to expand my workstation to an indefinite number of virtual monitors wherever I am alongside my laptop.
This is possible on current VR headsets, but the resolution isn't there for any of them, including the Meta Quest Pro. This is the first one with enough resolution to push it over the threshold.
> being able to expand my workstation to an indefinite number of virtual monitors
Can it do that though? In the presentation, they only showed one virtual monitor replacing your laptop's screen - essentially just mirroring the screen once.
Everything else looked like native apps, so they're likely as restrictive as iPad apps, but in (virtual) space.
If someone told me as a kid that I'd spend more than half of my waking hours with a screen in front of me, I'd have called bullshit on them as well.
In principle, if this device were better than our current laptop screens, I think it would be a no-brainer. With the tech being so early, the device reportedly heavy (why did they think metal was a good idea?), and with only so much resolution, it's still hard to justify as a work expense.
> I remain unconvinced that a lot of people will want to spend most of the time they are awake with VR/AR goggles strapped to their face.
Nobody suggested that it should be used all day for most people.
However, when I watched the keynote, I immediately thought how much nicer it would have been at the peak of the COVID pandemic to be using an Apple Vision Pro for all of my Zoom calls on the living room couch (or the porch) instead of being stuck at my desk in front of the desktop.
I use my Apple Watch Ultra every day to track sleep, while integrating with the other Apple products that I own. For me that justified the price point given that I am doing my _hardest_ to improve my sleep (and am not that affected by anxiety about sleep data).
So far, it has more or less panned out as I wanted it to. One thing I'm not happy about is the temperature tracking, as I want to know when my temperature minimum is, but Apple doesn't give me that.
Other features are nice to have. One that's really nice to have is Apple Pay on your watch. With public transport (Amsterdam), it becomes seamless to go on a ride.
Maybe not. This is a popular argument, but I think it's too simplistic. There must be different killer apps for different people. For AR, there are meaningful applications, especially for industrial use. It's also meaningful for entertainment and games. It's an extension of existing platforms with a new interface. I think there will be multiple use cases for this platform.
What are the health implications of watching a screen this close for many hours every day? Is there scientific consensus on this or are we exploring uncharted waters?
No idea about the eyesight, but a personal anecdote about the eyetracking for the cursor/selection: It's an existing assistive technology and a few years ago I tried it for fun for a day. It can get tiring and you really exercise the muscles moving your eyeballs. I suspect it's something that you train / get used to in the end, but it's definitely an unexpected strain on the body. I suspect that's going to become a new thing people experience like the texting thumb RSI.
There is no scientific consensus, especially long term. But as with all things, if it's not what nature designed us for, it's probably not great, if not outright terrible, for us.
Then add the developmental effects of having a child wear this strapped on for most of their puberty (you know it's gonna happen), and I'm glad I'm too old to get sucked into this fad.
I feel alienated by how positive the reception for this thing is.
Since everyone responding seems to be missing the point: are there electromagnets in this, or anything else that could impact brainwaves, cell behavior etc?
It seems a lot of people have missed this detail. Additionally, it looks like the battery pack has a USB-C/Lightning port on it (possibly for charging while you keep using it).
Not really excited about Apple trying to shove SwiftUI into everything. I really hope you'll have the option to use UIKit here.
SwiftUI is great if you're building simple things, but a huge pain in the ass for anything remotely complicated / with a lot of mutable state, which is what I'd expect VR experiences to look like code-wise.
If you wanted to develop for visionOS, you're probably going to need to buy your own Apple Vision unit, right?
Looks like there's already a hurdle to developing for Apple Vision. The first visionOS developers will probably come from established companies rather than indie devs (who helped grow the iOS ecosystem).
There's a pretty robust emulator they're shipping end of the month. There will also be dev centers in London, Munich, Shanghai, Singapore, Tokyo, and Cupertino. Finally, you can submit your app to them and they'll record a session with it, and you can ask them to focus on certain issues. I suspect once they start in-store demos, devs will be able to test their apps in person.
Why shouldn't there be? I'd be highly skeptical of any "developer" trying to build the cutting edge on brand-new prototype hardware who didn't have $4k in their runway to get a single dev unit. This isn't "libre FOSS runs on a toaster oven", so, y'know, why not?
The SDK is coming out next month, way before the devices launch, so I suspect there will be some simulation options. I'll be playing about with it for sure - I want to see what gaming HUDs I can bring to life!
For everyone wondering why Vision Pro will succeed where the others have failed, I think this is your answer. Apple will provide a nice set of UI primitives that make building apps easy enough, but more importantly will give a consistent user experience and ease people into this brave new world.
What do you think the chances are of any old schmuck getting their hands on a development kit?
I'm a web dev, and have absolutely no experience with Apple or VR development. But I would absolutely kill to get one of these dev kits for my own use.
I guess I can understand why, but having its own entirely new OS could be (emphasis on could, not saying it will happen) what sinks this. Not even the iPad started with its own OS; it was originally iOS and then forked off into its own thing, which is still very similar to iOS, even to this day.
Though maybe that's what they wanted to avoid. iPadOS sits in an awkward position between macOS and iOS; maybe Apple wanted to avoid a repeat of that.
I think it will be more like iOS for three reasons:
1. Apple doesn't want to be associated with smutty content, which it will inevitably be used for with sideloading (people will find an App Store-approved way anyway)
2. The headset is far more intimate than a phone; better controls are needed to protect privacy and security
3. Although the device has an M2, it is not just a head mounted Mac, and visionOS is supposedly a realtime operating system
My iPad Pro also has an M2 inside of it but it’s still on the iPhone model of software distribution (which sucks).
That said I’m hoping for something closer to the Mac in terms of software. Basically I want Emacs on the headset and no policies or DRM that violates the GPL to tell me otherwise.
There has been experimental WebXR support in Safari since 2022, but looking at the WebKit bug tracker [1] it doesn't seem finished yet. I guess the next few days will tell us if this is a priority for them.
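If you want to check for yourself once the device ships, the standard WebXR feature check is just this (real navigator.xr API; typings via @types/webxr):

    // Returns true if the browser advertises immersive VR sessions via WebXR.
    async function supportsImmersiveVR(): Promise<boolean> {
      const xr = (navigator as any).xr;
      if (!xr) return false;
      return xr.isSessionSupported('immersive-vr');
    }

    supportsImmersiveVR().then(ok => console.log('immersive-vr supported:', ok));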
"xrOS" is visible on the simulator title bar in the platforms state of the union. Just didn't get picked by marketing ultimately. I'm glad; "XR" as a term sounds too overly techy. The marketing is bringing this futuristic thing down to earth, making it relatable.
> These device-specific environments and technologies are cumbersome to learn, exclude everybody with a different device, usually change rapidly, which makes code maintenance a nightmare, and at some point simply die, like Flash.
> I will start developing for 3D devices when we have a standard similar to HTML or when 3D elements are part of the HTML standard.
> A text-based way to describe how elements are positioned in space and what attributes they have, with JavaScript support for interaction.
Looks like there is "X3D":
https://en.wikipedia.org/wiki/X3D
At first glance, it looks a bit like a 3D version of SVG. That could be a good starting point.
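To show what "a 3D version of SVG" looks like in practice, a small sketch: the markup uses standard X3D node names, and the scripting assumes an in-page X3D runtime (something like X3DOM) that exposes the nodes as ordinary DOM elements:

    // X3D scene described declaratively, then manipulated via the plain DOM,
    // much like scripting SVG. Assumes a runtime such as X3DOM is loaded.
    const markup = `
      <x3d width="400px" height="300px">
        <scene>
          <transform translation="0 1 -2">
            <shape>
              <appearance><material diffuseColor="0.2 0.6 0.9"></material></appearance>
              <box size="1 1 1"></box>
            </shape>
          </transform>
        </scene>
      </x3d>`;
    document.body.insertAdjacentHTML('beforeend', markup);

    // Interaction is just attribute manipulation on live elements:
    document.querySelector('transform')?.setAttribute('translation', '0 1.5 -2');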