SimulaVR's Reaction to Apple (simulavr.com)
105 points by georgewsinger on June 14, 2023 | 148 comments


These guys seem cool, but if you're going to attack Apple, "it runs Apple's locked-down OS!" and "they don't provide hardware specs!" are not the avenues to go with. These are the exact same problems people have been berating Apple over for years, if not decades, and the market has repeatedly and decisively demonstrated that it doesn't give a shit.


Our critique isn't so much about Apple's locked-down OS as it is that "you won't be able to run powerhouse apps from macOS to get your more serious work done".

We hope & expect consumers to love the Apple headset(!), because it seems better/more focused on productivity use cases than the gaming headsets currently on the market. But in order for it to "actually replace a laptop", bringing iPad/iPhone apps into VR isn't quite enough, in our opinion.


You saw the "look at your Mac and the screen appears" bit, right? And you can (from reviews) read and interact with text on your phone, so typing on the keyboard with an appropriate level of immersion seems feasible - hell, maybe even with total immersion; there's a ways to go until they're actually launched...

Given that this is a full-blown M2 on-board, I think relegating it to "just like an iPad" is a little lacking in foresight, but I guess we'll all see how it pans out...


> You saw the "look at your Mac and the screen appears" bit, right ?

I did, but all I saw was a virtual mirror of the Mac's screen. Not apps. The mirror is the app; it's more like running a remote desktop app in the 3D world.

I didn't see VS Code running in its own virtual window. I didn't see iTerm running in its own window, separate from VS Code's. I didn't see the Mac's Firefox running in its own window, separate from the other two. Sure, you can run visionOS's Safari, and probably even visionOS's Firefox in the future, or their iPad counterparts, but they're not nearly as powerful as the desktop versions.

You can already mirror your Mac to an iPad. In fact, you can use the iPad as an extra screen for your Mac. In that sense, AVP is in fact not much more than an iPad with a spatial interface.


This is an amazing point. I was also thinking about the question in the parent, but reading this now makes me realize that unlimited monitors are nice but that's not what I would want for a truly immersive "spatial computing" experience.


Yeah exactly. I don't want to stream my Mac's screen. I want to stream individual apps.

Better yet, I just want to run those apps on the device and be free of the Mac. Add to that the ability to connect to an actual display, and this device could in fact finally replace a MacBook.


I'm reasonably certain the team developing this has enough on its plate without writing their own custom apps just yet, and they're certainly not going to let third parties near it for a while.

There is an App Store icon in the video when the "menu" comes into view. That's a pretty heavy implication that there's going to be native apps running on the thing.


I 100% expect visionOS to have native apps. I never said anything to the contrary.


There are iPads that also have a full-blown M2 on board and are still just like an iPad so it seems like a fair comparison.


> You saw the "look at your Mac and the screen appears" bit, right ?

You can already do that with something like XReal Air for 10x less (both the price and the weight).

> Given that this is a full-blown M2 on-board, I think relegating it to "just like an iPad" is a little lacking in foresight

The ability to have multiple virtual monitors with a Mac? That's a game changer (potentially). The ability to run multiple virtual monitors with an iPad? Pretty cool, I guess, though I'm not sure what I'd use them for. It'd be nice for watching movies, but again, see above: XReal (and probably a load of others by now) already has that covered.

Put it this way: there's a reason Sidecar lets you use your iPad as a second display for your Mac and not the other way around.


One thing that’s super frustrating about this is that developing VR/AR apps right now kind of sucks. You do what you can to test functionality without the headset on because taking it off and on 200 times a day is awful.

This product is in the position to make that all super streamlined, but it seems that it won’t be.

To be fair, my experience is in games and non-gaming apps might not be as bad.

Also, I don't personally own a Mac. I have a PC, and while I might be willing to plop down the money for the Vision Pro if it worked completely standalone, I absolutely won't buy both it and a Mac.


>"you won't be able to run powerhouse apps from macOS to get your more serious work done"

Why would you try to run a powerhouse app on a computer strapped to your face, versus running it on the hardware it's designed for and sharing that system's screen to the one strapped to your face? Just like they demonstrated as a key selling point. Powerhouse apps tend to make the hardware spin up its fans from all the heat being generated; no way would I want that near my face.


Cost, portability, and better integration with the AR OS (as supported by the developers). SimulaVR does seem to agree with you, though, w.r.t. strapping compute to your face: they have the compute tethered the same way the Vision Pro has its battery tethered.


You are mistaken in seeing iOS frameworks as a limitation. It is an advantage! It is the natural fit for the eye tracking and hand gestures, because those apps will work flawlessly with no change.

You are also mistaken in thinking Mac OS can't work with Vision Pro, because it can. Apple has put a trojan horse in Vision Pro by giving us any number of screens we want to use with our Macs. In the future Apple will bundle Mac OS and iOS in Vision Pro. It is the ultimate evolution of computing. This is my prediction. Vision Pro will replace both the iPhone/iPad and Mac. All of it can coexist perfectly in Vision Pro.

This is what you should do. Copy Apple. Make your headset work with Android. Allow Linux and Windows to also be able to work through the headset with existing computers, by providing virtual screens. In the future you can offer a version of your headset with either Linux or Windows, but Android must definitely be in the first version of your headset. This is the only chance you have.


It's valid in this case because Apple is trying to make the case that this device is supposed to be the future of computing. It's telling that you can't actually develop for the platform within the platform itself.

If you are developing novel AR experiences why wouldn't you want to be building them in a fully 3d environment, with new tools that can take advantage of all that brings? Instead we are forced back into the world of 2d, which shows an unfortunate lack of conviction on Apple's part.

If we are serious about this being the future of computing we need to build native tooling. This is a virtuous cycle, tools beget tools, but Apple needs to be willing to relinquish some control.


> It's valid in this case because apple is trying to make the case that this device is supposed to be the future of computing. It's telling that you can't actually develop for the platform within the platform itself.

Would you not have said the same thing about iPhone 1? It was absolutely the “future of computing” (whatever the hell that means) in terms of mobile ubiquity, but you couldn't develop anything on the device itself at the time (or really, even today).


An iPhone and a laptop are both just 2D screens with input; the difference is just one of screen size and primary form of input. The Vision Pro is an entirely different category of device. Both the form and the nature of the display and controls are vastly different.

The Vision Pro is a superset of a laptop, but the reverse is not even close. If I'm building an AR or VR tool, I should be doing so in a 3D AR environment; anything else is obviously a substandard experience, removing options and fidelity for no gain.


The critique is that iPads are less effective for serious work than Macs. Apple continually markets the iPad as a capable everything device, when it is less capable than a Mac, even though the iPad Pro shares the same SoC.

Every professional I know who owns an iPad also owns a macOS/Linux/Windows computer. If an iPad is a computer replacement… then it should replace a computer.


I think you missed the point: You are to channel your frustration into those terms, so your complaints can be dismissed.

Why else would they mention those things?


from the article:

> While a premium VR headset built over iOS apps is a step in the right direction, we worry it could seriously hinder the device's ability to serve as a true laptop replacement. This is because the iPhone/iPad iOS is more driven towards the passive consumption of information than it is maximally productive work.

I was just agreeing with and rehashing the article


They're relevant but not dominant in every sector they're in except maybe the watch, so I would certainly not call it decisively demonstrated.


I really feel for George and team. It seems like they're quite a ways from a commercial product, far enough so that market leaders are likely to be established by the time they'd be ready to ship.

Critique #1 is about unknown implementation details, and we'll know that soon enough. In the meantime, an article on IEEE Spectrum says, "…journalists who’ve tried the device say it’s competitive with other AR/VR headsets, which offer a FOV between 100 and 120 degrees. That should place the headset’s pixels per degree around 50 to 70 PPD."¹

Critique #2 talks about the massive ecosystem advantage of the Apple Vision Pro, but I think the thesis — that HMDs will not replace PCs/laptops — is the wrong way to think about their value.

¹ https://spectrum.ieee.org/apple-vision-pro
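For reference, average PPD is just per-eye horizontal resolution divided by horizontal FOV, ignoring lens distortion and the higher pixel density at the center of the view. A rough sketch with illustrative numbers only (neither figure is a confirmed Vision Pro spec):

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Rough average PPD: per-eye horizontal pixels spread across the FOV.

    Ignores lens distortion and foveal pixel-density variation, so
    center-of-view PPD on a real headset will differ from this average.
    """
    return horizontal_pixels / horizontal_fov_deg

# Illustrative values, not published specs: a panel roughly 3,660
# pixels wide per eye across a 100-degree horizontal FOV.
print(pixels_per_degree(3660, 100))  # 36.6 PPD on average
```

Center-of-view PPD is typically higher than this whole-FOV average, which is one reason published estimates for the same headset vary so widely.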


> that HMDs will not replace PCs/laptops — is the wrong way to think about their value

For the majority of users, including most professionals, I think they absolutely will. What hurdles do you see?

Most people don't need high end workstations or gaming PCs. A good majority of the "heavy" work could technically be put into the cloud (or already is, for professionals).


Could you also argue that most people "don't need" VR either? I'd even wager it's less convenient than a physical metaphor for computing like the smartphone.


Nobody needs an XR headset, but many don't need a computer if an XR headset, Chromebook, thin client, whatever, can cover their use case. The need is to complete whatever task.

I read the comment as saying there is some fundamental limitation, a need, that will prevent people from moving to XR, and I would like to understand what that is. Perhaps I was being too charitable, and it's actually an opinion about others' preferences, based on the first gen of a device that could replace a computer.

But you could call me "biased" (or maybe experienced), since I've been working in VR for a couple of years now. I see it as a very obvious transition in computing once headsets (including the Vision Pro) get lighter.


>that HMDs will not replace PCs/laptops — is the wrong way to think about their value.

That was how Apple justified the cost of the device, so I think it's a fair critique.


>that was how apple justified the cost of the device, so I think its a fair critique.

[Citation needed]


Watch the event where they presented it. It replaces a laptop, a TV, a camera, etc., is how they justified the price.


I watched the event. Timecode for this specific claim, please.


> While a premium VR headset built over iOS apps is a step in the right direction, we worry it could seriously hinder the device's ability to serve as a true laptop replacement.

This remains the holy grail for work-focused headsets: can I truly replace my laptop with it?

It seems the Vision Pro allows you to pair and cast screens, but not replace an entire MacBook Pro. A disappointment for sure, but maybe it will arrive in v2, v3, etc.


Except for software dev workflows, I think iPad/iOS app ecosystem is a more viable laptop replacement than the Linux app ecosystem.

They claim full desktop office suite support, but it won’t run Excel. iPad runs Excel - and Logic and Final Cut and Photoshop and Affinity Designer and Outlook and…

But regardless, VP is clearly positioned as a laptop display replacement, which is I think more realistic; iPhone can also be a laptop replacement (it is for many iOS users who don’t have computers) but realistically many people find a lot of value in both computing paradigms.


I tried to run just an iPad Pro for office-type tasks for a while and found it supremely limiting. Just being able to run Excel isn't enough. It's positioned as a laptop replacement to justify its cost, but I don't think the iOS/iPadOS software ecosystem in any way replaces even a Windows desktop.


I agree with your point, but Office 365 is slowly transitioning into being primarily a PWA. It should be able to run on Linux as well.

The big question is whether MS has a special Vision AR/VR experience planned.


If they start with iOS, there is no way they redo it with macOS in the next version. Also the UI is completely different and the input system is different. I am definitely not going to spend $3.5k and still buy a $1k MacBook to do my work.


> I am definitely not going to spend $3.5k and still buy a $1k MacBook to do my work.

I would happily spend $3.5k on an insanely portable, wraparound, multi-monitor setup for my $1k MacBook.

If this thing works anywhere close to as well as they say it does for that use case, I won’t be able to get my credit card out fast enough. The fact that it could be used independently is just a cherry on top.


They said you can only get one screen streamed from the Mac, at 4K.


Pretty sure with RDP-type apps and lower res you can easily push 2 or 4 screens... but yeah, probably not at 4K. Sheesh, that isn't even necessary for people doing productive work, except maybe video editors.


Resolution requirements change dramatically when VR is involved. You don't want to be reading blurry text, and pixels don't map 1:1 when "projected" in VR.
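To make that concrete: the usable resolution of a streamed desktop is bounded by the headset's PPD times the angular width of the virtual screen, so a 4K stream is often rendered with far fewer effective pixels. A sketch with assumed, illustrative numbers (neither figure is a published spec):

```python
def effective_horizontal_pixels(ppd: float, screen_width_deg: float) -> int:
    """Usable horizontal pixels for a virtual screen spanning a given angle.

    The headset can only show ppd pixels per degree of the wearer's view,
    regardless of the resolution of the stream being projected onto it.
    """
    return int(ppd * screen_width_deg)

# Assumed values for illustration: a ~35 PPD headset showing a virtual
# monitor that spans 55 degrees of the wearer's field of view.
print(effective_horizontal_pixels(35, 55))  # 1925 -- well short of 4K's 3840
```

So a higher-resolution stream only helps once the virtual screen is drawn large (or close) enough, and the headset's PPD is high enough, to actually display those pixels.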


Window layers are already separate and independently renderable. It would be great to get just all windows as opposed to a Mac desktop panel where windows are stuck inside. (Kind of how Classic apps could run within the OS X window space.) Stage Manager may have laid the groundwork for window groups operating outside the typical desktop or even virtual screen idiom.


Ok, but that's version one. Also, not sure I need more than that. Looking at what I have open on my Mac right now, all but two (the IDE and shell) have equivalent iOS apps. So in theory I could run all of those on the Vision Pro in other windows.

Pretty sure one 4k window is more than enough to handle the things I can't do on the Vision natively.


Kernel flavor doesn't matter. The real question is whether you can install any app, or whether everything goes through a locked-down app store.

Even if it's macOS, what do you think the answer to that question will be?


The APIs are mostly cross-platform now, and the SoCs are the same, so the main reason you wouldn't want to just copy a Mac app to the iPad is the touch input. If eye tracking is as good as they say, that's a lot closer to a mouse than touch controls.


With the direction they're taking all of the "Continuity" stuff, I suspect the line won't really matter in the future.

You'll work near your laptop. Programs will run directly on the laptop, but the windows (not the entire desktop) will stream to the headset. The Vision Pro will manage "the space" and your laptop/desktop will do the heavy lifting.


But why not just do it on device?

The CPU in the Vision Pro is more than powerful enough for most tasks; it's the exact same CPU that Apple is shipping in many of their laptops.

With the continuity workflow, I'll always need to bring both my headset and my laptop with me if I might want to do anything beyond what the headset can do by itself. I'm also concerned about the additional latency that continuity will probably introduce.

Yes, continuity is a useful feature for when I'm doing stuff that does actually need the extra processing power of a proper workstation, but it shouldn't be the only solution.

(The cynical part of me is pretty sure it knows the answer: Apple wants a locked-down ecosystem, just like it has on the iPad and iPhone.)


Vision Pro is powerful enough for "phone" or "tablet" type tasks, but I don't think it's powerful enough for actual work.

Development, design, video editing, etc. still require more capacity than what the Vision Pro is offering.


The Vision Pro has the same processor as my MacBook Pro. It will be able to handle most of that stuff just fine as long as the apps are built for the interface.


The M2 is a lineup of chips with a range of specs (much like i3, i5, i7, i9 lineups). The M2 that powers the iPad is 1/3 the performance of the M2 in my MBP. I suspect the Vision Pro's M2 will be somewhere in the middle to manage heat and battery life. While the Vision Pro M2 can technically "run it", it will still run faster on a dedicated external machine that can handle the heat and power better.

Add in the need to render a very large virtual space and you end up with a lot of capacity used by default. That's likely fine for "iPad-like tasks" (web browsing, email, messaging), but it seems insufficient for anything requiring serious processing or memory.


I'm not sure where you're getting your 1/3 numbers from, as the M2 in the iPad scores pretty close in benchmarks to the M2 in the 13" MacBook Pro. [1] vs [2].

It's not a family. It's the exact same physical chip. You can also check Apple's technical specs too. They are both listed as "8-core CPU with four performance cores and four efficiency cores".

The only real difference between the iPad, MacBook Air, and 13" MacBook Pro is the cooling solution. The iPad is fanless and the MacBook Pro has a fan. On a longer-running benchmark or a sustained workload you would see a larger difference, but not that large.

[1] https://browser.geekbench.com/v6/cpu/1591299

[2] https://browser.geekbench.com/v6/cpu/1594101


It has an M2, dude. It's plenty powerful for "actual work".


I think it has only 16 GB of RAM, though. And it needs a chunk of that to run the OS and “finder”.

I also think that if this were a feasible use case, Apple would promote it. They are clearly positioning this headset as adjacent to the dev space, based on price and how they previewed it. And Apple is one of those companies where, if you find yourself swimming upstream against how they want you to use it… it might be reason to find a different platform.


They do, though. There's an app-store icon right in the video. That very much implies it'll be downloading and running apps locally.


The M2 is a category of chip, not a specific model.

They'll likely need to put something comparable to the iPad M2 in there to manage heat. That iPad M2 has about 1/3 the performance of my MBP M2.

Add in the fact that you're rendering a spatial world and you quickly have a lot of resources tied up.


Yeah, that's the answer I expected, too, particularly based on Apple's video of the MacBook going black and the desktop moving to the headset. They already have the ability to broadcast a display (not even the primary) to an iPad or AppleTV, so surely the Vision Pro can receive that broadcast as well, and now you've got 1-or-many desktop windows in addition to whatever native apps exist.


I don't think it will ever happen. Apple has essentially stated that the openness of macOS is not something they'd repeat. We are also on gen 6 of the iPad Pro, a device that could very easily replace a laptop in both specs and feature set, yet is continually hampered from doing so. And there were/are countless rumors that Xcode will finally land on iPad, and the best we've gotten is the locked-down and limited Swift Playgrounds.


You probably _already have_ a computer to use with this, for the first gen at least. The 2 hours of battery life is going to be a longer-term issue.


Simula is really, really interesting to me. The whole time I was watching the Apple presentation, I was thinking about how much I'd love the spatial computing facet of the Vision Pro, but with a Windows or Linux machine so I could actually do the things I do regularly.

Given that I don't use a laptop very much anymore, I've refrained from buying a SimulaVR machine. But I'm really, really tempted to, and depending on how it evolves I might just yeet it and get one anyway.


>Critique #1: Apple included almost no hard specs on the headset's visual capabilities (PPD or FOV).

>Critique #2: The Vision Pro seems to be built on top of Apple's iPad/iPhone ecosystem, which could hinder it from becoming a true PC/laptop replacement.

These are both indicators and flexes of Apple's strengths. Apple doesn't want to tell users the hard specs of their headset. Not because they're trying to deceive the user, but because they don't want the user to care. They don't want the experience to be hampered by comparisons to other products or by considerations of "how real" the experience can be. They want it to be its own experience. An experience where the conversation is never "which programs can I run on this..." or "how fast does it go?", because it's holistic. A big confirmation of this is how often Apple compares its products to their predecessors instead of to competitor products.


I don't buy it; Apple very proudly shows the numbers when they can make them shine in comparisons.

When they were trying to sell the new Mac Pros as AI/ML devices, they heavily underlined those Macs' technical specs. Or when they introduced Retina displays, they went deep into PPIs and pixel counts. Or when they introduced the M1, to show off the performance and wattage gains.


Exactly. Didn't the same WWDC give us plenty of numbers (even if many were only relative) for the M2 Ultra?


One main difference is that you could order a computer with an M2 Ultra last week, but you won't be able to order Reality Pro until next year. Apple doesn't talk about numbers until they finalize exactly what will be shipping.


Apple doesn't even tell you the clock speed of the CPUs in their products. This is not out of character at all. The only reason they talk about all the stuff in the M2 Ultra is that they are beating up on Windows laptops.


It's amazing how Apple fans manage to twist even glaring issues into some commentary on how differently Apple thinks.


They have an amazing track record for this as well. I buy M2 processors because I know they're great. I don't even bother trying to compare them to Intel chips any more. I've done semiconductor research and used to obsess over stuff like this.


The comparison only works in mobile. M2 vs. mobile Intel/AMD is indeed favorable for the M2, because it uses much less power.

But in general it's much more nuanced. I have both a desktop with a Ryzen 7700X and a MacBook Pro M2; my coworker has a 13900K and an M2 Max. The performance isn't really always comparable.

There are many tasks where you are looking at a much less capable machine. Compile times and IDE performance are still, in general, way better on x86 Linux than on a macOS notebook. You may not care about the extra 0.5 seconds of lag every time you save and ESLint reprocesses your entire file, or that a watcher restarting some application afterwards takes 5-6 seconds instead of a couple. I do.

There's no doubt that Apple produced a great CPU, but once you get into considering cost and performance there's still a huge gap. There's no doubt that the laptop crown is in Apple's hands right now. That's just not every use case out there.


NGL M2 is incredible


Apple is not publishing many details, but what we know already means it will not be a viable monitor replacement for extended use. The same is true for SimulaVR, and for the same reason: eyestrain.

Also, this -- https://www.wolframcloud.com/obj/george.w.singer/simula/png/... -- is the "year of the linux desktop" of VR. I imagine someone wants that, but no.


> what we know already means it will not be a viable monitor replacement for extended use.

This opinion is deeply subjective. I have a Varjo Aero, and I can use it as a daily driver (for productivity), no problem. The Vision Pro looks to be lighter... and so would have better ergonomics.

Notice, I don't even mention eye strain... because I personally don't experience any.


The Varjo Aero has the same PPD as the Simula One :] So if you are able to daily drive it for productivity, that's a good sign you would be able to do the same with our headset.

The problem with the Varjo Aero is that it's a cord-tethered headset which needs to be attached to a powerful GPU. It doesn't run the Linux desktop natively (nor even when attached to a host), since it has no Linux support. So in our opinion it doesn't actually replace a laptop, which we take as the "holy grail" of VR computing.


George,

If SimulaVR had been shipping in January 2022, I would have ordered it over an Aero.

If SimulaVR were in stock and shipping today, I would sell my Aero at a loss and place an order.

I honestly have my fingers crossed that by the time SimulaVR is in stock and shipping, I still feel the same.

Right now the problem with SimulaVR is that I cannot buy it.


You're getting heavy downvotes but I'm curious: do you have a reason (ideally a citation) to say that eyestrain is inevitable? Do you think that applies to all professional XR workflows, or just this generation of headsets for some technical reason?

Anecdotally I've worn the Oculus S for extended periods and didn't notice any eyestrain, but also "extended periods" obviously wasn't a full workday so can't be too confident yet.


Extended time at a fixed focal distance creates eyestrain. In a monitor environment, people look up, look around, look elsewhere, and so on; even so, monitors cause eye strain.

This is not a controversial take. Yes, there are tech bros who will insist that this is all fine, but the actual feedback from the majority of the population is that extended wear of pretty much every VR helmet anyone has bothered to study causes the same eyestrain issues.

There are always people who will put up with discomfort, or for whom the discomfort is relatively lower (or who are younger, so it's less of an issue), but these products are not going to replace monitors. Resolution is not the only issue, just the one that is most noticeable when you first put the helmet on.

In addition, the weight of the helmets is an issue. Extended wear is a problem and one of the few notable, repeated comments from the carefully-managed 30-minutes-max Apple Vision Pro demos is that the weight started to get annoying. Seated demos under 30 minutes.

There's also VAC ( https://en.wikipedia.org/wiki/Vergence-accommodation_conflic... ) as an issue depending on the UI in use.

I want VR to happen as much as anyone, but the Vision Pro is an entertainment-light device for media consumption with a nod for people who want to check their laptop (and I'm sure, eventually, their phone or tablet) while watching faux-bigscreen movies.



These guys are out here asking for VC money and a $2,700 pre-order price, critiquing Apple for not publishing PPD yet... and they don't publish expected battery life or weight, probably two of the most important specs for a "use for work" wireless VR rig?

If I were advising these founders, I'd be advising them to do some deep soul-searching right now. I can't see this going well for them at all: building a niche VR system for the 40% of developers who use Linux, while also looking for VC funding (implying this will be a scaled business, not a lifestyle business), seems like a recipe for disaster.


As someone new to startups, I'd love a clarification if you find the time: what about looking for VC funding instead of traditional bank-based debt is worrying for Simula's situation? The best I could find is this little passage, which doesn't clear it up for me:

  Often, the defining characteristic of a scalable company is the ability to replicate, create multiple copies of a similar product or service without substantive modification rather than “one-of-a-kind” items. This can lead to decreasing marginal costs where, as production increases, the resources needed to make the next item or provide the next increment of service go down.
As a very unrelated side note: I have faith in Linux, and don't think the patterns of the past necessarily will hold for the future in all cases.


VC firms take ownership and work on multiples; a good investment is expected to exit at 16-20x. For that to happen, the space needs to support that kind of revenue, and the business needs to support those kinds of unit economics. Hardware is a very expensive space: you need a lot of capital to set up a supply chain and take a HW product to market, especially something like a VR headset where (according to their blog) they're doing a lot of custom/complex things. All of this needs to be made repeatable, then brought to market, and then that market needs a consumer base to support it.

How much money do you figure a startup like Simula needs to bring its product to market, just at the minimum? I was at DigitalOcean in the beginning, so I have a sense of capex-heavy businesses. In the early days we needed hundreds of millions of dollars to scale up the hardware, and this had to come from venture AND banks. Additionally, in DO's case we're talking extremely generic off-the-shelf hardware: nothing custom, and our supply chain wasn't complex.

If I were these guys, I wouldn't try to build a scaled business in a space that is emerging the way this one is. I'd either call it a day, or try to have no investors at all: find something extremely focused, get to a small profit, and enjoy a lifestyle business the founders can live on for the next 60 years.

Of course, I'm always happy to be wrong and if they push forward I hope they win!


Thanks for the detailed response, I really appreciate it! Very cogent points, I see the logic. My not-even-napkin math makes it seem like the market is plenty big enough (40% of devs + Mac users they can coax over + change) for their vision, but obviously only time will tell. It certainly seems like a good sign that Apple is going all-in despite not coming near to their original XR vision (no pun intended)

Also it's always fun to randomly remind myself that I'm talking to people of "at DigitalOcean in the beginning" level of expertise on this site - keeps me humble :)


I'm stoked that you're building this open with Linux! I'm willing to tolerate a lot of (what's the opposite of polish?) but for me it has to be open and workable with my current setup (which is all Linux), and I don't want it loaded with privacy invasive analytics. I'll put down a lot of money to buy something like that.

My advice in no particular order (I know you probably already know all this but I already typed it so I'll post it):

1. Get something shipping ASAP. This space is rocketing forward now at an electric pace, and the ecosystem for the average person is going to get locked behind walled gardens if something open doesn't get out there. If it were me I would try to get beta units available soon and let the open source community and early adopters run with this thing while you stabilize/polish. Don't rush to "stable" too quickly, but also don't delay shipping so long that competitors beat you to the release line.

2. When you market this to consumers, don't hide the "Linux" part since people like me will be very attracted by that, but don't emphasize it either because most people don't know what it means. Just describe "computer workstation on your face" rather than "linux machine on your face."

3. Provide factory images so people can hack with the hardware but still escape back to supported territory. If you do this, there will be a ton of open source interest and efforts and they will not only develop awesome apps for you, but they'll port a lot of stuff too. If I were you, I'd be making open source collaboration a huge part of my strategy.

4. I would also be looking at things Valve did with the Steam Deck for tips/guidance.


>"you won't be able to run powerhouse apps from macOS to get your more serious work done".

There's nothing to suggest that powerhouse apps won't be developed for visionOS. The hardware is capable, and at this price point it seems that the point of Apple's hardware+platform is for the Autodesk/Adobe/Avids of the world to bring 3D-first workflows to professionals.

At the moment we use flat 2D paradigms to design 3D output. The VR era provides developers the opportunity to shed 2D design paradigms and operate directly in the third dimension.

If the objective is to just run 2D macOS apps on VR hardware, then VR is nothing more than an expensive novelty. A VR headset can be so much more than an expensive alternative/second display.

What VR is missing is "Developers developers developers", and attracting developers requires significant investment and commitment to a platform, not just a product.


A possible issue that came to my mind now: what's the weight of these headsets? Compared to glasses, which are lighter, they extend significantly from the vertical axis of the body. I wonder if we're going to develop a thick neck and/or neck pains instead of carpal tunnels. Nobody is 20 or 30 yo forever.


Yes I found this with any use of a VR headset. I'm surprised not more is made of it - the neck is a very delicate structure, and surely any serious health and safety implications would cause all sorts of legal bother if nothing else.


You build up the muscles and it's not a big deal. I've never heard of anyone getting hurt, only some fatigue at first.


It is absolutely a big deal. Poor ergonomics in regular office environments leads to neck problems that can get extremely severe. Throwing 2-3 more pounds on your head can make that a lot worse.


Yeah they can be heavy. You'll probably want a third party strap that holds the battery. Those are usually more balanced and comfortable.


My top choice is the Bigscreen VR, which is truly a goggle-sized headset.


Which is https://www.bigscreenvr.com/

What are you using it for?


> In VR you can sit up, lean back, walk or even lay down while you compute…all in a compact form factor that saves on desk space. We believe that in 10 years, nearly every office worker in the developed world will be using VR/AR to perform their work.

Ummm.. "walk"??

Reminds me of that funny Google glass parody video. https://www.youtube.com/watch?v=t3TAOYXT840


You probably shouldn't walk outside with our headset on, but indoors, absolutely. Turn on passthrough and off you go.


> You probably shouldn't walk outside with our headset on…

Setting aside that griefers would pull it off your head and destroy it, why not? If you're using it for directions, it seems safer to keep your eyes on the environment than it does to split your attention.


It's probably safer than looking at a phone, but the most important reason is that your field of view/camera offset vs eyes is different than you're used to and it's easy to get into a dangerous situation.


Because it's distracting and fills your field of view. Even when I write something on my mobile I'd stop walking. I've tripped too many times trying to do that.

But the video explains it pretty well :)


It was the whole "compute while you walk" thing that's not a great idea :)


I think they mean on a treadmill :)


> we are explicitly building headsets which are meant to 100% replace your PC/laptop as your primary working device.

This seems like a mistake. I'd be more intrigued if the vision was for seamless experience across workstations, laptops, phones, vehicles, tablets and kiosks with or without a stylus, and immersive computing with or without a headset.


Great post - got through all important points in a very concise letter. Vision Pro is a super awkward name, but I do like that it forefronts "pro"; VR games that really blow people away without needing a ton of space are a bit far off IMO*, but we are so close to replacing laptops with something that's 1000x more convenient and powerful.

I hope they're able to stick to their ambitious production schedule and they hit the shelves around the same time (early 2024). I've never rooted for any product harder than I am for SimulaVR, and that's saying a lot considering my dark days Kickstart-ing video games...

*: If you disagree and love your VR games, tell me: how often do you break it out in favor of a traditional monitor? My experience tells me not very often, unless you have a great setup or are into a seated timesink game like Elite Dangerous.


> we are so close to replacing laptops with something that's 1000x more convenient and powerful.

Strapping a heavy laptop tightly to your face is the farthest thing away from "convenient" I can think of, especially when compared to a laptop.

Then, the huge waste of compute power needed to run two separate displays showing 99.9% the same information, and to do it at huge resolutions to avoid seeing pixels up close, and then even more wasted cycles on filming, interpreting, compositing, and rendering your surroundings won't leave much room for a "powerful" device either.


I agree that it's a technical challenge. And that it wouldn't work with a heavy/uncomfortable form factor. But c'mon, even if you don't agree with "so close", you see how we could get there with just a few more iterative improvements?

Criticizing XR for displaying redundant information feels a little like criticizing smart phones for needing expensive multi-touch screens. Or laptops for not hooking up to your desktop where all the power is. Or personal computers for not hooking up to the school's mainframe. In other, less snarky, terms: the cycles aren't wasted if they get you 9 monitors IMO :)

EDIT because apparently I'm obsessed with this topic: at my latest company the only program we ran on our laptops was chrome, and all computing happened in the cloud.

Obviously not an option for all jobs/people/locations, but I'm confident such a setup would work for many of the current owners of heavy expensive "Pro" laptops. LLMs and their need for datacenter-level compute during training may accelerate this among certain dev circles, too.


No, we are nowhere near close to having a full-powered laptop in a form factor that could be worn on the face. I don't see any reasonable chance it will happen in the next 10 years, for example.

And I don't agree that that's a fair comparison.

First of all, because my main point was not to criticize AR/VR, but to point out that the display system itself will inevitably consume a huge portion of a laptop's compute cycles - so a pair of glasses with the same compute power as a laptop will be far slower in running any kind of software than the laptop. You might be able to display 9 huge windows in the AR space, but you will probably not be able to run 9 different programs at the same time.

Second of all, because smartphones and laptops are not actually wasting cycles compared to a desktop to achieve a version of the same workflow. They are more expensive for the same compute power, and they are more thermally-throttled, but they don't use extra compute power to achieve the same performance.


Compute cycles: CPU and GPU are very, very different here, so I'm not sure that logic holds, unless you're doing work that's heavily GPU-bound and truly using the same resource (which is a non-trivial number of jobs, but far from a majority; maybe even less than 1%?).

For the rest, it shouldn't cost much more compute than two 4K screens, and people already do that successfully. It doesn't do much to increase the latency or cost of running a million Chrome tabs or using other CPU- or IO-bound programs.

But I will agree that this all consumes more electricity, which is certainly a problem for mobile use.
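To put some rough numbers behind the "two 4K screens" comparison, here's a quick pixel-throughput sketch. The headset specs below are hypothetical placeholders (two 2160x2160 panels at 90 Hz), not Simula's or Apple's actual panels:

```python
def pixels_per_second(width: int, height: int, hz: int, displays: int = 1) -> int:
    """Raw pixel fill rate a compositor must sustain for the given displays."""
    return width * height * hz * displays

# Two 4K monitors at 60 Hz.
two_4k_monitors = pixels_per_second(3840, 2160, 60, displays=2)

# Hypothetical headset: two 2160x2160 panels at 90 Hz (placeholder numbers).
headset = pixels_per_second(2160, 2160, 90, displays=2)

ratio = headset / two_4k_monitors  # headset fill rate relative to two 4K monitors
```

Under those assumptions the headset's raw fill rate actually comes in a bit below two 4K monitors, though this ignores the extra cost of passthrough capture, reprojection, and spatial tracking.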


I'm not sure how much of the image processing for AR passthrough, environment recognition and spatial placement of app windows is actually done on the GPU. I would expect at least some of it has to be done on the CPU.

Additionally, laptops are rarely known for stellar GPUs, so I'm also expecting some need to offload GPU work to the CPU, at least in terms of power and thermal budget.


> No, we are nowhere near close to having a full-powered laptop in a form factor that could be worn on the face. I don't see any reasonable chance it will happen in the next 10 years, for example.

Um.

The Vision Pro runs an M2, the same processor they have in the best laptops Apple makes. I think it’s a huuuuge reach to just assume without having used the device, that it just must be visibly slow compared to running a modern laptop. Most of the pixel pushing is done on a GPU and is totally independent of what limits you to be able to run lots of apps. The M2 is already massively powerful for a typical laptop workload.

> You might be able to display 9 huge windows in the AR space, but you will probably not be able to run 9 different programs at the same time

What are you basing this on? You realize that iOS runs the same kernel as macOS, right? It has supported “multitasking” in the sense of “multiple processes” since day one. It’s just that iOS has intentionally limited the UI so that only one app is focused, and it eagerly kills (for the purposes of saving battery) apps that you’ve tabbed away from, with a lot of help from the software to make sure apps can persist their state and quickly recover. The limitations of multitasking on iOS are not due to there not being enough CPU; they’re there to save battery. And Apple’s laptops use the same CPU, and have the best battery life in the industry, so I don’t see how you can arrive at the conclusion that a device with the same specs as their best laptops just obviously can’t multitask.


Yeah. Your average U-CPU (what you'd get in a 13" or 15" ultrabook, and what we use) has a TDP of 9-28W. That's well within the range of what a modern phone can boost to.


Thanks for the response! I still don't agree with your conclusion that the nascent proXR industry is doomed, but I definitely concede the point that XR displays will be a strain on the computer they're hooked up to. I personally am more interested in software than hardware so can't say I've done a careful analysis of the required compute myself - trusting in the words of (biased) experts on that one.

re: the metaphor and your last paragraph, I think I was being unclear. I'm not saying that phones and laptops were wasting compute, I'm saying that they were both heavily criticized for not having enough compute. This was when they were introduced; obviously now it's a simple money vs. portable compute tradeoff, which is my ultimate point - I see that same pattern holding in this case, too :)


>huge resolutions to avoid seeing pixels up close

That is not the reason to have huge resolution. You can have a 2x2 display with enough blur in the optical stack to make it impossible to see the pixels. No one wants to use such a low resolution though. Resolution and sharpness are different things.


The ideal optical design is such that your optical spot size/MTF is matched to your pixel size in a way that

* subpixels are blurred, and it's hard to see the boundary between pixels

* you still have enough resolution to resolve text clearly

Our headset has that, and I'm confident Apple's will too.


Yes, I should have been more specific. The broader point I think stands: you need a very high resolution screen and high-res image (and thus rendering compute power) to get even a good-enough viewing experience, especially for text - say what you'd get from a mediocre FullHD monitor.


I'm not convinced this will actually replace laptops. There are clear use cases where an AR headset like Vision Pro makes a lot more sense than a laptop, but the opposite is also true. I'm skeptical that I will want to go through the effort of strapping goggles to my face just to sit on the couch and scroll through social media and catch up on email.

This isn't like the rise of smartphones, where iPhones and Droids were a mostly additive experience to the dumb- and feature-phones people were using before.


If you told people 30 years ago that in the future, people would be walking around city centers with their heads down and eyes glued to a lit screen they would have laughed. This is less a laptop replacement and more a mobile-compute replacement. Who needs a tablet or phone if you have one light enough and with enough power to see you through the day strapped to your face? It also allows for a more intimate, private experience, as you may balk at opening some items on your phone in a crowded subway where a casual shouldersurfer could catch a glimpse.


People from 30 years ago would probably be incredulous if you revealed that we still don't drive flying cars in the future. If you instead told them that we found a way to miniaturize television and celebrate content as short as 6 seconds long, they would probably be really horrified and then concede that you're right about the future. Comparatively, it's not hard to see why smartphones became popular. It's a cellphone, iPod and internet communicator in one device.

The Reality Pro, on the other hand... it can call people, circumstantially. You could listen to music on it, but it's kinda cumbersome and there are less intrusive options even today. You could also browse the internet on it, but it would require context-switching that isn't so intrusive with physical interfaces.

If your biggest pitch is that future tech will make this small enough to be competitive, I refute that by saying we'd just use that technology to make better/cheaper smartphones. Literally the same thing happened with television (and 3D TVs), there's hardly a reason to think it won't happen here.


  The Reality Pro, on the other hand... it can call people, circumstantially. You could listen to music on it, but it's kinda cumbersome and there are less intrusive options even today. You could also browse the internet on it, but it would require context-switching that isn't so intrusive with physical interfaces.

Very confused about this sentiment. Are you saying this is all it can do? Even just saying this is all the ways in which it might compete with smartphones seems very uncharitable. I don't see people walking around the streets with these on any time soon, but if they do, there's a LOT more you could do than phone calls, music, and sometimes a browser. Surely you'd agree?

For example, portable XR would bring:

- The ability to work effectively in public without the need for a desk, monitors, or a shield from glare

- The ability to annotate everyday objects in your line of sight. See the reviews of every cafe on the street, historical facts about old buildings, the all-important Orwellian social credit score of every individual floating above their head, etc.

- Walking directions that are imposed on your world instead of on a little abstract map. Less walking into things, fewer NYC tourists getting on the wrong train.

- Crazy immersive gaming experiences; in the short term we'll probably see geo-location based games (Niantic vibes), and in the long term we might see crazier stuff like the shooter they showed off in the absurdly over-ambitious Google Glass announcement trailer.

- Replace real people with the avatars they wish to be seen as, either on body-dysmorphia-vibes or cosplay-vibes (or both).

Off the top of my head :) I'm gonna be honest this comment is more about me being excited than it is a reply to your point as I understand it, as I think you have a strong status-quo bias on this. IMO!


In its current form, sure I agree. But to me thinking that we will just see a future of miniaturized TVs is like thinking we would just keep making faster horses instead of shifting the paradigm. As the smartphone was the nexus for multiple technologies, so too are AR headsets. They are getting smaller and lighter and will become less obtrusive and with more intuitive user interfaces and interaction modalities, just as has been done with computers taking their myriad forms. I'd much prefer a private, hands free mode of interacting with a computer in public, not to mention the AR potential. Misplaced your keys? Well, they were last seen by your headset on your desk ten minutes ago.


I totally agree on part of that, thanks for clearing it up - I definitely don't think it will replace all laptops, or all other computing in general. We very often want to interact with computers casually or with a group.

But for extended sessions of computer work (a healthily sized market, to say the least), I would be very surprised if I didn't love having 9 monitors instead of 3. And that's to say nothing of the 3D/spatial computing modalities that will develop as the platform evolves away from its 2D roots.

If you're not convinced and haven't watched the awesome video/scrolling ads the marketing wizards at Apple worked up, I highly recommend it. Brings a tear to my eye ;)

https://www.apple.com/apple-vision-pro/


I was hoping they'd go with iBalls

Probably never happening. I will, however name my own "iBalls" when I get it :)


Summary: they give a lot of praise to what Apple showed, and mention two negatives:

- “they didn’t give us detailed specs”

- “it seems to be tied to the iPhone/iPad ecosystem, not the macOS one”

Neither of these is certain to be a negative of the product.

That, combined with the repeated “we’re looking for investors” makes me wonder whether this company will survive.


Apple already showed you can bring your Mac screens into the Vision Pro. Unfortunately I don't think anyone is going to be able to compete with the frontier hardware leaders right now on desktop computing, since we're at the point where you really do need the best-in-class hardware to make it work.

The other big elephant in the room for SimulaVR is Valve. Valve is working on a standalone headset, and it probably runs Linux. It's probably going to be open enough that you could get a Linux desktop environment running on it, and I'm sure many people will work on that for free. It's going to be tough for SimulaVR to survive.


> Insanely good text clarity. Higher pixel density (35.5 PPD) than any portable VR headset currently on the market (e.g., 56% higher than the Quest Pro & 220% higher than the Valve Index).

I'd order one right now if I could believe this was good enough for coding. But my only experience with VR is with an HTC Vive XR Elite headset with 1920x1920 per eye, and that turned out not to be nearly good enough to read comfortably at a reasonable font size. I'm quite willing to be convinced, but apparently the only way to do that now is to be a prospective angel investor in the company and wait for the single review unit to be passed around.


The HTC Vive XR Elite has 19 pixels per degree (PPD). Simula claims to have 35, which if true would make a marked difference.

But yeah, the problem at this stage is getting one to try.
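For anyone curious how these PPD numbers fall out, a crude average is just per-eye horizontal pixels divided by horizontal FOV. This is a simplification (real lenses distribute pixels non-uniformly, and published FOV figures vary by source and IPD), so treat the numbers as ballpark only:

```python
def approx_ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Crude average pixels-per-degree: per-eye horizontal pixels
    spread evenly across the horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

# Rough published numbers; the FOV figures especially are approximate.
vive_xr_elite = approx_ppd(1920, 98)   # ~19.6, close to the ~19 PPD cited above
valve_index = approx_ppd(1440, 130)    # ~11.1, consistent with Simula's "220% higher" comparison
```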


In one of the WWDC videos, they mention they reimplemented their glyph renderer to be vector-based, so text should at least not depend on first rendering to laptop-screen pixels and then to face-screen pixels. Seems like it should be state of the art at launch.


Would you happen to have a link for that? It would certainly help, as they are currently around where I'd call the minimum necessary (likely) PPD. Every little improvement counts until they catch up to 4k monitors, where anything looks good.


It actually wasn't WWDC, it was an interview some execs did with John Gruber, which unfortunately was 2 hours long. Segment on visionOS starts here: https://youtu.be/DgLrBSQ6x7E?t=2323

I don't have time to dig in and grab the exact spot, but I'm pretty sure they didn't go into detail much beyond what I said. My guess is text rendering is as good as it can be with state of the art displays, from a software stack standpoint.


It's also not factually true. Xreal Air (formerly Nreal Air) has 49 ppd.

In "mirror" mode (aka a single giant screen), it feels like working on a 1080p projector. Not great. Not terrible.

In "nebula" mode (aka virtual desktop), it feels like I'm working on 3 720p screens. Great when I need to reference multiple different sources (like during development). If I'm just focused on a single document/window, a laptop screen is better.

----

I think the tipping point is going to be at about 100 PPD. That seems like it will be the point where you can simulate a 1080p screen at roughly arm's length while maintaining excellent clarity.
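Back-of-the-envelope check on that figure, under my own assumptions (a 24-inch 16:9 monitor at ~60 cm, and roughly 2x supersampling for crisp text):

```python
import math

monitor_width_m = 0.531  # horizontal width of a 24" 16:9 panel
distance_m = 0.60        # roughly arm's length

# Horizontal angle the virtual monitor subtends at the eye.
angle_deg = math.degrees(2 * math.atan(monitor_width_m / 2 / distance_m))

ppd_one_to_one = 1920 / angle_deg  # one headset pixel per monitor pixel (~40 PPD)
ppd_crisp = 2 * ppd_one_to_one     # ~2x supersampling for sharp text (~80 PPD)
```

So ~40 PPD gets you pixel-for-pixel, and once you add headroom for supersampling and off-axis lens softness, ~100 PPD as a comfort threshold seems plausible.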


The Xreal isn't a wide FOV at all though, unsuitable for what SimulaVR is going for.

It's more literally a head mounted display, rather than a virtual reality headset, with its 46° field of view.


I've found the Xreal's FOV sufficient for productivity, though right now I can't really make use of things near the edge of the FOV. 46 degrees requires a lot of eye movement to make use of.


Yeahhhhh probably should've left that part out... Hopefully that's a sign that they're not concerned about marketing/PR atm and are heads-down on the tech :) We can all discuss together in the launch thread next year!


>Insanely good text clarity. Higher pixel density (35.5 PPD) than any portable VR headset currently on the market (e.g., 56% higher than the Quest Pro & 220% higher than the Valve Index).

And on the Simula product page:

> • 35.5 PPD pixel density (higher than any other portable VR headset on the market)

Microsoft Hololens 2: 47 PPD (1)

Varjo VR-3: 70 PPD (2)

Food for thought.

(1) https://www.microsoft.com/en-us/hololens/buy

(2) https://varjo.com/products/vr-3/


The HoloLens 2's stated PPD is largely fake. There was a writeup somewhere, but the real number is way, way below the stated one.

Varjo is pretty high up there, and they're able to compete in the clarity department. But their headset is non-portable and only supports Nvidia+Windows (at least last time I checked).


While I haven't verified it, that's Microsoft's own claim, so they will have their reasons for citing that number. I won't make a claim one way or the other, but I've provided a link to the dispute below (1).

Simula is not a shipping product, which to be fair means I should be comparing it with any announced but not yet shipping product, and probably not with products that were announced 4 years ago.

(1) https://kguttag.com/2020/07/08/hololens-2-display-evaluation...


Why position yourself next to Apple and compare yourself to them? I respect that their main value proposition is replacing the PC. That is awesome compared to Oculus which has no value prop. However I suspect Mark would pivot towards work and enterprise more now, but Android is probably a bigger hindrance to work than Apple's ecosystem, which I suspect is much closer to macOS. So Simula and others have to position themselves separately from both these big platforms.


The smart way to go would be something like Apple CarPlay, so in three years when the computing device is out of date, you can just use a new computing device to project into your VR headset.

Not good if you want people to throw away the old headset and buy a new one, perhaps good if your selling point can be longevity and its associated advantages of cost and environmental friendliness.


Our solution to this is to make the compute modular. In a few years you can plug in a new (off-the-shelf) compute module and have a state of the art PC again.


A Deckard release as a response to Apple would be great for spatial computing. I'm not sure if Simula is interested in 3D spatial computing like Apple, or is doing 2D spatial computing like 1980s text terminals.


Curious to hear why you're giving Apple credit for pursuing 3D computing when everything I've read says the opposite (for now). E.g. https://mixed-news.com/en/wp-content/uploads/2023/06/apple_v...


I'm giving them the benefit of the doubt, but yeah... it's not obvious yet, though it easily could be.


No offense, but this thing looks like junk compared to Apple/Meta's headsets.


Apples to oranges comparison (no pun intended). SimulaVR aims to be a general computing platform allowing a professional "desktop like" workflow, with the performance of a high-end laptop. From what has been revealed, Vision Pro is designed towards tablet-like content consumption.


> designed towards tablet-like content consumption

I didn’t get that impression at all. You can stream your Mac display to it and at that point you have a real Unix system in AR/VR. That’s good enough for many of us.

It seems that Simula is betting on people wanting to not need the external laptop and I think it’s a losing bet. There aren’t all that many use cases where having a Mac streaming wirelessly to your headset isn’t good enough.

The use cases where having a connection to a laptop isn't feasible are probably the same ones where you are mostly going to be using it somewhat passively. Doing things like working in a coffee shop will probably still be done on a laptop, because who wants to be a glasshole isolating themselves from an environment wearing a ring of cameras on their head? I don't think I'd be comfortable doing that, or being around somebody I don't know doing that.


You can stream existing desktops/laptops to just about any mainstream headset. The issue is the software availability for the headset itself. Apple showed specifically the iOS basis of this device, not the Mac basis.

>There aren’t all that many use cases where having a Mac streaming wirelessly to your headset isn’t good enough.

It's not $3500 good enough. It's the same solution cheaper headsets use, and they don't require full laptop hardware in the headset to do it.


> It’s not $3500 good enough

Then it will be a failure like the HoloLens which is the same price.

I’m not convinced that this category will ever be much larger than it is now.


They openly admit that they can't match Apple's polish with their current resources.

Also, the SimulaVR shown is a prototype. Pretty obvious that comparing it to the Vision Pro is comparing apples and oranges at this point.


these tools are supposed to be about building connections.

but they are prohibitive. expensive and require you to operate within a corporate walled garden.

it is more about segregation.


LOL they’re fucked


> Simula is raising institutional capital for the mass production of our headsets. We're also soliciting angel investors who might be interested in alpha testing our Review Unit headsets (helping us form a bridge to our institutional round). More details on this below.

Another one getting pumped with VC money and inevitably going to push this project for an exit.

First Bitwarden, then GGML, and now SimulaVR.

Not again.


I understand your concern, but if we were interested in a quick exit we wouldn't be working on this lol.

We're looking for venture backing because getting a consumer hardware startup off the ground is really expensive!


You may discover that once you take money your specific interest is no longer the focus.


I feel this is the type of business that's based on scratching their own itch. They're part of the market they aim to serve. They should still have that itch after taking the money.


Ehhh... to put a more positive stance out there: this is absolutely the time to be looking for capital if you plan to make lots of these things in a reasonable timeframe. Doing that simply costs far too much for anyone not already a near-billionaire.

Apple just validated their market, which reduces a lot of worries about its long-term viability, and may kick off a big wave of highly-funded competition. It very well may become "keep up or die", and using your existing lead is often your best tactic.


Agreed. The big dawgs are entering the field, the startup needs to ship now. Take the VC funding and start getting hardware into people's hands. I generally despise the "ship it and iterate" approach that startups use which leads to shitty constantly updating/changing software, but I'm afraid it's that or die for this company, and I'm absolutely ecstatic at the idea of an open Linux-running headset!


This was my immediate concern when I read that paragraph.

I have been closely watching this company since they first announced the headset and I really hope this isn't the beginning of the end.


What happened with GGML?



