Why is the Simula One so expensive? (simulavr.com)
80 points by gurjeet 14 days ago | 144 comments



Something I have been curious about – why is VR headset development going in the direction of cramming all electronics into a single component which sits on your head, rather than the headset itself being a dumb terminal, with the actual storage/processing done on some other box (which doesn't have to be a PC) sitting somewhere in your house and sending signals to it wirelessly?


Latency, bandwidth (and with that, packet loss).

Right now the closest tech to do this would be something like WiFi - but the latency is high, and the practical bandwidth of a typical home WiFi network (thanks to the vagaries of radio) is not sufficient.

A few reasons this is important:

- latency in VR induces motion sickness and operates on a different scale than we're used to for regular network communications. A 50ms lag in a 2D video game is acceptable; a 50ms lag in VR will cause nausea and vomiting. This is a domain where single-digit milliseconds matter a great deal for user comfort.

- resolution and refresh rate are everything - VR's viability in large part hinges on the resolution of the screens, which right now are somewhere between "poor" and "mediocre" - nothing so far has come close to replicating 20/20 vision. Moreover, you need those screens to refresh far more quickly than a regular monitor - 60Hz doesn't cut it, 90Hz is basically the minimum. Streaming an uncompressed 8K stream at 90Hz over wireless requires more bandwidth than any existing standard can deliver. It's at the edges of our wired capabilities.
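To put rough numbers on that (a back-of-the-envelope only, assuming 24 bits per pixel and ignoring blanking/protocol overhead):

    # Uncompressed "8K" video at 90 Hz, 24 bits per pixel
    width, height = 7680, 4320
    refresh_hz, bits_per_pixel = 90, 24
    gbit_per_s = width * height * refresh_hz * bits_per_pixel / 1e9
    print(f"{gbit_per_s:.1f} Gbit/s uncompressed")   # ~71.7 Gbit/s

For comparison, WiFi 6 tops out around 9.6 Gbit/s even in theory, so uncompressed wireless at that resolution isn't close to feasible today.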

Of course you can compress the streams, but that not only degrades image quality (which matters a great deal when the screens are replacing your eyesight) but also injects additional latency into the whole affair.

It's theoretically possible - but would require developing a lot of custom tech that doesn't exist yet. It would also likely be highly sensitive to the exact particulars of the physical environment - one user may have a perfect experience while the other experiences so much packet loss/reduction in bandwidth that the whole experience is unusable.


Yeah, the only way this is remotely feasible without massive latency and compression is an 802.11ay link, if it works when you're moving around. And since it's mmWave you'd need a base station in every room and 100Gbit ethernet to connect to it.


Not too far off why evolution put your eyes right next to your brain, if you think about it.

>Right now the closest tech to do this would be something like WiFi

Is that true? 802.11ad (wigig) should be able to handle it. https://www.networkworld.com/article/2172394/understanding-w...

Have you ever used a wigig dock? You plug everything into your dock (hdmi, usb, ethernet) and it all gets beamed to the computer. ~10 microseconds latency.

edit: I see another commenter mentioned its successor 802.11ay. Same thing applies, except that 802.11ad is a delivery from the past, not a future promise. We are already there.


Unfortunately still not high enough for low-compressed (DSC 3:1 ~= 9.2Gbit/s) or uncompressed video (~= 27.5Gbit/s), at least at our resolution. But it might be viable for lower-res displays.


What about splitting up the load? Pre-processing happens on the floor cube, with final work done on the headset. The headset doesn't have to be a video screen only.

The intermediary stages take more information than the final picture. So by moving part of the processing to the device you increase the bandwidth requirements.

The thought crossed my mind. Here is an example that I'm not sure actually ends up being helpful, but it illustrates the point.

Imagine some kind of game with rain where water is collecting on your lenses. You could render and compress the frame, send it over, and distort it further on the headset. You could even render the first source at 60fps, and the rain at 120fps. Or the first image comes over at a lower resolution, is upscaled, and then the rain effect is rendered at full resolution. The same could apply to synthetic film grain. Compressing a more pristine image and then adding film grain later should allow for significant additional compressibility? Decoupling the rendering into two layers could possibly allow for more resolution and framerate tricks like this? Or even color space upscaling upon display? Would it be possible to send half the color depth in even frames, half in odd, and have a NN up-color both frames to their original depth?
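To make the split-rendering idea concrete, here's a toy sketch of a two-layer compositor (purely illustrative - the frame source, rates, and grain effect are hypothetical stand-ins, not anyone's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 1224, 1224                         # stand-in frame size

    def decode_base_frame(t):                 # stands in for the streamed/decoded video layer
        return np.full((h, w, 3), (t * 16) % 256, dtype=np.uint8)

    def render_grain():                       # cheap effect layer rendered on-device
        return rng.integers(-8, 8, size=(h, w, 3), dtype=np.int16)

    for tick in range(8):                     # 8 refreshes of a 120 Hz display
        base = decode_base_frame(tick // 2)   # new base frame only every other tick (~60 fps stream)
        frame = np.clip(base.astype(np.int16) + render_grain(), 0, 255).astype(np.uint8)
        # `frame` would be presented to the display here

The point is just that the bandwidth-heavy layer can arrive at a lower rate while the cheap, perceptually important layer stays local and fast.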


Could be doable, and might be something we investigate in future iterations. I don't think it's worth it for this one, though.

I'd actually argue against this - I've been using my Quest recently to stream Steam games over WiFi, and I've been impressed with the capabilities. Overall the experience is seamless 95% of the time, and I don't notice any major differences between it and my HTC Vive. There are hiccups though - WiFi will cut out briefly whenever there's any interference on that frequency, and you'll drop a few frames and quality will degrade for a couple of seconds. Overall though, I fully expect a stream over WiFi 6 to be 100% usable for VR.


There is some promise, but I'd push back on the implication that this is viable as a mainstream option.

Right now through either Oculus (Wireless) Link, or something like Virtual Desktop, you can totally stream content from a PC to the headset, but that comes with a big list of asterisks (easily observable if you look at any support channel for both products):

- Most people's WiFi sucks, the room they're in has poor signal, or their router is mostly shite. The experience is awful for them.

- Most people's PCs are not connected to their network by ethernet, which seems like a crucial part of getting a good experience.

- Even under ideal conditions configured by an enthusiast who groks the tech, frame drops are relatively common.

- Streaming is generally not possible at full-resolution due to bandwidth limitations. A wireless-first approach presents an additional barrier to one of VR's biggest stumbling blocks - as screen resolutions increase streaming cannot keep up.

So getting the setup to work well right now requires a pretty knowledgeable user. Even assuming we can improve on this, I will wager that "95% effective, visible degradation a few seconds at a time at random intervals" is enough of a problem to be a hard stop on mainstream adoption.

This is the hole VR is in generally - the tech is "good enough" for enthusiasts, but punishing to the mainstream.


I'd agree with all of these caveats. I'll also say that I suspect part of the reason my experience is good is that my PC is hardwired and my headset is ~2 meters away from my WiFi router.

I'll also add to what you mention about the initial setup - it's an absolute pain in the ass... every single time. Takes me about 10min to do the setup dance each time I set up air link.

In general though I see all these as being fixable. A dedicated transmitter and improvements in the UX of this can address all of these.


Me too. I have a WiFi 6 router sitting on my desk with direct LOS to the headset, my machine is hardwired to ethernet, and I have a really high-quality network setup at home.

The experience with Oculus Link is pretty good - but I'm an outlier in my setup!

And yes, none of this is completely un-conquerable, but I'd argue it's unfixable without significant new hardware and standards. This is not a "just have to improve the software" problem.


What router do you use?

How about just a short cable to a box on the desk in front of you? The weight and heat of strapping it all on your head sounds tiring...


How about a short cable to a box on your belt? Maybe a vest for extra tactile feedback. Or even just putting all the heavy stuff like PCBs, heatsinks, and batteries on the back of the strap to balance the screens and optics on the front.

My little Petzl headlamp doesn't mount the tiny 23g battery pack on the forehead due to balance and comfort concerns, I can't imagine the Simula fares any better!


All the heavy stuff (compute pack) is in the back :)

We're considering offering a belt clip option, but I'm not sure it's worth it overall.


This is pretty much the status quo for tethered VR - you're hooked up to a PC sitting on your desk. It works, but has major usability issues that IMO make it a complete long-term dead-end.

It removes any interactivity that requires the user to move significantly. More than that, even for just typing/using peripherals, the cable is constantly in the way.


Having to work around a tether the whole time isn't a super great experience. The wire gets in the way and you have to think about it constantly.


Latency and bandwidth.

We push about 3.4 GByte/s (or 27.5Gbit/s) to our displays. Even a 26Gbit/s DP1.4a HBR3 link needs display stream compression for that. So a wireless link will absolutely need strong compression.
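For context, those numbers roughly reconstruct as follows (the 2448x2448-per-eye figure comes from elsewhere in this thread; the 90 Hz refresh, 24 bpp, and ~6% blanking overhead are my assumptions):

    pixels = 2448 * 2448 * 2                # both eyes
    raw_gbit = pixels * 90 * 24 / 1e9       # ~25.9 Gbit/s of active pixel data
    on_wire = raw_gbit * 1.06               # ~27.5 Gbit/s with blanking overhead
    dsc_3to1 = on_wire / 3                  # ~9.2 Gbit/s with DSC 3:1, as quoted earlier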

H.265 can push that down to maybe (ballpark) 200Mbit/s, which is fairly feasible. But that will need to be evaluated and optimized for text quality and latency. Some googling suggests ~100ms end-to-end latency is considered good. That's pretty high for VR, IMO.

[Edit: fixed the DP1.4a bandwidth]


You can definitely get lower latency with H.264 - somewhere around 5-10ms encode and 2ms decode. That's what makes Parsec and streaming gaming platforms possible even with all the other latencies involved.

H.265 might be slower, but EVC, especially LCEVC, is supposed to be about 30% lower latency encoding than H.264.
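As a rough illustration of where a wireless motion-to-photon budget could land with those codec figures (every number here is a ballpark guess, not a measurement of any particular product):

    budget_ms = {
        "tracking + render":   8,   # most of a 90 Hz frame time, partially pipelined
        "encode (H.264-ish)":  7,   # midpoint of the 5-10 ms quoted above
        "radio + transport":   3,
        "decode":              2,
        "scanout/display":     6,   # ~half a refresh interval on average
    }
    print(sum(budget_ms.values()), "ms")    # ~26 ms end-to-end

Which would suggest the ~100 ms figure from search results is pessimistic, but single-digit totals are still hard to reach wirelessly.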


Interesting. I wonder why the googled numbers were so high then. I'll take a look sometime; wireless streaming would be cool.

What little I know about the subject suggests it has to do with retention. It seems users of VR are much more likely to pick it back up after the initial honeymoon phase has worn off if the whole "picking it back up" process is just strapping on a headset rather than involving a second device, pairing it, etc. The less friction the better.


Yes, exactly. We want people to just put on a headset and use it, instead of having to boot Linux and plug everything in. God knows it's a pain when I'm developing with a tethered headset.


Latency of wireless communication is non-trivial. Those extra 100 ms to transfer a frame buffer will make the experience worse.

Because the PC or laptop required to drive a dumb terminal well is more powerful than what most people have. By selling an AIO unit you get console-like uniformity of hardware and an ease-of-use experience for both the devs and the customers.


The flip side of this (much like with AIO PCs and "smart" TVs, but in reverse) is that I'd be a lot more likely to buy a less expensive client/terminal and connect it to a more powerful PC because if the headset - which is the thing trying to prove its usefulness - breaks or ends up collecting dust, I still have the PC which can power all sorts of other tasks and uses.

If I spend all the dough on an AIO headset PC and it turns out to be underwhelming or I just never fit it into my long-term usage, then I've got a $2k+ computer I never use. Just as I'd prefer to separate the monitor from a PC or the media player from a TV, I'd be a lot more eager to try a headset if it wasn't attached to its own expensive PC.


That's why we offer a tethered option, and why the compute unit is detachable and dockable.

I'd still be okay with wearing the compute, just put it on my back or something. Having a ton of weight on my head is tiring.

Then communicate via fiber or wire to a light headset.


Would you spend all day working in VR?

A few months ago, I would have said no.

But I got myself a Quest 2 for Christmas, and now... maybe.

AR would make it better though, so I'm not just floating disconnected in digital space. I see in their homepage hero banner background[0] you get a floating window with video of your hands. It would be nice if the whole background were a passthrough to reality, and then you could float your OS windows on top of that.

[0] https://simulavr.com/


>But I got myself a Quest 2 for Christmas, and now... maybe.

Let me know what you think in a month. The problem with current VR tech is that it's mindblowing and incredible for the first few days, then it quickly becomes too cumbersome and frustrating to deal with on a regular basis and you forget about it. Mass adoption really won't come until it's as seamless as putting on a pair of reading glasses.


I agree, though I'd argue it's not a single axis of "bulkiness/ease of use", and more a balance between "bulkiness/ease of use" and "functionality".

The Quest (and self-contained VR devices generally) has done a lot to move the ball forward on bulkiness/ease of use, but IMO has not done a lot on the functionality part. Most VR experiences are toys that don't have lasting power. It is revealing that FB's marketing for the devices is overwhelmingly about a single game (Beat Saber).

I have a Quest 2 that has been sitting in the closet collecting dust for close to a year now. The experience is pretty mind-blowing the first few times, but there isn't anything there to keep me coming back. Once in a blue moon some novel (and usually rather short) VR experience will draw me back in for a day or two, but then the device goes back into the closet again.

The breakthrough hinges on the combination of ease of use and there being something compelling to actually do with the device.

[edit] Seeing some other folks opine about the lack of content elsewhere in the thread - yes it's true, but I think framing the issue as one about content leans heavily into the local maximum (which is a very low local maximum) we are in right now, where VR is really only about gaming. I remain unconvinced that gaming is the best use of this technology - and if we implicitly/explicitly define this as a "content problem" rather than a more general "things to do" problem, I think we're missing something key.


I agree about the mass adoption threshold, but even the current breed of tech makes the workday-in-VR feat possible: https://news.ycombinator.com/item?id=28678041


Isn't that how the Quest is now? It is for me, pop on the device and get straight into my workout.


The Quest 2 has a few things that prevent me from using it for long sessions.

1. It's way too heavy, which makes it uncomfortable.

2. It gets too hot.

3. It lacks a hinge that would make it possible to easily flip it open, while it's still on your head. Taking it off all the time is annoying.

4. It's been really difficult for me to keep the content I'm viewing looking sharp. It always looks kind of blurry, especially toward the edges.


I think that's the rub; the VR industry is running into a similar issue to the one the headphones industry has long dealt with: the reality of different body dimensions. 1 isn't an issue for me thanks to an improved strap, 2 isn't an issue due to, I think, my move to a rubber face wrap, 3 again was fixed with a new strap (though it just comes up onto my forehead rather than tilting), but 4 is the one I cannot think of an answer for. I may not have the issue, and maybe that's due to my prescription inserts or maybe it's due to the particular shape of my skull.

I am going into month 3 of everyday use for work and play.


It will be. We'll be demoing the AR mode in our Kickstarter ad, but basically the entire background will be replaced by passthrough.

No depth mapping yet, so it'll only be in the background, but first things first.


Nice! That makes it more attractive.

Depth mapping would be great too, like I could pin virtual artifacts to my physical environment? Yes please.


I'm experimenting with mmWave RADAR to get depth mapping without the idiosyncrasies of stereo RGB cameras. Especially in wildly varying environments I think that's the way to go over traditional camera-based SLAM. But if it turns out to be unfeasible, nothing's stopping us from adding more tracking cameras and doing it that way.


I spend most of my days working in the Quest 2 with Immersed. It can of course be better, but it works well now. AR would indeed be better, if it's possible to keep the screen dark enough to see with room light, of course.


OK so question: How generalizable is this? Does it come with controllers? Can I hook in controllers from another system (vive, index, oculus, etc.)? Can I play video games with this (I know that's not the main point of it, but still it would be nice)? Can I stream a video feed from a real world camera (like with a raspberry pi) to the interior displays?

Either way, glad to see you guys still going strong with this, been following for a few months now. As others have mentioned, I got a little bit of sticker shock when I first saw the price. Was expecting something closer to ~$2k. Do you have an ETA for when the kickstarter will be going up? I'll have to think about backing this, depending on the answers to my questions above.


> How generalizable is this?

All the software is or will be open source, so much more than any other headset on the market. Ideally the FPGA code would be open source too, but I don't know if that's possible with licensed IPs.

> Does it come with controllers?

No controllers.

> Can I hook in controllers from another system (vive, index, oculus, etc.)?

Any controller that's supported by OpenXR is supported, and we'll likely have SteamVR support as well.

> Can I play video games with this?

In tethered mode, absolutely. In standalone mode, only very light games might be doable.

> Can I stream a video feed from a real world camera (like with a raspberry pi) to the interior displays?

Yes. In fact, we'll have AR passthrough by default, but nothing's stopping you from using different cameras.

If you are not interested in the standalone capabilities, the headset without compute pack (i.e. tethered only) will be ~$2k.


A little off-topic, but does anyone know if there's been good research about the long term effects of having a screen a few millimeters away, beaming photons to your retina continuously?


I wish most devices had a gradual fade option for volume and brightness, etc. I'd rather hit a button to increase the volume once an hour than realize I've had the volume way too high for the last hour, or realize I've been staring at a full-brightness screen for no good reason.


A little late to answer, but yes, there is research suggesting that periodically looking long distances has health benefits, including preventing near-sightedness. One could reasonably expect mass adoption to accelerate rates of vision problems. Here's one high level summary/starting point: https://pursuit.unimelb.edu.au/articles/looking-to-the-dista...

There may be a way for VR headsets to help with this, depending on why this occurs. For example, if the benefit comes from focusing on a distant object, headset screens may be able to simulate this. If the benefit comes from looking at physical objects, headsets may need to make it easy to quickly move the screen out of the way of the eyes.

What would be best, though, would be for us to figure out how to get past this glut of required information work and digital entertainment so we can spend more time away from screens--my opinion, of course.


Hadn't heard of this product but it looks pretty cool, although at this point for me content is a bigger problem than hardware with VR. But congrats on what looks like a cool product.

One thing I'd like to hear is how this device feels to wear as the weight sounds quite intimidating. Quest 2 already gives many a sore neck at less than 200g but this seems to be 800g? My assumption was/is that as you start increasing your perf/hardware it starts to make more sense to go with the hockey-puck/backpack/pocket smartphone etc. tethered processing unit elsewhere on the body rather than strapping a toaster to the front of your face.


The heat should be fairly manageable. Most of the front side components are low-power and we'll be managing the climate in the face part so you don't get sweaty due to low circulation.

The compute pack will be at the back of your head with the airflow going away from you, so it shouldn't be too noticeable. Thermal design is definitely a priority for us.

Putting it on your head reduces the amount of cables on your person, so you don't accidentally tangle yourself or something. Also, it improves the balance so it might actually feel better than a front-heavy headset (compare a Vive with an Index for example).


The weight is actually very similar to the HTC Vive with the premium audio strap, which is 741g.

That as well - it actually feels better to have the audio strap, which distributes the weight better, than just the 470g headset alone.


Quest 2 is a bit under 600g I think, not 200


I have to admit I got some sticker shock when I saw the price. I really wanted to get this too...

Looking at the details of the article I still can't see what the breakout feature is that keeps me from just getting an Oculus Quest and using the Immersed app to do basically the same thing. The biggest "negative" the article seems to be focusing on is that you need a Facebook account to use Oculus Quest but I don't see many limits on the basic experience.

Also, it focuses on how no sacrifices were made and that this is a premium product. This is a hard sell since you can't wear it or try it first before forking over almost 3 grand. When iPhones first were released, they were a premium product, but you could go to an Apple Store and play with it first. Here you have to rely on a datasheet and trust that these improvements will significantly matter.

I would much rather have started with a less premium product that allows me to try it first, and if I fall in love with the potential for productivity, I may invest in the premium product in the future.


> Looking at the details of the article I still can't see what the breakout feature is that keeps me from just getting an Oculus Quest and using the Immersed app to do basically the same thing. The biggest "negative" the article seems to be focusing on is that you need a Facebook account to use Oculus Quest but I don't see many limits on the basic experience.

Compared to a Quest/Immersed combo:

* we offer significantly higher PPD (and better optics with no ghost images, Fresnel rings, etc.; but that's hard to quantify) than any other headset on the market except super-high end ones. You would not be able to read text on a Quest like you can on a normal 27" monitor; you can with our headset.

* Immersed does not do window management. We support unlimited, actual windows. Immersed only does a handful (up to 10 at reduced resolution? need to check) virtual displays, which is pretty annoying from a UX perspective.

* Immersed needs to be tethered, which means reduced quality and higher latency due to the WiFi connection.


Is there more detail/discussion on the optics posted anywhere? It would be nice to see some info like PPD across the image area, distortion (corrected and uncorrected) and chromatic aberration, pupil swim/eye box, focal distance, and some other display details like contrast, black levels, brightness - I've been interested (and spent a fair bit of time working on a POC several years ago) in VR workspaces, but shelved my work due to the lack of HMDs with sufficient visual quality/comfort.

Lately I've been interested in the newest upcoming micro-oled/pancake lens wave of devices (Arpara, MeganeX, etc) which seem to hit decent PPDs at reasonable price points. $3K doesn't seem entirely unreasonable but it makes me wonder - the XR2 has 3K/eye support and does all the hard work for device design, 6dof, and you could probably work with an ODM like Goertek to get an almost finished product at a fraction of the price? Also for hand tracking, are you planning to integrate something COTS like an Ultraleap 170?


We could make a blog post about it. I feel like a lot of it is going to be super technical for most people, though. Here's what I can answer off the top of my head.

Centroid separation for lateral color is roughly 0mm for 0-30 degrees then sharply decreases at about -0.1mm/5 degrees

When the eye is rotated 25 degrees horizontally, pupil swim is within 10-20 arcmin for (+-40 V, 0-40 H) field angles. This is already fairly low, but we also plan to correct it with eye tracking if it's noticeable.

Allowable eye shift is +-2mm, eye box accounting for eye rotation is around 18mm. Full 20um RMS spot size in that region for the foveal region (+-25 deg).

The display is an LCD panel. 600:1 contrast, 150 nits brightness, 83% NTSC.

We actually considered pancake lenses, but the low transmission (15% in a pancake design vs ~85% in ours) and the ghost image issues made us step away from them.

XR2 does all the hard work, but Qualcomm pretty much ignored us when we messaged them. Besides, it'd pretty much be dead weight since we want an x86 processor for our primary use case as a laptop replacement.

And yeah we're planning to integrate the Ultraleap module if they get their Linux software sorted. If not, there's fallbacks.

(I think we also messaged Goertek and got nothing, so... a big part of why we're doing this, and will likely open source most of it, is that all the usual vendors didn't talk to us)


I think 3 grand for an early adopter product is cheap. The Macintosh was $2500 in 80s dollars (5-6k today) when it launched.

If this is something that actually helps your work, a few hundred a month is well worth it.


We should have added comps for early/retro PCs =]

My dad (who isn't a technical person; lawyer by trade) once told me that he spent $5K in early 90's dollars (which is much higher today) on an early 386 color laptop. This was like the price of a new car.

(Not saying Simula's VRC is an apples to apples comparison with this, or that we'd ever want to charge that much for something, but it is interesting to hear about how expensive some early computing devices were when the laptop industry was getting off the ground).


It would make sense for them to have an easy and painless return and refund policy.


I'll talk with George about introducing a refund policy. In fact I'm pretty sure we had it in the original drafts for the webshop, but since we moved to Kickstarter funding we haven't thought about that yet.


It doesn't make sense to put the PC right on the headset - it adds to the weight that needs to be supported by your neck. A backpack or fanny pack PC plus a tethered cable would be much smarter, and would also let you swap out the PC and headset separately.


Looks like there’s a cheaper tethered version available for $1999. Perhaps the computer is removable anyway?


It is removable. In fact, we plan to have it dockable so you can use it separately and reuse it for e.g. homeserver purposes when the hardware gets older.


When you say dockable, do you mean that the module will be easily removable to put in a dock or just that it's possible to remove it for that (e.g. by opening the unit)?

Mainly want to know like... is this the kind of thing I could do regularly when I don't feel like being in VR, or is it more effort than that?


The module will be easily removable so you can put in a dock. It basically slots into a receptacle in the back of your head.


Wow I forgot all about MagicLeap, surprised to see they're still getting funding.

re: Simula, this is the first I'm hearing of this device and it's a pretty cool concept. I have an Index but IME the largest bottleneck for productivity is the "screen door effect" and resolution. Text is tiny. The 2nd bottleneck being difficulty using peripheral devices (KBM) while in VR. Curious if anyone has tried this device yet and can confirm whether these problems are addressed?


I haven't tried this device, but I've tried HP's G2 (which is noticeably absent from Simula's comparison matrix). From personal experience, it's terrible for games that require controller tracking aside from driving and flying sims, but... it has 2160x2160 resolution per eye (text is very clear with no screen door effect) and it's $599 MSRP; it also frequently goes on sale for $399. Paired with Virtual Desktop software, you can work in VR now in Windows & MacOS if portability and Linux support isn't a necessity; though I imagine it could work on a high-powered laptop.

https://www.hp.com/us-en/vr/reverb-g2-vr-headset.html

https://www.vrdesktop.net/

On a side note, I don't notice the screen door effect in either the Index or Quest 2 while playing games. Are you saying you notice it while playing games, or while reading text?


I'm curious - can you use the Valve knuckles and base station while using a G2 headset? All the G2 complaints seem to be about the controller tracking, not the G2 tracking for the headset position/angle/… itself. I already have a Vive but the fuzziness really is getting to me.

I really think the lack of graphic/positional/… representation of keyboard/mouse/HOTAS is a big issue for VR. Valve base station tracking point receivers are cheap; it seems like someone would have made at least a keyboard with them, or come up with a way to attach the Vive tracker in a not super janky way.


> can you use the valve knuckles and base station while using a g2 headset?

Yes, but I haven't tried it personally. Not sure how easy it is because of conflicting statements

> I really think the lack of graphic/positional/… representation of keyboard/mouse/Hotas is a big issue for VR.

This has been solved with the Quest 2 if you buy a specific logitech keyboard / trackpad combo

https://www.logitech.com/en-us/products/keyboards/k830-illum...

https://medium.com/xrlo-extended-reality-lowdown/the-logitec...


Wow, thanks. I did not know that.

re: index, it has definitely gotten a lot better vs. the OG vive but ex. 12px font is still fuzzy IMO. Overall it does not yet improve on what i can do on a regular laptop screen.


> ex. 12px font is still fuzzy IMO.

The G2 definitely solves that problem.

> Overall it does not yet improve on what i can do on a regular laptop screen.

If you're ok with a small laptop screen that makes sense. The main thing that it solves at the moment is available screen real estate. It greatly increases it. Depending on the software you use, it can also solve the issue of feeling isolated. IM and video calls aren't good enough


The Simula One has more than 3x the PPD/pixel density over a Valve Index, and we also have a special text filter we use in our rendering which is specifically optimized for text clarity. If you're interested in the Simula One, you should see a pretty immense difference between these two headsets.

RE peripherals: Simula itself (e.g. the VR window manager on our headset) is designed to work best with just a keyboard (obviating the need for peripherals/controllers). You control the mouse cursor with your eye gaze, and can move/resize windows with keyboard shortcuts and eye gaze as well (we plan on supporting hand tracking in the Simula One, but I think in practice it won't be used as much as people think).


I'm quite interested in the product because I've spent a lot of time using the Quest 2 since it came out, and I'd agree that it's not quite there resolution-wise for work. But man is this written in an arrogant, self-aggrandising, off-putting way. It's like they're trying to channel the spotty PC Gamer / Linux master race energy. Definitely putting this onto the "do not back" pile.


I'm sorry we gave you that impression.

After several years, there's been nothing more humbling than trying to get Simula off the ground (first developing the software, and now the hardware). We've been smacked in the face so many times. So we don't feel very arrogant, at least.

Maybe our tone isn't quite right. We're just trying to convey to people that our headset has a different use case/is designed for something different than portable gaming/entertainment.


I think your characterization of your device as fitting a new category is entirely fair, but you don't do yourself favors by using glib language to describe competing products/categories:

"We decided early on that, given the cards we were dealt, it's better to build a premium headset with a high price than to build a shitty headset with a low price. This is because sub-par VR technology (e.g. the Quest 2) is simply not good enough for someone wanting to work several hours per day in a VR Computer instead of their laptop -- even if most people don't realize this yet."

If you simply reworded this from "sub-par" to "other" or "existing" or "lower-end" you'd come across as less arrogant. I say this with utmost respect for what your team has been able to accomplish, but if the Quest 2 is "shitty" then your product is flaming garbage. But if you simply mean to say it's shitty in the particular dimensions / role you are designating the VRC (VR Computer), then you may reword it and be a bit more diplomatic.


We just pushed the change to the blog post.

Though it's true we are very opinionated about the Quest 2's adequacy for long VR computing sessions, we're not trying to be glib or flippant about the Quest 2 (an otherwise excellent VR headset for the price).


I agree with @isotarical. Instead of using adjectives that you'd find in gamer subs on reddit, it would sound more professional and less condescending if you were more specific, i.e. use the stuff you have in your comparison matrix as the talking point.

That said, marketing is really hard, and you're still doing a better job than me.


I don't understand the design decision to use x86_64.

It's not covered on their site, AFAIK. [1][2]

It's not running Windows.

Most software designed for Linux can be built for ARM.

And, it's being billed as a next gen platform (by creating and using the term VRC) while using an older generation technology (x86).

With the amount of power required to run that chipset this device will either require a massive battery or to always be plugged into a power source, which effectively kills any appeal for me.

All of this said, I'm glad they're doing this. VR/AR in the professional workstation space is very exciting, I'm glad someone is focusing on it.

[1] https://simulavr.com/blog/how-we-designed-the-simula-one/

[2] https://simulavr.com/blog/technical-overview/


I don't consider ARM better unless you have billions to spend on custom silicon like Apple.

The power consumption is inline with any other ultra-low-power x86 (~15W TDP for the CPU). That's pretty comparable to a Snapdragon XR2 with ~10W TDP and worse performance.

Users would want to use proprietary apps that are only compiled for x86. Emulation for those on ARM is not particularly feasible for any acceptable performance. If you're buying a laptop now, you would not buy an ARM device.

That being said, I'm not married to x86. But I don't think ARM is the future, either.


> I don't consider ARM better unless you have billions to spend on custom silicon like Apple.

It's better when you need lower power utilisation and physically smaller footprints.

Apple isn't the only name in the ARM game, and I'm not suggesting that you should be designing and manufacturing your own ARM chipsets.

> The power consumption is inline with any other ultra-low-power x86 (~15W TDP for the CPU). That's pretty comparable to a Snapdragon XR2 with ~10W TDP and worse performance.

Which is all incredibly high for something that's portable.

If a user is required to have the headset plugged in for continuous use, it might as well just be a regular VR headset connected to a separate PC.

> Users would want to use proprietary apps that are only compiled for x86.

With this argument the Simula One should be running Windows instead of SimulaOS. Most Linux based applications already work on ARM or can be recompiled for it.

> Emulation for those on ARM is not particularly feasible for any acceptable performance.

I'm not suggesting emulation should be used.

> If you're buying a laptop now, you would not buy an ARM device.

I would and have bought ARM based notebooks. MacBook Air (M1), PineBook Pro, various Chromebooks.

I still don't understand the decision to stick with x86 on this.


The CPU performance should not be as important as GPU. How is the i7-1165G7 GPU vs XR2 GPU in terms of performance and power consumption?

Iris Xe is higher performance, about 50% more in FLOPs than the Adreno 650. Don't have data on power consumption separate from the CPU, though.

If anyone's comfortable sharing: what price point would you be willing to pay for a Simula One? (Unfortunately our unit costs are pretty stuck at these low volumes, but it'd still be really useful to know what people's long-run price expectations are for a Linux VR Computer).


It's really hard to say without actually using the device.

If this can genuinely make reading text about as easy as it is on a current hiDPI screen - I don't think this price is unreasonable.

My issue is that - having gone through several VR headsets (Oculus, Oculus Quest, Vive, Vive Cosmos, and Valve Index) - none of them are even close.

Reading text in any of them is basically a non-starter outside of maybe game menus, and even then it's annoying.

Index is closest, but it doesn't support a wireless mode - so I end up using the vive cosmos with the external tracking faceplate and the wireless adapter the most.

Basically - I don't think anything around ~$3k is really a deal breaker if you can get the latency/resolution to a point where text is easy. But if reading text gives me a headache after 30 minutes... it's a toy still, and competes with Oculus at the ~$300 price point.


To give a comparison, 35 PPD is equivalent to a 1080p screen 60cm away from you.
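For reference, that equivalence works out if you assume roughly a 27" 1080p panel (the monitor size isn't stated, so this is just one plausible reading):

    import math

    width_cm, distance_cm, h_pixels = 59.8, 60.0, 1920   # ~27" 16:9 panel at 60 cm
    fov_deg = 2 * math.degrees(math.atan((width_cm / 2) / distance_cm))
    ppd = h_pixels / fov_deg
    print(f"{fov_deg:.0f} deg horizontal FOV, {ppd:.0f} PPD")   # ~53 deg, ~36 PPD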

So the resolution is basically there, but what about latency? Output-to-photon is as direct as possible (NUC Displayport -> DP-to-MIPI -> Displays), so the only question is how low we can get the motion-to-output latency.

Unfortunately that's not a question I can answer yet. Tracking is hard, especially when you're designing for a portable headset. But I'm pretty sure we'll be able to get it to sub-frame latency if my mmWave idea works out.


I actually think the (backer) price is about what I would expect for what you get. My issue is that it's a big investment for something that's really cool, but not necessary, so my current finances don't justify me getting one. A professor I work with is big on buying every bit of AR/VR tech that comes out, I'll have to see what he thinks of it.


I'm an outlier but I'm ready to jump in at these prices, which is pretty much what I would expect for the hardware.

A bigger factor than price for me is payment options. If my only option is Paypal/Stripe/Google/Apple, I will most likely hold off; accept BTC and you've got me for sure.

I have a feeling that my preference may be unusually common among the people excited about dropping thousands on a Linux headset.


I'd prefer crypto as a payment option too. Not sure if Kickstarter offers it as an option though, so I presume that would be the core constraint. Sometimes I've seen projects running on Kickstarter and Indiegogo at the same time. Maybe there is some kind of workaround like that, except for adding crypto as a payment option.

Ah, that’s unfortunate.. Having to ask a friend to borrow their card for that amount might be tricky.

EDIT: What would you think of something like this? HMU if you’d like to discuss setting it up, I’d be happy to help

https://medium.com/@BtcpayServer/btcpay-crowdfunding-abbb845...


Right now, here on the bleeding edge of VRPC, it would be something like $999. I'm seeing it as a Chromebook when they first came out — interesting concept, but not yet proven in an actual working environment.

Alternatively, some sort of money-back guarantee would be nice. "Try working in VR for 30 days, see how you like it."


It's about what I'd expect for a premium piece of productivity hardware. It's in line with specced-out laptops and workstations. At this price there will be certain expectations though, in particular regarding the finish and build quality (and even colour accuracy).

In any case, I've been toying with the idea of using a VR headset for work after having seen a post on HN from a guy who described his whole setup, but I'm not at the point where I'd jump into investing much into it.

I think your product is for people who have already a lot of VR experience and want to graduate to something more adequate. It's a hard sell for the mere VR-curious.


You're selling a new way of working on your computer, something supposedly greater than the sum of its parts. Such things can't be expressed in numbers and tables, they can only be experienced. That experience won't be for everybody; they might get sick, they might decide it's a worse experience and make things take longer, etc. Those are my fears.

Spending $2800 (or, gulp, $3500) just to see if I'd barf while in vim is a really big stretch. Perhaps those with way more disposable income wouldn't mind trying, but that's not me. If there was a trial period with free return, that would help.


I did not realize the specs. I was hoping for like $600 USD, but I might have reached for $1000 USD. I love the idea; however, most of my heavy lifting is done on a remote machine. I spend most of my day in a kitty terminal ssh-ed into a beefy box somewhere else. I don't think I am ready to early-adopt a VR desktop yet.


Though our unit prices are more or less stuck at these low volumes, we appreciate the feedback in this thread, and hope to offer a lower priced VR computer during our next iteration.

If you're comfortable with our current price, we're offering a small number of early bird headsets for $2,499 (vs. our standard $2,799 Kickstarter price). If you're interested in getting notified an hour before our campaign starts (to max the chances of getting one), you can sign up here: https://buttondown.email/simula_one_kickstarter


Do you have any strong dates when it comes to delivery/production?

You are not competing with what's available now but with what will be available when your device is released.


With our current lead times, basically a year from now.

Seems clunky. I would also go for a wireless solution. Moreover, I would bring out two products: the headset and the box. The box only has to be powered on. Nothing fancy. It can be completely dedicated to the headset. And the headset itself is very light and optimized for pixels and receiving data as quickly as possible. In that case you don't rely on any WiFi in the house.

What might happen is that you end up developing a box with H.265 optimized for super low latencies. Sounds like valuable tech on its own. :-)


If the specs can be delivered then the pricing seems fair, but I guess I was wishfully thinking for a deal. Guess not everyone can recoup their costs with your data like Meta can.


$2799 for integrated Intel graphics? I'd rather just put a real gaming laptop in a backpack and use a better headset.


$1999 for the tethered version, which is in line with other upper-end headsets like the Varjo Aero (and that one doesn't have AR passthrough)


And Varjo isn't optimized for Linux like simulaVR either.

I don't understand the pixels-per-degree calculations. Quest 2 sits at 20.58 PPD in the table, which makes sense, as that's 1832px/89deg. How do you arrive at 35.5 PPD for the Simula VR with 2448px for a 100 deg FOV? It links to a [2] in the table, but there's no [2] in the references.


We use variable magnification for the lenses. That is, the foveal region has a higher PPD and the periphery has a lower PPD. As your eyes follow the same pattern (your fovea has a high resolution, everything else drops off sharply), it's basically a free PPD gain.

I put a link to the paper [1], but it's paywalled.

[1] https://www.spiedigitallibrary.org/conference-proceedings-of...
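A simplified, one-dimensional way to see what the variable magnification buys (numbers taken from this thread; the real falloff is smooth rather than two flat zones):

    px, fov_deg = 2448, 100
    uniform_ppd = px / fov_deg                    # ~24.5 PPD if pixels were spread evenly
    foveal_fov, foveal_ppd = 30, 35.5             # +-15 deg sweet spot at the quoted peak PPD
    foveal_px = foveal_fov * foveal_ppd           # ~1065 px spent on the central 30 deg
    peripheral_ppd = (px - foveal_px) / (fov_deg - foveal_fov)   # ~19.8 PPD for the rest

Same pixel budget, just spent where the eye can actually resolve detail.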


Except that your fovea moves around with your eyes and this is glued to looking straight forward. You get PPD gain if you're looking at the same thing that your face is pointed toward, at the expense of when you look off to the side. Not exactly "free" but probably worth the tradeoff.

How big a "sweet spot" do you have in the center? Enough to read lines of small text without rotating your head back and forth?


About +-15 degrees. This accounts for 86% of saccades.

I need to confirm how sharply it drops after that, but in any case it should be above 20 PPD +-(15 to 30) degrees and above 12 PPD for +-(30 to 50) degrees.


This seems a bit disingenuous, listing the max/center PPD for your product, and the average PPD for the Quest.


No other headset except the Varjo Aero uses this technology, and only the high-end Varjos have a focal view display (1920x1920 in the central 27 degrees). The rest is basically uniform throughout.

We use the peak PPD for the Aero.


Using Lenovo MSRP is stacking the deck, as they are notoriously overpriced and always on "sale".


Even if you reduce it to its sale price, it's favorable. It's not like we aren't on "sale" either, via Kickstarter discounts.


Not sure I follow - my point was that the Lenovo is currently on sale for $2045 and that using $3500 as a price is kind of misleading, as you would never buy this laptop at MSRP.

That being said, your Kickstarter rebate is a real rebate (or sale ;)). I was using "sale" because Lenovo's laptops are always on sale, hence the real price is the price on sale.


I guess it depends? I bought my P14s at basically MSRP and it didn't drop much. The X1 Carbon did drop a lot, though. We might adjust it to be "$2k current price, $3.4k MSRP" though.


Any word on it having diopter adjustment? It'd be nice to be able to wear this without glasses.


Likely a stretch goal. But if we notice that it's a big usability deal as opposed to glasses/prescription lenses, we'll put it in the baseline.


So is the entire computer in the headset, or am I missing something? It seems like it would be heavy, no? I haven't followed this space closely, so apologies if it's an ignorant question.


The computer is on the back of your head. We target a weight of about 800g* balanced evenly, so it shouldn't be more straining than an Index

* Preliminary number, might change as we make tradeoffs of battery life vs thermal solution vs weight.


Is the charging port USB-C? If so, can it be charged through more than one port, in the event the primary one is damaged? (My phone experiences tell me the likelihood of port damage increases the more you handle something, and this will not be sitting on a desk all day)

My first thought, before I realized I had a serious question was that it would be awesome (in my particular opinion) if the battery could be separate, enabling easily swapping it or having various configurations... Maybe a belt clip or the dreaded fanny pack?

.....

I just realized this is probably quite silly - you almost certainly have it work like a laptop, where you can use it while plugged in/charging, and that would make that port even more likely to be a problem.

Personally I would love to see easy support for external power packs to extend usage, but I can't think of a good reason why I would actually need that, so maybe I have been playing too many cyberpunk games lately...

I really wish I could know if I would actually get the use out of it that it deserves... I love the idea, but sadly I can only actually work on company devices because of export control restrictions, so my personal use case would need to include gaming to justify a high-end new machine right now. I will absolutely be keeping an eye out though, and if I don't get one I will hopefully be able to buy your V2 model.


Every USB-C port will be a PD sink and source, and we plan for replacement parts/schematics to be easily available so any repair shop or competent person can do a repair.

We're thinking about a separable battery for the V2. For V1, it's too much complexity for now.

You'll be able to power it off the charger or any USB-PD source while charging the battery (provided it has enough power, but that should be doable).


Honestly now that I think about it that is probably the best of both worlds.

Thanks for answering my questions!


Can anyone make the case for why VR will be more productive than a laptop and monitor? The Simula website mostly talks about specs and not why I'd want to use this for work.


I made several arguments along those lines in this article: https://news.ycombinator.com/item?id=28678041

For me it comes down to functionality, focus, comfort, and productivity. Convenience too, but it took a lot of setup and tuning to get to that point, so the net "convenience" gain is probably neutral. These days I don't want to work any other way.


Highly recommend ptom's article: https://news.ycombinator.com/item?id=28678041


Here are some reasons:

- 10x more windows/virtual screens than PCs & Laptops

- Persistent ("always on") computing wherever you are able to walk and think.

- Promotes better posture (you don't have to sit hunched over a laptop screen, but have more freedom of movement).

- Better work immersion/focus than PCs & Laptops. There's something about having the world around you blocked out and just focusing on your work (though Simula will support an AR mode via front cameras when you need to see things).

Though I am biased, there is also great aesthetic appeal to working in "The Future". From Iron Man to Minority Report, our sci-fi has been promising us for decades a future of always-on spatial computing with omnipresent screens. Working in a VR Computer allows you to start experiencing that future.


> 10x more windows/virtual screens than PCs & Laptops

That's a productivity drain not a benefit. Modern operating systems could already drown us in zillions of screens if it were actually useful - it's not. That's a productivity fantasy element, like people pretending it's possible to multi-task (more screens, more work, more output). All you get is the equivalent of the hoarder clutter of a thousand browser tabs for no great reason. Humans max out on productivity and usefulness gains from additional screens at a very low number.

> Persistent ("always on") computing wherever you are able to walk and think.

A tablet, a smartphone - it's a trivial difference in timing, as those items are a moment away from use in terms of always on. And where are you walking with a VR computer on your head?

> Promotes better posture (you don't have to sit hunched over a laptop screen, but have more freedom of movement).

That one is false and probably your worst premise. Posture is a choice, you either consciously choose to pursue better posture and constantly reinforce it or you don't, and if you don't then absolutely nothing will keep you from bad posture. A VR computer on your head is very low on the list of things that is likely to finally encourage someone to consciously adjust their body toward better posture. I'd bet on the opposite outcome as far more likely, body damage from wearing a heavy object for too long.


With respect to posture, have you considered that this opens you up to computing in a supine position? Quite a few people have experimented with this and find it helps.

Does it actually promote better posture? They don't mention its weight anywhere, which makes me think it's heavier than competing headsets, which already aren't the most comfortable to wear for long periods.

I am excited about the possibilities this opens though. Will it be more convenient than a terminal emulator running tmux? Maybe not for me, but I could see people who deal with more visual assets like game developers using this.


We don't promote the weight because we aren't yet at the stage of integration where we can commit to a number. However, you can expect it to be around 800g front+back combined, so around the same as an Index.

In our experience, a balanced headset is more important than a light one.


My preferred setup is a sit/stand desk with a single high resolution monitor and a window to the left or right of me. For many years I worked with two or three monitors but found that it was more distracting than anything.

When I'm stuck on a problem, I stare out the window or go sit outside with a pencil and pad of paper.

I can't imagine I'd be happy working in a VR setup. I could see it being useful when I want some type of virtual presence (at least until I start feeling sick), but I only need that a few times a year.


Why stare at grass when you could stare at deserts, oceans, live animals or at the earth from the ISS?

If I wanted to watch TV, I could do that too. It's not really what I want.

Also, I think there's a big difference between focusing on something 5 cm away and focusing on something that is 100 m away. Part of the reason I spin around and put my feet up and stare out the window is because I want to stop looking at a screen.


Portability is a huge one for me. Having vast screen real-estate plus immersion seems great for roaming.


I have a couple of dumb questions:

I use reading glasses. Would I need to wear them inside the headset?

I do lots of video conferencing. How would I appear in a Zoom or MS Teams call with this on?


> I use reading glasses. Would I need to wear them inside the headset?

Depends. We're planning a diopter adjustment (either as baseline or as a stretch goal), which might be sufficient. Alternately, there's enough space in the headset for glasses and it'll also be possible to get prescription lens inserts.

> I do lots of video conferencing. How would I appear in a Zoom or MS Teams call with this on?

You would appear like you're wearing a headset. Not much we can do about that, unfortunately.


I do have to wear glasses with the Rift S, sometimes they get foggy tho.

I am curious -- why the 1:1 aspect ratio/FoV? A quick Google suggests the human eye is more like 5:3 (though very variable).


Several reasons:

* Square displays are what's readily available in the VR form factor (2-3" diagonal)

* Rectangular displays will murder your minimum IPD, which kills accessibility for a lot of users

* Reducing the size/increasing pixel density makes the optics exponentially more complex. We're already pushing the limits of what's possible without pancake lenses.

* Pancake lenses are a nightmare of complexity when you want high-quality images. Ghost images, low transmission, etc. all increase development cost by a lot. And the unit cost is significantly higher, but that's not an issue here (the NREs are the main cost driver)


Very interesting. Thank you!


> This is because sub-par VR technology (e.g. the Quest 2)

Calling horseshit on that, I have a Quest 2 and it is a brilliant bit of kit for 1/10th the price.

Is it a VRPC or whatever they are on about? No - it was clearly not intended to be. Is it the sweet spot for VR right now? Yep, and it works fantastically when connected to a PC or standalone; the boy loves it.


I should rephrase that sentence.

Agreed the Quest 2 is a great device for its price. What I mean to say is that the Quest 2 isn't pushing the current limits for VR pixel density (by having a high Pixels-Per-Degree aka PPD).

High PPD is important for gaming but extremely important for VR computing/office, since it heavily impacts how high quality text and other fine details (icons, etc) show up for you. This is one of the main drivers of our high price. Since we're trying to get people to work in a VR headset for 8+ hours in a day (replacing their PCs/laptops as their primary computing device), we needed to offer as high a PPD as we possibly can. This requires state of the art displays and a compute unit powerful enough to power the rendering. We also have a special text filter in Simula which is optimized specifically for text rendering.

There are other problems with the Quest 2 as well, but low PPD is the most important one. Price is definitely not one of its problems (though its low price is being subsidized by its bringing people into the Facebook ecosystem, etc). The Quest 2 is primarily a gaming/entertainment device, and it does a pretty good job at that. The Simula One is primarily a VRC (though it can be used for gaming in Tethered mode).


The Quest also isn't pushing the limits for FOV either, compared to my usual headset it feels like goggles. I'm impressed if the Simula really beats it on both of these.


We put a lot of effort into the optics and we use variable magnification tech to get the most out of our displays (i.e. there are more pixels per degree in the foveal region, and fewer in the periphery where you can't see them anyway).

The drawback is that the optical train is long and there are 3 complex lenses as opposed to 1 glued assembly (not sure if it's 1 or 2 lenses). That adds per-unit cost and assembly labor, and at a $300 ($800 realistically) price point that's a lot.


The full quote is:

> This is because sub-par VR technology (e.g. the Quest 2) is simply not good enough for someone wanting to work several hours per day in a VR Computer instead of their laptop -- even if most people don't realize this yet.

Do you mean that the Quest 2 is good enough to do, say, programming work on for several hours a day, or just that it's a decently good gaming headset?

The last VR headset I tried was the Oculus Rift, and that was nowhere near being usable for work. I'm really curious about the SimulaVR, but it's a bit outside my price range. So if you use the Quest 2 for work, I'd love to hear about your experience with it -- what software do you use, is the resolution good enough for working with text for hours at a time, etc.


I use the Quest 2 almost exclusively for my day job as a programmer (any time I don't have to be on camera in meetings), and have been using VR to do this for years - I'm the guy behind this article: https://news.ycombinator.com/item?id=28678041

The Quest 2 is remarkably capable for its form factor, but has some significant limitations and requires a lot of babysitting to get it tuned "just so" to make it that productive. Reaching that flow state, or even making it more productive than a traditional physical screen layout, isn't particularly accessible, certainly not yet on a mass appeal level. So yeah, it can work, but there's a LOT of room for improvement.


I work in VR when I'm not in meetings. I use Immersed for it. I love it.

The text readability isn't perfect, but it's fine and usable. (Others don't consider it very usable, which can mean either they didn't spend the time to figure out the ideal setup for them or it's simply not usable for everyone yet.) There's a lot of after-market customization that help tremendously: better headstrap, upgraded facemask, prescription lens covers.

We're definitely in early adopter territory. It takes tinkering to find the best setup for yourself. Some people don't have the time or desire for that, some people just don't find something that works after trying it out. It's not sustainable for widespread adoption yet, but it'll get there.

It's improving every day as the Immersed team is adding new features along with the Quest opening up APIs. For example, right now you cannot see your keyboard. Most users get by with touch typing. You can bring in a VR version of your keyboard that is calibrated to its position, but it's pretty finicky. Quest is opening up an API soon for what is called "passthrough", which will allow the user to see the camera view outside of the headset. Once a passthrough keyboard feature is implemented in the Immersed tool, I believe it's going to be a significant feature that will make it even easier to work in VR.


I used a Quest 2 for work for a few weeks while my monitor was being repaired. My biggest problem was not being able to see the keyboard. The display was not a problem for me. I was quite glad to have my monitor back anyway. For that matter, for all of the PCVR games I was so excited to play, I've gone back to playing them mostly on the monitor. I'm quite happy with the Quest 2 visuals, but the comfort (for longer periods) and controls are inferior for anything more complex than beat saber and golf.



