One of the founders of Simula here.

We're flattered that someone posted us to HN, but we were honestly not ready for this much publicity at this precise stage of our project. It would have been better had this happened a few weeks from now, when we'd have more accurate footage of our actual prototype to show. Let me explain:

1. *Video footage.* The video footage on the front page of www.simulavr.com was captured on an HTC Vive with an older prototype of our window manager. It doesn't showcase the higher resolution of the Simula One (more than 4x that of the Valve Index), or any of the new features we intend to release with it (hand tracking, AR mode, environments, etc).

2. *Prototype pictures.* The website doesn't have any actual photos of our headset yet! That's because we are in the process of finalizing the design. We have printed parts and plenty of renderings, but they are still changing every week.

3. *Specs.* The specs are close to the final specs, but still placeholders. Between supply chain issues, stuff still under development, and issues getting support from manufacturers at our volumes, we might have to change things for the final prototype.

One of the reasons we threw up this website in its current form was to get the ball rolling for manufacturers. They won't supply us with parts unless we have some sort of product interest, but we can't generate any sort of product interest unless we have some sort of website. It's very much a chicken and egg sort of problem.

We appreciate everyone's kind words, but also understand the skepticism. For people on the waitlist: expect updates from us in a few weeks, when we will preview some of the actual goodies that make our headset special.




Another founder here, with some more comments on the tech side of things:

1. The software is relatively usable, and you can try it out right now on https://github.com/SimulaVR/Simula

2. The hardware is still being worked on; in that regard, the website is more a list of expected specs/placeholders:

2.a. The compute unit is tested and works, but requires a custom carrier board to fit the form factor. This is a blocker for the final product, but relatively low priority for the prototype.

2.b. Lens system design is scheduled to be complete in early November, with first prototypes available in early December. In the meantime, we're planning to use Valve Index lenses as a stopgap for prototyping etc.

2.c. We're currently solving a few challenges in driving the displays, as we're pushing the boundaries of the available technology, and at our volumes getting support from manufacturers is like pulling teeth. BOE supplies the 2880x2880 panels, and there aren't even enough docs to figure out how to drive the (non-trivial, local-dimming-based) backlight.

2.d. We're also experimenting with different approaches to tracking, as our original choice (Intel RealSense) recently reached end-of-life. I'm interested in an mmwave-based solution, but we might just use RGB cameras instead.

2.e. The mechanical design for the front part is reasonably advanced, but we're still working on the back part.

There's a lot more going on right now that's probably not coming to mind immediately, but that should provide a good overview.


What's the best off the shelf inside-out tracking system you can get now? Does anything compete with Quest yet?


Nothing that isn't lacking in one way or another. The closest is probably Luxonis DepthAI?

The main problem with off-the-shelf solutions is that they add another set of cameras, and afaik nothing exists that allows custom cameras.

We're gonna need an FPGA anyway due to the large amount of IO (2 cameras for AR, 2 for eye tracking, IMU, whatever other sensors we need, plus potentially mmwave radar if we decide to go that way) so it's tempting to put the processing on the FPGA as well.


Interesting - I guess I assumed the hurdle is both hardware and software. Oculus's hand tracking was a huge lift. Is there any commercially available software stack being worked on that is at least hardware generic? Or is everyone forced to build from scratch?


There are a lot of research papers I found, but nothing hardware-generic, unfortunately.

Hand tracking especially is a difficult beast; we would like to just use the new Ultraleap module for it, but they don't support Linux yet.

Eye tracking is relatively simple because it's a closed/controlled environment. Just some IR LEDs, an IR camera, and some edge detection and math.
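
To give a rough idea, here's a toy Python sketch of that core idea (illustrative only, not our actual pipeline; a real tracker also uses the glints from the IR LEDs and a per-user calibration, both omitted here):

    import numpy as np

    def pupil_center(ir_frame, dark_thresh=40):
        # Under IR illumination the pupil is the darkest blob in the image,
        # so a fixed threshold plus a centroid already gets you surprisingly far.
        # ir_frame: 2-D uint8 array from the eye camera.
        mask = ir_frame < dark_thresh
        ys, xs = np.nonzero(mask)
        if xs.size < 50:              # too few dark pixels: no pupil found
            return None
        return float(xs.mean()), float(ys.mean())

    def gaze_angles(center, frame_shape, fov_deg=(30.0, 22.0)):
        # Map the pupil center to rough yaw/pitch angles, assuming a linear
        # mapping across the camera's field of view (a simplification; real
        # trackers calibrate per user against known fixation targets).
        h, w = frame_shape
        x, y = center
        yaw = (x / w - 0.5) * fov_deg[0]
        pitch = (y / h - 0.5) * fov_deg[1]
        return yaw, pitch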

SLAM (positional tracking) has a lot of different approaches. There's open source software, but it generally runs on a normal computer, and that's not particularly efficient (especially with our GPU already loaded). Some research papers use an FPGA, but the code is rarely available, so you only have a starting point.
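
To give a feel for why the CPU-side route is heavy, the frame-to-frame core of feature-based visual odometry looks roughly like this (an illustrative OpenCV sketch, not what we're shipping; a full SLAM system adds mapping, loop closure, and IMU fusion on top, and runs this for every frame):

    import cv2

    def frame_to_frame_pose(prev_gray, cur_gray, K):
        # Track corners between two consecutive grayscale frames and recover
        # the relative camera rotation/translation (translation is only up to
        # scale with a single camera). K is the 3x3 camera intrinsics matrix.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=8)
        if pts_prev is None or len(pts_prev) < 8:
            return None
        # Pyramidal Lucas-Kanade optical flow tracks the corners forward.
        pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                      pts_prev, None)
        good = status.ravel() == 1
        p0, p1 = pts_prev[good], pts_cur[good]
        if len(p0) < 8:
            return None
        # Essential matrix + cheirality check give the relative pose.
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
        return R, t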

You could probably crib the software from DepthAI or similar? We could implement the AI coprocessor they're using and adapt the code. I haven't looked closely enough yet to see whether that's a good use of resources.


Cool, that's helpful, thanks!


I recommend QP if you are going to do FPGA processing using a soft-core or hard-core processor. It's an event-based state machine framework that handles IO really well. A hard-core processor would be more performant and use fewer LUTs, but a soft core will give you more flexibility as far as sourcing FPGAs.
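
To illustrate the pattern (a toy Python sketch of the active-object idea QP is built around, where each object owns an event queue and a run-to-completion state handler; this is not QP's actual C API, and the event names are made up):

    from queue import Queue

    class ActiveObject:
        # Each active object owns an event queue and a current state,
        # where a state is just a handler function. Events are processed
        # one at a time, to completion, before the next one is taken.
        def __init__(self, initial_state):
            self.queue = Queue()
            self.state = initial_state

        def post(self, event):
            self.queue.put(event)          # ISRs / other tasks post events

        def run_one(self):
            event = self.queue.get()       # block until an event arrives
            next_state = self.state(self, event)
            if next_state is not None:     # handler may request a transition
                self.state = next_state

    # Toy example: a camera-capture state machine (names are illustrative).
    def idle(ao, event):
        if event == "FRAME_READY":
            print("start DMA transfer")
            return transferring
        return None

    def transferring(ao, event):
        if event == "DMA_DONE":
            print("hand frame to tracking pipeline")
            return idle
        return None

    cam = ActiveObject(idle)
    cam.post("FRAME_READY")
    cam.post("DMA_DONE")
    cam.run_one()
    cam.run_one()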


Appreciated. FPGAs are something I've been aware of for a long while now but haven't used before, so recs are always good.


What are the potential advantages of an mmwave tracking system? The only previous commercial application I can think of was the Pixel 4, which was very range- and accuracy-limited, and power hungry.


You get position/velocity/angle data directly, and it's less power hungry than running high-res cameras specifically. Also, some research papers show higher tracking accuracy with mmwave+IMU than with RGB+IMU.

So less processing + potentially less power + better performance, in theory.
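
As a toy illustration of why the direct velocity measurement helps (a rough Python sketch under simplified assumptions, not our actual tracking code; the blend weight and the gravity compensation are hand-waved):

    import numpy as np

    def fuse_velocity(v_est, accel, dt, v_radar=None, alpha=0.02):
        # One step of a toy complementary filter for head velocity.
        #   v_est:   current velocity estimate (3-vector, m/s)
        #   accel:   gravity-compensated IMU acceleration (3-vector, m/s^2)
        #   dt:      time since the last IMU sample (s)
        #   v_radar: direct Doppler velocity from the mmwave radar, if a
        #            measurement arrived this step, else None
        # Integrating the IMU alone drifts; blending in the radar's direct
        # velocity bounds that drift without per-frame image processing.
        v_pred = v_est + np.asarray(accel) * dt
        if v_radar is not None:
            v_pred = (1 - alpha) * v_pred + alpha * np.asarray(v_radar)
        return v_pred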


Does the device have a CPU, or does it need to connect to a PC?

What is the predicted price point?

How do you fit prescription lenses?


A NUC-based compute module will be included; it plugs onto the back of the headset.

The predicted price point is about $2-2.5k.

For prescription lenses, we'll figure something out. We're trying to keep enough eye relief to support glasses, and we'll at least have provisions for mounting prescription lenses.

If we can, we'll be able to supply prescription lenses with the headset (for a surcharge) or collaborate with an existing vendor to provide lenses.


2.5k price point, ouch.

On a side note... are you on Kickstarter?


We will be, once we've sorted out all the blocker issues and our prototype is complete.


Thank you for the clarification.

Let me ask you a quick question that is surely on the minds of many other HNers:

Are you guys using the prototypes for day-to-day work (i.e., are you dogfooding Simula hardware and software)?


Yes. We're building the Simula One because we ourselves wanted to work all day in VR, using the best OS (Linux). Here's a fun video I made working on Simula, in Simula: https://youtu.be/FWLuwG91HnI

And I love seeing the progress unfold on our own headset, because I can't wait to start working in it.


I should also say that when you work a lot in VR, you get intimately familiar with its improvement bottlenecks:

1. *Text quality.* Text quality is really important, especially to sustain long work sessions. This is why we're pushing as hard as we can on resolution. It's more important for work than it even is for gaming, because gaming doesn't require you to sustain focus on detailed text for long periods of time.

2. *Headset bulkiness/portability.* Headsets are too bulky, and tethered ones are annoying to work with. While the Simula One won't be as lightweight as headsets will become 10 years from now, it will at least be truly portable (not requiring you to tether to a PC with cords or over WiFi). We are also planning on using something like a halo strap to make flipping the headset up and down easier (instead of requiring you to take the headset fully off or on).

3. *Real world stuff.* VR forces you to be very proficient at touch typing. But sometimes you want to be able to see your keyboard, or see your surroundings, etc. We are planning on having an "AR mode" for our headset to help accommodate this.


Points 1-3 make a lot of sense to me.

What about 4. Impact on the neck?

Is there any risk of repetitive-stress neck injury from all the looking up and down?


Good question. That's an open research area for this particular use case. Some research on flight helmets/night vision goggles indicates that a counterweight alleviates neck strain, but that work doesn't cover the specific up/down motion that'd be more common here.


"because gaming doesn't require you to sustain focus on detailed text for long periods of time"

Have you never played VR Zork? You haven't lived yet.


Awesome. That means more than any tech specs to me.

Wait a sec, are you on HN via Simula right now?


How good is the battery life for the prototype headsets? If the headset is small enough and lasts long enough, I could totally see myself tossing it in my backpack instead of a laptop.


Thanks for posting here. I was getting ready to post something about the resolution, but... dang, now I'm excited. Good luck and god speed!


Your humility is endearing. Seriously.


> They won't supply us with parts unless we have some sort of product interest, but we can't generate any sort of product interest unless we have some sort of website. It's very much a chicken and egg sort of problem.

Well I reckon that problem is now solved.


I'm very excited about SimulaVR and watching you guys closely now. Great work, and I can't wait to get my hands on the headset!


Shut up and take my money!


Any price indication? $500, $1000, $2000?


Tentatively 2k.



