In line with the answers in the FAQ and what other commenters have mentioned, this is intentionally made to be hostile and out of touch, and I absolutely love it for that.

There’s not enough humor and parody in startups and software. Bring it on :)

Also, definitely not paying for a service like this. Especially not with this level of professionalism.

I think I’ll go to the competitor, Oracle Advanced Image Sharing for Hadoop…


How DARE you


The page seems like a first draft. Clicking “Guide” in the hamburger menu even shows Lorem Ipsum.

EDIT: And it seems they just used this project as the base: https://landscape.cncf.io/


OP in this case was referring to the author of the article, who used the term “mobile phone”.


The main point I tried to make, but apparently muddied through subpar writing, was that visionOS is ultimately an operating system more akin to iPadOS than macOS, which I think results in people being less able to deeply understand the inner workings of the computer.

An iOS-like operating system aims to abstract away the file system and how software runs, leaving the user merely running apps.

You can use a code editor to write code on iOS or iPadOS, but you wouldn’t be able to run that software outside of the code editor on the device. You couldn’t use it to build an application binary or executable to be run directly on the device.

I did not aim to make any points about the development experience of building apps for the Vision Pro, but rather about the restrictive nature of the operating system when using the Vision Pro as a primary computing device.

As mentioned elsewhere, next time I’ll make the post more informative and less of a quick “does anyone else feel this way too?”, as I see that this does not fit the standards of the HN audience.


I agree, I’ll do better next time. Or post it to a blog and not put it on Hacker News.


I think DrMiaow is referring to how the APIs and SDK could become more “open” over time in the sense that more functionality is exposed to developers.

One example is that right now Apple makes a distinction between Immersive (AR, digital objects superimposed on the real world) and Fully Immersive (VR, fully digital, real world is no longer visible).

In Fully Immersive mode, a developer gets full control over the entire screen and can draw using a custom rendering engine on top of Metal.

But in Immersive mode, RealityKit handles all rendering, and you’re restricted to whichever rendering features RealityKit exposes. There is some support for custom shaders, but it’s quite limited.
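
For a sense of what that looks like in practice, here’s a minimal sketch of the shader-graph route in Swift. The package name, material path, and realityKitContentBundle are assumptions taken from the default visionOS Xcode template; you author the material graph in Reality Composer Pro rather than writing Metal by hand:

    import RealityKit
    import RealityKitContent  // the template's content package (assumption)

    // Load a shader-graph material authored in Reality Composer Pro and
    // apply it to an entity. Arbitrary hand-written Metal shaders can't
    // be injected at this layer; you're limited to the graph's nodes.
    func makeShadedSphere() async throws -> ModelEntity {
        let material = try await ShaderGraphMaterial(
            named: "/Root/MyMaterial",   // hypothetical material path
            from: "Materials.usda",      // hypothetical package file
            in: realityKitContentBundle
        )
        return ModelEntity(
            mesh: .generateSphere(radius: 0.1),
            materials: [material]
        )
    }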

Unity solves this issue using PolySpatial, which translates Unity’s scene representation to RealityKit.

In that sense I can imagine (or hope) that Apple opens up the platform more, similar to how it added the App Store to the iPhone after one generation.

But unless some drastic internal push and change in direction happens, it will remain iPadOS-like.


I’ve also been following the development of the Simula One[0], but it will be really hard to compete with Apple’s design and technical execution, and I’m afraid it could stay in the same category as the Linux desktop.

In this light, having a company like Valve put its weight and resources behind more open ecosystems would be amazing.

[0] https://simulavr.com/


I’m sorry you feel that way, and looking at it now I do agree. Next time I’ll provide a more thorough analysis that is more in line with the standards of Hacker News.

I was mostly curious whether the sentiment of the post resonates, as I feel much of the coverage of the Vision Pro is focused on either technical prowess or how it could change the way we work with computers, rather than on the openness of XR operating systems.


I’m curious how well this could work as a portable multi-monitor setup. If that works well in combination with my MacBook, I’ll buy it in a heartbeat. Imagine just sitting down anywhere and immediately having 3 large vertical screens with code and documentation.


Resolution is the killer for that application. The panels in the Vision Pro are dense, but through the distortion of the optics, and with those pixels stretched over your entire field of view, only a fraction of them will actually be displaying your virtual monitor(s). As you scale a monitor down in virtual space, it also becomes lower resolution. For comfort and ergonomics you don't actually want a giant monitor that fills your entire FOV; there's a reason why real monitors have settled around 27-32" despite larger panels being available.

Ironically, existing Apple users are the most likely to notice this limitation, since their computers have had very high-DPI displays as standard for a long time.


23 million pixels, divided by two, makes 11.5 MP per eye. Let's assume it's nearly square, so taking the square root gives us about 3391 pixels in height and width.

This graphic claims 3400x3400 resolution, so we're not far off:

https://i.redd.it/apple-vision-pro-resolution-vs-other-heads...

Frustratingly, the spec sheet doesn't list FOV, but let's assume it's on the order of 110 degrees. That gives us about 31 pixels per degree.

Right now, I'm sitting about 30 inches from a 24" diagonal 1920x1200 monitor. That gives a vertex angle of about 45 degrees, and it's 2264 pixels corner-to-corner, so I've got 50.3 pixels per degree. That's not high DPI, but it's acceptable. I also have a 4K monitor, which is lovely, but given the chance I'll take more real estate over a small, high-DPI display. And with the Vision Pro you can put monitors anywhere.
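
If you want to check the arithmetic, here's a quick sketch (the 110-degree FOV is, again, an assumption, since Apple doesn't publish one):

    import Foundation

    // Headset: ~3400 px across an assumed ~110-degree FOV
    let headsetPPD = 3400.0 / 110.0                     // ~31 ppd

    // Monitor: 24" diagonal, 1920x1200, viewed from 30 inches
    let diagonalPx = hypot(1920.0, 1200.0)              // ~2264 px
    let vertexDeg  = 2 * atan(12.0 / 30.0) * 180 / .pi  // ~43.6 degrees
    let monitorPPD = diagonalPx / vertexDeg             // ~52 ppd (~50 with the 45-degree rounding)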


Another detail to consider is that your 24" monitor is probably benefiting from subpixel font rendering, which effectively triples the horizontal resolution of text. That can only work on a pixel-perfect display, not on a virtual display that is scaled and transformed before being shown on a real panel.

I agree low-DPI monitors are still acceptable (I'm using one now), but that's really contingent on subpixel fonts.
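
For intuition, here's a toy sketch of the idea (not any real rasterizer): each LCD pixel is three side-by-side R/G/B stripes, so a text rasterizer can take three horizontal coverage samples per pixel and light the stripes individually.

    // Toy model of subpixel rendering: three horizontal coverage samples
    // per pixel, mapped onto that pixel's R, G and B stripes. The mapping
    // only holds when the output lands 1:1 on the physical pixel grid;
    // scale or warp the image (as a headset compositor must) and the
    // stripe alignment is destroyed.
    func subpixelRow(coverage: [Double]) -> [(r: Double, g: Double, b: Double)] {
        stride(from: 0, to: coverage.count - coverage.count % 3, by: 3).map { i in
            (r: coverage[i], g: coverage[i + 1], b: coverage[i + 2])
        }
    }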


I don't personally use Apple products, but I do have some trust that they can get font rendering right. The browser window I'm looking at right now is a subset of my monitor, and could be considered a 'virtual display', but subpixel rendering works in that region. There's no reason other than laziness or lack of vertical integration that it wouldn't work in a virtual environment.


Keep in mind that the lens distorts the pixels, so the density is not uniform across the FOV. Word is the PPD is 50-70 in the center, and anything beyond 60 isn't visible to the eye anyway.


This is precisely the use case I’m interested in. I work with a single monitor using a tiling window manager and multiple workspaces. I usually have one workspace with email and iMessage, another with code + docs + terminal, another with some desktop app, etc. If this device can make that more efficient and allow me to separate contexts better, I’m going to go for it. If I can have that AND be on the go, it’s absolutely worth the money.

I’m a bit loath to go in on a $3,500 device without trying it out though, so I’m waiting at least a few weeks for more reviews to trickle in and the hype to die down before checking it out. I am particularly interested in how heavy it feels after wearing the thing for 4+ hours, which is not something you can really get a vibe for at the Apple Store.


Unless I have missed it (someone please correct me if I am wrong), it only does 1 screen for a Mac.

But I agree, and the ability to do that really would make the price "reasonable". I mean, I have 3 monitors on my main setup and one of those alone cost $1200 (I realize I didn't need to do that, but it still puts the price in perspective if I can have 3 virtual monitors on this thing).


As others have said, I think it just does 1 screen, but it's a 4K screen you can make whatever size you want. I run 3 monitors (2 vertical on the sides, one horizontal in the middle) and I love that setup, but with a window tiling app I think I could make 1 big screen work just as well.

This is why I preordered it, and if it doesn't deliver on that experience I'll probably return it. Productivity is my #1 use case, with things like content consumption being a distant second.


I use the Quest 3 for this and it works really well (I have an M1 MBP).

I use the Immersed app and I'm able to code for 3-4 hours a day with the headset.


I recently retired my Quest 2 in favor of a Pimax Crystal QLED. The resolution is superb, and it's well-suited to my primary use case (gaming/flight sim), but I will say that the one app that I miss from my Quest 2 is Immersed. I _really_ wish I could use my Crystal for productivity, but I've found no other virtual desktop app on any platform that can compete with Immersed.


I’ve tried it with the Quest Pro, but latency and connectivity issues, combined with the clunky Quest operating system and the confusing Immersed user interface, pushed me away.

I want it to just work. Put on the headset and immediately be connected.


What do you wish was better?


Do you have any material online on Tiagix OS? I can't seem to find anything from a quick Google search.


I think there's a build of an older version of it on the YoYoGames archive. The archive didn't save the screenshots that the original website had, sadly.

I also spent a long time working on a very large update, which I might have archived somewhere locally but never shared online.

