I've tried it in person and it was truly amazing. They used some very fancy tech when I saw the demo, so I'm thrilled it is finally being announced and possibly shared with a larger audience.
Explanation of why it is amazing: it totally fools your perception. No glasses or goggles--instead, an 8k display with special glass that lets each of your eyes see different pixels (a light field).
They also optimize the sound and everything else, so, as all the testimonials point out, you actually feel like the person is in front of you.
It also works for the "cube" around them, so if they hold up some object, it also feels like that is in front of you.
Because optometrists are illegal in my country (here, only medics can decide what glasses you can use), I currently don't have stereoscopic vision, although my brain CAN do it if the correct images were somehow sent to my eyes. (One of my eye muscles is slightly shorter than on the other side, so the image in that eye is shifted unless an optometrist designed me glasses with a prism.)
So when I watched Avatar, an actually well made 3D movie, it literally felt more real than reality, despite it being obvious fantasy with aliens, floating rocks and all that stuff.
EDIT: for those wondering, I am from Brazil. Here, medical professionals often sue the shit out of anyone offering any service remotely similar to an optometrist's; they are quite aggressive about it, and some have even tried to make discussion of the subject illegal. When I was trying to get the prism and asked my medics about it, one of them went really ballistic; I honestly thought the guy was going to punch me. I believe the reason is that for many medics, designing glasses is their only source of income, a guaranteed one, since here it is ALSO illegal to buy glasses without a medic designing them for you first, even if the new glasses are supposed to be identical to the old ones!
Something used in some countries is a briefcase-sized glasses-making kit. Eyeglass lenses have three parameters: spherical radius, cylindrical radius, and axis of the cylindrical curvature. The trick is that for round lenses, you can use the same lens for all axis angles, which reduces the number of combinations to a set you can carry around. Once the right combination of lenses has been decided, a notcher is used to cut a small notch in the side of each round lens so it locks into the frame and can't rotate.
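To make the combinatorics concrete, here's a rough Python sketch of how much the round-lens trick shrinks the kit. All the diopter ranges and step sizes below are made up for illustration, not taken from any real kit:

```python
# A trial lens is (sphere, cylinder, axis). With round lenses, the axis
# is set by rotating the lens in the frame, so it adds no lenses to carry.

spheres = [s / 4 for s in range(-32, 33)]   # -8.00 to +8.00 D in 0.25 steps (assumed)
cylinders = [c / 4 for c in range(0, 17)]   # 0.00 to +4.00 D in 0.25 steps (assumed)
axes = range(0, 180, 5)                     # axis angles in degrees (assumed)

# Kit size if every axis needed its own lens:
with_axis = len(spheres) * len(cylinders) * len(axes)

# Kit size when the axis is free (round lenses, rotate then notch):
round_lens_kit = len(spheres) * len(cylinders)

print(with_axis)       # tens of thousands of lenses
print(round_lens_kit)  # a briefcase-sized ~1100 lenses
```

The ~36x reduction is what turns an entire lens workshop into something portable.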
>my brain CAN do it, if I had the correct images sent to my eyes somehow.
>when I watched Avatar, an actually well made 3D movie, it literally felt more real than reality
Buy an Oculus Quest 2. (Or any VR headset, but that's the best value at the moment.) It sends separate images to each eye. You should get that same 'more real than reality' feeling. It may even train your eye muscles to see in 3D; I know I've seen some research in that area. (https://www.seevividly.com/ comes up in a Google search, though it seems to be prescription only.)
Watching 3D movies in the theaters just gives me a headache but Oculus actually works fine, I've played a few games and it was amazing...
I'm asking in case I might need to look for one in the future.
In places where they are legal, they spend their time learning more and more about optics, physics, math and eye anatomy; they don't study diseases, infections and so on.
As a result, you can't rely on one to fix certain things where you do need a medic, but if you need fancy lenses, they will calculate them rather than just use a number from their measurement machine like ophthalmologists do.
They also would be helpful to design proper 3D glasses and whatnot, holograms and so on.
This is a major screw-up and inefficiency on this country's part.
Thanks for explaining the concept more in-depth too.
Back then people were even saying it was pseudoscience or a scam, but seemingly there is ongoing research showing that the article wasn't lying.
Maybe it depends on the exact condition you have. Is it Binocular Vision Dysfunction?
You sit down and you forget that there's technology happening. The person is there, in front of you. I don't know how else to describe it.
The testimonials in the video aren't exaggerating compared to my experience.
I mean, not in the same room, but down the hall or so.
You can have the most perfect rendering in the world and 100ms of latency will be enough to make the experience miserable.
It seems like the imaging/rendering technology that Google is using is much more advanced.
That said, there was no “emotional connection” like the Google one is described as offering. It was still a video call. There was no forgetting that. I suspect the 3D and the apparent physical closeness to the display add a lot.
Partly there are just affordance issues, like eye contact being physically out of alignment unless you start using two-way mirrors.
It is impossible for me to explain how/why it works so well.
WebAssembly SIMD is coming to Chrome as well. 2D images and video that only consisted of RGB and Alpha channels may appear downright primitive to future generations as depth camera rigs gain distribution ;)
I think this is cool tech, and valuable. I'm just not sure that it offers a communication benefit over well-lit, well-mic'd, wired, low-latency, 8K videoconferencing.
Maybe there's some 3D emotional perception face processing stuff that we have deep in our brains that can immensely benefit from this, but I'm skeptical. I think simply doing 4k or 8k low latency high quality videoconferencing might be a 90 or 95% solution without needing special cameras/displays.
Yes, having low latencies and high definition video is an important aspect of this, but the 3D part is no gimmick. Once the technology improves and gets affordable this is a game changer for how we communicate online. The step after that are holographic displays, and since we'd be used to 3D models and smart displays, it probably won't feel like such a big jump.
I'm _super_ excited about this project. Hopefully Google doesn't axe it. (:
I've got one, and the first minute is always spent noticing how low-res the eye screens are; then, as soon as the game starts, I've forgotten and I'm _there_. The 3D part makes up for the low quality.
The state of video conferencing today is a poor one and I'm very excited for something that can change the industry like this.
If Google wanted to make me believe they care about videoconferencing quality, they'd have a 4k 60fps option that auto-enables in Meet if it detects everyone on the call is on wired gigabit with a 4k camera.
Even 100mbps is sufficient for a 1-on-1 4k video call, as high-bitrate 4k is 30-40mbps. Most commercial office buildings in business districts have it available. Even Starlink (20mbps up) should be sufficient for 1-1 30fps 4k videoconferencing with a lower bitrate.
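A quick back-of-envelope check of those numbers (the stream bitrate is the figure quoted above; the overhead factor is my assumption):

```python
# Does a 100 Mbps link cover one high-bitrate 4k stream for a 1-on-1 call?

stream_mbps = 40     # high-bitrate 4k video, per the comment above
link_mbps = 100
overhead = 1.2       # assumed ~20% for protocol framing, audio, FEC, etc.

needed = stream_mbps * overhead
print(needed)                 # ~48 Mbps
print(needed <= link_mbps)    # True: fits with headroom to spare
```

Even with a generous overhead assumption, a single 4k stream uses less than half the link.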
>Maybe there's some 3D emotional perception face processing stuff that we have deep in our brains that can immensely benefit from this, but I'm skeptical.
>I think simply doing 4k or 8k low latency high quality videoconferencing might be a 90 or 95% solution without needing special cameras/displays.
From my experience, 4k or 8k doesn't matter. Sound quality actually matters most, really clear low latency audio alone will give you a surprisingly strong sense of presence.
Video quality is important, but 1080p is enough; beyond that, the lighting and latency matter more.
Equally important from my personal POV is video size -- physical size. Take a cheap 65-inch TV, turn it vertical, and talk to someone on that. When you're talking to someone who is actually life-size, the sense of presence is vastly improved, even at the exact same video quality. And TVs are so cheap this doesn't seem like much of a technical barrier.
If you just screen share from your cell phone to your 65 inch TV and video chat -- holding everything else equal for audio and video quality -- it's SO MUCH BETTER.
Even from viewing the short demo, the stereo display alone is an entirely new dimension that no amount of studio lighting will recreate. While better lighting and audio setup would certainly improve the average person's videoconferencing experience, this looks to be a genuine step beyond.
That said, we've been seeing holographic-display prototypes for the better part of a decade, and it'll be interesting to see whether this actually pans out or fizzles.
The Oculus Quest 2 doesn't do anything like what you're describing -- it essentially just pipes in stereoscopic video from its stereo cameras and stitches them together in a trivial way. It doesn't attempt to build geometric representations of objects in your environment at all.
(For guardian functionality it does very simple things, like using the depth cloud to figure out the height of the floor and whether there are points inside the guardian that shouldn't be there, but that doesn't involve inferring object geometries.)
Then it would be done already.
The traditional glasses-free 3D monitors rely on special coatings on the glass to "split" the direction of a pixel. Some coatings are electronically controlled (like Sony's), some are physical (lenticular filters). It still displays a 2D projection of a 3D image, just twice.
By contrast, my understanding is that light field displays project the entire volume. So you don't have a pixel, you have a voxel, and each eye gets a different view of the same voxel.
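A back-of-envelope sketch of the resolution cost of this approach. A quote elsewhere in the thread says the 8K panel's resolution is divided into 45 viewing directions; the purely horizontal split below is my simplifying assumption:

```python
# Why light field displays need such a high native resolution:
# every viewing direction gets only a slice of the physical pixels.

panel_w, panel_h = 7680, 4320   # 8K UHD panel
views = 45                       # viewing directions, per the thread

# Assuming the views are split purely horizontally:
per_view_w = panel_w // views

print(per_view_w, panel_h)       # each direction gets a ~170-pixel-wide strip
```

In practice lenticular layouts trade off resolution in both axes, but the point stands: an 8K panel delivers far less than 8K to any single eye.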
Many ordinary folk already have 4k TVs at home and 8k will probably become commonplace in the future. The real bottleneck will be good low-latency broadband both up and down, but fibre to the home should make that easier. I wonder if ISPs in the future will offer QoS guarantees to enable really good videoconferencing?
I mean, Zoom, Meet et al are much better than video conferencing solutions like Webex even from a few years ago, but it's hilarious how much drama there still is around video calls. "oh, sorry, I didn't realise I had muted myself", "can you hear me...?", "I think we just lost Steve", and so on. I'll be glad to see all of that just go away.
I have a 300/100 Mbps connection, which costs 20 euros a month.
You can't do weird texture mapping or lossy compression and expect people to really seem like they are there. Even if you don't notice that stuff normally watching a video, I think you'll notice it when you're interacting like someone is really in front of you, and that will throw off the immersion.
That said, my intuition is they aren't doing a pure video encoding solution. The fact that they talk about 3d modeling leads me to believe they are doing a combination of model + texture to get the realistic results. That would significantly decrease the amount of bandwidth and computational power needed. Over a low bandwidth situation you'd simply need to send model updates and do some smart interpolation to determine what things should look like.
Similar to the concept that playing a 3d game requires MB of resources but recording the same game at 8k would require a boatload more memory.
My assumption is they are using LIDAR to get a good model, high quality cameras to texture things, and a nice AI to stitch things together and interpolate when data isn't arriving fast enough.
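To illustrate why a model-plus-texture approach would slash bandwidth, here's a toy comparison. Every number below is a guess for illustration, not anything Google has published:

```python
# Streaming raw frames vs. streaming updates to a shared 3D model.

fps = 30

# Uncompressed 4k RGB video, byte count per frame:
raw_frame_bytes = 3840 * 2160 * 3
raw_mbps = raw_frame_bytes * 8 * fps / 1e6

# Hypothetical per-frame model update: a few thousand floats of
# mesh/pose deltas (4 bytes each), textures sent once up front.
model_update_floats = 5000
update_mbps = model_update_floats * 4 * 8 * fps / 1e6

print(round(raw_mbps))        # thousands of Mbps for raw frames
print(update_mbps)            # a few Mbps for model deltas
```

Same idea as the game-recording analogy in the next comment: the model is the expensive one-time asset, and per-frame updates are tiny.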
$20,000 is a bit out of the reach of most people, and reserves this for business use or desperate need.
I could see a bunch of execs getting this installed in their home offices as a company perk, and then using it for personal reasons too.
Even for business use a price with 5 digits would make it confined to a few executive offices.
I count at least 5 waves of 3D technology starting in 1851 with the Brewster Stereoscope. Each time there's a surge of popularity driven by the legitimately amazing initial experience. And each time people slowly stop caring. People were incredibly excited about Avatar, and many thought it would change the movie industry. But how many people now go out of the way to see something in 3D?
In that respect I'm very excited for Starline and related technologies--"trying them" will finally just involve being in the line of sight of one rather than having an attendant fit some goggles onto your head.
Personally, I thought VR glasses would take off when I had a go with them in the late '90s.
Today with remote working I am on the end of a microphone without a picture of myself or my colleagues in the chat.
Yet I am looking forward to being in the office.
I see what Google are trying to do, but we've had wave after wave of this. VR is a classic: if only we can solve the motion sickness!
On the family level those zoom calls with my niece are now plain telephone calls. Or WhatsApp messages. We stopped caring.
Replicating the experience of in-person communication much more closely than video and 2D displays will ever do. That's a noble research goal if nothing else, I don't get the skepticism.
There are several reasons 3D content and previous generation displays didn't take off, but there's no reason to believe a revolutionary new approach and product couldn't change this (e.g. electric cars were invented in the 19th century and are only now becoming popular). AFAICT the real time photogrammetry they're using here along with the no-glasses 3D display is a major leap forward. If they can get it cheap and reliable enough to mass market, it would be a game changer.
I certainly know what kind of display and teleconferencing software I want when the next pandemic hits, and it's not what we have now.
That's not a problem people express much, at least not in ways where "3D" points to a solution. When I want to see people in person, it's not because of a lack of stereoscopy. I want to hug them, to break bread with them.
> there's no reason to believe a revolutionary new approach and product couldn't change this
There is indeed! Specifically, the many times we have already had revolutionary new approaches and products that were met with great enthusiasm in the market for a few years.
I'd add that the telephone was not only a very successful technology for a century, audio calls still remain very popular. (I'm not sure what your work calls are like these days, but quite a lot of people turn off video in mine.) The lesson I take from that is that people mainly self-generate the feeling of interpersonal connection, and they can do it with very little in the way of cues. To me that's another strong indication that no new 3D technology will make much of a difference.
I will say though that people tend to assume that each new technology will replace the preceding technology (text->audio->video->VR/light-field->...), but in fact it tends to end up just supplementing the existing tech.
Of course, I'm not saying this will replace physical communication (I should've said "simulating" instead of "replicating"). But it's a clear step forward for traditional teleconferencing solutions. What do you think is the next leap from 2D video and displays? We're at the point of diminishing returns as far as increasing resolution goes, most consumers don't have a need for 8K or higher res displays. VR/AR is chugging along, but we're still a few generations away from mass market adoption.
> I'd add that the telephone was not only a very successful technology for a century, audio calls still remain very popular.
I don't understand. Video calls were never meant to replace audio calls, they just added a new sensory experience. It's perfectly fine for both technologies to co-exist for different moments and preferences. In a similar way this 3D approach is an extension to traditional 2D video conferencing if people have the equipment and prefer it. Judging by the expressions of the people in the demo and some of the comments here, you're underestimating how impactful this could be, especially if it's polished and cheap enough.
I don't have much reason to think there is any near-term "next leap from 2D video and displays". 2D renderings are more than 40,000 years old. They have improved drastically in resolution and fidelity. Computers added dynamism and interactivity to that. It's really not clear that 3D rendering adds much.
> Judging by the expressions of the people in the demo and some of the comments here, you're underestimating how impactful this could be
I am not, because that kind of novelty-driven excitement has driven every wave of popular 3D rendering technology for 170 years. VR/AR has been close to mass market adoption for 25 years. We've just been through an unprecedented period of demand for at-home entertainment, and the hardware that many said was finally, finally the thing turns out once again not to matter.
People have had those excited faces every time. There were people jazzed about the possible impact every time. The Brewster Stereoscope. The ViewMaster (with the US Defense Department purchasing 6 million reels on the theory it would revolutionize training). 3D movies in the 1950s. VR in the 1980s and 1990s. 3D movies again this century. 3D TV for 2 CESes. And then the latest wave of VR, which you agree is still not there despite fantastic investment from companies floating in cash.
Could it be different this time? Maybe! But if we keep measuring it by novelty effects, we're setting ourselves up for the exact same failure that keeps happening.
Surely the improvements in manufacturing processes, faster hardware and better screens are partly responsible for that. The iPhone as a concept has existed since the 1980s, and revolutionary ideas like what General Magic tried to produce in the 90s were just too early to be successful. When Apple tried it again in the late 00s it was a massive success, but technology finally reached a point when it was commercially feasible.
So it doesn't take much to push a product to mass adoption. Just the right industry circumstances, a manufacturer willing to take the risk and capable hardware and software existing to make it happen.
> We've just been through an unprecedented period of demand for at-home entertainment, and the hardware that many said was finally, finally the thing turns out once again not to matter.
Are you dismissing the potential of VR/AR as well? The current innovation wave we're on is much bigger than whatever we had before. Headsets are becoming cheaper, more comfortable and accessible, and the visual tech we have now is leaps and bounds better than previous generations. Once we get to being able to put on sunglasses and experience different worlds, though likely sooner than that, the market adoption will likely go through the roof.
> People have had those excited faces every time.
I think it's different this time. It's not just it being 3D, but the merging of new generations of light field cameras, face/eye tracking, powerful ML algorithms, low latency networks and revolutionary displays is miles ahead of previous attempts. You can't just compare this to the ViewMaster and last century VR. The improvements here are much more substantial, and if they can make it cheap and reliable enough it could be a ground breaking product.
Every one of the products I named was greeted at the time exactly like you are now. The new technology was amazing! The potential was unlimited! And for the repeats like 3D movies and VR: It's different this time!
I agree it might be different this time. Nobody's denying that. Aliens might land tomorrow. What I'm saying is that because of the clear pattern of "OMG novelty! OMG possibility!" around 3D tech that has failed repeatedly for 170 years, you can't just uncritically make the same arguments. If you want to be persuasively realistic, you have to explain why the 3D novelty effect isn't the major driver this time. Because the long evidence is that 3D displays just don't matter enough for people to stick with them.
Do you remember the Virtual Boy? Or the old cheap red/green paper glasses, and more recently the plastic glasses that are uncomfortable, darken the picture and give you headaches? These are all issues that better technology can solve, reducing the barrier to entry. A display that shows a 3D image to every viewer, without glasses and with a head-tracking effect, can potentially solve a lot of them. Add similar improvements in camera technology, networks (5G, anyone?), ML, etc., and all the pieces are starting to fall into place for what could be a revolution in how we communicate electronically.
Or Google might just axe it as they've done before ¯\_(ツ)_/¯ (Still bummed about Project Ara...)
Obviousness is a feeling people have about ideas in their head. Feelings can be useful or misleading. Every single person who got behind previous generations of 3D thought that it was "obviously" the next step. Many thousands of people had that feeling each time, buying in to a new platform. Great sums of money were invested by smart execs. All of those people were wrong each time. All of them. That you have the same feeling is not proof that it will be different this time. Indeed, the history suggests you should distrust that feeling.
Color video is a great example, so thanks for bringing it up. Color TV and color movies were quickly and widely adopted despite the extra cost and complexity. But 3D movies and TV have failed. Wearing a pair of glasses is not a major burden; 64% of Americans do it every day. Millions of people tried 3D movies and gave a collective shrug. The pretty obvious lesson to learn from those waves is that people are drawn to the concept but actually do not care in practice once the novelty wears off.
Another way to look at it is that people don't even care about stereoscopic vision much in actual life. Humans have a lot of mechanisms for extracting spatial information from the world, and the stereo-ness of it doesn't matter much. About 10% of people don't have it; they can still drive just fine. My grandfather, for example, was blinded in one eye as a kid, and nobody ever noticed. You can try it yourself; go out for a walk and keep one eye closed. Your 3D perception will be basically unaffected except for relatively close objects.
So sure, as I've said repeatedly, anything can happen. I'm just saying there is good reason to believe this will not happen, and excellent reason not to just assume it will -- to see this not as a technological problem, but as a problem of demand.
As to citations, I'm not sure what you're looking for. I've mentioned the Brewster Stereoscope twice in this thread. Ditto the ViewMaster. What do you need that isn't in the first page of Google results for those?
I've got a hearing problem where I struggle to make out what people are saying on a phone, but with a video call I can add lip-reading and visual cues, which helps me follow the conversation.
I hate this example, and it's like one of the most common ones on HN.
As said by thousands of people and many documentaries before me, the electric car had numerous real conspiracies working against it, some involving the most powerful financial groups in the world.
The 'electric car' wasn't made popular and possible by recent technological strides -- although it was made better.
The success and popularity of the electric vehicle was made possible by financial shifts away from petroleum exploration, facilitated by dwindling profits and increased scarcity of oil, and encouraged by a movement towards sustainability both from the social culture of the world and the various actions of government from country to country.
Yes, range has improved. Yes, the cars are more intelligent and better to drive -- but these improvements have been seen across the automotive industry since its inception, ICE-based vehicles included.
The real motivating factor behind the electric car is the environment that now exists that allows such endeavors to be profitable -- an environment that not only includes technological improvements like you hint towards, but more importantly it's an environment that fosters development of such things due to the existence of a profit incentive and increased governmental-body cooperation.
All that said, unless Cisco is even more evil than I realized (woah..), I have a hard time presuming that video conferencing has been held back by the same sort of conspiratorial under-handed back-office dealings that slowed the progress of EV adoption.
I mentioned that example because the previous two comments were dismissing this attempt at 3D teleconferencing on the grounds of it being old technology with past failures. But I think we agree that it takes a certain industry environment along with a technical leap to make a technology truly popular. Even if that never ends up happening in this case and it remains a niche product, we should applaud the technical merits here instead of being dismissive.
For that matter video teleconferencing period has only just really hit critical mass even though there were videophones at the NY World's Fair in the 1960s and camera systems have been around in conference rooms for a few decades.
What really happened was that it became accessible to more or less anyone with a laptop and even a marginal network connection, for basically no cost. And, of course, the last 18 months really pushed it over the finish line, if it wasn't already there.
Tell me a problem WhatsApp solves. It's just a fancy SMS.
Tell me a problem X solves. It's just a fancy Y.
There's no such plausible story for 3D video calls. It's not like people are already demanding 3D displays for any of their other 3D stuff. The 3D first-person shooter, for example, has been around for decades. But 3D displays have never been popular despite being available for at least a decade.
Another issue here is that this is being sold as like "being there", but it's more like "being there at a jail": you can see the person but can't get close to them, can't touch them, can't hand them anything. I have immigrant friends who do calls with their parents basically daily. They do it with mid-grade consumer tech, even though they could easily afford big screens and high-res cameras. That suggests to me that image size and video quality are not as important to this market as one might think at first blush.
And yes, OF COURSE I predicate buying on it actually delivering to the extent described in the marketing video; it's ridiculous that I have to spell that out.
That to me demonstrates that, contra your initial assertion, there isn't a big market for this.
As to the last part, you've gone from "I will get it" no questions asked to what sounds like "I will get it if it checks out". But that's a big jump. You've gone from an early adopter to a mainstream purchaser. From one of those people that buys things on Kickstarter to the much, much larger group who want to see proof of value before they buy.
I think that's very reasonable, but it's exactly the kind of reasonable behavior that has killed 3D over and over in the past. By definition novelty doesn't last, so by the time mainstream purchasers might be ready, the social proof just isn't there.
But I've done a lot of customer development over the years. People say all sorts of things. The question when doing market analysis is what they'll actually do. And the better guide there is what they're actually doing , not what they say they would do.
So when you say that "everyone in immigrant communities" will buy it, I'm going to be skeptical because what people are actually doing is nothing like that. They could already move in this direction with existing tech. As far as I can tell, they aren't. If you have evidence otherwise, I'd love to see it.
I also can't find evidence of third parties competing with shared higher-quality video call setups, which is what we'd expect to see if the demand were there but the price hadn't fallen enough yet. That's the pattern we saw with video arcades and internet cafes/wangbas, for example. Wangbas are still getting by because they've shifted to gamers, who are willing to pay up for better hardware and connections (and room for team play). But I can't find mention of any similar shift for video calls. E.g. India's PCO network seems to be in rapid free-fall, not reinventing themselves around high-quality video calls. That suggests what all the other market data suggests: to the extent people want video calling, relatively low-quality gear like smartphones and laptops are in practice sufficient.
I gather it mostly solves that SMS is expensive in a lot of contexts. Personally I never use it because most of the people I text with have US phones. And the one person who doesn't, we use Facebook.
If it's as seamless as it looks in the video that would be truly novel and exciting.
The big commercial need it turns out isn't so much realism as it is flexibility to accommodate people dialing in from different systems - phones, laptops, different types of telepresence setups from small single room to big conference rooms or even telephone connections etc.
Many years ago, we were all wowed by the life-size realism and had people come into offices for it. Nowadays these meetings have lots of people crammed in across multiple screens, dialing in as they please, and all the better for everyone :-)
There are demos of binocular 3D conferencing done with a lenticular display (lookingglass), although those large displays are extortionately priced (1/4 of the size of this Google one is $3000...), keeping them out of the hands of most devs.
No doubt Google are doing the same, but they can afford these larger displays.
You can easily find examples (and research papers) by googling for the relevant terms. Google claiming they have invented a "new technology" just shits on all those folks who don't have the publicity/funding of Google.
The marketing for it is pretty terrible though, because I initially came away with the same impression as you.
See image here: https://www.engadget.com/google-project-starline-191228699.h...
It seems more like an art project than a tech usable in the coming decade.
Sounds like a monster data rate.
> the 8K is their Input Resolution.
> That resolution is then divided into the 45 viewing directions:
> project (he has a PhD Princeton
Why is this part relevant? Was the PhD in light field technology?
How does this relate to advertising and the necessary surveillance to support it?
Will this "product" be set up to phone home and "update" by default? Will the price be "free"?
Commercial viability: how much would someone pay for this?
It would presumably work exactly the same way as Google's existing videoconferencing hardware:
In other words, you pay market value for it, it doesn't include advertising, and Google is contractually obligated not to snoop on your content.
Which is why even Google competitors use this hardware, because the legal and technological protections are strong enough they don't worry about Google stealing their IP.
My mom and dad for the past 2-3 years have mostly lived in two separate countries (due to work reasons) and I could tell they miss each other quite a bit.
I got both of them an Echo Show 10" device each and set it up for them. I don't think I can explain how our lives have changed for the better, just based on this simple piece of tech.
The Echo Show is now pretty much constantly on a video call for 14-16 hours a day in the living rooms of both houses, and it has become an extension of one home into the other; a window into the other house. The audio and mics are good enough that at times you genuinely forget the other person is not in the same room. This has truly helped them, especially during Covid times.
It's actually good enough that my parents have their morning tea together practically the same way they used to when they were physically together. They've told me multiple times, "I don't know how we could've lived without this Alexa thing".
If Google can eventually get the prices down to reasonable levels, I really think people would be shocked at how fast this thing becomes part of our daily lives.
>It's actually good enough that my parents have their morning tea together practically the same way they used to when they were physically together.
I don't understand why this sort of persistent 'portal' isn't more popular. But a 10-inch screen is so small! A 55-inch TV is what, $300? Put it in portrait mode and that's a life-size portrait. The feeling of presence when the person's head is the correct size is so strong! I know Facebook sells something for the TV, but can it be left on all day like that?
I want it to feel like one half of the living room is a shared space, at all times. Like the couch is split in half by a portal. Then it just cycles through a list of friends and family with the same setup to find someone else watching Netflix or whatever, and you can casually join them just like in a real shared living space.
I don't need 8K or 3D TV. Just a 55- or 65-inch TV on the wall at the end of the couch!
Although since I live in a city with small apartments, wall-mounting is much less common here. I guess with a decent wall mount it might be trivial to rotate it?
There's another portal model that's as big as a small mirror.
I started working fully remote two and a half years ago. The company has a couple of meeting rooms fully set up with Zoom, which I and a few other remote people used to connect to from our laptops for standups with the in-office people. That's not the case everywhere, but it's a pretty classic setup.
The company also had a dedicated TV+camera+24/7 Zoom meeting setup in the open space itself, so that remote people could connect to and say hi anytime. It was surprisingly nice.
Then one day on a whim I got a wide-ish USB webcam plugged into my Xbox One†.
The difference in perception is insane: instead of having this wall of people's faces, like you're on several phone calls at once, I got a portalesque window between the office and my place. Instead of connecting people, it connected places††. But it only works if both sides are sharing a space instead of a selfie angle, and I think the way sound was picked up by a fixed element in the room played a big part as well.
So I'm a bit torn about this Starline thing, as it seems to be a step forward in "definition" but also feels like a blown-up selfie angle.
† There's no Zoom app on Xbox (which is a real shame) so I used Edge, where Zoom on the web is more limited and chokes on web audio (the rest works; Zoom only fully supports Chrome, so I had to dial in on a phone for audio). Also, Xbox+Kinect truly had the potential to be something amazing beyond games. I believe the marketing was botched, but it was also way too early for its audience.
†† Which ends up connecting people on a deeper level. At some point I caught myself intuitively wanting to hand over physical objects through the TV.
I remember when I used to be in a long distance relationship with my now wife. We used to have the laptops on Skype for hours just to have each other's company, even if we were studying or doing something else. It was very unique at the time.
I'm quite jealous; back then we didn't have the Echo Show. It'd probably get used just like your parents use theirs.
The Echo device just makes a sound when someone is dropping in and gives a 10-second heads-up (of course this only works if you enable the permission for family members; otherwise it's off by default).
For me, the problem with video-calling isn't the image-quality. It's all the much more mundane technological problems - high latency, lag-spikes caused by bad ISPs, failed noise-cancellation for people who don't use headsets for audio, bad wifi routers cutting out, etc.
First thing I did when I realized we were going to be WFH long-term was buy myself a $100 gaming headset. Next thing I did was get all my home computer stations wired with Cat 6.
That stuff is far more fundamental and far less interesting than 3D telepresence, but it's the real unsexy problem that so many people are suffering through this pandemic.
Even simple things like latency make simple, natural reactions agonizing. Talk-over and crosstalk are incessant, and I've developed a filthy habit of just talking over people, because otherwise it's a solid 20 seconds of "you go, no you go" caused by awful latency. I've had to defuse angry reactions from co-workers who feel they're being interrupted by other co-workers, explaining to them that the latency makes interruptions feel worse than they are.
I've tried to push friends to join me on my private Mumble server where the latency is near-nil and the call-quality is excellent, but there's always one person who doesn't have a working headset and wants to just use a laptop or tablet mic with no feedback-cancelling that destroys the conversation through echos (plus Mumble's auth system is needlessly bewildering).
Then with video, problems are similar but less impactful - cheap cameras, poor lighting, compression artifacts, poor sync with the audio, etc. And it's infuriating because every person has a wonderfully powerful camera in their pocket right now - and there's software to connect them but it's just too tricky for most people.
Good on Google for taking an interest in the subject, but I feel like they're decorating the apex of the technological pyramid while most people are pushing stones around at the bottom.
Solving gigantic scale problems requires a completely different kind of innovation than what you can achieve by pushing the pinnacle of what's possible.
(NB: I work at Google, but this comment has nothing to do with my work.)
In the end Google tried to innovate around the hard work by burying cables around 5 cm deep instead of a meter, which turned out to be short-sighted.
edit: nvm, found it :)
The issues with connection stability and latency are very real, but I don't know if it's reasonable to expect Google to fix it; the issues there are probably more political than technical.
edit: Also, I think they did mention using AI for noise cancellation while videoconferencing in the keynote today.
In India, ISPs already advertise low latency and high speeds for "PUBG gaming". The market evolves to meet consumer needs; advertising low latency for gaming was unheard of in the Indian ISP scene just a few years ago.
So having this tech would push customers to get better hardware, wiring, ISPs, etc., and would push ISPs to provide better service.
And it's really a question of money. If you want to fix all those mundane problems you describe, literally every one can be addressed by renting a better pipe and buying and setting up better equipment.
This Google product is clearly designed for high-end offices that already have all of those things under control.
If I'm having connectivity issues to a person, I have no feedback as to what's wrong. If I'm having a connection issue talking to a person, I want to see my ping to their server, my ping to my router, my ping to my ISP, their ping to their router, their ISP, their server, how much packet loss... anything to help diagnose what's wrong.
Instead, when somebody turns into a slideshow with a robot voice, I have no idea what's causing the problem.
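No call app exposes those per-hop numbers, but the arithmetic behind such a diagnostic panel is trivial. A toy sketch (made-up thresholds and sample data, nothing from any real product) that turns a list of RTT samples into the mean/jitter/loss readout you'd want to see on screen:

```python
# Toy call-quality summary: given RTT samples (None = lost packet),
# report mean latency, jitter (mean absolute successive difference),
# and packet loss -- the numbers a "what's wrong?" panel could show.

def summarize(rtts_ms):
    got = [r for r in rtts_ms if r is not None]
    loss = 1 - len(got) / len(rtts_ms)
    mean = sum(got) / len(got)
    jitter = sum(abs(a - b) for a, b in zip(got, got[1:])) / max(len(got) - 1, 1)
    return {"mean_ms": mean, "jitter_ms": jitter, "loss": loss}

def diagnose(stats):
    # Crude rules of thumb; thresholds are illustrative, not standards.
    if stats["loss"] > 0.05:
        return "heavy packet loss -- suspect wifi or a congested link"
    if stats["mean_ms"] > 300:
        return "high latency -- expect talk-over"
    if stats["jitter_ms"] > 50:
        return "high jitter -- audio will stutter without a big buffer"
    return "connection looks fine"

samples = [40, 42, None, 45, 41, 300, 43, None, 44, 46]
print(diagnose(summarize(samples)))
```

Real clients get these numbers from the transport layer (WebRTC, for instance, reports round-trip time, jitter, and loss per connection); the hard part is surfacing them to users in a useful way, not computing them.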
Anyways, Google absolutely does provide that, as well as additional analysis/warnings in YouTube.
So Google's doing exactly what you're asking for.
Tonari is an indie company, that probably can't "innovate" as fast as google.
Google is probably going to use this to start gathering free video data and innovate on something else later. It can be used for so many things: gait-recognition AI, realistic models of facial expressions, and then they'll release that at the next Google conference.
Two words: fuck Google, with their monopolistic BS.
If I could get video chat that felt like real life, that'd be worth running some wires for.
If this tech turns out to be too expensive (for normal people) we could still use it on a pay-per-use basis, like with a "video conferencing booth". You'd schedule your call and reserve a local booth for both participants through an app. And most companies should be able to afford having one of these in the office.
This is assuming that this somehow does allow me to interrupt someone's sentence — otherwise might as well do a zoom call.
> ["Why are jail phone calls so expensive?"](https://www.cbsnews.com/news/why-are-jail-phone-calls-so-exp...) OCTOBER 13, 2020
> ["A mom's $6,000 phone bill in three months: The push to rein in Ontario's costly prison phone system"](https://www.theglobeandmail.com/canada/article-activists-see...) JANUARY 30, 2020
> ["Martha Wright Prison Phone Justice Act"](https://www.congress.gov/bill/116th-congress/house-bill/6389) 03/25/2020
The office use case is probably more realistic, but some other related products (Surface Hub, Jamboard) haven't become as ubiquitous as originally imagined.
1. Call for free from home/couch with a regular camera in the privacy of my home.
2. Go to some location and pay a third party to do a less private call, and have a better visual experience.
The most obvious choice would be number 1 in 99.9% of all cases.
In theory, privacy can be increased by creating a sound-proofed environment for your call, but in practice, that would easily become very expensive.
Edit: And even 0.01% of the video call market would be quite large. Naturally this idea wouldn't work if something like 10% of all video calls were replaced with these booths, they would be fully booked for weeks forward. ;)
Edit2: And we already know most people don't care that much about their privacy since they already use services like Google Meet and Zoom and whatnot.
I’d first go with an Internet cafe style booth booking. Book an hour slot and get your coffee included.
Setup cafes in major cities and I think people would use this - could imagine setting a meet with a friend in another country. Parents showing off their new babies. Etc.
It seems we're seeing a second coming of distributed private comms.
Most companies don't give decent webcams already, I doubt they'd consider paying thousands for such a system.
I wonder what the lag is like. I can imagine that's one thing that would break the illusion. I know with something like Zoom I've gotten used to managing the lag over time by taking turns with the other person. However, with the "live" feel of this, there could be an uncanny valley effect if the lag is subtle, but perceptible.
Another thought: this is being presented as ongoing research. I wonder what the corporate thinking is in presenting it now, while it's still being tried out. Does Google want to capitalize on remote meetings while the topic is still hot? If the pandemic wanes and we have more in-person meetings, this might not make as big a splash. I remember when I worked at Microsoft, I often noticed that research announcements we made in public wouldn't translate into actual products, so I got a bit jaded about any cool new thing announced without a product timeline.
To speculate, here are some reasons why keeping it secret longer might be hard:
- They're going to do wider testing within Google.
- They're going to start bringing outside testers in to try it.
- They're going to start working with manufacturers.
- Some newspaper got wind of it and is about to publish a story. (I think this happened with driverless cars?)
Also, it's nice for the people working on it when they no longer have to keep what they do a secret.
Edit: although, in this case, the timing mostly has to do with Google I/O starting today.
GOOG-411 was free labor for voice samples.
Captcha was free OCR training first, and is now road-sign detection and image training.
AMP exists so that Google can capture the web's traffic flow in the name of improving performance.
This will be data for AI models specializing in video features of humans:
"AI"-generated realistic human expressions, which can then be used to do so many things
AI gait detection
AI partial face reconstruction
These all already have further applications which are harmful and will lead to tie-ins with Google.
And in none of the other cases above has Google published its compiled dataset. They "own" that.
Google is cancer
Zoom has won in the sense that MySpace had won, or perhaps in the sense that Facebook has won.
This too shall pass.
This is a beautifully executed idea and if the demos live up to expectation the hype may even be warranted. But on a much more fundamental level (i.e. fancy 3D imaging and spatial audio aside), this also possibly suggests people would benefit from dedicated videoconferencing hardware. TVs and telephones do one thing really well (or at least historically they did), which is why even my legally blind grandpa could call his friends or watch^W listen to the news. There's a market for having a plug-and-play videophone now that we have the software to go inside it.
What are Zoom, Facebook or Apple waiting for?
the business trips i've been on either involved the installation of equipment, or were an excuse for somebody with budget to burn it on travel and expensive food/alcohol. or both.
i think plenty of business travel will survive just fine.
> What are Zoom, Facebook or Apple waiting for?
This is already a thing?
Facebook has Portal, Apple has iPad, Amazon has Echo, all of whom support Zoom and other video conferencing apps. The portal even has a moving camera to keep you centered if you're moving in frame, and the iPad does the same thing with an ultra-wide lens and some post-processing.
As far as dedicated videoconferencing hardware is concerned, Starline seems pretty late to the game. Although, I'll admit the fancy 3D imaging features is pretty insane.
Personally, I find that, for most people, the idea that working remote shoves a lot of cost onto employees vs. commuting is probably off-base. (With some exceptions for people living in small city apartments near offices.) But installing a room-sized videoconferencing setup at home, even for people with decent-sized houses, is pretty silly.
Very few while it's $20k+. But I can imagine a lot of people would want one if the price was reasonable. I'm sure you still use the room for other things.
Nothing beats in-person. Nothing!
I hope not. Video chat can really never be the same thing as meeting people in person.
The only thing missing is the after meeting drinks and dinner, but there will inevitably be services to put us all in a restaurant/bar environment, pipe in some bar white noise, have food sent from a local restaurant, etc. for an "in-person" virtual happy hour...
A lot of people (those with less social desire/social skills) seem to resent it, but it is true: Networking and casual technical conversations that happen afterhours are the draw for many technical conferences. Talks can be good, and occasionally there are well-constructed lab sandboxes. But mostly, it's going and speed-dating with peers and sales teams to talk about your needs and architecture, and building a good web of contacts.
I also believe fully remote technical/collaboration work, without any periodic physical meetups, will be awful for a lot of people. Sure, those who bought into it pre-pandemic prefer it, and that's fine. But I really think there is concern to be had for the fraying of social bonds and teamwork that can be done, even (or especially) with people you have a tough time working with.
And it's even good that people who just go to sessions for the content will be able to do so--for a lot less money and effort. But I'm planning to go back to in-person conferences as soon as possible.
I honestly can’t believe this still sounds fun to anyone after a year of Zoom dystopia.
But everything else about virtual conferences has completely sucked and anyone running events is aggressively trying to get back to in-person. (With a hybrid component for presentations.)
Is "Starline" going to be a Gmail or a Wave? Hard to say.
There was a time I felt that Google's search was head and shoulders above anyone else's and when I reluctantly used DDG I felt I was compromising some aspect of search value for privacy value. But now when I use Google search I'm bombarded with ads, find I can't trust search results, and even copying search links results in a mess of Google redirect.
What is happening to the core Google products or am I a curmudgeon for believing that the Internet was at its most functional and user-friendly best circa 2008?
I don’t think you’re a curmudgeon at all. Circa 2008, Google was innovating in a lot of different areas that were very valuable to the average user. The ROI on these services has gone way down.
Does Starline give you a different view when you change your perspective? It looks like it does to some extent. If so, it might work before long, but grandpa died about fifty years too soon for it.
Before you laugh or get prudish: porn content was what made VHS and Blu-ray succeed (or are those urban legends?), and the industry pioneered things like online video streaming.
I know some girls who are prostitutes. They seem normal enough. Maybe they're wearing a facade and have hidden PTSDs, I don't know, but if that's the case, I think you might want to extend your crusade against the military.
The fact that you instantly associate what OP mentioned to women specifically is quite disgusting, not going to lie
FYI More people than just women suffer through that and taking the oxygen of such a discussion via weaponizing linguistics is just nasty and a horrible way to spend the opportunity cost of it
The (New) Nintendo 3DS has head tracking, but it doesn't change your perspective into the viewport, which gives a very dizzying effect when you deliberately test the feature.
I would imagine it's possible to extrapolate perspective if they had an array (N > 2) of cameras.
This is super cool tech, and can't wait to see an array of these installed in the secret undisclosed board rooms for the illuminati.
The device knows where the viewer's eyes are in 3D, so can display an image of the other end, as viewed from that point (within some constraint).
Unfortunately, even an already-set-up VR experience was too strange/unnatural for him so he never got to experience it. However, this looks easy/natural to use and set up and feels like it'd have a similar mind-blowing effect for many of the older generations, which I think is a good indicator of being revolutionary tech (assuming it can be made available/cheap enough for most people to try it out).
Edit: I’d love any recommendations of other authors people think of when discussing his books. Doesn’t necessarily have to be the same style but more just that level of quality and uniqueness.
It is 6552 words long.
Also, Microsoft did this a couple of years ago, without the fancy volumetric display (just face tracking on their expensive 8K(?) TV for meeting rooms), and it was a complete flop.
That's a big deal for anything which requires hardware you wouldn't otherwise own. Once you hear "custom-built hardware and highly specialized equipment", you really want a commitment that they'll not just begrudgingly patch it but continue to seriously invest in the product.
(It's probably also not true, when compared to Google's peers. Amazon and Microsoft similarly throw a bunch of stuff on the wall, and unceremoniously kill the failures, but neither their launches nor cancellations get this reaction.)
And its inevitable discontinuation.
We can pretend that this will be Google's magical fairytale product that will not be impacted by the above, but who are we kidding?
Someone suggested this is the tech:
But it still has no explanation of what the "proprietary light field technology" is.
How do you get each pixel to show up as a different color with 45-100 separate angles?
I had no idea this actually existed in production!
I believe there are two basic approaches.
The first is "lenticular lenses" -- the same technology used to make those shiny postcards that change depending on what angle you tilt them at.
Here's a site with an example GIF and kinda explains how it works:
Normally, the lenticular lens is glued directly onto a printed sheet of paper, but you can instead glue it onto a _screen_, which lets you change the pixels underneath. There are a lot of challenges here; for example, the pixels are usually located a bit behind the glass, so the standard lenses used for printing won't focus perfectly on a screen.
There's also another, simpler approach called a "parallax barrier". This is what the 3DS used. Instead of using lenses to bend light, it _blocks_ the light coming out of certain pixels at certain angles. It's basically just an opaque sheet with periodic slits (the 3DS uses an LCD, so the 3D can be turned off by making it transparent). I once found a video of someone making one themselves using an iPhone and transparency sheets, but I can't find it at the moment...
3DS parallax barrier: https://www.youtube.com/watch?v=D-LzRT7Bvc0
Diagram from Wikipedia article: https://en.wikipedia.org/wiki/Parallax_barrier#/media/File:P...
A big disadvantage of parallax-barrier is that you end up blocking a lot of the screen, making the image darker.
Supposedly you can combine the two approaches to reduce the darkness.
There are probably even more tricks involved, and this is all a lot easier-said-than-done.
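For what it's worth, the basic barrier geometry is just similar triangles. A rough sketch with made-up numbers (not the 3DS's actual dimensions): given the pixel pitch, eye separation, and intended viewing distance, you can compute the barrier-to-panel gap and the slit pitch, which comes out slightly under two pixel widths so every slit's views converge on the same pair of eyes:

```python
# Parallax-barrier geometry via similar triangles (all lengths in mm).
# A slit sits a gap g in front of the panel; through it the left eye
# sees one column of pixels and the right eye the neighboring column.

def barrier_geometry(pixel_pitch, eye_sep, view_dist):
    # Gap so that adjacent pixel columns map to the two eyes:
    gap = pixel_pitch * view_dist / eye_sep
    # Slit pitch: slightly less than two pixel widths so the views
    # from every slit converge on the same pair of eyes.
    slit_pitch = 2 * pixel_pitch * view_dist / (view_dist + gap)
    return gap, slit_pitch

# Illustrative numbers: 0.1 mm pixels, 65 mm eye separation, 300 mm away.
gap, pitch = barrier_geometry(0.1, 65.0, 300.0)
print(f"gap = {gap:.3f} mm, slit pitch = {pitch:.5f} mm")
```

The tiny gap (well under a millimeter here) is why the barrier is usually built into the display stack itself rather than added on top.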
Here's a project that uses an LCD to make a dynamic parallax-barrier, that increases brightness by skipping the barriers wherever they don't matter:
Tangent: There's also been some attempts to do _the reverse_, and use distorted lenses in front of a camera to recover a 3D scene using only a single camera:
https://dl.acm.org/doi/10.1145/3072959.3073589 (link has a SIGGRAPH talk recorded which shows example results)
I’m skeptical on its impact on the in-person vs. remote debate for work. What really distinguishes in-person for me is the potential for informality. Bump into somebody, have a chat, walk over to a colleague’s desk for 5 minutes, decide to go on a walk for a 1x1 instead of sit in a room. This doesn’t do anything for any of that. In fact, it arguably increases formality.
Understood that many folks are not interested in those in-person artifacts - sharing what I see as key distinctions which aren’t in the solution space of Starline.
> What I’m actually looking at is a 65-inch light field display. The Project Starline booths are equipped with more than a dozen different depth sensors and cameras. (Google is cagey when I ask for specifics on the equipment.) These sensors capture photo-realistic, three-dimensional imagery; the system then compresses and transmits the data to each light field display, on both ends of the video conversation, with seemingly little latency.
> All of the data is being transmitted over WebRTC... What Google claims is unique is the compression techniques it has developed that allow it to synchronously stream this 3D video bidirectionally.
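Some back-of-the-envelope numbers (my assumptions, not Google's published specs) show why the compression is the hard part: a handful of raw color-plus-depth capture streams is far beyond any office uplink.

```python
# Raw bandwidth of a multi-camera 3D capture rig: color + depth per
# pixel. All figures are assumptions for illustration, not Starline's
# actual specs.

w, h, fps = 3840, 2160, 60          # one 4K capture stream
bits_per_px = 24 + 16               # 8-bit RGB + 16-bit depth
streams = 4                         # several cameras/depth sensors

raw_gbps = w * h * fps * bits_per_px * streams / 1e9
print(f"raw: {raw_gbps:.1f} Gbit/s")

# Hence aggressive compression: even 1000:1 leaves a hefty stream.
compressed_mbps = raw_gbps * 1e9 / 1000 / 1e6
print(f"at 1000:1 compression: {compressed_mbps:.0f} Mbit/s")
```

Under these assumed numbers you land in the tens of Gbit/s raw, and still tens of Mbit/s after three orders of magnitude of compression, which is consistent with needing both custom compression and a very good office connection.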
Well, at least we know that they weren't all white men; because if they were, this writer would sure have let us know about this very irrelevant information.
Still, the concept is exciting, and if the execution is there, it'll be one of the most important leaps in communications technology in decades.
And I'm looking forward to a company named something like InstaPresence (TM) applying filters and making us all photorealistic cat people.
The illusion only works for one viewer at a time, because the image is POV-dependent.
Google is apparently using this with a 3D TV. Can you still get those?
The main problem with doing this now is getting all the retro technology you need - a 3D TV and the good Kinect. Any good current GPU should be enough engine for this.
An interesting thing to note is that they don't show the participants touching the glass pane 'separating' them, whereas in that kind of situation it would be a very natural thing to do.
I guess doing so (reaching toward the 'glass pane') would make the imagery distort/degrade fast as you start going out of the cameras' FoV, which would break the magic.
There is so much more we can do in terms of quality and immersion that we're not doing simply because bandwidth and connectivity are so low-quality and overpriced at most of our leaf nodes in the USA.
From watching the video, Google's conferencing setup is creating a 3D rep of the people talking and adjusting rendered view based on where the participants are seated. This is blending AR with videoconferencing. It would be interesting to see how their conferencing system works with multiple-people on each side. I know the video had a mother and baby in one segment, however is the 3D rendering based on the eye-level of the main participants?
Potentially a solved problem, just fix it in post. :)
I wonder how well it works, and how much latency (if any) it adds to the feed.
I'm also sad it isn't rolled out more generally, a very strange feature to lock behind a small-ish volume hardware product.
How could any of those technologies be what is used here?
E: Looking again, perhaps it could be some layer over a traditional screen. You see through some of the broadcasts but that could just be the digital far plane that shows through.
This implies, AFAIK, that it either uses lenticular lenses (the tech 3D cards typically use) or a parallax barrier (the screen tech from the 3DS). There are thus sectors from the screen to the viewer, and you need to have your head placed so that one eye sees one sector and the other eye sees another. What the reporter describes is what happens when both her eyes end up in the same sector, which immediately makes the result 2D. Note that there might be more than two sectors, so that you can move further sideways and still get a realistic view, but each eye must be in a different sector at all times. It can also use head tracking to correct your view as your head moves: since it evidently constructs a full 3D scene of you and the other side, it can render that scene from any angle.
Track both eyes, and then project an image to each eye based on its position in the room. The part I don't really understand is how it's possible to target the image to each eye. Maybe we now have displays which are like the 3DS screen, but with variable focal locations?
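If the display really does work in repeating view "sectors", the tracking side reduces to mapping each tracked eye to the sector it currently occupies and feeding that view the right image. A toy sketch under that assumption (sector width and view count invented for illustration):

```python
# Map a tracked eye's horizontal position to the view "sector" it sits
# in, given a repeating fan of n_views sectors of width sector_mm at
# the viewing plane. Illustrative sketch; real trackers work in 3D.

def view_index(eye_x_mm, sector_mm, n_views):
    return int(eye_x_mm // sector_mm) % n_views

# With 32 mm sectors and 8 views, eyes ~65 mm apart land in different
# sectors, which is exactly what makes the image stereoscopic.
left, right = 0.0, 65.0
print(view_index(left, 32.0, 8), view_index(right, 32.0, 8))
```

The failure mode the reporter hit drops out of the same arithmetic: if the sector width were larger than the eye separation, both eyes could land on the same index and the image would collapse to 2D.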
Notice how the angle of her face changes as the camera moves: first you only see her left ear, at the end of the animation you only see her right ear.
Does anyone know if the display is similar to the display in the Looking Glass 8K holographic light field display?
But alas, reading further down, just the same old flat screen, with slightly better spatial trickery than what we've already had for a while now.
Computer, end program.
This is the first time I felt this is going to be great. Something that should have come from Apple, the humane side of technology, for the first time ever came from Google.
Hopefully this isn't a one-off thing or an outlier from Google. Apple desperately needs some competition.
Maps was probably the one we had last time this was discussed. Gmail came in as a side project, unintended. Android was acquired, forced to react to iOS.
Interview about the SIGGRAPH paper here: https://blog.siggraph.org/2020/08/how-google-is-making-strea...
As long as you're willing to give up stereopsis, and you have only 1 viewer on each side, I think you could accomplish this type of immersive telepresence with an ordinary TV + software.
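The single-viewer TV version is essentially head-coupled perspective (as in the well-known Wii-remote head-tracking demo): render with an off-axis frustum computed from the tracked head. A minimal sketch, assuming the screen is centered at the origin of its own plane and the head sits at (ex, ey, ez) in front of it:

```python
# Head-coupled perspective: compute off-axis frustum bounds at the
# near plane from a tracked head position. The screen spans
# [-w/2, w/2] x [-h/2, h/2] in its own plane; head at (ex, ey, ez),
# ez > 0. Sketch only; units are whatever your tracker reports.

def off_axis_frustum(screen_w, screen_h, ex, ey, ez, near):
    # Scale the screen edges (relative to the eye) back to the near plane.
    s = near / ez
    left   = (-screen_w / 2 - ex) * s
    right  = ( screen_w / 2 - ex) * s
    bottom = (-screen_h / 2 - ey) * s
    top    = ( screen_h / 2 - ey) * s
    return left, right, bottom, top   # feed to a glFrustum-style API

# Centered head -> symmetric frustum; moving right skews the frustum left.
print(off_axis_frustum(1.2, 0.7, 0.0, 0.0, 0.6, 0.1))
print(off_axis_frustum(1.2, 0.7, 0.3, 0.0, 0.6, 0.1))
```

Re-deriving this frustum every frame as the head moves is the whole trick; without stereopsis it still gives a surprisingly strong "window" illusion.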
The video certainly plays up the software, but I've never used zoom or FaceTime in an 8k video call booth before, so I suppose I don't have a point of comparison.
If you've only had 480/720 on a small screen, with a mono microphone with poor lighting, lag, dropouts, then just fixing those things and making it 2k+ might be surprisingly good (ie will have an emotional impact due to the truer representation)?
I mean we used to marvel at things like Star Trek, but nowadays a smartphone is miles ahead of a lot of "day to day" things they showed in there. Foldable screens are coming too, and now this.
I mean I don't believe this thing will be commonplace at all in the near future, but it's still cool. I think it'll be integrated into smartphones within the decade though. It's already mostly possible with the front-facing camera + light field of iphones, plus AR, plus motion sensor technology, plus the load of processing power they put in there.
You could then use the camera perspectives to create a 3D model of the person you're conversing with and map the color data correctly onto that model (photogrammetry).
You could also likely use the information from the four cameras to adjust the orientation of that 3D model as you shift your position, giving you a sense of depth. https://www.youtube.com/watch?v=Jd3-eiid-Uw
If you had a speaker and mic in each corner you could also capture / emit subtle differences in audio to further enhance it.
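The core step of the photogrammetry idea above, triangulating a point seen by two cameras, is small enough to sketch. A pure-Python toy under idealized assumptions (known ray origins and directions, no lens model): find the midpoint of the shortest segment between the two viewing rays.

```python
# Triangulate a 3D point from two camera rays (origin + direction) by
# finding the midpoint of the shortest segment between the rays.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]

def triangulate(o1, d1, o2, d2):
    # Solve for the closest points p1 = o1 + t*d1 and p2 = o2 + u*d2.
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))
    p2 = add(o2, scale(d2, u))
    return scale(add(p1, p2), 0.5)   # midpoint = best estimate

# Two corner cameras both looking at a point at (0, 0, 2):
p = triangulate([-1, 0, 0], [1, 0, 2], [1, 0, 0], [-1, 0, 2])
print([round(x, 6) for x in p])      # -> [0.0, 0.0, 2.0]
```

Real pipelines do this for thousands of matched feature points per frame (plus calibration and noise handling), but each point reduces to this same closest-approach computation.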
I was ok with Gmail scanning my emails to finetune targeted advertisement. But they don't do that, which is a nice surprise.
Why would this be any different?
Imagine gaming on this.
I assume the porn industry will be early adopters (sorry, it's probably true).
Our relationship to chat bots and assistants is going to get a lot more unsettling in a few years.
Consider the difference in cognitive load and emotional burden between a human hitting you up in the Whole Foods parking lot to donate, and, the junk mail you deleted/recycled.
We have no defenses against autonomic-level mirror neuron empathic response when the uncanny valley is bridged...
Imagine the call center bot whose eyes flick to one side for a moment, that being the tell of when a call center human takes over to handle a corner case.
OH LAWD ITS COMIN
Of course this won't be the interactive feeling but it was pretty mind blowing to "feel" how intimate real 3d telepresence could be.
They have some demo's here:
But Virek's wealth was on another scale of magnitude entirely.
"As her fingers closed around the cool brass knob, it seemed to squirm, sliding along a touch spectrum of texture and temperature in the first second of contact. Then it became metal again, green-painted iron, sweeping out and down, along a line of perspective, an old railing she grasped now in wonder."
Especially if this is pre-recorded and only sent for viewing (so not too much bandwidth is needed on the upload side). I guess it would come down to the hardware and how much it costs.
> Project Starline is currently available in just a few of our offices and it relies on custom-built hardware and highly specialized equipment. We believe this is where person-to-person communication technology can and should go, and in time, our goal is to make this technology more affordable and accessible, including bringing some of these technical advancements into our suite of communication products.
I could imagine Apple selling this in a re-imagined Apple TV with an Apple TV Facetime App. They could probably build something now with the FaceID sensor array / iPhone Camera system / M1 chip plugged into a 4k TV.
Reading this shortly before the pandemic spread gave me a really strange perspective on the whole remote working boom.
I guess we're going to get used to different kinds of compression artifacts in the coming years because we're switching to spatial information being transmitted as opposed to just pixels. Hair is so much harder to get right than a face.
Don't get me wrong, this is cool, but a research project becomes really cool when it gets well executed
Which type of autostereoscopy is it?
Can't wait for them (Google) to get there (large-scale production)
I thought it was some stereographic encoding on layered displays but the second they mentioned 3d mapped videos I started to see pixar like characters. Very odd.
1. Google shutting it down after some time, out of the blue. Just like that.
2. Google requiring nothing less than a Gmail account, always logged in, with no choice about it.
Calling it now.
I hope they don't start using this for "whiteboard" interviews, would be super stressful to be interrogated from behind a mirror with nowhere to go
People are already depressed as hell from the lack of touch, real family, community and friends in modern society.
That said, of course this tech could have its uses, but mediated by the largest corporations on Earth that collect, sell and mediate everything about you and your friends? No thanks!
Jokes aside, it is cool tech, but I fail to see applications. With my developer colleagues we mostly share a desktop instead of seeing other people's faces.
Don't know how it is technically solved to simulate depth, but I imagine it being no different than conventional screens. The difficulty is probably doing real-time stereoscopy of the displayed object.
Would be cool to know if they only use AI-supported imaging for that, or if they have some sensors, maybe invisible laser projections or stuff like that. There are probably restrictions on how the cameras must be angled too, so a self-made home setup would be difficult to calibrate.
This is all made to evoke a sense of closeness.
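For the depth part, Google's description suggests a light-field display where each eye sees different pixels. The simplest version of that idea is column interleaving, as in a lenticular or parallax-barrier screen. The toy sketch below only illustrates that concept; it is an assumption on my part, not Starline's actual pipeline:

```python
# Toy sketch of autostereoscopic column interleaving (an assumed,
# simplified model -- not how Starline actually works): a lens layer in
# front of the panel routes even pixel columns to one eye and odd
# columns to the other, so each eye sees a different image.

def interleave_views(left, right):
    """Alternate columns: even columns come from the left-eye image,
    odd columns from the right-eye image."""
    assert len(left) == len(right)
    return [left[i] if i % 2 == 0 else right[i] for i in range(len(left))]

left_img = ["L"] * 8   # stand-in for left-eye pixel columns
right_img = ["R"] * 8  # stand-in for right-eye pixel columns
print(interleave_views(left_img, right_img))
# ['L', 'R', 'L', 'R', 'L', 'R', 'L', 'R']
```

In a real light-field display there are many more than two views, and the routing depends on the viewer's head position, which is presumably where the tracking sensors come in.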
Quite a bit of the technology used there seems destined to get more affordable as it's getting more widely adopted.
And yeah, insane prices at the start fund development for everyone else. I remember when the first 4K screens came out and they were exotic; now they are just normal.
Same thing happened with phones and, hell, computers. I was the first kid I knew with a computer back in the 80s and we were not wealthy; that thing cost more than my dad made in a week, and an actual IBM PC was unattainable till I was 10.
Now I have hilariously more powerful single board computers shoved in a drawer because I can’t find the time to do anything with them.
Minecraft has never been demanding resource-wise. I'm sure early versions had serious performance issues, but running it on a cheapo laptop has always been totally reasonable - it's quite accessible (it was written in Java, even!)
But for hardware, it's almost always some expensive thing first. The internet was once very expensive to access, cell phones were initially very expensive, DVD players were initially very expensive, computers in general, etc.
Software seems less like technology and more like writing. The distribution cost, once the systems are in place, is marginal. The technology part is creating the systems that enable the software.
More seriously, I disagree about software being less important because there have been very real innovations for tooling accomplished in software alone. Email is a pretty classic example - but a more modern one might be Google Cardboard which can turn your smart phone into a rather underwhelming VR headset. There are plenty of hardware alternatives but the same basic functionality was accomplished on generalized hardware.
Additionally, all this technology is only really possible due to other technology - we don't discount a shiny new computer as just a dumb, oddly shaped box merely because it does nothing without electricity. But the costs to develop software are generally lower than hardware, so I think it's fair to have a general notion that hardware is more innovative - it's just that you're conflating two different variables: cost and medium.
Additionally Zoom isn't a novel technology, it isn't even particularly interesting technically when compared to other video conferencing solutions - but over the past year it's been incredibly important to a number of people.
I think the OC slightly missed the mark in mentioning "important technologies" instead of something closer to "technologically innovative" technologies or, more accurately (but less interesting of a statement) "expensive to develop technologies". Things that are expensive to develop generally aren't cheap to begin with, while things that are cheap to develop need to be cheap to compete with other market entrants and clones. Additionally hardware (a limiting factor on cost for a lot of technology) tends to get cheaper over time and that rate of change is accelerated by a large market of interest (leading to more folks deciding to try and iterate new designs).
No, Linux differed from Minix in utterly fundamental ways outlined in the correspondence between Linus Torvalds and Andrew Tanenbaum.
And basically anything Nintendo pushes as a console gimmick. It's not that the tech immediately goes from research to broadly accessible, but rather that they tend to take old tech that no one saw as having profitable consumer applications and find one for it. In that way, as far as consumers are concerned, it goes from unknown to widely-used without making a stopover in early-adopter purgatory.
And I see that Nintendo has apparently sold an extremely impressive number of consoles (https://www.nintendolife.com/news/2019/11/nintendo_has_now_s...). But even if everyone only bought one console each, that's only about 10% of world population.
This may be a little unfair, but I do wonder if there isn't a tendency to consider a technology to be widely available when it becomes available to you and the folks farther back in the line don't count or aren't relevant.
In any case, you said "important," not "widely available," and yes, Nintendo's products are hugely important. Many of today's technological advancements can be traced back to their proving that a given use case for a primitive version of a given technology was viable.
I will also note that my original comment was in response to someone who is "more interested in tech for the other 95%".
So, 12 years (back then) after the iPhone, it's reached a lot more than 5% of the world.