
When I was in the Navy, "shipmate" was a negative term because you only heard it when someone, most likely a khaki (senior enlisted or officer), didn't know your name and wanted to chew you out. "Hey shipmate! Your haircut is unsat!"


I can't think of any time, aside from maybe bootcamp, that I was ever called "shipmate" by anyone other than a green khaki, as you say.


I imagine it's kind of like when someone called me "kid" in elementary school.


Yep, "shipmate" is only used against people too new to know you're actually insulting them (in boot camp and earlier schools it's more of a cute jest), or when someone is so fucked up that they're not worth reading the name on their uniform.


I loved to ask my grandmother about this because she recalled listening to the original broadcast. She said the station would take commercial breaks to interrupt the story, and they reminded listeners it was a fictitious broadcast when taking these breaks. She didn't believe that anyone could be fooled by it, at least not to the extent that was reported.


Oh how times don't really change.



What exactly cost so much? You can build a retinal display using a $200 off-the-shelf laser projector if you turn down the power and add less than $100 in lenses. I've done it, and I've even managed to get it up to 8K resolution with some more time and money thrown at it. I've since formed a company to try to commercialize it: http://www.alphalux.io


Not sure what you are attempting to commercialize, but your website doesn't explain at all what you have made. Unless I'm supposed to guess it from your background picture of 3d rendered glasses with some sensors in them...

From most studies I've read on the subject, there are a couple of big issues. Firstly, physics is hard. It's very difficult to get a useful amount of digital information presented that close to your eye, where it is in focus and comfortable to look at. Scaling up resolution beyond a low-res screen is even more difficult. Secondly, the tech just isn't there yet. The closest thing I've seen on the market is Google Glass, and that was a huge flop. It also looked dorky as hell while providing very little real value. Sure, you could read a text, if you squinted and focused your attention up and to the right, but at that point, pulling out your phone is just as easy. It also makes you look like a strange android while taking your attention away from the real world.

For a product like this to work in the market it has to meet a lot of requirements. Resolution, invisibility (as in, it doesn't feel like you're wearing a clunky awkward device on your face), battery life, safety (you're shining light into your eyes), and provide real, useful, functionality.

Useful functionality, to make it worth caring about a device like this, is good augmented reality integration. And I mean very good. If you put the device on your face and the mapping of the real world stutters for a second, you're going to hate it and never use it - and no one will buy it.

Also, no, being able to project a little cartoon monster on the surface of a table is not "useful" AR functionality...


> Also, no, being able to project a little cartoon monster on the surface of a table is not "useful" AR functionality...

I would think being able to do that means you've solved most of the hard problems you mentioned, so if it can be done well it means we've achieved a certain level of usefulness.

Sort of like how bouncing a white square between two movable white rectangles was a "useful" bit of functionality for consumer AV electronics. Pong isn't exactly blowing anyone's mind now, but it did mean they had to solve a lot of problems to make a machine that could interface with the televisions of the day, handle user input, update the display quickly enough to feel responsive, and hit a cost and form factor the general public could make use of.


> I would think being able to do that means you've solved most of the hard problems

I guess the other hard problem, besides just creating the tech, is real-world application.

Reminds me of the Leap Motion device. The creators made some really cool tech, and it works pretty well for what it is, but most people struggle to find a really useful application for it.


You think Google Glass is the closest thing to ever be made or the closest you've personally experienced? The Hololens and the Magic Leap execute much better on the idea of a HUD.


Hololens and Magic Leap are worse than Google Glass in the "looking dorky" and "looking like a strange android" sense.


Even Steve Mann couldn't get past the dork factor, but having waypoint visual reminders is a wonderful option.


I don't know if you have a working project, but if you do, let me throw an idea at you. Instead of trying for an 8K projector, give me something that handles a couple of lines of text -- say 6 lines by 25 characters -- and supports a simple serial API.

Augmented reality is a great dream, but an unobtrusive heads-up display for simple information would be revolutionary in itself. Baseline applications like a clock/calendar/compass, maybe reminders, or a no-look note-taking tool. Next-gen involving real-life closed captioning, or, when supplemented with a camera and an offline database, a basic "who is this person I am talking to" / protocol officer. Further than that, a very rough "am I facing the right way" waypoint finder, etc.

If the hardware can be made cheaply for this sort of application, use cases will emerge faster than you can shake a stick at. Overlaying reality, etc., are way less interesting without huge amounts of compute.
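
To make the ask concrete, here's a rough host-side sketch of driving such a 6x25 text HUD over a serial link (Python with pyserial; the device path, baud rate, and the CLS/ROW commands are all invented for illustration, since no such device exists yet):

    import time
    import serial  # pyserial

    ROWS, COLS = 6, 25  # the hypothetical display geometry suggested above

    def show_lines(port, lines):
        # Hypothetical wire protocol: "CLS\n" clears, "ROW <n> <text>\n" writes row n.
        with serial.Serial(port, 115200, timeout=1) as hud:
            hud.write(b"CLS\n")
            for n, text in enumerate(lines[:ROWS]):
                hud.write(("ROW %d %s\n" % (n, text[:COLS])).encode("ascii", "replace"))

    show_lines("/dev/ttyUSB0", [
        time.strftime("%a %d %b  %H:%M"),
        "Next: standup 10:30",
        "Heading: 274 W",
    ])

Anything that can already talk to a serial port could drive a display like that, which is part of why the use cases would multiply so quickly.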


Unfortunately the hardware to do this is harder than an LCD or even DLP. Otherwise we'd have it already.

Galvo projectors, as used in HoloLens etc., are vector, but tiny projectors aimed at the retina are extremely hard to pull off; even ones aimed at glasses are very hard and lousy. Aiming them at a wall is OK, though.


I'm not suggesting a vector display. The VT100 had 240 scan lines and 768 dots; half of this would be fine.


Without knowing more about your solution I can't comment on that specifically.

I do know that the MEMS, the fiber coupling to the MEMS, and the waveguides necessary for a wide enough FOV and variable focal length are very much bespoke right now, and thus costly.


Where do you get an 8k laser pico projector? I don't think those exist on the market.


You can get 720p pico projectors off the shelf, but to do 8K, you have to make a custom design with faster laser modulation.


Unfortunately laser modulation is pretty much the only solved problem.

At higher scanning rates the achievable angle of deflection with commercially available MEMS scanners (the type of scanner used in the 720p pico projectors) is too small to be useful.

The other class of laser scanner potentially relevant to practical high-resolution projection is the solid-state laser scanning systems, which suffer from a limited angle of deflection AND a limited number of resolvable angles.

60fps 8K is achievable with a mechanical system using a turbine-driven polygonal mirror. Unfortunately, the precision machining, several tens of kilowatts of input power, and the requisite hearing protection present some obstacles to commercialization.

As a general rule, if a raster laser projector claims to achieve much over 720p, it's either lying, mistaken, or out of your price range. It requires an order of magnitude increase in scanning rate to scale from 720p to 8K. The next incremental milestone for this sector is "actually achieving 1080p instead of just lying about it".
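
For a rough sense of scale, a back-of-the-envelope sketch (assuming 60 fps and ignoring blanking and bidirectional sweeps, so real numbers will differ somewhat):

    fps = 60
    for name, (w, h) in [("720p", (1280, 720)), ("1080p", (1920, 1080)), ("8K", (7680, 4320))]:
        line_rate = h * fps        # horizontal scan rate, lines per second
        pixel_clock = w * h * fps  # laser modulation rate, pixels per second
        print("%-6s %4.0f kHz line rate  %5.0f MHz pixel clock" % (name, line_rate / 1e3, pixel_clock / 1e6))

    # 720p     43 kHz line rate     55 MHz pixel clock
    # 1080p    65 kHz line rate    124 MHz pixel clock
    # 8K      259 kHz line rate   1991 MHz pixel clock

So the mirror has to sweep lines roughly six times faster, and the laser has to be modulated roughly 36 times faster, to go from 720p to 8K.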


I thought high resolution laser projectors were just using the laser as a light source and forming the image using a DLP or LCOS imaging element? Scanning 4k+ seems like it'd need some bonkers specs on the equipment.


And compare that to an 8K LED-lit LCD projector with waveguides for depth: the same technology as used in VR goggles, but with much more optics, and also smaller.


Neat. How fast is the average time to complete a full 8K scan? I imagine it needs to be on the order of <1 ms for it to look like a complete frame and not show rolling shutter-type effects.


I'm fairly confused - that would be an absolutely massive breakthrough that several companies worth billions are looking for. What's the downside?


This is cool. I would like to subscribe to your newsletter (seriously).


Send me your e-mail using the Contact button on my site, and I'll keep you updated.


Which site? Which contact button? Your profile here is empty. I'd also be very interested!


If the image doesn't sit superimposed in the center of your vision no matter where you look, that's not a retinal display.


Seems like a nitpicky distinction to make. If I could buy a small, cheapish device that would replace a large 4K/8K monitor that costs thousands, I'd buy it in a heartbeat, even if it didn't track which direction I was looking in.


In an interview, Jeri Ellsworth stated that when she was at Valve around 2012, they were exploring all types of near-eye displays for use in their eventual VR headset, and one of the displays they tested was a laser retinal projection display. It was specifically mentioned because she pointed out the safety concern they had with projecting a laser into the eye.


That is a failure that could potentially result in damage to the eye.

MEMS mirrors have built-in angle sensors, and when these report that there's irregular or no detected movement in one or both directions, the lasers are turned off.

The lasers in a MEMS mirror near-eye display operate in the microwatt range. Low power laser pointers operate in the milliwatt range, and the safety mechanism for those is that your eyelid is expected to shut to block off the laser in less than a second. Therefore, you have more time to shut your eyelids or remove your eye from the MEMS mirror display if there's a failure and the lasers fail to shut off.
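
As a toy illustration of the interlock described above (all names and thresholds here are invented; a real system implements this in dedicated hardware or firmware, not a Python loop):

    MIN_SWEEP_HZ = 1_000.0  # assumed minimum healthy oscillation frequency per axis

    def mirror_healthy(angle_sensor):
        # angle_sensor.sweep_frequency_hz() is a hypothetical reading of how fast
        # the mirror is actually moving on each axis, derived from its angle sensors.
        fx, fy = angle_sensor.sweep_frequency_hz()
        return fx >= MIN_SWEEP_HZ and fy >= MIN_SWEEP_HZ

    def safety_interlock(angle_sensor, laser):
        # If the mirror reports irregular or no movement, cut laser drive immediately.
        if not mirror_healthy(angle_sensor):
            laser.disable()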


How do you deal with ciliary muscle exhaustion from staring at a computer screen for extended periods?


> [In Nuclear Power School,] I was below the line enough to earn the distinguished dishonor of 25 additional study hours each week

My man, welcome to the club! Reminds me of the time I was punished for being just 30 mins shy of meeting my mandatory additional study hours because my friends dragged me out to go see Star Wars Ep 2. Nuclear Power School had a rule where if you were scanned in (everyone had to scan in and out of the building with a badge) for study hall less than 30 mins, all of the time was invalidated. I clocked out at 29 mins. It was not worth it!


We really do want that kind of attention to detail in our nuclear force, though. It's the combination of engineered safeguards and cultural safeguards that has kept the nuclear navy relatively incident-free all these years.

My default position of trust in a nuclear energy startup starts off very low specifically because I have yet to see that kind of detail-oriented and safety-oriented culture in the people behind them.


My previous project was a UWB indoor tracking system for VR. FPGAs did all the signal transmission, reception, and digital signal processing: https://www.youtube.com/watch?v=mYyFUQbWC1E

My current project is AR glasses. An FPGA is decoding a DisplayPort signal and driving the display.


But in response to users disabling JavaScript, sites now only load the first paragraph and require you to click a button to fully load the page.


Isn't that an engagement metric? Lots of people load articles and don't read them…


Haven't had JS on by default in years, and I have yet to encounter that.

