
    “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.”
― Bill Gates

There are two kinds of problem solvers: those who think that a complex problem requires a complex solution, and those who try to solve complex problems with elegant solutions.

The first group transfers all the complexity directly into the solution; the second tries to eradicate complexity by simplifying the problem first.

I've found that engineers usually fall into the first group, while designers (especially UX designers) tend to form the second.

Example:

This is (popular) software made by the first group: https://www.bulkrenameutility.co.uk/assets/img-bru/mainscr.p...

This is essentially a solution to the same problem, made by the second group: https://cdn.osxdaily.com/wp-content/uploads/2015/05/choose-b...

See also: https://xkcd.com/538/


There are so many hidden/obscure keyboard shortcuts in macOS that, from time to time, a post with a nice collection (and usually some hidden gems) appears on the front page here.

But I always wondered if there is a place where you can find all of them, for reference.


It's also a pity that macOS makes very little effort to communicate these, so they almost feel like Easter eggs…



Yes, I'd love to see menus printed in OCR-A[0].

[0]: https://en.wikipedia.org/wiki/OCR-A


OCR-A looks cool, but my above post isn't saying I want the menus in a machine-readable font, but rather that I want the human-relevant parts of the lookup to be both human-readable and machine-readable.


>...I want the human-relevant parts of the lookup to be both human-readable and machine-readable.

There is OCR-B[1] for that. It is widely used as the human-readable part of EAN/UPC barcodes read by laser scanners in retail. The relevant thing here is that the human-readable part is never OCRed in practice, because the barcode can be read much more reliably. OCR-B seems to be recommended simply because it is a well-specified (ISO standard), easy-to-use (no weird licensing), high-legibility font. Which is interesting because, as mentioned elsewhere on this thread, you don't really need a special font for OCR anymore. So it is a commonly used font that no longer serves the original purpose of the design. If you trained your OCR only on OCR-B you would likely get some amount of accuracy improvement, but that would work for any high-legibility font.

So the problem would be convincing app designers to read both things, when one of those things is much more reliable. I guess that might make sense for things that require high security, where the false negatives would be worth the bother.

Getting back to the original article, OCR-B only has the good OCR characteristics for the upper case Latin characters, like with the QR code size thing, and for the same reason. If you are identifying something you generally want to use the larger, easier to read (both for people and machines), upper case characters. The lower case glyphs were added later to OCR-B as an afterthought.

[1] https://en.wikipedia.org/wiki/OCR-B


I'm not sure I see the distinction. If the human-readable part is machine readable, there's no need for a separate machine-only-readable region. I'm on iOS, and selecting text from images is taken for granted by this point. I'm not sure what the SOTA on Android is for that, though.


"I would love to see menus printed in OCR-A" just takes a regular human readable paper menu, and increases the success rate if you were to painstakingly scan each page to make a PDF with selectable text. That doesn't solve an actual need when sitting with the paper menu.

Taking a QR code that takes you to a URL, and writing out the URL in text does not enable anything, as you still must have an internet-connected device to retrieve the content, in which case you would most likely be able to scan the QR code anyway.

The point is to have e.g. a QR code, while still having human-readable content for those who don't have, or don't wish to use, a smartphone in this interaction. E.g., just having the list of things on the menu printed in plain text (e.g., on a wall, over the counter, a paper menu, ...), but also having a QR code with images and the ability to order directly to your table - stuff beyond what text would get you.


> Taking a QR code that takes you to a URL, and writing out the URL in text does not enable anything, as you still must have an internet-connected device to retrieve the content, in which case you would most likely be able to scan the QR code anyway

I often see things that I want to "look up later" while I'm passing by, but they only have a QR code. I don't know if it is worth the time to stop and scan, but if there's a URL I can just read it and remember it.


To keep an actual URL for later you'd have to write it down or take a picture of it (which also works for the QR code, either by following it later or keeping the tab open). URLs aren't human-readable or memorizable, nor are they meant to be.

"Easy to remember URLs" are just domains that mostly match the human-friendly name of the place. As you are not memorizing a full URL, you can only really go to the frontpage, and so you can just memorize the name of the place instead.

It's a different story if there's just a random QR code with no clear purpose or ownership, but... maybe don't scan that.


I always find it strange here when I share an experience and people want to tell me why what I experienced was technically impossible. URLs don't have to be human readable, but they often are, and the front page is probably fine indeed, but to get there I need to know the name of the thing or some search term, when more and more often--and this is the point I'm making--the only thing I'm given is a QR code.


You misinterpret - I am not saying that what you experienced was technically impossible (you saw a URL and later recreated at least part of a valid URL), but that your interpretation of what happened is neither practical nor possible in the general case, nor is it necessary. Let's try an example:

You see a poster. "Samsung Galaxy, for real this time. Tune in on the 1st of April to watch the ceremony live where we subjugate the last planet in the Milky Way."

Scenario 1: The poster has a QR code, and a URL: https://events.samsung.com/press-room/world-domination. You memorize the hostname (events, samsung, com), and half a day later you manage to pull up a page for events, select the intended one, and get to the live feed.

Scenario 2: The poster has a QR code, no URL. You memorize "samsung galaxy", or "samsung event", or even just "samsung", and half a day later you type this into your browser's address bar, which gives you the live feed you were looking for as the first result, or at the very least Samsung's event page.

"memorizable URLs" is not human-readable information, but computer-readable information constructed with certain rules to mimic human-readable information - e.g., the company name mangled to fit URL syntax. The original, unmangled information is easier to remember.


Scenario 3: the poster is a cool-looking image with people doing something that interests me. There is a QR code and no other information.

Scenario 4: the poster says something is happening, like "Neighbourhood dinner, Sunday at 7 -- Want to help cooking? Scan this code" -- There is a QR code but no information about who is organising it, where it is, or how else to contact someone about helping to cook.

Yes, a name serves just as well as a URL; the point is that people begin to believe that a QR code is more convenient than text. Give me a link, give me a name, give me a search term, just give me something more than a QR code.


Then we agree.


Yes a URL is much handier than a QR code because I can read and remember it.


... no.

If you need to add human readable information (which is your scenario), a URL is never the right answer. Write a name or a sentence. Computer-readable information is for computers to read.


Just because something is OCRable doesn't make it structured data that can be used immediately. A table at a restaurant might have a QR code that takes me to a menu with the table number already encoded and pre-entered into the order page ready to go. An OCRable table number does not give me that, and an OCRable URL like https://fragmede.restaurant/menu?table=42 might work for HNers, but most humans won't recognise and understand their table number when going up to the bar to order.


"Fragmede.menu" costs $35 a year, which is roundoff error cost for a restaurant, and is a short-enough domain for a customer who wants to view a menu and order. No need for the "https://" which is implied. Adding a "?table=42" could be optional but isn't necessary, as the website in addition to simply presenting the menu could provide a means to order and if so have a little html input box when ordering to put their table number or whether it is pickup.


Sure it can be done, but there's no denying that a simple scan of a QR code instead of manually typing a URL would make life easier, as would some kind of alternative encoding technology that is more pleasing to the eye.


"if the human readable part is machine readable, there's no need for a separate machine only readable region"

So why are physical venues in 2025 presenting me with QR codes? If there is some randomish number (like another commenter pointed out, QRs may have a UUID) or a checksum, then encode those randomish bits as a dotted rectangular outline or a line underneath, so a big QR doesn't ruin my human experience.


I have no idea which restaurants you frequent, but take it up with them, not me. Having a url written out, http://restaurantname.com/menu, on a sign that you can take a picture of and click on is functionally the same as having a qr code that links to the exact same url.


Having a short url like restaurantname.menu is not simply functionally the same as a qr code. A QR code is almost meaningless for me to look at... all it says is that there is a code hidden within these dots, and I maybe can infer that the code contains a url to a menu (and maybe contains random tracking stuff). But restaurantname.menu in text communicates to me that these letters are almost certainly a url for a menu for restaurantname, and it is something that I can verbally say to the other people at my table and remember in my head.


So they can change the menu online without having to re-print it.


They can do that with a URL.


Scanning a QR code is far faster than typing a URL, and you need some sort of a computer to access the URL anyway, so providing a human-readable URL doesn't achieve anything.


A relatively short URL like restaurantname.menu is something a human can say and remember, so it does achieve something more useful than a QR code. I might even be able to type or speak a short URL into my phone quicker than I could find the QR code reader feature, point the camera, and hold my phone still.


That's fine but it's moving the goalposts from the specified "reason for using a QR code" that I was responding to.


If they get you to use that QR code, they will get you to visit a URL and maybe show you ads, or sell your information to trackers. I mean with your tracking cookie, you visited a physical location, that's gotta be worth something.


In theory, if every QR in the building was different and they had the right sensors, they could also try to pair your browser fingerprint to your Bluetooth MAC, Wi-Fi MAC, and your face. That pairing would have value on its own.


How is that different than a text url?


A simple short human-readable text url that a human can say and remember is not going to have a bunch of ".php?q=trackingcode&..." nonsense (though of course other trackers like cookies may still exist).

I don't know why the parent is getting downvoted... they bring up a very good point that hidden inside these QR codes are a bunch of tracking bits, and the QR's ability to include tracking stuff may sadly be a significant reason why we see QRs way too much.


Both the human-readable and the QR-encoded URL will be shortened and use a redirect to the URL with the tracking info.


This is not the point.

Scanning a multi-page menu with a camera does not make it machine readable. And frankly, the menu is not in the QR code; it is a link to the menu. What sounds appropriate here is to also write the content of the QR code in plain letters alongside it, so one can type it, for those who do not have a camera or whatever.

Now, the problem of requiring a mobile device to see the menu is a problem on its own, and while this is facilitated by having a QR code versus having customers manually copy links on their phone (either typing them or with OCR), these are two separate issues for discussion. Moreover, OCR-A is not needed for any of that anywhere in the 2020s we live in.

The problems with QR codes are that sometimes people have older smartphones or cameras that do not recognise them, or, frankly, that they are a means of obfuscating something from humans. I have been in many situations with queues of people struggling for indeterminate reasons to scan a QR code to go fill something in. If there were a simple link in plain text, people could even have shared it with each other. Blindly trusting technology that can easily fail, that is unnecessary for what one does, and that has no redundancies is unwarranted, IMO.


You missed mine. If there is the text http://restaurant.com/menu.pdf I can scan that just as easily as I can a QR code, thanks to advancements in OCR technology.
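
For what it's worth, here's a rough sketch of that flow with off-the-shelf tools (pytesseract and Pillow here purely as an example; the image filename and the URL regex are mine, not anything standard):

    import re
    import pytesseract           # requires the Tesseract OCR binary to be installed
    from PIL import Image

    # OCR a photo of the sign or menu (filename is hypothetical)
    text = pytesseract.image_to_string(Image.open("sign_photo.jpg"))

    # Pull out anything that looks like a URL from the recognized text
    urls = re.findall(r"https?://\S+|\bwww\.\S+", text)
    print(urls)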


It's 2025; regular English is machine readable.


There is "machine readable", and there is "consistently machine readable in a limited time, under non-ideal lighting conditions, with part of the code obscured, using only cheap cameras and processors". Barcodes didn't just stop having a purpose in 2025.


Another common hang-up is that a legible font needs to unambiguously distinguish between lowercase eL (l), capital eye (I), the number one (1), and the pipe symbol (|), or at least, for instance, only deal with capital letters and numbers.


This! Upvote for ST SSH remote development, currently using ST for local dev and VSCode for remote.


IMO remote mounts are a feature of the OS.

For Linux and macOS, you can mount over SSH directly (e.g. with sshfs; see the sketch at the end of this comment).

Unfortunately, Windows makes it a little more complicated.

But there's hope. You can use yasfw with dokany (a Dokan fork).

https://github.com/DDoSolitary/yasfw

https://github.com/dokan-dev/dokany

Or mount from inside WSL.
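
For the Linux/macOS case, a minimal sketch of what mounting over SSH can look like, assuming sshfs is installed (on macOS that also means macFUSE). It's wrapped in Python only to keep the example self-contained; the host and paths are made up:

    import subprocess
    from pathlib import Path

    mountpoint = Path.home() / "mnt" / "devbox"
    mountpoint.mkdir(parents=True, exist_ok=True)

    # Mount the remote home directory over SSH; 'reconnect' helps after network blips
    subprocess.run(
        ["sshfs", "-o", "reconnect", "user@devbox:/home/user", str(mountpoint)],
        check=True,
    )

    # ...edit files under ~/mnt/devbox with any local editor...

    # Unmount when done (Linux; on macOS use 'umount' instead of fusermount)
    subprocess.run(["fusermount", "-u", str(mountpoint)], check=True)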


In principle I agree; in practice I haven't found an OS-based filesystem mount that works as reliably as VS Code. In particular, I mean the connection is relatively robust, reconnects automatically most of the time after an outage, and editing is totally asynchronous, i.e. there's no noticeable pause after saving before continuing to edit and no lag (other than what's induced by Electron) when editing.


Can you also provide a simple RSS feed?


Hey! Great suggestion :) I will implement this feature and let you know


Isn't Wi-Fi Sensing built into the next-gen Wi-Fi specs? It doesn't even need to interact with your own phone, so no luck turning it off.


Suddenly I remember this movie from the 90s where people drugged themselves with some kind of minidisc. “Strange Days”, maybe? Anyhow, I always found the plot weird, but maybe they actually were onto something…


The discs had, in the movie, the memories of another person, and you would experience that memory and those sensations as if you were living it. So, e.g., someone would record themselves doing something risky and you would get the adrenaline rush from watching it.

So... Maybe in some way one could argue that social media gives some sort of connection where you get some feelings from what others are doing/showing. I mean, technologically it's quite a leap, but in a conceptual way... it's still a bit of a leap, but maybe not that big.


>> some sort of connection where you get some feelings from what others are doing/showing. I mean, technologically it's quite a leap, but in a conceptual way... it's still a bit of a leap but maybe not that big.

Play that VR game set within the shark cage. The adrenaline rush is definitely not much of a leap from the real thing.


Sounds like Brain Dances (BDs) from Cyberpunk 2077.


Yes, which originally came from Cyberpunk, the first sourcebook for which was released in 1988, with Cyberpunk 2020 releasing in 1990 complete with the idea of pre-recorded, replayable memories/full sensory experiences, i.e., Braindance.

Strange Days was released in 1995.

Maximum Mike was, and is, a prophet right alongside Gibson.

edit: Although almost certainly this wasn't the first place people imagined being able to record and playback memories.


Made me think of Total Recall, which was adapted from "We Can Remember It For You Wholesale," which looks like it's from 1966.


Wiki tells me there was a Cyberpunk 2013 released in 1988. Feels like a millennial cult that keeps missing its big day...

Cyberpunk 2013 - join us! Jack in choom

Cyberpunk 2020 - oops sorry, had to reschedule

Cyberpunk 2077 - crazy story, anyway we've got a new date

Cyberpunk ???? - this time, we promise!


Simstim from Neuromancer (released in 1984) is the first mention of such a thing that I know of.


Brainstorm (1983) did it before Neuromancer. The movie is about a device that records and replays sensory and emotional experiences, and a central plot point is that it records the dying moments of a character.


I thought the central point was the porn played on a loop. Maybe I was distracted and missed the real plot. Also maybe mixed up by the fact that one of the principal actors died in real life while the movie was being made.


The porn thing showed that the device could be harmful to the viewer. This adds another dimension of risk to the later scenes where the Walker character is experiencing the death tape.

The actor was Natalie Wood, and the event is shrouded in mystery about how she died. However, the character who dies in the movie is played by Louise Fletcher.


The central point was, like Lawnmower Man, that the military/government were going to misuse the tech for evil purposes.

The porn and the vicarious near-death-experience were just plot points.


The military stuff is a MacGuffin-type subplot. The real plot is the main character's obsession with seeing Lillian's vision of the afterlife.

The author of the screenplay, Bruce Joel Rubin, is a self-described spiritual teacher, and "transitional journeys" is kind of his thing. His three most well-known films (Brainstorm, Ghost, and Jacob's Ladder) are all about characters experiencing the afterlife in some way.


This is exactly the parasocial way my girlfriend's niece and friends experience life. No relationships of their own, it is all celebrities and their lives, ingested on their phones. I don't have the heart to tell them that 95% of it is stuff created by PR firms.


Playing devil's advocate for a minute... isn't that similar to what our parents said in the '80s/'90s about our generation? All that "TV and phone" brain rot.


Yes. And what the previous generation said about rock music.

Celebrities and “socialites” have been idolised for years - Paris Hilton certainly isn’t the doing of this generation, neither is Jackie Kennedy.

If you think that what we’re doing with mobile apps and social media is new, take a look at the 20th century a little harder.


1. People were clearly wrong about music. Audio only is clearly not as addictive as video + audio.

2. People did say that about TV, and TV maybe had the potential to be like this. However, TV failed in many ways to be a hyper-addictive device. Some of the many reasons:

i. Just less content. There wasn’t that much TV content at all. YT probably adds more content in an hour than all the TV content ever created.

ii. You couldn’t choose what you wanted to watch beyond a few dozen channels at best. So there were always times when you were forced to do something different.

iii. The TV wasn’t available to you at all times. You had to go to the den to watch it and you couldn’t take it to school with you.

iv. TV couldn’t specifically target you individually with content to keep you watching. The most amount of targeting TV could do was at maybe a county level.

v. You couldn’t be part of the TV. Social media and phones today make you an integral part of the “show” where a kid can end up having a video of them pooping their pants on a playground shown to millions of people. Even in a more ordinary sense, a kid commenting on a video or sending a message to a friend makes them part of the device in a way TV never could outside of extraordinary situations.


> 1. People were clearly wrong about music. Audio only is clearly not as addictive as video + audio.

Or they weren't and addiction wasn't the crux of their position; and I say that as someone who loves a lot of rock derivatives.

The influence pop icons with broken lives had on teen generations was horribly deleterious (and I'm not even talking about hippies), mainly because malleable and improperly taught minds rarely see that an artist's respectability is completely separate from his output.

The ancients had the concept of muses for a reason.


TV certainly could target its audiences. Television shows would share their viewer demographics with advertisers: age groups, income levels, race and other social indicators, related interests.

The shows had target markets often driven by the need to reach certain demographics, though actual viewer demographics sometimes were surprisingly way off the mark.


They could not do this at the individual level, nor did they have ways of reaching people to persuade them to watch (notifications from mobile apps, emails about posts).


The key is limits. In the past, even if celebrities were idolized, we had a limited amount of information compared to now. The fluid variable is the increase in information, which makes the situation different.


You might need to recall just how crazy it was: e.g., literal shrines to boy bands were just normal. To cover every inch of your bedroom walls and ceiling with photos of a celebrity crush was not unheard of. At school, every conversation could be about these obsessions. Folders/files would be covered with pledges of devotion.

No comment on how it is today, but looking back it was terrifyingly nuts - full-on religious fervour to the point of mental disorder. When bands broke up or people married/died, there would be full-on breakdowns and sympathy suicides.

The lack of information might have helped exacerbate the religious mystery and make more space for imagination, fantasy and faith.


> take a look at the 20th century a little harder

Effectively unlimited content is huge, though. IMHO that pretty much overshadows everything. There were only so many records, magazines, and other content you could consume before the internet.


And they were right. But we would watch TV usually together and only for around 4-5 hours a day. Do you know how much screen time people are having now? 8 to 10 hours is not uncommon. And alone.


And our kids will warn their kids about how the ‘direct to brain’ type interface they will use is rotting their brains. Each generation will have been a little correct along the way; the harm at each step was just always gentle enough to not scald the frogs too quickly.


I do think TV was, and is, harmful. I do not have one for that reason and I think it was good for my kids (as well as myself).

I also think social media is a lot worse.


No, they hung out with each other in person too.


> Maybe in some way one could argue that social media gives some sort of connection where you get some feelings from what others are doing/showing. I mean, technologically it's quite a leap

That technology exists; it's called empathy, and the extremely powerful form of it innate to humans is arguably our singularly defining characteristic. It's our tech moat, so to speak.


Or the Star Trek: The Next Generation episode The Game [0]. Every time I watch that I get this eerie sensation that we're essentially giving our free will up to the masters of the games and social platforms we're addicted to.

[0] https://en.wikipedia.org/wiki/The_Game_%28Star_Trek:_The_Nex...


Wow! The game in this episode has been living in my head since I was a child and I could never find where it was from!

I need to watch this episode again


Darn, I forgot that episode. That's a very eerie parallel to some of what we have today.


"If you just let the game happen, it almost plays itself." The quote from the episode certainly makes me think about the "idle games" genre that has emerged in that last several years.


Offhand the only drug-like thing I remember from that series is the nutrition bars that had 0 calories that most of the school got addicted to. Or maybe the cheerleader that got bee pheromones and started controlling the rest of the students.

Aside - I just learned a month ago that there's an official followup miniseries that brought back several of the original actors, titled "Echoes", with hopefully more coming since it's called Season 1. Came out over 2022-2023: https://www.youtube.com/playlist?list=PLHGrvCp5nsDJ1qSoKZEmm... (the trailers are at the bottom of the playlist)


Dangit, I tried to delete this when I realized it's completely unrelated (just a similar name), and was seconds late. Got the delete link, then it denied me.


Straight to the dungeon for you.


Best OT I've seen in a while : )


The minidiscs in that case were full-sensory VR recordings of people’s experiences.


In Serial Experiments Lain, they have a drug that makes your brain think really fast.


> Suddenly I remember this movie from the 90s where people drugged themself with some kind of minidisc.

With Ralph Fiennes. I think that, although strange, it's actually an underrated movie.


That movie was awesome. I remember the first time I saw the trailer in an actual movie theater. It was mindblowing.

“Have you ever jacked-in, wire-tripped..”

“Santa Claus of the subconscious”

https://youtu.be/8RoOs-S_JVI


Brainstorm (1983) had the tape version of that.


Found this old Wayne Ratliff interview in a 1984 PC Mag issue[0].

    PC: What is this “big picture”? 

    RATLIFF: I have to be a little careful about what I answer. It's probably safe to say Artificial Intelligence.

    PC: How would you define Artificial Intelligence?

    RATLIFF: One way to define Artificial Intelligence is "making computers easier to use." However, we don't just want to make them five percent easier to use, we want to make them dramatically easier to use. We are looking for a breakthrough. Eventually, what we want Artificial Intelligence to do is to take over mechanical duties, to free people for non-mechanical things. I want to see computers in my lifetime — preferably in my hand — performing chores in a human, nonrigid, easy-to-use way. I'd like to be able to tell the computer, "Go and total all the checks I wrote in the last 10 years for medical expenses." That's a nonrigid request.

    PC: Do you foresee that dBASE II will be a nucleus for an artificially intelligent system?
...

[0]: https://archive.org/details/PC-Mag-1984-02-07/page/n135/mode...


I don't know how I fell down this rabbit hole tonight, but here is another interview[0], I think circa 1985:

    Susan Lammers: Have you ever explored the field of artificial intelligence?

    Wayne Ratliff: I was really involved in AI at the start of this business. A little over a year ago, I turned to AI, because I thought that was the future. But I've grown away from it.

    AI has a future, but it's not very immediate. First of all, there's the problem of natural language. If you have a natural-language system, you buy it and bring it home and put it on your computer. Then you have to go through a weeks-long, maybe months-long process to teach it what your particular words mean. The same word has different meanings in different contexts. Even what would appear to be a straightforward word, like "profit," can have a variety of meanings. It needs to be very explicitly defined, based on which business you're in and how your books are set up, and that sort of thing. So this long process necessary for training the machine kills AI, as far as it being a turnkey product.

    But the other side, which is very interesting, is expert systems. My prediction is that within the next two or three years, expert systems will no longer be associated with artificial intelligence. That's been the history of AI: when something starts to become fairly well known, it splits off. Pattern recognition used to be considered AI, but now it is a separate field. That's the immediate destiny of expert systems. I think expert systems are going to be very important in our industry, analogous to vertical applications. 

[0]: http://www.foxprohistory.org/interview_wayne_ratliff.htm

I think it's from this book: https://www.goodreads.com/book/show/2092682.Programmers_at_W...


Also, there were a ton of very powerful third-party libraries!


Search: