Flintstoning your way around hard technical challenges (grid7.com)
75 points by scrollinondubs on June 9, 2020 | 29 comments



I have seen this pattern multiple times: (1) tell the client that a completely underdefined problem can be solved automatically, (2) tell a developer to implement it, (3) the developer spends months trying to automate it, (4) days before the deadline the code does something that would barely count as acceptable, (5) an intern is hired last-minute to solve the problem manually, (6) "we will do it by hand until we have fixed the code".

The client will never know about (5). The intern will eventually do it manually full-time. The code is, of course, never fixed.

Months later, the developer finally has enough understanding of the problem to find out that it has been a research topic for decades, with no satisfying results so far.


Some years ago, a guy started what later became a successful startup selling tickets to events. Obviously the software does all the magic now, but that wasn't the case in the beginning, so the founder was making physical trips to ticket offices and purchasing tickets so he could then send them to the app users.


This is actually directly advocated in The Lean Startup book, with the example of a shopping app where the employees would go pick up the food themselves before building the actual shopping app. This sounds great when the product works in the end, but I always wondered whether it's really what we should be doing if we find out later that something can't actually be done or scaled well.


In a situation where automation is 'just around the corner, plus a $1B budget', it's probably better not to do it, but if the solution is known and can be realistically created, then why not? I use this at work quite often when people ask for automations: send a couple of emails each day first to see how it works, and if it does, I'll then make an API for it.


There's that xkcd...

https://xkcd.com/1425/

They're going to have to update the comic, since the GPS tech tree that enabled the first one can now be combined with the AI tech tree to make the second one simple.


Yep. This has already happened. iNaturalist has an auto-identification function that can identify pretty accurately down to the genus level most of the time, and sometimes to the species level.


The simplest solution to the problem presented in xkcd 1425 is to assume every photo is a photo of a bird, and just let billions of users be mildly annoyed / extremely frustrated by that assumption.

Then we can write thousands of blog posts titled like "Falsehoods Programmers Believe About Photos".


So you're saying "everything is a bird" + https://xkcd.com/386/ = training data :)


I'm not a software developer by any stretch of the imagination, but from an outsider's perspective:

That does appear to be the AI / Machine Learning approach ;)


We have a website where customers of our company can upload data sets for various domain-specific modelling software, and we can do various integrity and quality checks on the data for them.

If it gains traction we'll maybe automate parts of it some day, but for now it's lovingly referred to as AaaS, or Arnold-as-a-service.


Or as it's called in the industry, "AI/ML".


A friend told me a story of a job he got with a consulting company working for a phone company. Their job was to move all of the data from the old system to the new system. They had a multiyear contract where they charged millions to move this data.

When he got there, two terminals were set up - one for the old system and the other for the new. His job was to read the data from one screen and type it by hand into the other. He was one of hundreds of low-paid contractors hired for their typing skills.

After a few days of that drudgery, he said screw this, hit ^C on the app, and ended up at a shell. He spent the next two weeks figuring out how to copy the data file from the old system to the new one, and then used a handful of shell scripts to successfully convert the majority of the data.

He showed his boss what he'd done, and was swiftly fired. They didn't want to risk losing their lucrative multiyear contract if anybody found out. They'd rather type it all in by hand.


This is referred to as the "Wizard of Oz" method (or "experimenter-in-the-loop"). It's very common in HCI research, as well as in product prototyping in various industries.

https://en.wikipedia.org/wiki/Wizard_of_Oz_experiment

A variation of the technique is also common in CS user studies, where the novel tool under study works, but is too computationally intensive (i.e. slow) to actually use in the study. In this variation, the tool's results are precomputed, and the tool's interface is mocked up so that it just retrieves precomputed results (or it delegates to a human researcher playing the "wizard").
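
To make the mocked-up interface concrete, here's a minimal sketch in Python of that "precomputed results, wizard as fallback" pattern; the precomputed_results.json file and its format are assumptions for illustration, not any particular study's setup.

    import json

    # Results computed offline by the real (slow) tool, keyed by input query.
    with open("precomputed_results.json") as f:
        PRECOMPUTED = json.load(f)

    def get_result(query: str) -> str:
        """Return a precomputed answer if we have one, else ask the hidden wizard."""
        if query in PRECOMPUTED:
            return PRECOMPUTED[query]  # appears instantaneous to the participant
        # Fallback: a researcher watching this console types the answer live.
        print(f"[wizard] participant asked: {query!r}")
        return input("[wizard] response to display: ")

    if __name__ == "__main__":
        while True:
            print("tool> " + get_result(input("participant> ")))

The participant only ever sees the "tool>" side, so the interface feels fully automated whether the answer came from the lookup table or from the person behind the curtain.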


There was some research at IBM, which I have tried to find a reference for, that tested the question: "What if we wanted to build a better word processor, after having completely solved speech recognition?" They devised an experiment where the subject, given the task of, say, creating a document, would speak, and "the system" would respond by entering text and doing the usual things that word processors do. "The system" was a CRT and a human confederate behind a curtain who would type stuff and otherwise respond to the subject's commands.


That's the Wizard of Oz approach in research.

https://en.wikipedia.org/wiki/Wizard_of_Oz_experiment


There's another aspect to this. Sometimes that human isn't 'in' the system but can be the user of it.

I recently made a script to document the code paths from any GraphQL or REST endpoint to code lines taking a database lock. It was a hack with false positives. I 'fixed' it by making it an interactive app instead of a script.
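
Roughly, the interactive version looks like this sketch in Python: the analysis still finds every candidate path, but a person decides which ones are real. The findings.json file and its fields are made up for illustration.

    import json

    # Candidate endpoint -> lock-site paths produced by the (noisy) analysis step.
    with open("findings.json") as f:
        candidates = json.load(f)  # e.g. [{"endpoint": ..., "lock_site": ..., "path": [...]}]

    confirmed = []
    for c in candidates:
        print(f"\n{c['endpoint']} -> {c['lock_site']}")
        for frame in c["path"]:
            print("    " + frame)
        # The human is the false-positive filter.
        if input("Real lock path? [y/N] ").strip().lower() == "y":
            confirmed.append(c)

    with open("confirmed_paths.json", "w") as f:
        json.dump(confirmed, f, indent=2)
    print(f"Kept {len(confirmed)} of {len(candidates)} candidates.")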


My favourite story along these lines comes from after the invention and widespread adoption of the telephone.

"At this rate everyone will have to be a telephone operator." Thus rotary (then DTMF) self-dialling was born.


“Wizard of Oz prototyping” is a better name for this idea.


I am always worried about solutions like this -- what if one day the human makes a mistake and deletes the newest photos instead of the oldest ones? What if the VA I have chosen misinterprets my instructions and does nothing?

With programming, as long as I take reasonable precautions, I will know that either the task will get done, or I'll get notifications that something is wrong. With humans? Not so much.

(Let's just not forget the "reasonable precautions" part -- there is a surprising number of people who apply the sloppiest programming to the most dangerous actions. People who think that "let's use a date parser which auto-detects the date format" and "any error code means we can delete the document" are good ideas.)


How hard can a script to delete documents based on timestamp possibly be?
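
Not very, in an environment that can run scripts at all. Here's a rough sketch in Python with the "reasonable precautions" mentioned upthread (dry run by default, print what would happen instead of deleting silently); the directory, age limit, and file pattern are assumptions.

    import sys
    import time
    from pathlib import Path

    PHOTO_DIR = Path("uploads/photos")    # hypothetical location
    MAX_AGE_DAYS = 90
    DRY_RUN = "--delete" not in sys.argv  # harmless dry run unless explicitly asked

    now = time.time()
    for photo in sorted(PHOTO_DIR.glob("*.jpg")):
        age_days = (now - photo.stat().st_mtime) / 86400
        if age_days > MAX_AGE_DAYS:
            if DRY_RUN:
                print(f"would delete {photo} (age {age_days:.0f} days)")
            else:
                photo.unlink()
                print(f"deleted {photo}")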


The saying "can't see the forest for the trees" comes to mind here.


@jim The issue is that it's in Adalo, which is a no-code platform that currently can't execute scripts and has no concept of a cron-like process that can run periodically and do garbage collection. It's also not possible to expose Adalo collections externally and run this logic via an external process, so he's left with the prospect of a kludgy hack imitating wp-cron in his app to delete old photos periodically. It's just easier to have a VA do this once per month instead.


Regardless, the website may change, so the script may also require upkeep.


I have always heard this referred to as "mechanical turking the problem".


Yeah right, mechanical turk it, that's easy enough to say, then you have to find yourself a dwarf.


Hand me the pliers.


Yeah, I bet you're all set for that.


There's an XKCD for this: https://xkcd.com/1319/

Also, anecdotally, I literally automate to prevent RSI[0]. There's only so much the human body can manually do with a computer.

[0] https://en.wikipedia.org/wiki/Repetitive_strain_injury


classic xkcd, one of my faves. esp the alt-text.



