Ask HN: What's the hardest problem you've ever solved?
464 points by jamestimmins 28 days ago | 422 comments
Could be engineering, interpersonal, strategic, etc. This is purposefully open-ended.



When I was a CS prof, many, many years ago, our undergraduate lab had Macs with floppy disks. I asked the University to pay for installing 10 MB hard drives in the Macs. I was asked to present my case to the dean's council. At the meeting, I said that the students used the floppy to load their development environment. I said that, with a hard drive, it took 10 seconds to load and be ready. With the floppy, I said it took 30 seconds. Then I said, "That does not sound like much difference, but this is 10 seconds ...." I paused and looked at my watch for 10 seconds. Then I said "And this is 30 seconds" - again I looked at my watch. At the 20 second mark, the VP Academic (chair of the meeting) said "You have your money".


I've used this trick many times when people underestimate how long a few minutes can be in certain contexts. It works astonishingly well. Happy to see I'm not the only one!


There's an anecdote [1] about Steve Jobs and the original Mac's boot time similar to this story.

One of the things that bothered Steve Jobs the most was the time that it took to boot when the Mac was first powered on. It could take a couple of minutes, or even more, to test memory, initialize the operating system, and load the Finder. One afternoon, Steve came up with an original way to motivate us to make it faster.

[...]

"Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and thats 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"

[1] https://www.folklore.org/StoryView.py?story=Saving_Lives.txt


I guess that trick would be harder to reproduce for my problem - bringing down compile times from 4 hours to 1 :)


In a 4-hour stretch, you can do something (or many things) else. In 30 seconds you can't; those 30 seconds are pure waste.


Yes, I procrastinate.


This is the best thing I have ever read on HN


I was going to say the same thing. We really need an HN hall of fame for many of these types of comments. Pure gold. Made my Sunday.


Here's somebody's list:

https://danluu.com/hn-comments/


I've made htts://www.reddit.com/r/HNDepthHub to share interesting comments, feel free to add some


Your link is missing the p in https. And I agree, hnbestof sounds like a better name for the sub. Would love to see it become more popular.


It's a reference to /r/DepthHub - great Reddit comments


that comment would be a better fit for hnbestof


Are people keeping (private) lists for these? Would be great if they felt like sharing!


Some people keep "favorites", which are public.


I have to steal that idea next time someone talks about time delays and says "it's not a big deal!"

Awesome!


"This is 50ms. This is 200ms." doesn't quite work as well. Depending on the situation, there has to be some threshold of diminishing returns.


If the request you are talking about actually gets executed once (maybe it gets cached or something), then you shouldn't be pitching the time difference anyway.

If it gets executed 100s of times in a day per person, you can say this is 50 ms * 100 = 5s, 200 ms * 100 = 20s. And that's just per user.


Yeah, actually doing this math tends to surprise people. We had an automated background process that took ~2s to run, which doesn't sound bad at all considering it includes an API call over the internet. But multiplied out by the number of backlog items we actually had at the time, it came to 30-40 hours, which doesn't sound so good.


You can demonstrate that scale of difference using audio/video sync.


Seinfeld describes the difference between first place and second place: https://youtu.be/xK9rbwM3omA?t=65


Fair enough. Unless you're talking about lags for automated trading systems/algos where even that single "I really can't measure this" sec difference counts.


I mean, sure, the precision matters for HFT, but at that scale the point would be moot since the time is so minuscule. Unless you hyperscale it: “on 1,000,000 trades the 50ms difference becomes very pronounced and could cost us $z” or something of the sort. But I still think it loses “the spirit” of the method — best way I can phrase that.


You could demonstrate it with something like quickly flipping through pages of whatever kind to find something, where each page change is delayed by that much.

If it's about computation, you could make a bunch of objects load one after another.


Interesting. By coincidence last night (while playing with a new streaming-stick) I did a little quick calculation. If I watched, on average, only 18 minutes of advertising per day, and did that for 40 years, that adds up to 6 months of my life.
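
The arithmetic, roughly (assuming 365 days a year):

    18 min/day × 365 days × 40 years = 262,800 minutes
    262,800 minutes ÷ 60 ÷ 24 ≈ 182.5 days ≈ 6 months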

What else might I have done with that 6 months? I will never know. But I know this: I won't be doing that any more.


So simple yet so perfectly executed. Curious, though: did you have anything else planned in case that didn't make your case? :)


Can't recall - too long ago!


> When I was a CS prof, many, many years ago,

Curious what you do now.


Left university and joined a startup, switched jobs to a different company, that company was acquired by Apple, worked there for about 12 or so years, now retired, but work on an App.


Working on anything with tens of seconds or, heaven forbid, minutes of delay is infuriating, yet unfortunately sometimes unavoidable.


Not all heroes wear capes :)


Absolutely brilliant!


I somehow decided I needed to cheat to pass a certain exam because I was basically crap at memorizing stuff. So I used an analogue wireless headphone, an induction loop around my neck and a mobile phone. Since I lacked an accomplice to dictate, I read aloud the hundreds of pages and recorded myself, careful to preserve and properly serialize things like complex formulas.

This was before the era of iPods and SDCard players, so I had my mobile phone in a setup where I would call back another phone connected to my Pentium MMX 233MHz at home, that ran a sort of audio directory that would playback a certain lecture recording I would select from the menu, using DTMF tones.

I had a small keyboard sewn into my sleeve that connected to the customized mobile phone via a DB9 plug and then to the numeric keyboard, allowing DTMF codes to be sent by gently and invisibly moving my fingers. The whole setup was hideous and had its own dedicated jacket, with wires, phone, keyboard, audio amplifier, neckloop, the earphone... a complete cyborg for academic fraud.

Back on the PC side, I wrote a C++ application from scratch that would capture audio via the soundcard using Windows Wave API, decode the DTMF pulses using a couple of IIR filters then navigate the menu and playback the required file to the mobile phone connected to the soundcard. The C++ program and menu system was scripted using an .INI file that defined the structure with links to various ADPCM-compressed .wav files that represented the menu headings or the leaf content itself (a good structure was necessary to quickly access the correct lecture after receiving the exam subjects).
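
For anyone curious about the decoding part: I no longer have the original source, but the core idea is the Goertzel algorithm, which is essentially a second-order IIR resonator tuned to each DTMF frequency. A rough modern sketch (not the original code; the names and the 8 kHz sample rate are just illustrative):

    #include <cmath>
    #include <vector>

    // Squared magnitude of one target frequency in a block of samples
    // (Goertzel algorithm: a second-order IIR resonator).
    double goertzelPower(const std::vector<double>& block,
                         double targetHz, double sampleRate) {
        const double kPi = 3.14159265358979323846;
        const double coeff = 2.0 * std::cos(2.0 * kPi * targetHz / sampleRate);
        double s1 = 0.0, s2 = 0.0;
        for (double x : block) {
            double s = x + coeff * s1 - s2;
            s2 = s1;
            s1 = s;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }

    // Picks the strongest row and column tone and maps the pair to a key.
    char detectDtmf(const std::vector<double>& block, double sampleRate = 8000.0) {
        static const double rows[4] = {697, 770, 852, 941};
        static const double cols[4] = {1209, 1336, 1477, 1633};
        static const char keys[4][4] = {{'1','2','3','A'},
                                        {'4','5','6','B'},
                                        {'7','8','9','C'},
                                        {'*','0','#','D'}};
        int bestRow = 0, bestCol = 0;
        double rowMax = 0.0, colMax = 0.0;
        for (int i = 0; i < 4; ++i) {
            double r = goertzelPower(block, rows[i], sampleRate);
            if (r > rowMax) { rowMax = r; bestRow = i; }
            double c = goertzelPower(block, cols[i], sampleRate);
            if (c > colMax) { colMax = c; bestCol = i; }
        }
        return keys[bestRow][bestCol];
    }

A real decoder also checks that the winning tones stand well above the noise floor and debounces across blocks, but that's the gist of driving a menu with keypresses over a phone line.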

Work-wise, it was a lot more difficult than putting the effort into memorizing the stuff, but I rejected memorisation on principle; that's not what a university should be about. The whole thing turned out to be a massive learning project, but I obviously could not speak about it at interviews. This is the first time I've mentioned it to anybody.

I used the setup for two exams that I aced; I was never caught, but it was nerve-wracking to use in close proximity to a teacher.


One thing universities should teach is efficient problem solving. If it was more work to cheat than to memorize, I'd say you might want to re-evaluate whether "not memorizing things" is a principle you should cling so dogmatically to.


I'm sure he learned much more useful stuff by coding up that monstrosity, and in the real world he can just google to look up those formulas he didn't memorize.

The fully elaborated version of the underlying principle is probably something like: "I have much better ways to spend my time than on memorizing formulas that I could easily look up later".


If an individual's philosophy is that you can always google stuff so you don't need to know anything, I don't see why that person should waste time in school.


There's more to learning than memorization.


There's more to exams than memorization too, and cheating because you don't like memorization is childish. If the exams really test only memorization, drop the class.


Some classes are on the critical path to graduation. I wouldn't suggest delaying an entire degree because one or two classes have bad exams.

Cheating to avoid memorization is childish, but let's not forget that students are usually children, or still in the process of maturing into adults.


The extent to which higher education in the US has become an infantilizing scam is already terrible enough.

Failing to learn critical coursework should delay graduation, and cheating should result in expulsion. These are good things.


In a perfect university teaching environment, yes. However, in real life cheating is sometimes necessary. Real university environments are flawed, with rules and requirements often stacked against the student. They have to pay up, take on a massive loan, endure narcissistic or dumb teachers and stupid administrative requirements, and educate themselves despite that environment. After a lot of time and money invested by the state or their parents, if faced with a particularly nasty course exam they are unable to pass in a legal way, it would be crazy to fold and waste all that investment.


University students are legally adults and were considered so until recently. I don't think infantilizing young adults is healthy.


In the US the legal age of adulthood is 18. This is very far from the actual maturity point. It's not infantilizing to be accepting and unsurprised when an 18-22 year old does something childish.

Edit: Being a child, and acting childish is an important part of the life cycle that allows development. We seem to be bumping into some biological limits on how late we can extend it, but if it were possible I would.


When I worked in baseball concessions as a teenager, the employees who were the most responsible and dependable were the university students. The older workers tended to be more experienced, but some of them were a little sketchy and a couple were fired for showing up drunk.

There is a tremendous variation between individuals. Part of the point of the university degree as a credential is to identify those people who are responsible enough to complete the program. I graduated at the age of 21 and immediately became an Engineer in Training, working on code that could kill people if it behaved improperly. I took that responsibility very seriously, and I'm thankful that my employer and coworkers took me seriously in return.

I fear that a focus on age tends to lead to prejudice. If you just want to ensure young adults can be allowed to make mistakes without ruining their lives, I would encourage you to expand your goal. There are many adults who could benefit from a bit more compassion and forgiveness. We don't need to restrict it to the young.


I strongly remember a mandatory English (literature) exam in high school where you had to write an essay in response to a question that was known beforehand, in relation to one of three or so books you had studied beforehand (you didn't know which one on the day). The way this test was prepared for was to get every student to write the full essay for all three books and then memorize them. Most other English tests I had to take in high school were similar to this, but not quite as bad. I think you have too high expectations of education systems if you believe all tests assess more than memorization or that people can simply "drop the class".


If a particular exam has more to it than memorization, then even open book you shouldn't be able to ace it without knowing the material!


I dislike open book exams so much that I took nothing into my last one. It was a unit without much memorisation that focused much more heavily on applying concepts, and I feel that having it as open book was a trap for ill-prepared students.


One of my high school teachers mentioned that a book wouldn't help much before her first open-book test, then openly stated afterwards that it was a small trick to see who wouldn't study. She was among the best and most-liked teachers in the school.


I'm not defending what they did, but it's worth pointing out that electives aren't exactly universal. At my university I was allowed to choose exactly 4 classes out of 6, in my last semester. Every single other course was compulsory. I never cheated, and looking back on it I'm glad I didn't, but I certainly did my share of useless learning.

Edit: that being said, in a system where there is such a thing as non-compulsory courses, and the critical path to graduation is sane, cheating on an exam on the critical path should definitely not be acceptable, even if it's "only" memorization. Some things really do need to be memorized.


Not every academic system allows you to freely drop classes, to drop a class without an economic penalty, or to drop a class without falling behind in your studies.


There is not more to exams than memorisation; that's their only function. They don't teach, they just examine.


If only it was that easy to just not take the class.


Knowing and memorizing are not the same thing. I went to school to learn, not to memorize.


Yes, although a lot of things can be learned much better by yourself rather than by going to school, I think.


I had a programmable calculator during A Levels (UK sixth form college). I was taking Computer Science, Maths, Further Maths, and a couple of others. These were not AS Levels, but full A Levels.

I also had the seemingly good idea that it would be nice to program the calculator: initially just writing algorithms like bubble sort for CS that could be referred to, then extending to Statistics, showing working for various statistical methods (the exams required showing the working steps; I learnt a little about data structures myself for that), then extending to matrix algebra and curvature. I also didn't go to most classes, only attending the ones that would be more useful than time in the library.

Typing character-by-character on a numerical keypad, getting a series of working systems. Was pretty fun.

And in the end, having working 'cheating systems'... I could do it faster in my head and never even took the calculator to the exam.

That's learning, I suppose. Did very well on the exams.


I also wrote programs to run long equations on my calculator. This wasn't forbidden at the time because it was too early, I suppose. Regardless, you had to show your work to get credit anyway so they just functioned as a way to check my work instantly.

I had a lot of fun writing little programs to help out on long arduous problems though. I wish I had all my TI programs.


Unless the exam allows notes, it can make sense to forbid using programs that are already stored and to require writing the program during the exam time itself if you want to do that (even programming it to show the working, if you want to add that in too; I think someone once did this). Then it can be OK to allow it, I suppose.


I wrote a raster graphics editor for my Casio programmable graphing calculator. I also wrote it exclusively on the calculator. Did it to one-up my classmate, who had done one previously; mine had better features (no dot in the center of circles, it could also do ellipses, and it had a spray function), and it was half the LOC.

Casio BASIC had subroutines in IIRC 36 registers ([A-Z0-9]), conditional jumps, and labels.

I wish I had that source code. I can't see how I would ever do something like it again.


It's not work if it's fun, which I'm sure this was.


I would 100% talk about this in interviews, depending on the interviewer. I can think of a couple interviews I had in SF where they would probably be appreciative of this level of effort, because you reject memorization on principle.


Admitting to significant academic fraud during a job interview is a very, very bad idea. Furthermore, I would question the ethics and morality of any company that said "Wow, you solved an incredibly tough problem in order to commit fraud. Boy, are we the place for you!"


> Admitting to significant ... fraud during a job interview is a very, very bad idea.

After serving his time, that's exactly how Frank Abagnale got a job in the FBI.

https://en.wikipedia.org/wiki/Frank_Abagnale


Well, you need criminals to outsmart criminals.

I'm not sure why you need that trait in software. You're (usually) not trying to beat other programmers. So I suppose the parent comment should have added "except in adversarial situations".

Which means it does apply to cyber security. You need hackers to outsmart hackers. If that's the kind of industry you're applying for, it probably is a good idea to mention this actually.


Many EU startups won't be interested in your academic records but such a project would make them hire you on the spot.


That sounds interesting. Are EU startups known for less interest in uni credentials than US startups?


I'm not sure, but it became a meme in the EU because other fields typically require a good academic background, whereas IT is seen as something done by punks.


He’s the exception that proved the rule. The FBI never repeated that hiring process on anyone else.


That's different. According to that page, Abagnale served 5 years in prison (6 months in solitary confinement), and had to work for the feds for a while "without pay". Then when he first got legitimate work, he was usually fired.

I'd be willing to hire someone who cheated and was caught and served time, but not someone who cheated without consequence and boasted about it.

The commenter here doesn't sound like they segued that into a career catching academic cheaters. They just used it as a stepping stone to their next personal accomplishment.


> After serving his time

So there was nothing new to admit to, they already knew. Entirely different situation.


I find your point of view very narrow minded. I even admitted cheating in my bachelor's during my Master's thesis presentation.

It was in Switzerland, and culture is different; maybe we're less black and white.


If you cheated on an exam, it's a very bad negative signal for integrity; there's no question about it.

This is not very grey.

If you were caught you would probably be kicked out of your uni; this is a serious thing.

I understand there are extenuating circumstances, people are young, they do crazy things, we all have. But it's not something we run around talking about, certainly not in an interview.


> Admitting to significant academic fraud during a job interview is a very, very bad idea.

Sure, it's a no go if you're looking for a job at an academic institution, and a big company will reject you because fraud of any kind is a huge HR liability, but I bet those people would be impressed.

It might be well received at a startup, though. As long as you recognize your action was highly unethical.


Like, I don't agree with you, but I do see your point. I'd hope that a company would ask "and how do you feel about your choice to cheat?" and open up a conversation about ethics and whether you learnt from the experience. It's a can of worms and could result in a great interview.


They wouldn’t actually say that of course. They would just think it.


Hey, Kevin Mitnick still gets hired today, doesn't he?


As a public speaker and author mostly. He doesn't get access to state secrets and hasn't been employed by any other company. He wouldn't get a call back from McDonalds if he applied.


He's hired as a pen tester and gains access to companies, their secrets, etc. I have a sysadmin friend whose company hired him to pentest, and they kept in contact.


I would allow it (if the kind of skills needed to design a whole system like this are the skills that are useful for this job). You are honest to admit it, and if you can build such a thing, that is good. Of course it was cheating, but that does not mean such a thing is only good for fraud; you might do something similar for other purposes too.


Once you admit to fraud, though, they have no idea what else you might be lying about -- including the admission itself. They can't exactly call his old professor to confirm that he built and used a cheating machine.

Plus, after you admit to doing it once, if I weren't the sort to reject you outright, I'd still call every single person and institution on your resume to confirm all of those were real. Right out of the gate, you're proving yourself to be a lot more trouble than anyone else I'm interviewing.

It sounds like mostly luck that they weren't caught before. Are they going to try again, at my company? I don't want my company's legal footing to rest on one person thinking they'll never get caught.


Of course, that is true (although it is also possible to lie without admitting it!). I am only saying what I might do (depending on the kind of job).


Would be a fantastic answer to the YC interview question about how you've hacked a system, though.


Eh, probably some places (e.g. Uber) would love that kind of thing.


I wouldn't talk about it, because it's a project that happened 10+ years ago. It's better to talk about more recent successes.


It adds a human component to your interview. As the saying goes, programmers are lazy. Based on the components you describe, I think it's safe to say, that you're not an entry-level hire and you have experience in your field. It'll make for a great story and demonstrates passion.


Amazing as this tale is: for anyone else who is sure that memorization is too hard, don't do this. The real tip is:

Spaced repetition. Load up Anki or Mnemosyne and stick the info in there.

"Craig prepared for Jeopardy! by studying the online archive of past questions maintained on the J! Archive website. Using data-mining and text-clustering, he identified the topics most likely to occur in game questions,[9] then used the spaced repetition program Anki for memorization and tested himself using his own program.[10][11][12][13][14]

Craig played quiz bowl as a student at both Virginia Tech and the University of Delaware.[15] Before his Jeopardy appearances, he played numerous Jeopardy scrimmage matches against his friends with quiz bowl experience.[14]"

https://en.wikipedia.org/wiki/Roger_Craig_(Jeopardy!_contest...
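
For the curious, the scheduling behind tools like Anki and Mnemosyne descends from the classic SM-2 algorithm. The real schedulers have diverged in the details, so treat this as a rough sketch of the idea rather than what Anki does today:

    #include <algorithm>

    // One review of a single card under classic SM-2.
    // quality: 0 (total blackout) .. 5 (perfect recall).
    struct Card {
        double easiness = 2.5; // ease factor
        int repetitions = 0;   // consecutive successful reviews
        int intervalDays = 0;  // days until the next review
    };

    void review(Card& c, int quality) {
        if (quality >= 3) {
            if (c.repetitions == 0)      c.intervalDays = 1;
            else if (c.repetitions == 1) c.intervalDays = 6;
            else c.intervalDays = static_cast<int>(c.intervalDays * c.easiness + 0.5);
            ++c.repetitions;
        } else {
            // Failed recall: restart the repetition sequence.
            c.repetitions = 0;
            c.intervalDays = 1;
        }
        c.easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02);
        c.easiness = std::max(1.3, c.easiness);
    }

The point is that cards you keep answering correctly get pushed exponentially further into the future, so the daily review load stays manageable.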


I was always tempted to modify a calculator to add Bluetooth; then I could stream over whatever data I wanted from a phone somewhere without any suspicion. Graphing calculators make this trivial as they often have a serial port. I just never got around to building it. I guess I didn't want to cheat that badly.


That is unbelievable! How did you index the audio? What did you have to do to access them during a test, and how long did it take? Did you find that you had accidentally memorized some of the lectures after having read them aloud?

> I rejected memorisation on principle, that's not what an university should be about.

The lengths you went to to avoid doing it the easy way are incredible. You're not alone, but I have to say I think this idea - that university should not involve memorization - is a common misconception. If you think about it, a good education requires memorization. What we're learning in all fields, in large amounts - even in math - is some degree of human history and some degree of human notation and convention, neither of which can be derived by logic. Think about language: becoming a fluent speaker, especially the first time, is almost entirely learning by rote.


You rejected ethical behaviour 'on principle'; you even seem proud of that. Have you kept doing that in life? Do you feel like an impostor, presumably with your degree that you didn't think you could acquire without cheating, or couldn't be bothered to?

A surprising number of the replies to your comment also seem to think cheating like that at university is perfectly fine.


My humble opinion: university is about learning (not, for example, status). Learning is something strictly personal. There is nothing fundamentally, universally wrong with cheating at an exam besides the fact that you may be harming yourself.

Now, since we need a little order for various secondary reasons, we made exam cheating illegal, and that's OK. We need order. But there is nothing someone should feel guilt over, IMHO, assuming the person knows she may be harming herself.


I agree with this. I would further suggest that the modern conflation of two orthogonal functions (teaching and certification) in universities is rather misguided. These would be better served by two separate types of institution.


> These would be better served by two separate types of institution.

For a long time, they were. A technical school certifies someone to do something. A university teaches you how to think about hard things (supposed to, anyway).


I wasn't talking about law, I was talking about ethics.

"There is nothing fundamentally universally wrong to cheat at an exam beside the fact you may be harming yourself."

I don't understand that, I have no idea where you got that. Or what those words "fundamentally" and "universally" add. I say it's wrong, you say "Oh, but it's not fundamentally, universally wrong".. As if it's clear what that means.

For example: you may have harmed the people who didn't get good enough marks because you cheated your way into higher marks. Then you may harm people in a career that you're not qualified for, besides stopping properly qualified people from doing their jobs. I don't want an airline pilot or doctor who bought their degree or cheated in exams, thanks. Anyway, it seems ridiculous that I have to explain to people why cheating's bad. Well, I don't know, maybe you are in a country where it's normal, perfectly fine, accepted, everyone does it. Where I come from, people don't have to have it explained to them why it's bad.


Yes, cheating is bad. But if you are interviewing someone for a job, you need to see if they are qualified for the job; some people can be good even if they have not gone to university or other schools, and passing the exam does not mean you are better at that particular job than another candidate, only that you knew the answers to the questions (or successfully cheated without being caught) and are good at examsmanship.

If you are good at mathematics, you will invent a new theorem! If you are good at music, you will compose new music! If you are good at chess, you can win! If you are good at exam answering, you can earn some more marks (guessing at answers if you do not know them)!


I guess you are right. It's ethically wrong but not morally wrong.


> I guess you are right. It's ethically wrong but not morally wrong.

Huh? I don't see a significant difference between 'ethically wrong' and 'morally wrong', no idea why you would say that.


'Cheating at an exam' is transgressing a rule that doesn't "exist" in nature; it only exists in our social (if that's the right word) system. So if I cheat at an exam, I'm breaking a rule that's in a system that we made up (together), and therefore I can judge that, in some circumstances, I can break the rule without feeling morally wrong, without feeling guilt, because I, in some sense, made the rule myself. I'm breaking a rule of my own making.

Another example of that could be: I want my kid to go to bed at 9pm. Sometimes I will break that rule. To some extent, for the reasons I've advanced, I claim that cheating at an exam has the same character as the "kid goes to bed at 9pm" rule, just not the same magnitude, if you will.

I then guessed that this may draw the line between what's ethical and what's moral.


Hi again. I don't see how that draws any limit/distinction - it was two examples of rules that can be broken; I'm not sure how that helps explain the difference, or why you said that. I wouldn't say bedtime is a moral rule/principle, or that breaking it is unethical. Maybe you could make it clearer which one was supposed to illustrate what - if one was meant to be ethical and one moral, or something, I don't know. I really have never heard the words used with much or any difference. (I'm no expert, but I have read dozens of ethics books, studied ethics/moral philosophy at uni, etc.)

I'm just guessing here, but maybe you have a religious value system, with absolute moral commandments or something? All I have (as an atheist) are ethical/moral principles exactly like 'cheating is wrong'.


You say 'cheating is wrong'. Fair enough, you can see 'right or wrong' as binary. Or you can live in the real world and understand that things are a little more complicated than that. With all the information you have out here (more than your hypothesis) you can make a fairer judgement. And you don't do judgement without introducing the living anyway because only what's living can judge and be judge-able. It's a social thing to judge right or wrong. So you have to take into account all the system. The living.

Now I'm not going to do the math for you.


Yes. Learning is for yourself. (University is also supposed to be about learning, although that is only partially true as implemented.) They have exams you can take, but that is different from learning, although if you know the answers then you can see whether you know the subject being learned. But you do not have to learn only that way; you can do many things, such as read a book, figure it out by yourself, or, in this case, attend the lecture. If you have a question because you do not understand the lecture, then you should ask, and that is how you can learn. Also, examsmanship is not the same as learning the other subject, but you can still learn examsmanship too. As I mentioned before, I might accept it if the stuff you did for the cheating is the subject of the class anyway, such that it means you are good at it; then perhaps you can earn "SG" (meaning that you pass regardless of what your mark is).

But still, memorization is not the same as learning. That is one way the test is not always so good; whether or not it is "ethical" and/or legal is independent of that, because if you understand, then you can do, but if you know the words without actually knowing their significance, then you might answer a question just like that one, yet not a different question whose answer you could actually use.

(I remember once on one exam, the last question I did not know, and I cheated off of someone who also didn't know and was cheating off of me (I don't know if they were cheating on other questions too or not, only that it was for the last question), so in this case we cheated off of each other. Of course it is nothing compared to all of the effort they mentioned above, but still you can see, you can be cheating off of each other on the same question. I think this is the only instance of cheating on exams I have done, although once I tried to use the "coughing code" (without telling anyone!!!) just to see if I could, and not because I actually wanted to cheat, because I don't want to cheat.)


It was undoubtedly unethical behavior. But it's like... well it's like the story of Toshihide Iguchi, who accumulated over a billion dollars of trading losses and hid them successfully for 12 years. Sure you throw the guy in prison, but you have to hand it to him that what he managed to do is pretty damn impressive. After his release, finance CCOs might even want to hire him to audit their traders.

You can get off your high horse about not being able to acquire your degree without cheating, too. I never once cheated at any point in college, but if I had spent more time working on sick electronics projects and less time memorizing the years of significant 15th century battles, I'd be more qualified technically, not less. The ethics of it are problematic as I've said, but you seem to be going farther and implying something about his ability as well.


Why isn't it fine, though? What serious damage did his actions cause? Which important societal construct rests solely on the assumption that no students cheat and, once this is violated by a single student, comes crashing down? I would suggest that if such a construct exists, then maybe the problem is its very conception, and this is what should be rethought.

I think the implication that he should feel like an impostor for cheating on two exams is a very large over-reaction on your part.


"Which important societal construct rests solely on the assumption that no students cheat "

Society does not depend on whether or not a few students cheat.

But society definitely depends on the social and moral contract that we do not cheat, and that it is wrong. If most of us cheated, the system would definitely fall apart.

Have you ever visited a developing nation? Where outside of small villages it seems like everyone is cheating at everything, all the time?

It's like pouring sand into the gears of an engine: everything starts to break down.

I would not hire someone who admitted cheating during an interview unless they talked at length about their remorse, how they learned from it, how they grew from the experience, and there were exceptional circumstances.


> It's like pouring sand into the gears of an engine: everything starts to break down.

While true in general, cheating in that environment is also sometimes necessary to get on with your life unharmed. And sometimes cheating the system is actually regarded highly by fellow citizens, because the people do not believe in the imposed system. Cheating is not beneficial to the system, but it often is to the people. Now, what is more ethical, doing what benefits the people or doing what benefits such a system?


"And sometimes cheating the system is actually regarded highly by the fellow citizens, because the people do not believe in the imposed system. "

If you live in a totally corrupt system, maybe.

But functional societies are based on the notion that cheating is bad, immoral, and nobody should do it.

"Now, what is more ethical, doing what benefits the people or doing what benefits such system?" - ha ha ... this sounds like how psychopaths in prison justify their crimes!

"I robbed the bank because the bank is evil, now who's more important, the people or the evil bank?"

No way around this one: cheating is bad.


People who rob banks are not necessarily psychopaths. For a good (fictitious but convincing) example, see Toby Howard in the film https://en.wikipedia.org/wiki/Hell_or_High_Water_(film) Robbing a bank is universally illegal, but it is not regarded as unethical in general.


"but it is not regarded as unethical in general."

Yes it is.

Maybe you live in an interesting developing nation?

Robbing banks, stores, anyone - is considered basically immoral in all of civilization, thankfully.

Yes - not all bank robbers are psychopaths - probably not the majority.

But prisons are full of people bending reality to justify their crimes.


Robbing average Joe is universally considered wrong. Robbing bankers, insurance companies or other powerful subjects, I don't think so. It depends on the circumstances - for a good example, see the movie, that is, if you like such a mental challenge.


Robbing banks, bankers, insurance companies and other 'powerful people or companies' is wrong, basically universally.

There is no need to see any film about this.


>Robbing a bank is universally illegal, but it is not regarded as unethical in general

Erm, no, it is regarded as unethical by almost everyone except bank robbers.


Have you heard of the Robin Hood legend? It says the man was robbing the rich, and was very popular with the populace. I find it hard to believe that his robbing activity was regarded as 'unethical' by 'almost everyone'. Or a recent example, the sci-hub and libgen projects. They are robbing publisher shareholders of their profits. Do you think those projects are regarded as unethical by 'almost everyone'?


Ethics is not something that gets decided by the Dean and laid down in a code of academic ethics. The University is just a part of society and it has its own perverse and quite unethical practices; for example, the tendency to grab one's money and time and not provide any substantial education in return. Furthermore, the university might be perfectly aware that it provides crap services and that fraud is rampant, yet enforce its own rules selectively just to the point where fraud is hidden away and not damaging to its reputation.

So while I agree with you in the abstract, I believe it's hard for people in the western system to relate and make moral judgements about the experiences of a student in post-communist Eastern Europe, where the rules were often gray. Fellow students were cheating on a grand scale, and up to that point, I had the same principled attitude towards cheating as yourself. Part of the reason for creating "the system" was my revolt against what I saw around me - the tacit acceptance that those with good cheating skills, which I lacked and had not developed early in my academic career, should be allowed to cheat their way to a degree, while I was supposed to memorize garbage.

Sure, my "rebellion" was unethical and compounded the social problem around me. But what could I realistically do, could I have changed anything? At least I saved me from myself, because otherwise I would have dropped-out. In the grand scheme of things, impeccable ethics is often a luxury and could have the exact opposite effect, letting only those with no ethics whatsoever to graduate.


If you sign a code of ethics swearing you won't cheat and you do anyway, that's unethical. Even without an actual signature, the social contract is implicit. The fact that the university exploits athletes and postdocs is unrelated. You can break out the "I did what I had to do to survive" argument but then you're just conceding the point that you abandoned ethics out of necessity or simple mercenary desire for a better position.

I don't think it's a huge deal and wouldn't personally let it negatively affect a hiring decision provided you made the right noises about how it was dumb in retrospect and you were young, etc, but your comment here is pretty much the opposite of that. Who could possibly employ someone with your attitude? Even a gas station wouldn't want to hire a cashier who steals money from the register when they can get away with it just because the petroleum industry is destroying the environment, or whatever.


As I anticipated, it's hard to relate to somebody living in a profoundly corrupt society. My gripe with the university was not that it was exploiting postdocs or killing the environment; rather, that it failed to set up and enforce a system of rules where my extant academic ethics mattered.

The social contract includes not just what's written or implied, it's also what people do in practice, what is acceptable. If cheating is acceptable and required to get a tech job, then I can do that, in fact I can do it better than most. That says very little about my profound sense of ethics.

To counter your analogy: what if you are indicted for a crime you did not commit in North Korea? You surely accepted their rules when entering the country, but would you trust their judicial system to do the right thing? Would you bribe your way out if you were given the chance? Would you ask your country to put diplomatic pressure on your behalf, a clearly unethical advantage no Korean has? Would you consider escaping from prison if wrongfully convicted?

In a narrow definition of ethics as "whatever the current rules are" (typical, I would say, for somebody living their whole life in a state with strong rule of law), the only ethical behavior is to subject yourself to whatever abuse NK decides for you.


That's throwing the baby out with the bathwater. It's true that law and ethics aren't always aligned, especially in authoritarian regimes. But basic things like integrity are essential for any human society.

Also, I'm not convinced that growing up in the former Soviet bloc necessarily means that honesty isn't the best policy. If everyone around you is corrupt, being the one dependable person around is a differentiator. Maybe I'm being naive, but it seems like that reputation could have some real world value to you.

By the way, it doesn't matter what your classmates are doing. Comparing myself to the average was horribly destructive to my college education. It turns out, everyone around me putting in average amounts of studying and getting average grades ended up getting average jobs with average pay! The one or two people in class that always aced everything, about whom I thought "well I don't have to be as good as them" - those were the ones who ended up having a real shot at grad school or dat $100k Facebook new-grad signing bonus. It's just like when you move from high school to college and suddenly you're not the smartest person in the universe anymore. Your cheating classmates are bozos - stomp them with harder work and keep moving up. Don't sink to their level.


> If you sign a code of ethics swearing you won't cheat and you do anyway, that's unethical

If you do not sign such a document, does it make a difference? Of course not. Getting a massive advantage by extraordinary cheating is still unacceptable. However, cheating a little when the stakes are very high, or cheating for the 'greater good', may be acceptable. Cheating is a personal decision, and whether it is acceptable depends on the situation the cheating person is in, and on the person doing the judgement.


Thanks for explaining further, that's a finer reply than I deserved. Well, I didn't know where that was, or the wider story. What do I know, I've never been in that situation. Good luck, I hope to read your book one day. :-)


[flagged]


Wow. How that comment hasn't been flagged yet I have no idea. People who don't agree with you are 'nuts', yet you have a nuanced view of ethics? That's the most condescending, deluded thing I've heard in years. "For them.." blah blah.. you don't know us; in my case at least, your assumptions couldn't be more wrong, but what do you know or care; you've decided who the in- and out- groups are here.


Personal attacks will get you banned here, regardless of how bad another comment is. Please review https://news.ycombinator.com/newsguidelines.html and don't post like this again.


Oh.. I just noticed this. I'm not sure why you are complaining about my comment but not the one I was replying to! At least I think you were talking to me. I'm not sure what in what I said was personal attack. At least, it seems you were implying something I said was a personal attack. I strongly criticized his view, words, assumptions, decisions. I'm kind of amazed to be threatened with banning about that. [reviews guidelines] Ah maybe "please reply to the argument instead of calling names" - I guess you mean "That's the most condescending, deluded thing I've heard in years." That seems to me a fact. From memory, I don't think there was an argument, just a string of false assumptions presented as fact. (Can't read his comment now) Or maybe it's a "shallow dismissal"? Not sure. What particularly about my comment deserved being threatened with banning?

I have noticed that the worse the comment, the harder it is to reply to decently/politely/etc; maybe it was impossible for me to reply to this one, or say what I thought without arguably breaking the guidelines, so OK, point taken, better not to reply in such cases. "Someone is wrong on the internet!" is real. I really did try to stay on-point, not hard enough apparently. I mean, I really do think it was the most condescending, deluded thing I'd read in years, definitely while on here. (I guess you've read much worse!) Again, not sure if that's the part you're objecting to. Apologies if you've read 10,000 replies just like this one. Kind of surprised (and depressed) I'm the problem here though.

ps I only saw your comment by chance, it's pages back in my comment history. Is there a way of getting alerts for replies on here?


I don't think I saw the parent comment; it's about as bad as yours was, though yours might have been a notch worse, since that one was attacking in the third person while you did it in the second.

The main thing to realize is that even if another comment is bad/uncivil, you still need to refrain from replying in kind. Believe me, I understand how hard that is! But our community depends on it.

Sometimes I think that HN threads are a big experiment in all of us learning how not to get triggered. If you give better than you "get" (by reading someone's comment), you'll be on the right track.


Ok thanks.


If you'd told me this in an interview, I'd have hired you on the spot. The biggest issue with cheating, IMO, is that it can indicate laziness, but in your case it definitely doesn't - the story is great and says you're a highly creative person who gets things done and doesn't have to follow norms.


How did the teacher not notice the headphones and question you about it during the exams?


The headphone was a very small analogue receiver that had its own micro-battery, an inductor, and an amplifier - all about 4mm wide and 8mm long. You would place it deep inside the ear canal and it was invisible even to a person standing right next to you. The signal was induced by a coil you would wear around your neck and pipe raw audio into.

Similar systems are even now on sale: https://www.olx.ro/oferta/sistem-de-copiat-casca-invizibila-...


Personally, for these kinds of things I used to use one earphone in my left hand, wearing a long sleeve shirt and laying my head over my left hand, covering the whole operation :)


A scriptwriter could make a movie out of that!


To work as a (commercial) movie, the stakes would have to be phenomenally high. I can't imagine the scene without the viewers saying "he did all that ... just to pass a test he objected to?".


That's creative AF.

IMO, if you can get around a system, that's as good as going through it. It usually takes a certain kind of ingenuity to be able to do, so I can't be mad at it.


Damn, man. You're hired.


In the workplace, no amount of technical ability can compensate for a lack of ethics.


It's interesting to me that everyone here seems hellbent on ethics yet at the same time all of the big companies that we work for are some of the most unethical beasts in modern society.


Cheating on an exam does not always mean one has 'lack of ethics'. It may mean the opposite - say, "my parents aren't paying out of their veins for me to waste time on this nonsense."


So drop out of college then. No-one is forcing you to take the exam.


I am not talking about myself. Young people are strongly pushed to pass the exam by a multitude of strong incentives, including societal pressure and prospects for their financial well-being.


I'm probably going to regret asking but how long did this take you? It is quite incredible.


A few weeks in total, but it was a labor of love so it was quite fun. The hard part was reading all the lectures out loud; it took about a week, during which my family, probably hearing me from my room, was impressed by my newfound and never since matched learning zeal.


This is one of the most interesting comments I've read in a while, thanks for sharing!


Crafty. Well done.

A similar (kind of) trick is depicted in a 1965 Russian comedy movie: https://www.youtube.com/watch?v=KDn174Mf9PY


If it is an exam about computer programming, then I might say that is very good, and I would not disqualify you for cheating given the way you managed it (although the answers must still be given correctly, like in any exam).


That's quite an impressive setup. I used a programmable calculator with text storage capability for my chemistry exam. I was correct that I would not need chemistry knowledge at any point in my life.


I also cheated on my chemistry exam, but as your life isn't over yet, I wouldn't be so sure. As I'm getting older it's getting more and more important for me to understand the biochemical processes that are going on (or going wrong) in my body.


>I somehow decided I needed to cheat to pass a certain exam because I was basically crap at memorizing stuff.

...

> but I rejected memorisation on principle


If only the prof had said "Good news, everyone! I’ve decided to make this an open-book test."


This is incredible, but seems more time consuming than studying. Cheers to you!


It might be a good test of a company to see if they were impressed and amused enough by this story to hire you because of it, rather than rejecting because of it.


This was more of an introspection, but I used to be rather depressed. When I started reflecting on life and all I could contribute or succeed at (this was actually years ago, not specifically prompted by this question), I realized that if someone 1000x more intelligent / capable / knowledgeable came along, everything I could add or solve from an intellectual perspective would be useless.

After meditating / contemplating this for quite some time (we are talking on the order of months), I realized that even if someone knows 1000x more, they still may not know your niche. More than that, everyone has niche expertise from their lives which no one else can access (from their experiences).

I don't know how I got to that conclusion, or how to explain it. It was the hardest problem I solved, because it's something that had to do with a change in perspective and personality. It also is what helps me listen to others in debates and to be much more open minded. I wouldn't say it's that big of a deal, as I felt it was part of growing as a person, however many around me don't seem to recognize that. When I hear "they are naturally gifted" or "I'm not smart enough", I feel those are people who haven't had the same realization (i.e. experience).

Anyway, that was probably the most difficult and profound problem I solved for myself. What motivates me.


> When I hear "they are naturally gifted" or "I'm not smart enough", I feel those are people who haven't had the same realization (i.e. experience).

They may simply not agree with the conclusion, or the end. It's pretty hard to have this discussion without crashing into the nature-nurture debate, but one could be interested in a particular result, not merely _any_ result, and they may perceive a lack of something blocking their way towards that.

Not to mention, we do not really live in a world / society that recognizes this, and social approval + hireability dramatically affects one's psyche, more so than mere rational thinking.


I had a similar realization, but more focused on learning. "What's the difference between an expert and me?" Like a tennis pro, or a recognized programmer. Besides muscle memory: "Information." Even emotions could be seen as a state of certain information in your head. Even your ego. "We are not that different. I can learn anything."

This doesn't account for efficiency and optimization. It was just that, given enough time and resources, we are capable of "doing something".

It's hard to learn that hard work will beat aptitude for a skill, and that we actually have limited time in our lives, though. But as you mention, it makes a difference in people's minds. The mental blocks many people have against learning new things really hinder how they interact with this rapidly changing modern world and with their social groups. Also, there are these kinds of realizations that humble you (or lift your spirits) and help you be less egocentric, more empathetic and forgiving.

I think bringing down these walls is a really hard problem.


Awesome. I just finished reading Man's Search for Meaning and it has a very similar message.


I once told a roommate in college I would repair their laptop's charging port if they were willing to pay more per month for 50 megabit internet (this was a lot of bandwidth back then).

I ordered a new port online, waited for it to come, then spent like 12 hours trying to get the factory solder out of the original port. I ended up accidentally frying part of the power board. By this time it was the middle of the night, I had class the next day, and my roommate expected a working laptop in the morning.

So I started trawling Craigslist for similar laptop models and emailed every single seller saying I would buy the laptop at 6am (enough time before class). I found one with a broken screen and got a decent deal; I think I still spent $150 or something (almost a whole paycheck at the time). I brought it home and tore it open, took the entire power connector module out of it and put it into the other laptop, and thankfully it fit.

I handed my roommate her working laptop and never mentioned the ordeal... It was still working 2 years later when I moved out of that house. The problem wasn't so much technical as a problem of desperation and saving face :)


A geek trying to save face is the best worker. Well done.


A geek trying to get more bandwidth is the best worker.


But a geek trying to save face and get more bandwidth at the same time is the very best worker.


Fixing an SRAM 11-speed GX rear shifter on my mountain bike. I was on a downhill MTB holiday and fell off, snapping the thumb trigger on my rear gear shifter.

Replacing the shifter is usually like a £30 part, but I was in an alpine resort in summer so the shops that stocked them were charging nearly £100!

I had my tools and some super glue, so I sat down to repair it. The snapped part was fixed in two minutes, but getting it back in place required complete disassembly of the shifter's internal mechanics, which were all tensioned with multiple hidden springs; the internals literally burst apart halfway through carefully taking it apart.

We had no instructions, no YouTube video (since it is usually cheap enough just to swap out), and no diagrams. It took me and a random guy staying in the chalet 3.5 hours to put it back together; we essentially had to spread everything out, think from first principles about how a shifter works and about the specific features of that shifter (rapid fire, multiple downshifts with one lever arc), and build up how those pieces would match how it operates.

Must have been 30 parts, all small and all tensioned with three(?) springs.

6 months later still working fine but man did we get a mental workout that day.

Next day I had to fix my 3 axis gimbal but that was a lot easier.


I empathize! These kinds of shifters are SUPER hard to put back together. Quite literally, so many moving parts in such a tight space. If you inadvertently bend the springs or coils too much, you could end up ruining the whole thing.

I had a similar situation happen to me on a Shimano Deore shifter, which is even less complex than the SRAM you mention.

For others who have never had this "joy", it looks like there's at least one video at least partially showing what the OP had to go through: https://www.youtube.com/watch?v=nrfKQfXJgd0 Jump to ~2:00 to get a sense of how finicky this stuff is.


That video was released a month after I fixed mine! Could have done with that.


That's pretty funny and I'm glad to hear something other than another barely believable "I did this god mode hack" story... The mechanical realm doesn't get enough love. Indexing shifters have become better, but progressively more miniaturized (and ironically flimsier IMO) over the past 30+ years. Good work!


Curious if your hourly rate at work works out at more than £28.5 per hour, as that's what your time would have cost. Saying that I imagine the satisfaction was worth it. (I do a lot of my own bike repairs and a 5 minute youtube video usually works out as closer to an hour or two when I try.)


I see where you are coming from, but since I was already on holiday it would have been pure cost. It's not like I could have spent that time earning money; I would have just come back from France feeling overcharged for something that it turned out I could fix myself.

Very satisfying to do as well.


Are you making YouTube videos of your mountain biking adventures or just videos for fun? I would be keen to see them as I am getting into it now myself.


We had been planning on doing a couples MTB channel, as my wife was looking for something outside of work to make up for a lack of interesting projects at work. In the end she ended up getting a job at Aston Martin, and now I just use the gimbal for fun with our friends when riding.

I recommend BKXC and Seth’s Bike Hacks as good channels that avoid all the Gnar downhill stuff


Those are a couple of my favourite channels! Mountain biking for the everyman. Singletrack Sampler and BCPov are a couple of my recommendations. BCPov does some pretty awesome bike trekking videos too, with camping and all the other stuff involved with multiday rides.


I decided to dismantle my Saint shifter one day. Good fun. That big coiled spring got me.


I started with the glimmer of a hope that perhaps network sync could be made stateless, went down a months-long rabbit-hole of research, and ended up writing a novella-length article about CRDTs: http://archagon.net/blog/2018/03/24/data-laced-with-history/

So many days spent thinking, sketching, trying to swallow a concept that seemed far too big for my jaws—only to suddenly find myself on the other side, with this arcane knowledge fully internalized.

Going from a few wayward thoughts to a working proof-of-concept was the most professionally satisfying thing I've done. It felt like alchemy.


That might be mine as well, https://medium.com/@raphlinus/towards-a-unified-theory-of-op... . It's very much unfinished work though. Certainly wrapping my head around the OT and CRDT literature took huge amounts of brainpower exertion, comparable to what I did for my PhD.

Also, I have your essay in my queue to read more thoroughly, it's high on my list of stuff to study as we possibly rethink the way this stuff works in xi-editor.


Hey Raph, thanks for the vote of confidence! I spent a lot of time looking through your research on OT and CRDTs (plus your Rope Science series) in the course of writing my article, especially as documented and implemented in xi. Really fascinating stuff. I love how many different ways there are to approach this problem.


Thank you for that article!

I found it a few months ago, read it through the end, and shared it with my team. It's the best survey of synchronization techniques that I've come across.


That's awesome! Thank you for the kind words!


Thanks for the article, it’s a classic for me now.


Don't know if this is the hardest, but it is one of the most impactful. During the 2010 recession I had a client that had about 700K in receivables past due, and we were doing about 120K/month in additional work. I started reading their 10-Q and 10-K filings and decided that they were at risk of bankruptcy, as their cash position was not great.

I told my sales guy that we had to collect our money somehow. He was like they are a big public company, there is no way they will go bankrupt. I insisted.

I told him we needed to start working their procurement and AP hard. We bought gifts, took them out to golf, bars, lunch, dinner, and told a ton of sob stories about how we needed the money. We managed to get our AR down to about 120K which we decided was acceptable losses.

They went into bankruptcy and I was contacted by a receivables purchase company who offered to buy our 120K for 40K, so I sold them immediately.

Many of the small companies that we worked with there went out of business as none of us could take such a huge receivables loss.

To this day I try to get my finance team to send gifts to the AP team of our clients.


> To this day I try to get my finance team to send gifts to the AP team of our clients.

1. Many, many big companies require vendors to agree to codes of conduct that explicitly prohibit such gifts.

2. If the customer were to file for bankruptcy protection, the trustee (or debtor-in-possession) likely would characterize recent payments as "avoidable preference" payments and seek to claw them back — with claw-back being the presumption and the vendors having the burden of proving that they're entitled to keep the payments (which is somewhat of a PITA). [0]

3. If the recipient of such a gift happens to be a foreign "official," then the criminal penalties of the (U.S.) Foreign Corrupt Practices Act might become salient. [1]

[0] https://www.nolo.com/legal-encyclopedia/pre-bankruptcy-payme...

[1] https://en.wikipedia.org/wiki/Foreign_Corrupt_Practices_Act


In my experience, the vast majority of companies will not be reporting 'dinner' as a gift, probably not even golf. Maybe 'material gifts' i.e. things that are directly gifted that have value, but even a small basket might not get reported.


When I was 14 or so, my mom installed one of those "telephone locks" that blocked outgoing calls and only allowed incoming calls, she did this to limit my dial-up internet. I wanted to see if I could bypass it without my mom noticing.

I noticed that the case for the lock could be pried open easily because it was just a plastic cover with 2 tabs. I examined the circuit and saw that if I could bridge the cables around the lock via a toggle switch, I could have an open phone line during the day and then toggle it at night when my mom came home and she wouldn't know. And so I did. I felt like Kevin Mitnick. I remember documenting the process and posting it to hackaday.com

Funny thing was that my dad found out but he didn't tell my mom. He would just go "hey can you make the phone work, I need to make a call". I later found out he was on phone call restriction too.


That's pretty good. With a device like that, is there some way to bypass the lock in an emergency? It would be unfortunate to be unable to make outgoing calls if you needed to call 911.


I eventually solved a bug that took about 1.5 years to figure out, since we were not able to reproduce it, and it only happened on a customer's system. Long story short, it ended up being a by-product of sending a (256*N)+1 byte packet through the system, and the fpga asserted a signal 2 clocks later that did not assert in time on those sizes. This resulted in a single buffer leaking, but eventually it built up exponentially. Many days and nights of a logic analyzer and other equipment led to a single accident in the lab where I was able to reproduce it.

It's hard to describe the feeling of reproducing something like that, after haunting me for so long.


I feel you. A bug in our recovery software would result in the live CD being unable to open the Windows kernel file after having located it on disk. No amount of logging we added to the code made sense. In the end, after more than a year, there was a customer in the UK that could reproduce the problem each time on a clean Windows 10 install (so no private info). I mailed him a new SSD to replace his old HDD and paid for him to overnight me his hard disk. Problem solved the very next day.


That's awesome!


I think that's the craziest one I've heard so far. I can't imagine trying to reproduce a hardware bug in an FPGA with a logic analyzer.

How much time did you spend on the bug? Was it something that you ignored for a long time, and then you decided to dive in and spend a couple of weeks on it?


It was about 8 years ago, so my memory is a bit foggy. At one point we had everyone on our entire project working on it, which was about 20 people. We had daily calls with the customer since it was causing frequent outages on their network. I was full time on the bug for a while, and most of that time was spent making meticulous documentation for what happens in every clock cycle in the fpga.

I visited the customer's site about 3 times during that period with a very expensive logic analyzer. By that time I'd added a ton of debug code in the fpga, and the logic analyzer was set to trigger if any of those counters hit. A big breakthrough was we were counting packets at each point in the pipeline, and that debug was showing that at a certain point, the counters were off. This was huge since it was the first clue into what was actually happening. Prior to that we just knew that there was an exponential increase in retransmitted packets until the whole thing came crashing down.

Looking back on it, I'm really happy I ended up writing a large post-mortem with screen shots of the logic analyzer and everything. I can go back and see exactly what caused it, which is really interesting even today.


Wow, that blows my mind. I don’t even know how to ask the right question, but is there some kind of “formal verification” or “type-checking” that could have caught this issue earlier? Or is that just too difficult, and reserved for medical equipment, nuclear reactors, etc.?


It was a problem with a clock domain crossing between IP from another company and ours. There were testbenches/simulations for verification, but they didn't test every packet size. In hindsight, everything can be caught, but it's a matter of how extensive the testing is. For example, passing every packet size through once would not have shown the symptom. Everything looked normal, and all packets succeeded. The key was that the metadata saying whether a buffer was free after transmitting was incorrect and never showed it as free. So to really see the issue you had to send enough packets of that size to totally deplete the buffers. With a standard packet distribution, such as imix, you never hit that size.
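As a rough illustration of why this class of bug hides so well, here is a toy Python model of the failure mode described above (the pool size, the "bad size" check and the traffic mixes are all made up for demonstration, not taken from the real design):

    NUM_BUFFERS = 64

    def run_traffic(sizes):
        free = NUM_BUFFERS
        for size in sizes:
            if free == 0:
                return "buffer pool exhausted -> retransmit storm"
            free -= 1                      # buffer claimed for transmit
            leaks = (size % 256 == 1)      # the "buffer free" metadata misses this size
            if not leaks:
                free += 1                  # buffer correctly returned to the pool
        return f"{free} buffers free"

    # An imix-like mix never hits the bad size, so everything looks healthy:
    print(run_traffic([64, 576, 1500] * 10_000))

    # One pass of every size also looks fine; only a handful of buffers leak:
    print(run_traffic(range(64, 1501)))

    # But sustained traffic of the bad size eventually drains the pool:
    print(run_traffic([257] * 10_000))

Only the last run fails, which is why both per-size sweeps and realistic traffic mixes missed it.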


Thanks a lot for sharing this story! I started reading about clock domain crossing and metastability issues, and for the past few days I've been thinking a lot about circuit designs and FPGAs.

I posted on /r/FPGA on Reddit [1], and I asked about the best development board to start with. I was wondering if you had any thoughts on that? I think I'm going to order a snickerdoodle black [2] and a breakout board, and start learning VHDL or Verilog. I think it would be a lot of fun to eventually design my own soft-core processor and compile a Rust program that runs on it. And after reading about clock domain crossing issues, I think it would also be really interesting to experiment with asynchronous circuits and clock-less CPUs. Thanks for introducing me to this!

[1] https://www.reddit.com/r/FPGA/comments/9yutk8/best_100300_fp...

[2] https://www.crowdsupply.com/krtkl/snickerdoodle


Sorry, I can't help much with the dev board side of things. I mostly work on sw these days, and the fpgas we use at work are some of the largest you can buy (stratix 10). I can ask around at work if that would help.


Also I just found this hardware description language called Clash [1]. It's based on Haskell and compiles to VHDL.

That sounds really awesome to me! I'll learn VHDL as well, but this seems like a nicer high-level language with a really good type system.

[1] https://clash-lang.org


That looks really interesting. I've never used any high-level languages, though, so I can't comment. Most of the fpga developers I know somewhat despise those higher-level languages since you lose a lot of the control you have writing the HDL. They also tend to take up a lot more resources in terms of gates and memory, but things may have gotten better.


Wow, such an arcane bug causing such dramatic problems. My god man.

There should be a blog for this kind of stuff.


I mentioned in a previous comment that I actually have an entire detailed write-up of the bug. If my work lets me release it, I wouldn't mind transferring it to a blog.


I’m surprised and slightly disappointed that my memories don’t bring up a clear answer to this.

The hardest problems I’ve faced were never exactly solved, just moved past or muddled through. Things like loss of close friends and family, acknowledging my own limitations, and accepting the inertia of flawed institutions.

Any problem that eventually found a solution I remember as feeling relatively simple in retrospect.


Agreed on the technical problems- the "hard" problems in my career have been massive degradations/failures with no error messages, which are typically solved with liberal application of strace, gdb/pdb, iproute2/ss and Wireshark. Once you find the smoking gun in the kernel or on the wire then it becomes simple again.

I guess the hardest problems I've ever "solved" (mitigated) were mental (depression/anxiety/impostor syndrome). But those aren't one and done happy endings- they're a bundle of general anxieties that are never fully resolved. You just learn how to manage.


Finding a solvable hard problem is itself very difficult. If the problem is within your comfort zone, you will never feel that it is a hard problem. If it is too far from your comfort zone, you will not be able to solve it.


And if you obtain the knowledge required to solve it, the problem will start to look easy in hindsight.


I was working on a calendaring system. We supported recurring events, and each event could also have one or more pieces of equipment associated with it. Events could also be changed, either the current event or this-and-every-event. Event start/stop times were stored in UTC, but the event was displayed in the user's timezone, and could cross daylight savings time boundaries. This was done with a mix of JavaScript on the front end and Ruby on the backend.

When first implemented, the equipment was reserved for the entire time of the event. A feature request came in to allow for partial reservations of equipment (we want to reserve space A for four hours, but equipment B for only the last hour and equipment C for the first two hours).

Solving it on initial event creation was pretty easy. The UX was difficult. But handling updates, especially across recurring events, in a way that was maintainable and correct was the hardest technical work I've done. I wrote a lot of tests.

The difficulty was compounded by the fact that this was a startup and I had no technical peers to discuss the issue with. The code implementing the work wasn't the cleanest either. I could have reached out to friends, but they wouldn't have had the understanding of the issue.


How did you handle the database storage for recurring events? Did you have just one entry that represents the recurring event as an abstract whole, or a database entry for each instance of the recurring event?

The former seems "better", but you also run into a whole lot of complications:

1) The user can delete specific instances of a recurring event. Eg: delete the event for thanksgiving Thursday but leave it intact for all other Thursdays

2) The user can make instance-specific edits. Eg: edit the event description with custom meeting-notes that's specific to that week's instance

3) The user can invite/dis-invite people for specific instances. Eg: invite Dave only for the event this Thursday, but not for the following weeks'

Creating a database entry for each instance avoids the above complications, but comes with its own drawbacks: if a user creates a recurring event with no end-date, you have to populate a large number of instances, all the way until some arbitrary MAX_DATE.

I was asked this question in an interview once and I recommended the latter option as the lesser evil, and the interviewer was visibly displeased with my recommendation. I'm wondering what the better solution would be.


For our to-do app, Matterlist (https://matterlist.com), we went with the first approach: we represent a recurring task as a separate entity. Here's a quick explanation: https://matterlist.com/recurring.html

Matterlist's recurring task descriptors ("Schedule Items") essentially define infinite sequences of recurring events, which are not represented in the database, but are visible and editable via the same UI, just like regular, non-recurring tasks.

Here's how we deal with the problems you outlined:

1) When a user deletes a specific instance, we create an "exclusion" record for that day, so that the function that generates an endless sequence of recurrences doesn't generate one for that day.

2) When a user edits a specific instance of a recurring task, we also create an "exclusion" record for that day (because otherwise we would have two tasks on that day, the edited instance, now with a database record, and the virtual instance), and we create a database record for the edited instance, so it just becomes a regular task.

3) We don't have shared tasks yet, but that would be handled via the same framework, as outlined in 1 and 2 above. We'd just de-virtualize an instance and add/remove users to it.

Creating a database entry for each instance of a recurring task was unacceptable for us, because we wanted infinite forward visibility, and because when a user edits the parameters of a recurring task (e.g. changes it from daily to weekly), we'd have to delete all the old records and create the new records.
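A minimal Python sketch of that exclusion/override scheme (the field names and in-memory structures below are hypothetical, not Matterlist's actual schema):

    from datetime import date, timedelta

    schedule_items = [
        {"id": 1, "title": "Water the plants", "start": date(2018, 11, 1), "every_days": 1},
    ]
    # Nov 22's instance was deleted; Nov 23's was edited, so it is excluded here
    # and "de-virtualized" into a regular record in `overrides`.
    exclusions = {(1, date(2018, 11, 22)), (1, date(2018, 11, 23))}
    overrides = [
        {"schedule_id": 1, "date": date(2018, 11, 23), "title": "Water + fertilize"},
    ]

    def occurrences(view_start, view_end):
        """Generate the tasks visible in a date window without ever
        materializing future instances in the database."""
        for item in schedule_items:
            d = max(item["start"], view_start)
            while d <= view_end:
                if (item["id"], d) not in exclusions:
                    yield {"date": d, "title": item["title"]}
                d += timedelta(days=item["every_days"])
        for o in overrides:
            if view_start <= o["date"] <= view_end:
                yield {"date": o["date"], "title": o["title"]}

    for task in sorted(occurrences(date(2018, 11, 20), date(2018, 11, 24)),
                       key=lambda t: t["date"]):
        print(task)

Changing the recurrence rule (say, daily to weekly) then only touches the schedule item; no per-instance rows need to be rewritten.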

BTW, we have finally released an Android app, so you can see our recurring tasks in action: https://matterlist.com/#apps


I went with the latter. We limited the total number of events to something like 5 or 10 years to avoid it [edit: the max age problem]. The only real issues that came up from that approach were:

* UX was hard when people want to change individual events that are also part of a recurring event (who wins)

* daylight savings time is an issue when you store datetimes in UTC (you need to shift the stored UTC value in Nov vs in June to keep it the same "8am every Friday" event; see the sketch at the end of this comment)

* saving a change across all of a large number of events created a large, slow transaction that caused slow responses (in some cases triggering heroku timeouts)

I was lucky that we didn't need to implement inviting guests, as that wasn't needed.

There were times when I wished I'd implemented the RFC standard: https://www.ietf.org/rfc/rfc2445.txt but never got to that. (We also evaluated just building on top of Google Calendar, but I was worried about building on top of an API that we weren't paying for, and there was the issue of tying in child equipment events.)
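To make the DST bullet above concrete, here is a small Python sketch (using the standard zoneinfo module, Python 3.9+) showing why a naive "stored UTC + 7 days" repeat drifts by an hour across the transition:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo   # Python 3.9+

    ny = ZoneInfo("America/New_York")

    # "8am every Friday" is a fixed local time, but the UTC value you store
    # shifts by an hour across the DST boundary (Nov 4, 2018).
    for day in (2, 9):                       # the Fridays around the transition
        local = datetime(2018, 11, day, 8, 0, tzinfo=ny)
        print(local, "->", local.astimezone(timezone.utc))

    # 2018-11-02 08:00:00-04:00 -> 2018-11-02 12:00:00+00:00
    # 2018-11-09 08:00:00-05:00 -> 2018-11-09 13:00:00+00:00

So the occurrence generator has to work in the event's local timezone and convert to UTC per instance, rather than adding a fixed number of hours to the previous stored value.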


Some very quick thoughts: have something that represents the event series as a single entity, then a 1...n relation to actual occurrences for the occasions when there has been a change, an added note, or a cancellation. Those occurrence records should only be created when the change/note/cancellation is made.


I would say store a default event together with the pattern it repeats in. Then create exceptions that hold additional info or a cancellation when needed. When they finally add an end date in one way or the other, add the end date to the default.

On load/view time, generate events from the list of defaults and amend with the exceptions as needed. The user should not see the default/exceptions system, just events and the pattern.

I’d say working with exceptions in this way is more efficient than storing many copies of the same event.


I would have one meta event, and if you modify it, you implicitly clone it and modify that clone.


This is a good read about recurring events: https://martinfowler.com/apsupp/recurring.pdf


Outlook/exchange uses the former approach, a pattern with a list of instance-specific exceptions. One reservation system I worked on used the latter approach, with every instance saved to the database. I was tasked with building an outlook plugin to smoothly integrate the two, and it was probably the most difficult piece of software to get working reliably I ever wrote. The prototype took a few days, the fully reliable version compatible with all outlook versions took two years. Outlook’s internals are pure madness.


I have worked with recurring events, and the answer that I've seen is generally both. You have one record which is the "template" and then populate the database with specific instances out to some arbitrary date. Every so often, a cron job runs to populate further and further out.


That's pretty much what I did, but the original record wasn't stored in the DB, just the specific instances. Our use case was such that folks were moving around events periodically (it was a scheduling app), so we rarely had anyone "run off the end" of their repeated events.


If it makes you feel any better, there is a large section of this book dedicated to a team struggling to implement recurring events properly:

https://www.amazon.com/Dreaming-Code-Programmers-Transcenden...

Great read, and shows how difficult it was and is to compete with office.


The hardest thing I ever worked on was a calendaring system for scheduling ads on/off. The UI was great, but I just told them "oh, send me a cron format string for scheduling". It was easy for them to do, but very hard to handle on the back end. Cron format strings aren't fun to reverse engineer. Why did I do that?

File under "things programmers assume about time". It kept doing "weird" things from the user perspective and I ended up writing more tests for that subsystem than anything else I've ever done.
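If I were doing the expansion today, I'd lean on an existing parser rather than reverse engineering the format by hand. A quick sketch with the third-party croniter package (an assumption about available tooling, not what was actually used back then):

    # pip install croniter
    from datetime import datetime
    from croniter import croniter

    schedule = "0 9 * * 1-5"                       # 9:00 on weekdays
    it = croniter(schedule, datetime(2018, 11, 26, 0, 0))
    for _ in range(3):
        print(it.get_next(datetime))
    # 2018-11-26 09:00:00
    # 2018-11-27 09:00:00
    # 2018-11-28 09:00:00

Even then, all the usual "things programmers assume about time" traps (timezones, DST, month lengths) still apply to whatever you do with those timestamps.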


My first big project was working with a back-end developer on a custom implementation of a calendaring system.

It was one massive several-month nightmare. Totally relate...


Once, in the IE6 era, I created a vertically and horizontally centre-aligned login page that also worked in Mozilla.


You, sir, are a God.


Years ago I wrote an emulator for the Intel 8086 processor in C++. It's deceptively difficult because the instruction encoding is complex and the emulation of each instruction has to have very high fidelity. In a sense, software at the CPU instruction level is a chaotic system: each instruction can influence the system state to a critical degree, so if there is a slight deviation from the spec/hardware, a snowball effect of deviations leaves the system in a completely botched state where your emulator won't boot at all. Because the deviation can happen anywhere within an execution path of millions of instructions, it's very hard to debug, too.
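To give a feel for the fetch/decode/execute core, here is a toy Python sketch covering a few one-byte 8086 opcodes (NOP, MOV r16,imm16, INC r16, HLT). It only illustrates the shape of the loop; a real emulator also needs ModR/M decoding, flags, segmentation, interrupts, prefixes and so on:

    REGS = ["ax", "cx", "dx", "bx", "sp", "bp", "si", "di"]   # 8086 register encoding order

    def run(memory):
        regs = {r: 0 for r in REGS}
        ip = 0
        while True:
            op = memory[ip]; ip += 1
            if op == 0x90:                          # NOP
                pass
            elif 0xB8 <= op <= 0xBF:                # MOV r16, imm16 (little-endian immediate)
                regs[REGS[op - 0xB8]] = memory[ip] | (memory[ip + 1] << 8)
                ip += 2
            elif 0x40 <= op <= 0x47:                # INC r16 (flags ignored in this sketch)
                r = REGS[op - 0x40]
                regs[r] = (regs[r] + 1) & 0xFFFF
            elif op == 0xF4:                        # HLT
                return regs
            else:
                raise NotImplementedError(f"opcode {op:#04x}")

    # mov ax, 0x1233 ; inc ax ; hlt
    print(hex(run(bytes([0xB8, 0x33, 0x12, 0x40, 0xF4]))["ax"]))   # 0x1234

One off-by-one in a length calculation or a missed flag update in code like this, multiplied over millions of instructions, is exactly the kind of deviation that silently snowballs until the boot fails.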

Eventually I got it working and I could boot DOS and play games.

The aim of my project was to create a programmable emulator that could be used for the semi-automated analysis of malware, and sell it. Eventually this didn't really go anywhere, as it was too ambitious a goal to tackle all by myself. See here [1] for a demonstration where I load tweets from Twitter and send them to the DOS text editor by triggering a keyboard interrupt via a Python API. Fun...

Later, the Unicorn CPU emulator framework implemented my idea much more effectively by creating Python bindings to an already mature emulator (QEMU).

[1] https://www.youtube.com/watch?v=XwPZH8LAVIY

[2] https://www.unicorn-engine.org/


Trying to do the same project but for MIPS architecture as a bachelor thesis project!


How to be happy.

For me it turned out to be jettisoning all of the stuff from my life that didn’t feel meaningful and reorienting my life such that all of my major time commitments positively impact things that I consider to be personal values.

This includes: hobbies, non-profits, and my job.

This does not mean I live an ascetic life. Instead, I live one that I personally find meaningful.


What were some of the things you jettisoned?


Probably not the hardest, but one that sticks in my mind: I downloaded an old copy of Jedi Knight II onto my computer and was excited to play it again... but it crashed on launch. Or rather, it would throw a modal dialog (complaining about graphics something-or-other) and go into a loop of system beeps.

So I attached to it with gdb... and started going down a rabbit hole of stack traces and assembly code. I don't remember what the problem ultimately was (I think the code had been (partially?) stripped of symbols), but I do remember finally flipping a bit in some conditional that got it to launch!


You reminded me of this: "How I Fixed a 10 Year Old Guitar Hero Bug Without the Source Code"

https://youtu.be/A9U5wK_boYM


Thanks for sharing that! It does look very similar — I wish I had documented my process as neatly as the author of that video.


One of the hardest challenges I have had in my career is convincing our company to move to Continuous Delivery. 90% of the challenges weren't technical, but emotional. Shipping software comes with lots of feelings, fear, politics, etc. I had to personally work with various leaders across the org to help them through these feelings and perceived blockers.

We aren't 100% there yet, but we are shipping numerous times per day across 20 or so services, and quality has gone _up_, not down.


That is interesting, because I am still trying to figure out how people do CD. Of course I know CI; we have it all set up. But we still work in sprints with manual testing and release every 2 weeks.

I could spend time marking features as "frontend only, low impact", which we could deploy pretty much the same day. Still, there are quite a few features that need a bigger amount of work: they might be "done" by the dev, but I'm sure they are not actually done, because of security, because of error checking. What usually happens is that one dev has an untested feature merged to develop while another dev has a production-ready one; if only one feature is production ready, I would also have to spend time cutting a release and cherry-picking only the changes for the accepted feature. I am not sure that the additional work of checking what we can release "right now" pays off versus just waiting for fixes found during acceptance by the people who worked on the code, and then releasing (after 2 weeks, or 1, depending on how fast it gets done in the sprint).

So do you have people whose job is to pick out the low-impact changes, or to make things production ready by picking from develop? Maybe you pick the changes yourself, or you just defer manual testing to end users and rely on automated unit/integration tests?

p.s. The funny thing with automated tests is that they are good at keeping old stuff working, but not at testing newly developed features, where an actual tester can exercise the new GUI/new features. If you have a lot of GUI changes, you cannot automate the first round of tests...


A couple obvious things stand out from your situation. First, only allow merges of code that's been reviewed and has enough tests along with it. Second, have a pipeline that automatically runs all the tests whenever you merge, and if they pass, then goes on to automatically deploy. It's really that simple. It's not easy to get to that point, but it's simple.

Mostly it comes down to organizational changes and everybody getting used to what constitutes "enough" tests.


It was pretty important to work with the Project Managers and "scrum-masters" to decouple our release cycle from our sprint cycle. In our case, like yours, they were coupled together for no technical reason. It's hard to sum up all the changes we made to allow it to happen, but mostly it boiled down to a few technical decisions -

* Every PR is treated as "production" ready. This means if it isn't ready for user eyes, it gets feature flagged or dark shipped. Engineers have to assume their commit will go into prod right away. Feature flags become pretty important (see the sketch at the end of this comment).

* Product Owners and QA validate code is "done" in lower environments (acceptance or staging, or even locally during the PR phase). This helped us decouple code being in prod from the "definition of done" in our sprints.

* All API changes and migrations follow "expand and contract" models that keep the code shippable. E.g. even if we are building new features, we can ship our code at any time because the public API is only expanding.

* More automated quality checks at PR time. Unit tests, integration tests, danger rules, etc. These vary from codebase to codebase. A key part of this is trusting the owners of that code or service. To a degree, if they are happy with their coverage, then they can ship. (Within limits of course. Not having at least unit tests would be a red flag.)

Also, we still ship numerous times a day without full test automation; we just make sure each release is really small (1-3 commits). The smaller the release, the easier it is to manually QA. So nothing fancy is needed, just smaller releases.

So to answer your question, we don't pick and choose work that is "low" impact or "high" impact; all code gets shipped the same way and with the same cadence. It is our responsibility to ensure that when it goes to prod, it won't break anything.
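For readers unfamiliar with "dark shipping", a minimal sketch of the idea in Python (the flag storage and names here are hypothetical; real setups usually use a flag service rather than environment variables):

    import os

    def flag_enabled(name: str) -> bool:
        # In practice this would come from a feature-flag service or config store,
        # often with per-user or per-environment targeting.
        return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

    def render_dashboard():
        widgets = ["orders", "invoices"]
        if flag_enabled("new_reports"):          # dark-shipped, unfinished feature
            widgets.append("reports_v2")
        return widgets

    print(render_dashboard())   # ["orders", "invoices"] until the flag is turned on

The unfinished code can merge and deploy at any time because nothing reaches it until the flag flips, which is what lets every PR be treated as production ready.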


Facebook transitioned from a weekly release cycle [1] to continuous delivery while I was there. It was a big project. There's a nice writeup here:

https://code.fb.com/web/rapid-release-at-massive-scale/

[1] commits were pushed to employees on commit, that's how we tested, but not to the public


2017? Wow! My company shifted back in 2011!


I have come to be against continuous delivery - in 'whole product terms' there are vast hidden expenses.

In particular - documentation and support.

Using complicated interfaces is very challenging sometimes, to the point of being obscene - Google searches for help turn up a variety of outdated answers - and it's impossible to know what's what.

I'm using Facebook Ads right now quite a lot - it's a complex system that just shifts like quicksand under your feet.

Users make incredible efforts to learn the product, only to have it shift away from them like a ghost.

Documentation may or may not be up to date.

Locations of things change.

And who can you ask? Where do you search for support? Facebook has never really provided me answers to many questions in their documentation.

Tiny example - an advisor used to know how to list the people who have commented on a post, so that you could "invite them to like your page". But it's changed and now he can't find it. One small thing he no longer knows how to do; a tool lost from his tool-chest.

And who really gains from all of this? Seriously? I don't think anyone.

Is that 'new feature' really that important? It needs to be rolled out 'now' instead of in a major/minor delivery?

These changes are seldom well communicated either.

I suggest the total opposite might be better: release major iterations every year, minor iterations quarterly, and patches as needed.

Every time there is a release - provide users with friendly release notes - something they can read at a quick glance to get up to speed. A little 10-second video for every change: "Oh, now you can do XYZ like this"

This way - you have predictability and consistency so users can know how to 'keep up'.

Also - the ability to use the OLD interface where possible for at least 1 year, or something like that - so that we users are not forced onto the quicksand.

"Ok it's Jan 1 - FB Ads 2019 is released in 1 month - let's go over the changes - Mary, you can be responsible for highlighting the major changes and communicating them to the rest of the team, and highlighting any risks for us"

Otherwise, you're trolling along, these companies make changes that could feasibly have major impact on your business and you're out to lunch.

Maybe internally continuous delivery could be a useful thing ... but for the world at large the downsides are real and the upsides are limited.


For my thesis project back in 2011, I wanted to create an interactive installation, the kind you see at museums. Since multitouch tables were insanely expensive to buy or rent, I decided to build one myself. Learned the basics of woodworking over a semester, and built a table after two failed attempts. Then I bought an LCD TV, took it apart, connected it to a playstation camera with a modified infrared lens and developed an Adobe Air application. The thing was so unresponsive to touches, I decided to ditch the whole multitouch table idea. Started from scratch again ... learned objective-c (iOS 5 or was it 6), and created an iPad app for my thesis. Don't think I've ever worked so hard in my life.


There's actually a quite nifty principle that makes it relatively easy to build your own multitouch table: Frustrated total internal reflection.

The idea is based on internal reflection: whenever light hits the boundary between two materials of different density, it gets refracted. If it goes from dense to less dense and the angle is flat enough, it will reflect back instead. This is the principle behind fibre optics. It's also the reason why the water's surface looks "silvery" (like a mirror) when you're submerged and look up at an angle (i.e. not straight up).

Now imagine you have a pane of glass and you put LEDs around the edges that shine into it from the side. The light will zig-zag through the glass and come out at the other edge. However, if you touch the glass you inhibit that total internal reflection, because your finger is a lot denser than the air, and so the light will "leak" out of the glass where you touch it, illuminating your finger. If you look at the glass from behind you'll see a bright spot.

Use a camera to detect that spot and you basically have a touchpad. To make it a screen you can put a translucent sheet behind the glass and project an image onto it (and use infrared LEDs).

See e.g. http://wiki.nuigroup.com/FTIR for some helpful images. Just google "FTIR touch screen" or similar for build instructions and blob-detection software.
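The blob-detection side is also approachable. A rough Python/OpenCV sketch of the camera loop (thresholds and kernel sizes are placeholders to tune for a specific rig; findContours here assumes OpenCV 4's two-value return):

    import cv2

    cap = cv2.VideoCapture(0)                      # the IR camera behind the glass
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (11, 11), 0)
        _, mask = cv2.threshold(blur, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        touches = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 30]
        print(touches)                             # one (x, y, w, h) per finger blob
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break
    cap.release()

The community blob-tracking software linked from the NUI Group wiki wraps this sort of loop up with calibration and touch-event output so you don't have to hand-roll the tracking.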


The hardest part was definitely creating a sturdy wooden frame. Glad I'm a designer/developer.


The hardest thing I’ve ever solved is finding 30 years of motivation. I co-founded a family business, or it co-founded me, because I was only 14. From inception to exit, it was an interesting intellectual and emotional challenge to keep 50-100 people motivated at any one time, including me. Coding was fun but talking to an angry customer was less so. Eventually, survival instincts kicked in and taught me that creativity and learning was the solution. Everything happened for a reason, and that reason was usually hidden under multiple layers—for staff, customers, or me. I found I became motivated by avoiding reaction, and instead seeing that there was already a motivation behind every interaction, like boredom, anger, and ambivalence. Understanding these individual motivations provided clarity as to what needed to change to maximize team motivation. This made hard problems palatable (and motivating) for an old-school techie, like me.


>> Everything happened for a reason, and that reason was usually hidden under multiple layers—for staff, customers, or me. I found I became motivated by avoiding reaction, and instead seeing that there was already a motivation behind every interaction, like boredom, anger, and ambivalence. Understanding these individual motivations provided clarity as to what needed to change to maximize team motivation.

At the risk of overgeneralizing, it sounds like you learned to empathize. Sounds like you found that to be a key to success!


That’s insightful.


To learn to listen to other people, instead of just waiting to talk - that's an ongoing problem.

To understand what "controlling behaviour" means.

To empathise, and see oneself from the eyes of others.


If only we were all as self-aware of this as you are.


I had designed a PCI chip. Our driver team reported that the PC would hard hang under certain conditions. They suspected a bug in the chip.

Armed with a PCI analyzer, I figured out that it was a race condition: the firmware on the chip would raise an interrupt to the PC, and the driver would clear the interrupt right at the moment the firmware wanted to issue a new one.

If performed in the wrong order, the PCI interrupt stayed high even after the PC thought it was already serviced.

The solution was to just switch around 2 lines of code in the firmware, but it took two weeks to figure that out.

Not that long, but it was an insidiously subtle mistake.


This isn’t the hardest problem I’ve solved but I was damned proud of solving it when I did...

Was about to tape out a chip when we got a last minute change order... a 100kohm resistor needed to be added. This was an already laid-out, routed and optimized chip design that was ready to be fabbed.

A few hours later (including at least an hour of shouted obscenities) I found that I was able to snake 100kohms of poly and diffusion resistor through our design without moving anything, violating any design rules, or changing any operating points, even after parasitic extraction.


I don't know if it's the hardest, but very recently I had a problem that drove me nuts. I was doing some consulting for a company that designs chips, and they assigned me a project to take a nearly completed design and augment it with additional wires to route extra power to some parts of the chip that were running on the hairy edge of not having enough current. So I had to read in the design spec, search it for available space, and generate new wire geometries for the power grid.

This sounds straightforward, but two things made it incredibly hard. First, the design I was reading in was the output of a routing tool provided by a third-party vendor, and it was producing some really crazy shit. Not even the senior design engineers could understand some of the things that the router was doing. And second, there are hundreds of design rules that the new geometries had to conform to. Most of them are not relevant, but about a dozen or so are, and some of them involve incredibly complicated interactions of multiple geometries, occasionally across multiple metal layers. Trying to keep track of all that and make it all run in a reasonable amount of time was quite challenging, to say the least.

But the worst part was that the development cycle involved a manual step where the output of my code had to be fed into yet another third-party tool to see if I had violated any of the design rules, and the output of that tool was graphical. And I was not able to use the tool because of licensing restrictions! So I had to take my output, give it to someone else, have them run the tool, look at the results, and send me a report of how many rule violations there were, where they were, and what rules were being violated. That took several hours, sometimes multiple days.


1. Debugging a non-deterministic crash in a huge and complex C++ app. It took me a few weeks of super-focused hard work (kudos to my then team & management at the Sabre office in Kraków for giving me this time, and for understanding and appreciating what I was doing) to solve, with lots of low-level debugging down to raw disassembly, as well as code reading and intense thinking and thinking and thinking and thinking. Eventually I managed to narrow it down to a memory race in advanced template code. Some time later I realized it had scarred my relationship with C++ (my first love in programming) for life, opening my eyes to what horror Undefined Behaviors in C++ mean and how they loom ominously over every single seemingly simple character of C++ source code anyone writes. I'm now working in Go and appreciate it so much, as well as Rust etc. It also was one of the things that made me appreciate what good management can mean to an employee.

2. Debugging the Go runtime in the pre-1.0 era, to find out why it was not working stably on Windows. It involved a lot of digging through Go internals (also in Go's old C compiler & assembly), as well as WinDbg disassembly-level debugging. I eventually found out that the Windows calling convention was not being preserved in one respect: one of the registers was being mangled on return. I see it as my most important contribution to Go, though formally it did not get me listed as a Contributor; I got a Thank You mention in a commit message, I think. But some time later I made a much simpler contribution that got me a then-coveted (and still valued) entry on the Go contributors list.


* Creating a prototype of a low-cost alternative to braille displays for PCs for my thesis (without access to modern machining resources): https://www.youtube.com/watch?v=bPwgkf1aZ9I then filing for a patent (that I didn't get): https://patents.google.com/patent/US20130203022A1/en

* Improving that design to make it work reliably and conveniently enough for mass production and deployment in 3rd world countries (still haven't figured it out :(

* Reverse engineering the APIs and roadblocks to port iMessage to Windows: https://neosmart.net/blog/2018/imessage-for-windows/

* Porting an entire automated Windows PC system repair suite from Windows to Linux (still to fix Windows PCs) in two weeks when Microsoft pulled the rug out from under us and abruptly informed me that they would no longer be licensing Windows PE to ISVs: https://neosmart.net/EasyRE/ (now powered by FreeBSD)


This is really amazing stuff! How do you keep morale and focus when tackling harder projects?


Thank you. It's an ongoing struggle that I haven't solved yet. It can be very tempting and fun to just switch to a new project when you reach the point where you know what needs to be done but it's an insane mountain of work that you either dread (e.g. you need to rewrite an entire project from scratch to take a different approach) or can't even do at the moment (e.g. you need precision machining beyond what is available to you).

I get a thrill out of besting myself, and I find that helps enormously when I've all but given up hope. Conversely, that makes it really hard to motivate yourself when it's something you know you can do (perhaps you've done it before) but just aren't looking forward to. Also, it's really hard to do something that takes so much insane time/effort (literally years) when you know there's a good chance the world just won't care.


When you're younger, the problems are harder, and the success sweeter.

I earned my spurs (well, perhaps an enlisted stripe) back in the 1980's when I inherited a non-functional mess of an HP-85 Basic program to control an antenna test system. Many days in the Hanscom AFB RADC EE Propagation shack to get the RS-232 mag tape and IEEE HP-IB/GPIB/IEEE 488 custom hardware working. We kept bumping up against the HP-85's program storage capacity, so I had to remove most program comments, and actually shorten the variable names to get it all loaded into memory.

Then we got to fly that sucker on a pair of C-141B's from Wright Pat AFB, to Elmendorf AFB, then a flight over the North Pole, then to Thule AB, Greenland (9 days in November - do not recommend), then to RAF Lakenheath.

30 years on, I still make a living fixing crappy code.


Joining my current company and saving a relationship with a key client from the brink of termination.

The previous engineer on the account had overpromised and under delivered... in a rather big way. Not only that, but he also managed to misconfigure the enterprise automation software we use and accidentally delete the customer’s entire network share, then managed to get one of those cryptolocker viruses on the share after it was restored from backups, prompting a second restore. Regardless, the customer was pissed as hell and was on the verge of firing us.

After I was hired and put on the account (which I knew going in), I spent two days taking stock of all the shitty stuff the previous engineer had built, deciding what could be salvaged and what had to be discarded. Then, for the next two months, I went over the original contracts and built (or rebuilt) the originally promised system. I would basically work for 10 hours every day, 7 am to 5 pm, then write an extremely detailed report to the customer explaining what I had done that day, which would take another 2 hours.

For the first week they ignored my reports. Then the CFO started taking interest, first by nitpicking, then asking questions, then praising. The rest of the customer’s team slowly came on board as well, especially once I started delivering the bits that were user-facing.

Today this customer is our flagship account, and has helped us land many other large accounts. But those two months were probably the most stressful of my life, not just because of the technical challenges but also due to having to simultaneously navigate the tricky political situation.


I solved some interesting technical problems this year. One was optimizing a script that took over eight hours to run, to just twelve seconds (validation code brought that back up to several minutes, but still a lot better than eight hours). Another was taking several map pins whose coordinates were based on the same physical address (just different suites/floors, and having the same exact coordinates returned from the API) and staggering them. I ended up using a spiral pattern to do the staggering.
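A simplified Python sketch of spiral staggering for pins that share identical coordinates (the step size and spiral constants here are illustrative, not the production values):

    import math
    from collections import defaultdict

    def stagger_pins(pins, step=0.00005):
        """Spread pins that share the exact same (lat, lng) along a small
        outward spiral so each one stays clickable. `step` is in degrees
        and would be tuned to the map's zoom levels."""
        groups = defaultdict(list)
        for pin in pins:
            groups[(pin["lat"], pin["lng"])].append(pin)

        out = []
        for (lat, lng), group in groups.items():
            for i, pin in enumerate(group):
                angle = i * 2.399963                 # golden angle keeps points spread out
                radius = step * math.sqrt(i)         # spiral outward from the shared point
                out.append({**pin,
                            "lat": lat + radius * math.sin(angle),
                            "lng": lng + radius * math.cos(angle)})
        return out

    pins = [{"id": n, "lat": 40.7128, "lng": -74.0060} for n in range(4)]
    for p in stagger_pins(pins):
        print(p)

The first pin in each group stays exactly at the original coordinates; the rest fan out around it.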

An ongoing soft-skill problem I am learning to solve is effective customer development.

An interesting UI/UX problem I am currently thinking about is how to allow people to draw sun/shade patterns, wind current patterns, and essentially contour lines that show elevation on their property (the last is to ultimately learn how water flows on their property, and where there are drainage problems) in a fun, minimal-effort way. If anyone has any suggestions on this, please feel free to share.

The last two types of problems are for my side project, AutoMicroFarm.


IMO the most fun way to enter elevation data for your property would be to ride around on your farm bike with an altimeter lashed up to a smartphone, gathering data as you go.


Thanks for the idea! I found a variety of altimeter apps for the phone; I'll have to try just walking around my yard with one (or several), then see if it's possible to get elevation data on a map from such an app.


I'd suggest getting an app like Sensors Multitool for Android which gives you the raw output of your phone's sensors. I tried one of the altimeter apps a few years ago to see how tall a big hill was, and the value that the app reported turned out to be far off the actual value. There are various techniques for mixing and cleaning data from the GPS and barometric pressure sensors and a premade altimeter app (if it's not open source) obfuscates what's really going on with the values you'll actually end up seeing if you create an app for your microfarming project.
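One common way to mix the two sensors is a complementary filter: trust the barometer for short-term changes (low noise, but it drifts with the weather) and the GPS for the long-term absolute level (noisy, but unbiased). A Python sketch, with a made-up blend factor and fake readings:

    def fuse_altitude(samples, alpha=0.98):
        """samples: iterable of (baro_change_m, gps_altitude_m) per time step."""
        estimate = None
        for baro_change, gps_alt in samples:
            if estimate is None:
                estimate = gps_alt
            # Follow the barometer's change, then nudge toward the GPS reading.
            estimate = alpha * (estimate + baro_change) + (1 - alpha) * gps_alt
            yield estimate

    readings = [(0.2, 251.0), (0.3, 248.5), (0.1, 252.3), (-0.2, 249.9)]
    print([round(a, 2) for a in fuse_altitude(readings)])

For mapping relative elevation across a property, the barometric term does most of the useful work, which is exactly the part the canned altimeter apps tend to hide.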

Also be aware that different phones have different levels of support for non-GPS location services. I was surprised to learn recently that my Pixel 3 can use data from the US, EU, and Russian satellite geolocation systems for better coverage/precision.


Thanks for the tips! I don't need accurate readings, just precise ones (so the map can be internally consistent; here's what I drew manually for my property: https://i.imgur.com/2ZGmB1M.png). I'll keep in mind what you suggest when I am ready to develop that part of the app.


Well, I tried both Accurate Altimeter and Sensors Multitool, and I must say I'm disappointed. The values varied by 20 meters! In reality, my property varies by 2 to 3 meters at most.

I'll keep thinking about how to solve the problem.


I was working at an IT agency as an ops guy. One Saturday morning our alerting went off for one of our customers' sites (a somewhat high-profile public sector company). Turned out we were being DDoSed.

The hosting company was trying to mitigate, but the attack was "real traffic" so hard to just black hole. I dove into one of the Apache logs and was looking for some way to sift out the bad traffic and keep the real traffic.

Then I noticed a lot of Nintendo Wii user agent headers. That was suspicious. Who in hell is using this service 100 times per second from their Wii from Pakistan and Russia?

I wrote a quick regex that blocked all IPs making requests with Wii user-agent headers. This took care of 99% of the DDOS traffic. After 2 hours the whole attack stopped.
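For anyone curious, the sifting step can be as simple as a quick pass over the combined-format access log; something along these lines in Python (the log path and the exact user-agent match are illustrative, not the precise rule used that morning):

    import re

    # combined log format: host ident user [time] "request" status bytes "referer" "user-agent"
    line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    bad_ips = set()
    with open("/var/log/apache2/access.log") as log:
        for line in log:
            m = line_re.match(line)
            if m and "Nintendo Wii" in m.group(2):
                bad_ips.add(m.group(1))

    for ip in sorted(bad_ips):
        print(f"deny from {ip}")   # paste into an Apache deny list or firewall rules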


Getting well with a genetic disorder while the world harangues me and accuses me of making that up.

Next up: Figuring out how to talk to the world about it effectively. I'm not sure that's solvable. "Curing CF" seems easier.


It's just a hypothesis, but I think saying CF when you actually have an atypical form of it is a bit dishonest. I have no doubt that diet and lifestyle do wonders for many conditions, including yours, but the CF everyone thinks about is the one discovered in infancy with the full set of symptoms, not the one with late onset (at an age which many CF patients do not even reach) and milder symptoms.

If you were willing to clear up that imprecision, I bet you wouldn't have such a hard welcome.


It's a case of damned if I do, damned if I don't. People with CF don't like me saying "mild" CF. They have a cow about that. I spent years on CF forums and got schooled on that detail quite thoroughly. There are more than 1600 alleles that can lead to a diagnosis of CF and the exact presentation of the condition varies from one person to the next in part because of that.

My condition is not late onset. I have had it my whole life. Being diagnosed late doesn't mean it "came on" late. I spent years being treated by people like I was some kind of hypochondriac and asking doctors "Can we test me for something? My body doesn't seem to work normally." and being blown off.

It's not imprecision. I'm as precise as I know how to be or you wouldn't know that my actual diagnosis is atypical cystic fibrosis. I in no way hide that.

So from where I sit, that just looks like yet another BS excuse or justification for people on the internet to be jerks to me.


I knew it because I looked up your bio for the first time. You don't hide it, but I don't remember reading it in your posts here, not as much as the complaints about other people's behavior, at least.

>My condition is not late onset.

And the only person I knew with CF had been diagnosed as a toddler and had a 20-year life expectancy because of genetic bad luck. That you could live 30 years without heavy treatment sure says that your condition was not as bad as that person's, and maybe there's a state of affliction that cannot be conservatively managed.


DoreenMichele, how did you do it? Give me technical and medical details.

>Next up: Figuring out how to talk to the world about it effectively.

My guess is that you will be unsuccessful. (from similar accounts that I have read )


I don't think I can really do that in a nutshell.

I've spent years trying to figure out how to explain it to other people. I actually joined HN in hopes of learning to program so I could write a simulation to more effectively convey the information, but that hasn't happened. Getting well and other life drama has taken all my time.

Trying to explain it with mere words seems to be something that needs like a million of them and still isn't really adequate.

But, in a nutshell (or an attempt at a nutshell explanation):

First, I did things I knew were helpful from before having a diagnosis and tried to then go figure out why it was helpful. Having gotten diagnosed late in life, I had my own mental models and ideas that didn't jibe with our current official understanding of the condition. I respected my innate knowledge rather than just agreeing with the experts.

Second, I tried to understand the pathology. We know the mechanism behind the condition: A malfunctioning (or sometimes missing) cell channel called the CFTR that handles trafficking of certain molecules into and out of the cell. So I tried to understand how we get from there to all the symptoms typically associated with CF, such as malabsorption and lung infections. That seems to be a different approach from what medicine currently does.

Some of my general conclusions:

Malabsorption leads to malnourishment. Many of my issues were rooted in malnourishment. Improving my nutritional status was a major cornerstone of my efforts to get healthier.

I found ways to compensate for the defective cell channel. One thing that is known is that CF involves misprocessing of fats. I got very picky about what fats I eat and tried to develop some understanding of why some fats are helpful and others are problematic. This seems to be rooted in chemistry and at least partly in how complex the molecule is. My body generally handles long chain triglycerides poorly and I favor medium chain triglycerides, though that is likely a wholly inadequate explanation of the phenomenon.

Most genetic disorders involve a misfolded protein. I learned what promotes misfolds. Chemical derangement in the cell promotes misfolds. I believe this is an important mechanism behind the progression of CF, where you steadily get worse. I think reversing the chemical derangement stopped the positive feedback loop (aka vicious cycle) and this was a huge, huge factor in being able to get healthier rather than just try to slow or halt the progression.

I'm currently trying to figure out what I want to say on my blog about my understanding of "the immune system." When we speak of most systems, we can name specific organs. The digestive system involves the stomach, small intestine and large intestine. The circulatory system involves the heart and an extensive system of blood vessels. But I can't list for you the organs involved in the immune system.

So I think better understanding exactly what we mean by that -- what is going on when the body successfully fights off infection -- is a major thing and I am still trying to figure out what I want to say about that. I think we generally have a poor concept for how that happens.

That's probably several hundred thousand words short of conveying much of anything, but it's an attempt to reply to your question.


I don't understand. People try to tell you that cystic fibrosis isn't real?


They tell me I am making up the story about getting healthier. Or they say I don't really have it. Or both.
