
A bit off topic, and I know I'm sounding like a condescending jerk here, but I mean it: why do people love bugs so much that they want to purchase an entire database program to track them? I mean, if you have more than 10 bugs that are highly important to keep track of, it seems to me that you have a much larger problem than whether to purchase JIRA or FogBugz.

I mean, either a bug is important to you so prioritize fixing it, or it isn't important to you so you can just ignore it and see if it surfaces again.

Every single team I've seen that used a bug tracker ended up with hundreds of issues, the status of 90% of which was unknown: maybe already fixed, maybe not relevant anymore because the entire component got overhauled the other day by Joe. What's the use of this stuff?

I got this from a blog post (by Ron Jeffries I believe, but I forgot), but basically his idea was that the moment you turn something that you don't want to have into a formalized process, people tend to bias towards getting more of it. If you are forced to choose between fixing a bug or forgetting about it, you're urged to fix it fast. If you can just "file the bug and go on with your work", the bug stays.

What's your development experience? I am guessing that you haven't worked on line-of-business applications for big companies, which take years to develop.

I think it's a mistake to think that Jira and FogBugz are only used for tracking 'bugs'. They are obviously used for that, but everyone I know uses them to track everything related to their software: features, support questions, infrastructure changes, etc.

To give you an example, I am part of a 50+ developer team implementing straight-through processing for a large hedge fund. This involves developing a variety of systems which are highly dependent on each other. Jira tells me that I have 120 items which I need to implement by this year end. Mind you, this is just one system. There are at least 10 more systems like that. I can confidently say that there are at least 1000 items in our overall list which we need to work on. You absolutely need a tool like FogBugz or Jira to track them properly.

I've heard this POV before, and I'm somewhat sympathetic to it. Sometimes, it's better just to fix things immediately rather than devote a whole process toward fixing them later.

The problem is that the flip side of fixing bugs immediately is interruptions. If, every time you discover a bug, you have to fix it or forget about it, it means that every time a new bug comes up you're going to have a PM interrupting an engineer. Eventually your engineer isn't going to be able to get any work done, and productivity grinds to a halt. It also puts a big damper on doing any large features, speculative features, hard features, or basically anything that requires sustained concentration for long periods of time.

Indeed, as a tech lead, one of the signals I use for when it's time to introduce a formal process and bugtracker is when the engineers on my team start complaining that they can't get any work done because they're being pulled in too many directions at once. If it's not a problem, then there's no need to slow yourself down with process to fix it. But if it is a problem, it's good to have a strategy that captures all the work that needs to be done and lets everyone coordinate who & when they'll do it.

You've got more than 10 bugs. You just don't know it yet.

But really, "bug tracker" is just a colloquial expression for the more accurate "issue tracker" which can definitely include things like improvements, new features, A/B tests you want to run, etc.

It is possible that their software isn't large enough to have 10 bugs... they could be implementing FizzBuzz.

Where can one get a job as a professional FizzBuzz developer?

I think if a program is big enough to devote a software developer to it full-time, it's going to have at least 10 bugs. Maybe they are small or unimportant or so obscure that none of the users actually run into it, but they're in there somewhere.

In the end it's just project management. I want to keep track of the things that still need to be worked on, so I don't forget them. I can delegate work, others can delegate work to me, and work that requires other work or sub-steps can be passed around multiple people. The work can be described in detail, more input can be gathered, and actions that were taken logged for future reference.

All of that can be achieved in many, many different ways of course, but software-based ticket systems can work quite well in my opinion.

Issue tracking systems are very important; even tracking a handful of issues is a pain over email or some other non-standardized process.

The way to keep the system effective is to regularly close issues that have had no updates for, say, 30 days. If someone cares about an issue, it will be re-opened. If no one notices, well, then it can stay closed.
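That sweep is also easy to automate. A minimal sketch, assuming a JIRA-style REST API; the base URL, credentials, and statuses are placeholders, and since actually closing an issue needs a workflow-specific transition, this only lists the candidates for a human to triage:

    // Minimal sketch (TypeScript on Node 18+): list issues untouched for 30 days
    // in a JIRA-style tracker. JIRA_BASE and the credentials are placeholders.
    const JIRA_BASE = "https://jira.example.com";
    const jql = encodeURIComponent("status not in (Closed, Resolved) AND updated <= -30d");
    const auth = Buffer.from("user:api-token").toString("base64");

    const res = await fetch(`${JIRA_BASE}/rest/api/2/search?jql=${jql}&fields=summary`, {
      headers: { Authorization: `Basic ${auth}` },
    });
    const { issues } = await res.json();
    for (const issue of issues) {
      // Closing for real would need a workflow-specific transition id, so we
      // just report the stale candidates instead of closing them here.
      console.log(`${issue.key}: stale, candidate for closing - ${issue.fields.summary}`);
    }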

A good issue tracking system is just as helpful for the coders as it is for QA, PM and the rest of the team. If it isn't, you need to discuss improving it rather than abandoning it altogether.

That's a really interesting idea. As a data point, let's look at a healthy, modern, high-quality project: what does it use the issue tracker for?


They are using the issue tracker for all sorts of things, and it clearly has value.

Turns out Linus isn't too fond of bug trackers either:


and the kernel seems to be doing pretty well.

There aren't really "bug trackers" any more, there are issue trackers. All the micro-tasks are created, organised and tracked through to completion, often visually represented on a kanban board.

Which is a nice abstraction, but it really fails with a little reflection. Bugs shouldn't really have long lifecycles, dependency chains, target dates, etc. Features need those things, but don't have wontfix, notabug, multiple branch targets, or discussions about reproduction. Which is why I always like running two different processes and joining the results at the release management/source tree level.
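To make that concrete, a rough sketch of how differently the two kinds of items are shaped (the field names are hypothetical, not any particular tracker's schema):

    // Rough sketch of the two-track idea; field names are made up for illustration.
    interface Bug {
      id: string;
      reproduction: string;                          // the repro discussion lives here
      resolution?: "fixed" | "wontfix" | "notabug";
      fixBranches: string[];                         // a fix may land on several release branches
    }

    interface Feature {
      id: string;
      targetDate: Date;                              // features get dates and dependencies...
      dependsOn: string[];
      // ...but no wontfix/notabug and no reproduction discussion.
    }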

The relevant XKCD: https://xkcd.com/277/

Every Atlassian product has its own special markup.

I can never remember how to write code into comments or titles. It literally changes from product to product, and yet all of the products orchestrate together, so if you do happen to use Stash, JIRA, the wiki, etc. all together, you encounter as many different markup languages as there are Atlassian products.

It really is hellish.

I hope there's some good reason why they can't provide one "comment markup" language to all applications.

So far as I can tell, they did have one common markup syntax for all applications... but then Confluence users wanted a WYSIWYG mode, so Confluence now uses an XML-based format internally, and Stash is a Git tool and 90% of the Git ecosystem loves Markdown, so Stash had to use it too...

These things are not mutually exclusive.

Markdown embraces HTML, and so a WYSIWYG mode based around HTML is compatible with Markdown.

Markdown's weaknesses for table design, image insertion, and complex layouts are all handled by HTML and by WYSIWYG tools that edit that HTML directly.
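For example, a small sketch using the marked npm package (an assumption on my part; any non-sanitizing Markdown renderer behaves similarly): raw HTML just passes straight through to the output.

    // Sketch: raw HTML inside a Markdown document survives rendering untouched.
    import { marked } from "marked";

    const md = [
      "Some *emphasised* Markdown text.",
      "",
      "<table><tr><td>An HTML table, right in the middle of it</td></tr></table>",
    ].join("\n");

    console.log(marked.parse(md));
    // roughly: <p>Some <em>emphasised</em> Markdown text.</p> plus the <table> markup as-is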

That would have solved the Git scenario, and the Confluence scenario, whilst having a single highly predictable markup across all of their platforms.

The problem really stems from these problems being addressed per product, as if each existed independently of all the other products. That's a terrible approach: few people buy just Confluence without JIRA or Stash. People buy Atlassian because a consistent suite should work better than a myriad of tools that don't quite know how to interop. Atlassian's strength is the consistent and integrated approach, so the UX should be focused on strengthening that.

That's because almost every Atlassian product started off at a separate company that was then acquired.

Was that really the xkcd you wanted?

I actually misread the URL as https://xkcd.com/927/ (having memorized that number like everyone else here, I'm sure) until I saw your comment. #277 is definitely not right.

#277 sure seems to me like a reasonable reply to "Whoever designed <such and such> deserves a special place in hell", which is what it was posted in response to.


There used to be the occasional outburst of suspicion of mean moderators with double agendas hellbanning great commenters. Dang has been a very active presence here recently, explaining decisions that might've seemed fishy, and that has already done a lot to remove that kind of thinking.

Nevertheless, the conspiracy theories and rebellious "<name in other thread>, looks like you've been hellbanned by the evil mods!" comments are still around, and I suspect that the 'vouch' feature will help kill the last of it. I really hope that it'll work as intended!

(assuming it's all unfounded of course, which I believe it is)

Although "<name in other thread>, looks like you've been hellbanned by the evil mods!" comments are bad, I'm not sure I've seen them. I have seen comments like "<name in other thread>, you appear to have been hellbanned", but I think they were sometimes warranted. The moderators do sometimes make mistakes, and IIUC the normal way to resolve them is: the user realizes they are hellbanned, mails the mods asking about it, and the mods, if they feel the user should not be hellbanned, unban them. Alerting commenters whom one thinks may have been banned by mistake can either speed this process along or waste comment space and moderator attention, depending on whether one is correct.

I agree that hopefully vouching will kill the "<name>, you appear to have been hellbanned" comments, though.

I don't think it's a "conspiracy," but HN has many means of opaque moderation (shadowbans, slowbans, arbitrary score penalties to control the front page), and they continue to be applied with no accountability. We play here at the mercy of capricious Gods.

What a spammy title. The ads are only "in your phone" if you use a Facebook app.

Breaking news! Ad-powered app shows ads in its app!

Does anyone know whether Facebook also tracks you into and out of WhatsApp, by the way?

> Does anyone know whether Facebook also tracks you into and out of WhatsApp, by the way?

It most certainly does. I have contacts in WhatsApp that I have no other contact with (nor do they with me; they don't know my name, etc.), yet after the first time I chat with them on WhatsApp, they are in my recommended friends on Facebook. Pretty infuriating.

Do both WhatsApp and Facebook use (and were they actually allowed to use) the phone's built-in contact/address book subsystem?

Honest question, I don't use either app, so no idea. Just thought it could be possible that WA had created a contact in the system-wide address book and FB saw it.

I have other people in my phone contacts that I've never communicated with on Facebook, and I have never been suggested to be friends with them. So I'm doubtful.

$19B well spent... ha

> The Vereenigde Oostindische Compagnie (VOC), or Dutch East India Company, is often considered to be the world's first multinational corporation. ...

Ah, trivia time! The VOC also had the world's first entirely privatized army. The "monopoly" cited in the README basically included a license to kill. Think "Google Armed Forces", but scarier. The army was mostly used to keep a grumpy unpaid workforce "motivated", to keep the spice coming, and thus to keep that lovely 18% annual dividend payout a reality.

Basically, the VOC made current evil multinational corps (e.g. the oil companies, monsanto, blackwater, etc) look like cute cuddly charities.

That said, it's been centuries, not sure getting worked up about the name makes sense now. It's a compiler, not a guidebook about how to traffic humans. I just thought the README section made the VOC seem a "little" awesomer than they were.

> "Basically, the VOC made current evil multinational corps (e.g. the oil companies, monsanto, blackwater, etc) look like cute cuddly charities."

DeBeers[1] and the United Fruit Company (Chiquita)[2] would be much more comparable to a modern day VOC and also still exist. Both just seem to be better at staying off the radar since the advent of the Internet compared to Monsanto and the others.

[1] https://en.wikipedia.org/wiki/De_Beers

[2] https://en.wikipedia.org/wiki/Chiquita_Brands_International

Britain had an East India Company first:



It's up for debate who was the first multinational. No doubt both set the bar very high for those to follow, but I do think Blackwater spinoff DynCorp, with its child sex slaves, will take some beating, particularly with respect to the prevailing moral climate of the time:


> Britain had an East India Company first:

Actually the Portuguese one was founded in 1549, but I will leave out the usual type of "products" that were traded in those days.

I absolutely agree. The VOC was a very bad company. Basically, they took the blood of the native people in the spice countries and made money from it.

They enslaved the people and killed many when they did not cooperate.

When the VOC is seen as a basic example of corporations, it is a really gruesome heritage!

> Bottom line: assume everything you download from the internet is malicious in nature and inspect it with every possible tool you have available. And even then, run it in a sandboxed environment wherever possible.

Ehhh. Ok that's not very actionable advice. I downloaded my entire OS from the internet.

There's a checksum for each package and for the CD. Back in the day, people cared about that.
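Mechanically the check itself is simple; a minimal sketch, where the file name and the expected hash are placeholders you'd fill in from the project's download page:

    // Sketch: verify a downloaded file against a published SHA-256 checksum.
    // The file name and expected hash below are placeholders.
    import { createHash } from "node:crypto";
    import { createReadStream } from "node:fs";

    const expected = "<published sha256 from the project's download page>";
    const hash = createHash("sha256");

    createReadStream("some-install-image.iso")
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => {
        const actual = hash.digest("hex");
        console.log(actual === expected ? "checksum OK" : "CHECKSUM MISMATCH");
      });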

How do you verify the checksum?

I haven't used Rails since Turbolinks appeared, but I really like the idea on paper. Why do you turn it off?

By default it's a global always-on feature. I think it's the sort of feature that might be great in certain parts of an application (like tabbed pages), but having it globally enabled by default isn't ideal. For me the built-in browser feedback of loading a new page works pretty well, and having to rebuild this feedback in an application so that it works everywhere is simply unnecessary work.

Yeah, exactly. I think the very existence of this site also kind of sums up the problem: http://reed.github.io/turbolinks-compatibility/

I like the idea, but is "forgiving yourself" even a thing you can consciously do? Probably, but I'm not sure. The study did not ask people to actively choose to forgive themselves, just whether they had forgiven themselves. That's a rather different thing.

If I rationally decide to say "I forgive myself for having slacked off", I'm not sure I'll actually feel forgiven deep inside. I strongly suspect, however, that self-forgiveness can be difficult but can be practiced.

Does anyone here have experience to share? Did anyone ever consciously practice self-forgiveness? It seems like an interesting approach to me.


It's one of those things that's easier said than done, like "just letting go" (the word 'just' in that phrase is brutal). But I think it can be practiced. Imagine in an ideal sense the love a parent has for their child. It's an unconditional love, one that comes without strings attached or any expectation of greatness beyond the miracle of that child's very existence. Tough love, sometimes, but not necessarily contingent. The trick is to find a way to give that to yourself. It takes time.

Procrastination, like other problems of self-improvement, is multilayered. Some of my friends who constantly agonize over their productivity levels confound me, because often from my vantage point they seem perfectly well adjusted, if not for the constant feeling like they should be doing more. Other times procrastination is a response to stress or trauma that needs to be addressed on its own. Some kinds of distraction are arguably more of a reflection of changes in our culture than individual pathologies. Sometimes you just need an egg timer. Sometimes human beings are just garden-variety imperfect.

At the risk of venturing into cheesy territory, I'd recommend checking out Brené Brown's TED talks on vulnerability and shame, if you're not already familiar with them.



"Forgiving yourself" may not be the best advice for all procrastinators, but I think the idea is that when many people procrastinate, it causes them extra stress when the decision comes back to bite them. A part of that stress is simply because they have less time to do what they need, but another component is that they are kicking themselves for being stupid in the past. Forgiving yourself lets you remove the stress from the latter part, and thus would reduce the total amount.


Take a look at the book "Learning to Love Yourself". There's even an accompanying workbook with exercises.


The exercises will either feel very cheesy or move you to tears.


I'm the exact opposite. I used to slack a LOT more, and have cut down on it heavily since my school days. Back then I would regularly leave assignments to the last night and pull multiple all-nighters to do them, because I _knew I could_. As long as I subconsciously knew "I can get this done in the delta between now and the due date" I would put it off, even if that delta involved not sleeping/eating/etc, and "forgiving myself" only made it worse since I'd just keep doing it.

It was when I got into industry that I was instead posed with the equation of "If I didn't rush, I would have done better work and more effectively used the time of the people paying me", which had enough external variables that the need to change became pressing. I used (and still use) the feelings of guilt at wasting my boss's time, the feelings of falling behind in my skills/learning, and dominatingly (as I get older) the feeling of "there just isn't enough time in the day" to force myself off of procrastination every time I notice myself doing it too much.

Take this with the context that I've never been one for positive reinforcement. Seeing my own flaws and failings drives me far more than getting a pat on the back. I think you hit the nail on the head re: rationality, in that I don't think I _could_ forgive myself even if I tried; I wouldn't really internalize believing it in a way that would impact my behavior.

(And aptly, I've now procrastinated enough in writing this, and need to be back to reading docs :) )


Try not to go with the negative - and especially beware of using guilt over wasting company / boss's time. They have enough leverage over you already.


A fair point; I try and take it a different way (as opposed to their having leverage over me), if my conscience is clear in terms of having done GOOD WORK and not wasted any time by procrastinating, they have _less_ leverage from my point of view, since I know I've been delivering 100% and can come from a position of strength.

In terms of motivating myself by negatives instead of positives, that's something I've put literally decades into trying to adjust, and something I'm not sure I'll ever break myself of. It's too useful in other areas (looking critically at my own code, looking critically at problem spaces, being unbiased in introspection). As with the OP's point about not truly internalizing something despite agreeing logically that "X should be this way", I'm not sure I could truly internalize looking at things in a positive light.


Well you seem to have made your peace with it - enjoy


I think changing yourself goes like this: thoughts -> emotions -> habits. You need a clear victory (albeit a philosophical one) in the neocortex so that you can start to train your limbic system.


According to My Little Pony[0], just believe in the forgiveness of friends who believe in forgiving you...

[0] https://youtu.be/9hpPOZGpHFk


How is that dangerous?


It leaks private user info -- a malicious server could include a JS file known to be highly sensitive/top secret, and measure whether the client already has it cached. If so, the user is confirmed as a sensitive target.


No, there's a worse attack possible: you can attempt to include a resource with sensitive contents with SRI, and use the SRI to make a "guess" at the hash of the contents. If your guess is incorrect, the resource will fail to load, and you can detect this error and make another guess.

Obviously, this technique will only work if the contents of that resource are constrained enough that it's possible to guess them with brute force. Depending on how SRI interacts with the browser cache, though, it may be possible to make guesses very quickly -- it is likely that the browser will only fire one HTTP request for the initial attempt, and will load the resource from cache for all subsequent attempts.
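Roughly, each probe would look something like this (just a sketch of the idea; the URL and candidate hash are hypothetical, and a real attack would loop this over enumerated candidate contents, hashing each guess):

    // Rough sketch of the guessing attack described above: inject a script tag
    // with a candidate integrity hash and see whether it loads.
    function tryGuess(url: string, candidateSha256Base64: string): Promise<boolean> {
      return new Promise((resolve) => {
        const s = document.createElement("script");
        s.src = url;
        s.integrity = `sha256-${candidateSha256Base64}`;
        s.crossOrigin = "anonymous";       // SRI on cross-origin loads requires CORS
        s.onload = () => resolve(true);    // hash matched: the guess was correct
        s.onerror = () => resolve(false);  // mismatch (or a plain network failure)
        document.head.appendChild(s);
      });
    }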


You could just add a new public=true option to counter this. I think you can even already check that with an iframe (or a JS head inject and timing) anyway, so no need for CSP for that.


Or require crossorigin="anonymous", maybe in combination with Cache-Control: public.


So to protect against a single malicious server who might discover that we had previously loaded a cached resource, we shouldn't implement a cross-origin cache and have to make repeated requests, guaranteeing 3rd parties (the CDN) keep getting GET requests?

You're just trading one problem (someone learning I previously requested a file) for another (leaking referrers to a CDN).

Also, if you're loading "highly sensitive/top secret" data with a <link integrity="" href=""> or <script integrity="" src=""> tag, you have bigger problems.


I see, thanks!


No, it was a very good point. Not everyone adds hashes to filenames, and to me it seems that you're right in that weird caching can break pages that way.

If indeed this is the case, subresource integrity needs a big warning sign about that. For me, your comment was that warning sign, so please keep posting while you're not awake yet.


Why would it need a warning? If the HTML provides a new integrity="" hash, then any cached version obviously wouldn't pass. Subresource integrity makes it easier to determine if a cached file has expired. The file can be permanently cached for any HTML that requests the same hash value(s).


The browser could do something with this, but I believe it doesn't. Instead the algorithm is just:

1) Load the resource specified in src (from network or cache)

2) If there's an integrity attribute, verify its hash



