Hacker News

Nothing in computing is worse than software that knows exactly what you want it to do, then gives some shitdick excuse as to why it's not going to do it in an effort to get you to jump through meaningless hoops.



Gimp on Windows does this and it drives me nuts.

Going to File->Save As... and trying to save to .jpg brings up a dialog box telling me to exit the current Save As dialog and go back to File->Export to save as .jpg. Asinine.


Not only that, but it's a _new feature_.


What I don't get is that you can open any image type and modify it in gimp on the fly. The reason they are separated is that xcf can maintain layers and any other image loses that information. But anyone who uses gimp for a few minutes will realize this, and then you don't need 3 dialogs. Hell, have a warning message in the save as box if you haven't saved a project in xcf.


I think the new behavior is great. First of all because it remembers both where I'm saving and where I've been exporting in my workflow, and second because frankly saving in GIMP's native format and creating a flat lossy JPEG that approximates the work you've been doing are really fucking different things, and I find it comforting that they choose to make that explicit. I can see how, if you open GIMP once a month to make a LOLCAT, this behavior could be annoying.


What it should do is save the file in both XCF and the format the user actually wants. There could be a checkbox to turn off this behavior.

That way it would satisfy both the elitists and the people who just want to do their jobs.


You could submit a patch. :)


I seriously doubt that they'd accept any patch that altered the One True Way that they've decided on.


You could stop being an apologist. Neither will happen, though.


Yeah, except that you can open a jpg, crop it, and then try to re-save it over itself, and then: Oh, no you can't.

The workflow is really awkward then. You are exporting a jpg from a jpg? Oooookay....


You can just select "Overwrite foobar.jpg" in the menu. You can even have a keyboard shortcut for it. That should solve all you guys' troubles :)

cheers


Since you edited the JPEG, you might want to save the layers and other info in an XCF.


It used to do that. I hope somebody reverts this.


The comments I've found seem to indicate that the Gimp developers have no intention of changing this back, and that anyone who wants it back to the Gimp 2.6 and earlier approach is not in their "target audience". In other words, they hold their most common users in the highest of contempt.

Edit: http://registry.gimp.org/node/26348


Contempt? What?

I think they value developers and people who use their software frequently. As a developer, I love the separation of save and export. They really are separate functions, and the XCF actually saves the export location relative to the original XCF location. It makes working on large projects so much easier.

If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways. If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.


I've used gimp since 1997 exclusively and I hate every bit of 2.8s interface ... from the impossible to find dockable tabs that hide every which way, to the laborious change in workflows, to the new way they've decided to do ranges ... making it really simple to choose numbers like 15 bazillion versus 10 bazillion but making it impossibly difficult to easily go between 1 and 4.

And they STILL only have 8 bits per plane (people were whining about this in the 90s ... when Bill Clinton was president), STILL can't remember sub-pixel rendering steps ... other than improved PSD importing, I don't see why.

I'm sure they've put a lot of work into this; but not in the right areas.

Their target audience should be highly technical people that need to do image work and want a tool like vim/emacs/zsh etc but for images; learning curves are ok as long as you can combine simple steps in composable tasks to do amazing things.

I say the same things about all my good tools: "I've been using it for 10+ years, I totally suck at it still, and I would never use anything else". You can say that about paintbrushes, driving, violins, maybe even photoshop or excel; but not gimp.

They were going down that road for a while in the 2.2 or so days, but then they did a 180.


I think they value developers and people who use their software frequently.

I use their software very frequently, and the biggest frustration with this is that it completely screws with my muscle memory for the Gimp and every other application. I continue using 2.6 because 2.8 is just too frustrating.

When working with images so much that you no longer think about the keyboard and mouse, but you think "Save" and it just happens, a change like this is like rewiring your hand so that all the fingers' nerves are reversed, and you have to get permission from your navel before you can lift your pinky.

If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways.

Gimp has always been the ideal tool to just do a quick edit to something, because it's the tool I know. I've been using it for more than a decade. I don't need some new developers to come along and think they know better than me how I use their software. That's Apple's billion dollar job, and even they get it wrong often enough.

If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.

Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.


That's something Emacs has really gotten right: it doesn't force me to relearn keybindings that are so deep in my fingers they'll come out my elbows someday.


it completely screws with my muscle memory

I think this is very relevant: http://xkcd.com/1172/

Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.

If you have any constructive feedback - or better yet, _code_ - make a polite post on their forum or mailing list. No one here cares about your specific workflow.


No one here cares about your specific workflow.

Actually, you're the only one here vehemently defending the Gimp's new behavior. The other comments seem to prefer Gimp 2.6's Save behavior. But this is getting unnecessarily petty and personal; it's obvious we disagree, and I'll just stick to 2.6 and stop recommending Gimp to people who would otherwise be happy pirating something more expensive.

Edit:

Contempt? What?

Exactly.


I hate it when software changes the workflow.

Don't take it upon yourself to speak for everyone.


>I think they value developers and people who use their software frequently.

I think people who use their software frequently would take offense to the software telling you to fuck off and go do it a different way.

The intent is obvious, both to the user and the developer. Why make people's lives more difficult for such questionable benefit? MS Paint doesn't have this problem. Photoshop doesn't have this problem. The user knows what they want to do, and they could do it before the change. Why are you getting in their way?


This very much reminds me of the attitude amongst the GNOME core developers that basically killed the project, per De Icaza's blog post a while back.


Yep. Also of the cavalier attitude toward breaking changes that the Firefox team developed. I switched to Chrome and so did many others.


Wow, the hubris in that thread is unbelievable.


It's as if everyone in the Gnome/Gtk/Gimp world wants to be a demanding dictator like Steve Jobs, but doing that and getting it right requires an insane budget, and even after all their effort, I still don't like Apple's stuff.


Fork gimp! :-) They could still prompt users to save an xcf copy.


There have been multiple forks, but they all end up dying. The ideal solution would be an automated fork that applies the UI patch to the latest official sources, but obviously the Gimp developers aren't going to care if they break your patch.


Why? The new workflow is better for everything except for quick edits of files. And the GIMP isn't the right tool for that job. Use a smaller, simpler program for a quick edit.

If you really want to destroy your layers and history (which GIMP saves into XCFs) there is a menu option for that. But you almost certainly don't want to do that.


Simpler program? I usually just end up using Photoshop instead :)

When you save a layered file in an unlayered format, Photoshop just forces the "Save As a Copy" option to be checked and saves a flattened version while preserving the open one. If you then decide to quit without saving a layered version, you get an "Are you sure you wish to quit without saving document x?" prompt.

The way Photoshop handles it is far superior, imho. With Gimp, iirc, even if the file is not layered you still have to export?

And for many edits, I simply don't want to save layers or history. There's just no point if I'm cropping, converting, resizing, or quickly fixing something. If I do, then I'll save it as a layered file, and layered is by far in the minority. I know what I'm doing; I don't need the app to second-guess me. It seems the new workflow is better for people who don't understand the difference between layered and unlayered files.


From what I recall, wasn't that the behavior that Gimp used to have? It's how I remember it, and I have used only Gimp for the past decade. I was frankly surprised and confused when the new behavior presented itself.


Back in the 2.6 era, yes, this was the behavior. It was changed because, in my opinion, the Gnome developers have jumped the shark since around Gnome 3. People make suggestions, the developers argue some divine vision of how computers work, and a year later they are still hemorrhaging users and have little adoption because of their ironclad positions.


Use a smaller, simpler program for a quick edit.

So what smaller, simpler program (other than Gimp 2.6) has all of Gimp's features, but is suitable for a quick edit? Say I have a .png image stored in a version control system, with no need for layers or history, and I want to do a quick perspective clone to paint over something?


Gwenview, Microsoft Paint, among others.


So, neither Gwenview nor Microsoft Paint support the perspective clone operation. Krita does, but Krita crashes every time I try to use it.


Is there something like paint.net for ubuntu?



This little Gimp plugin will get you the old shortcuts back:

http://www.shallowsky.com/software/gimp-save/


Arrgh, that drives me nuts! I just upgraded from 2.6 to 2.8 and that bites me every. Single. Time I go to save a file.


Only today did I stumble upon this behaviour for the first time (I was used to Gimp 2.6). But I understood the distinction between "save" and "export" and got used to the Ctrl+E shortcut immediately. I think it's good that these are two separate functions, actually.


This is pretty much the sole reason I don't use gimp anymore and have no plans to go back, ever. I used it on OS X under X11 and had no problems with the mental gymnastics of using Control instead of Command because, well, it wasn't GIMP's fault.

I was so excited when they made it a native app, too.

If Command-E actually brought the window up in front of the rest of the windows instead of hiding it behind the main window, it might have been a different story.

Just my two cents.


Gimp on every platform does this. Combine the two already!


Gimp's Save dialog has always sucked in one way or another.

For example, I open a jpg and go to save it as a PNG. I select PNG from the drop-down list of available types and type the filename, but unless I also add .png to the filename, it makes a jpg again.


Gimp's latest user interface is harder to use than Xfig.


This one on OS X bothers me:

>> "Another device on the network is using your computer’s IP address."

I know that the thing to do here is renew the DHCP lease. Most people don't. Just renew it for me, please.

The many procedures you can do to fix a network connection in Windows are even worse. Do these for me before you show me any troubleshooting messages.


No. It should prompt you to renew it, but not just do it, since IP address conflicts probably mean one end is badly configured.


It doesn't presently prompt to renew — it just says that there's a conflict and doesn't tell you which settings panel might be helpful. Only a very tiny subset of users will actually try to reconfigure their router or make other such changes as the result of this information. Options requiring a prompt before automatic renewal would be great for those users, but it's not relevant in 99%+ of cases.


No, they're relevant in far more than <1% of cases. Forgotten DHCP renewal is not the only reason that IP address conflicts happen.


But that doesn't mean that alongside the warning the computer can't try to get a new IP address by itself, even if it won't fix the problem 100% of the time.


No. If something is wrong with the current network config, don't automatically muck around with it so that I can't troubleshoot the problem myself. It's important to understand the current state to figure out how you got into that broken state; kind of like a breakpoint at the first exception. Automatically renewing is like 'On Error Goto Next'.

That said, I agree that it should suggest renewing the DHCP lease.


Well if the current network config is set to simply use DHCP, then renewing DHCP is a pretty safe move.

If the DHCP server is set to serve a static IP for the MAC, the problem will persist; if the DHCP server is dynamic, it will just do what it's designed to do and assign an unused address, fixing the problem while not changing the network configuration at all.


For good product usability, being proactive and getting 80% of the use cases right is much more important than making no mistakes and shifting the decision (and blame) to the user, especially since most of the time the user has no technical background to make the correct decisions. I used to think like you until I read The Inmates Are Running the Asylum[0]. I still do for my personal workflow (that is why I prefer Arch Linux to Ubuntu), but I try to be proactive about error catching in the software I write.

[0]: http://www.amazon.com/The-Inmates-Are-Running-Asylum/dp/0672...


Yes, and that end is your Mac. It did this on purpose, and it should just renew.

[ http://news.ycombinator.com/item?id=2755461 ]


The usual cause of this for most people is that they've rebooted their router and it kept the leases in memory, so it isn't aware it assigned the IP to multiple computers. Really, automatically renewing the lease if you're on DHCP is probably the best behaviour for a home/simple OS. For computers on a network administered by professionals, it should be possible to disable this (and other automatic) behaviour.

Does anyone know what the iPhone/iPad/etc. do in this case?


The iDevices all bounce around getting multiple addresses within the DHCP range. The last time I was administering an office network full of iDevices, I had to increase the DHCP range to account for the iDevices bouncing all over the range and using up addresses pretty needlessly.


Most sane DHCP servers will also attempt to ping an IP address before offering it (and won't offer it, of course, if it gets a response).


ping and ARP ... ping can be turned off for instance.


That is a perfectly valid error that you should know about so you can go and fix your fucking network.


That's fine for those of us who know how to "fix our fucking network". Most consumers don't have a clue what that error means.


Most consumers won't ever see it, because one of the only ways to get that error is a manual configuration on at least one client which conflicts with the DHCP range.


  Use exit() or Ctrl-D (i.e. EOF) to exit
Gets me more often than I'd like to admit.


That message is generated by the exit object's __repr__() method; if evaluating the bare name exited automatically, anything that merely stringified the object could inadvertently quit the interpreter.
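A minimal sketch of the mechanism, simplified from what CPython's site module does (the class name and wiring here are illustrative, not the exact stdlib code):

```python
class Quitter:
    """Stand-in for the object bound to the name `exit` in the REPL."""
    def __init__(self, name, eof):
        self.name, self.eof = name, eof

    def __repr__(self):
        # Typing `exit` (no parens) at the prompt makes the REPL print repr(),
        # which is where the hint comes from. If repr() actually exited, any
        # code that merely stringified the object would kill the interpreter.
        return "Use %s() or %s to exit" % (self.name, self.eof)

    def __call__(self, code=None):
        # Only an actual call quits.
        raise SystemExit(code)

exit_obj = Quitter("exit", "Ctrl-D (i.e. EOF)")
print(repr(exit_obj))  # Use exit() or Ctrl-D (i.e. EOF) to exit
```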


Use "logout" to leave the shell.

Thanks, tcsh. Now I remember why I don't use you.


Why don't you just alias exit?


Because that's a B.S. thing to have to do. Should the whole community of users of that piece of software have to alias that command?


Another reason to switch to IPython: if you type "exit", it will just exit.

http://ipython.org/


Even worse, on Windows the message is:

  Use exit() or Ctrl-Z plus Return to exit
Well, first, there is no Return key on any PC keyboard I've seen in a very long time. There is an Enter key.

But that's minor. The bad part is if I'm used to Linux and try to exit on Windows with Ctrl+D, I get this cryptic message:

  >>> ^D
    File "<stdin>", line 1
      ♦
      ^
  SyntaxError: invalid syntax
  >>>
Suppose it's the other way around: I started out as a Windows user and got accustomed to using Ctrl+Z to exit.

Now what happens if I switch to Linux (as an inexperienced Linux user) and try to use Ctrl+Z to exit? Yes, I get this somewhat cryptic message:

  >>> 
  [1]+  Stopped                 python
  mike@s:~$
Now, you know that I haven't exited Python at all but have merely backgrounded it, but remember I'm an inexperienced Linux user and don't know about fg and bg and stuff like that yet. So every time I "exit" from Python I've added another background task. Oops.


Is there any reason you can't ctrl-c out of man pages?


Because man uses less as its default pager, and less is an interactive utility.

If you prefer, you can use another pager, like more, that will let you Ctrl+C out:

  export PAGER=more
Or activate less's -K option, which makes it quit on Ctrl+C:

  export PAGER='less -K'


Why can't interactive utilities be cancelled?


They can be cancelled, just not with CTRL+C - that key combo sends SIGINT [1] to the process, which can be caught by a handler, and often is when the writer of the utility thought there was something else that users would want that signal to do instead of quit. For example, in less, SIGINT will stop a long-running text search that's taking too much time.

[1] http://en.wikipedia.org/wiki/SIGINT_%28POSIX%29#SIGINT
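That catch-and-repurpose pattern is only a few lines in any language; here is a sketch in Python (less itself is written in C, so this is purely illustrative):

```python
import signal

state = {"interrupts": 0}

def on_interrupt(signum, frame):
    # Mimic less: treat Ctrl+C as "abort the current operation", not "quit".
    state["interrupts"] += 1

# Install the handler; from now on SIGINT no longer kills the process.
signal.signal(signal.SIGINT, on_interrupt)

# Simulate the user pressing Ctrl+C; the process survives, the handler runs.
signal.raise_signal(signal.SIGINT)
print(state["interrupts"])  # prints 1
```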


Wouldn't this mean that the application has some concept of state, then? Following output, running a search, etc?

If your state is just sitting in the file, interpret the signal as exit.


It is possible, though arguably it wouldn't necessarily be the best user experience. At least I know I have the habit of smacking ctrl-c several times in a row when something isn't responding as rapidly as I like. Things like ctrl-c probably should not be stateful if one of the potential things it can do is close your program.

As another example, perhaps it would be nice if Ctrl+W deleted one word backwards if you were currently editing text in your web browser, but closed your current tab/window otherwise. If the state were always what the user expects when they press Ctrl+W it would work fine, but in reality you would have lots of people accidentally closing tabs.


SIGINT in less wouldn't lose work though. It's safe and would be consistent with other apps and other uses of SIGINT inside less.


So if the search finished just before you issue Ctrl+C, you will close the application instead. Be wary of inconsistencies, even if they are just perceived inconsistencies.


It would be pretty annoying if I ran a search or a follow (shift+F), thought it was taking too long, and hit Ctrl+C right after it finished.

Ctrl+C doesn't mean "end this program." If you want that, consider Ctrl+\.


Even Ctrl+\ (SIGQUIT) doesn't always "end this program." Java uses it to print a stack trace to the console.


Most Java programs I have to deal with print stack traces to the console even without prompting because somebody thought that the `ex.printStackTrace();` IDE autofix is definitely more than enough to handle an exception.


I think that man uses 'less' as a pager, and 'less' uses vi-like key bindings.

It enables you to search using '/keyword', for example.


That's because vi uses less as a pager, too :)


If you press 'q', the man page will close.


That's great, but why doesn't Ctrl+C do it? I mean, it closes everything else on my system, except man pages, vim, and some other programs that use less.


This one annoyed me recently.

  $ bundle
  The source :rubygems is deprecated because HTTP requests are insecure.
  Please change your source to 'https://rubygems.org' if possible, or 'http://rubygems.org' if not.
Why can't Bundler just use the new URL? Or, better yet, assume https://rubygems.org unless I specify sources in my Gemfile?


SemVer means you can't break backwards compatibility. Everyone loves not breaking compatibility until they want a breaking change.


> SemVer means you can't break backwards compatibility.

No, SemVer means you increment major version numbers if, and only if, you break backwards compatibility.


Right, and therefore, if you don't want to increment the major version number, you can't break compatibility. I am really not going to get into an argument about this, you're being ridiculous.


> Right, and therefore, if you don't want to increment the major version number, you can't break compatibility.

That's true, but both silly and not what you said before.

There's no good reason to care about version numbers in and of themselves; if you care about them at all, it's because of their semantics. Using SemVer doesn't make you not want to break backward compatibility; not wanting to break backward compatibility is independent of the use of SemVer. Of course, if you don't want to break backward compatibility, you won't end up bumping the major version number under SemVer, but you've got the direction of cause and effect wired backward when you say that SemVer is the reason not to break backward compatibility.


Okay let me spell it out for you:

No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates, and so you better be getting something new that's worth the pain, not just a superficial change that happens to break compatibility only as a technicality.


> No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates

SemVer seems pretty much tangential to that, that's a problem with the particular history of the particular project.

If the original claim had been "Bundler's history of major version updates means that they don't want to bump their major version; combined with their commitment to SemVer, that means that any backward incompatible changes are avoided", rather than just that SemVer alone led to people in general seeking to avoid backward-incompatible changes, I wouldn't have challenged it.


So, what is the reason that makes someone scared of incrementing the major number?


End users expect major changes with a major number release (rightly or wrongly), so if you don't have any other than a minor breaking change, they might be annoyed. Of course if you're using semantic versioning you should probably just ignore them and increment anyway till they get used to it.


> End users expect major changes with a major number release (rightly or wrongly)

I think that's highly dependent on the particular market that a product is in, and how its version numbers are used in marketing. Chrome (which neither heavily markets around version numbers nor uses SemVer, but instead increments the major version number with every regularly-scheduled feature release) end-users (whether you are speaking of regular end-users or developers) probably don't have any particular expectations tied to major version bumps.

Microsoft Windows (which doesn't use SemVer, but does heavily market around major versions) end-users probably have much bigger expectations for major versions, as a direct result of the vendor's marketing investments.

A developer-focussed product that consistently uses SemVer and markets around particular new features rather than major version numbers probably won't have much expectation from its target users tied to major release bumps except that they will meet the SemVer definition of a major release bump.


This may be a terrifically silly question, but here goes anyways:

What sensitive data are people passing back and forth to rubygems of all places that needs encryption? Especially in the context of a bundle where you're more likely than not pulling down or updating gems (from a public repository, which contains no sensitive data)? And this need is apparently so pressing that it should be complained about to the user?


You're installing code to run using your privileges. Don't you want protection against MITM attacks?


That's why everyone who runs a package repository who isn't a shitdick SIGNS THEIR FUCKING PACKAGES. It protects against tampering on the mirrors, too, with the added bonus that you don't need SSL.

Oh, wait, this is rubyland. Nevermind.


And there's the confirmation that it was a silly question.

I've been on a plane all morning and am in no condition to be parsing the topics here :)


Confidentiality is not the only feature of HTTPS. Using HTTPS can also protect from simplest man-in-the-middle attacks by validating the certificate, thus improving data integrity.


Maybe because it wants you to be in control?

What they probably could do, though, is a 302 redirect to https. But that's frowned upon by some, too.


It's not just frowned upon. Having a 302 redirect is almost as insecure as not upgrading to https at all.

In browsers this is sort of acceptable because they let you visually confirm that you've been upgraded to https, and are now securely connected to the right server.

In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.

Ruby (on Rails) values convention over configuration. Per this guideline, it should have made https obligatory by default and overrideable with a flag.


> In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.

This assumes the client doesn't kick up a fuss. The entire point is that Bundler now does kick up a fuss. But it could also be made to follow any redirect first, and only kick up a fuss if the final resource is not https.
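A sketch of that policy (the fetch_location callback and both function names here are hypothetical, standing in for whatever HTTP layer Bundler actually uses):

```python
def resolve_final_url(url, fetch_location, max_hops=5):
    """Follow redirects; fetch_location(url) returns the Location header or None."""
    for _ in range(max_hops):
        location = fetch_location(url)
        if location is None:
            return url  # no further redirect: this is the final resource
        url = location
    raise RuntimeError("too many redirects")

def require_https_source(url, fetch_location):
    # Only complain if the *final* resource, after redirects, is not https.
    final = resolve_final_url(url, fetch_location)
    if not final.startswith("https://"):
        raise RuntimeError("insecure source after redirects: %s" % final)
    return final

# Toy example: http://rubygems.org redirects once to https://rubygems.org
hops = {"http://rubygems.org": "https://rubygems.org"}
print(require_https_source("http://rubygems.org", hops.get))  # https://rubygems.org
```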


That really doesn't provide any security advantage over just using http. The point of https is to prevent mitm and in this case there's nothing preventing somebody from mitming the original http connection and redirecting to an evil https site.


It could only allow redirect to the same domain and complain if a cert check failed.


I only want to be in control when my use case differs from the common case. I would wager that at least 95% of Gemfiles could source https://rubygems.org implicitly and automatically.


You can't guarantee that Ruby has OpenSSL support compiled in, it's not required.


But you can check, and fail if the user specifies https and it's not present, and give a warning if the user doesn't specify anything or specifies http and OpenSSL is not present.

I still think mdavidn is probably right that 95% or similarly high number has OpenSSL available, and it makes little sense to inconvenience all of those users when you can inconvenience "just" the few who don't instead.
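The decision table being proposed is small. A sketch, in Python for illustration, with Python's ssl module standing in for "OpenSSL support compiled in" (function name and warning texts are hypothetical):

```python
def choose_source(explicit=None, have_tls=None, warn=print):
    """Pick a gem source given an optional user choice and TLS availability."""
    if have_tls is None:
        try:
            import ssl  # presence check only; stands in for OpenSSL support
            have_tls = True
        except ImportError:
            have_tls = False

    if explicit is None:
        # No source specified: default to https when we can, warn when we can't.
        if have_tls:
            return "https://rubygems.org"
        warn("warning: no TLS support compiled in; falling back to http")
        return "http://rubygems.org"
    if explicit.startswith("https://") and not have_tls:
        # User explicitly asked for https but we can't deliver: hard failure.
        raise RuntimeError("https source requested but TLS support is unavailable")
    if explicit.startswith("http://") and have_tls:
        warn("warning: https is available; consider 'https://rubygems.org'")
    return explicit
```

This way only the minority without OpenSSL ever sees a message, instead of everyone.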


Here's another one. In Facebook app settings, if you enter a URL ending with a "/", the error message is "The url can't end with a '/'". I can't help but think about the lazy programmer who preferred to shoot out an error message rather than fix it. The irony is that it's often way easier and faster to add a .replace() and fix it than to write, test, and translate the error message.


http://example.com/page and http://example.com/page/ could be totally different pages... So .replace()'ing could silently cause unexpected breakage.


They shouldn't be. Slash normalization is pretty normal, and it's one of the first things frameworks add.
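For example, the kind of trailing-slash normalization many routers apply looks like this (a hypothetical policy; whether it's correct is exactly what's disputed in this thread, since per the URL spec /bar and /bar/ are distinct):

```python
def normalize_path(path):
    # Collapse a single trailing slash, keeping the root "/" intact.
    if len(path) > 1 and path.endswith("/"):
        return path[:-1]
    return path

print(normalize_path("/bar/"))  # /bar
print(normalize_path("/bar"))   # /bar
print(normalize_path("/"))      # /
```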


It doesn't matter if some people do slash normalizing. It's not how the web works at the standards-level.


Well, maybe everyone's unawareness of this means the standard is wrong.


A de facto standard? We'll have to ask the committee about that.


No.


However, they're probably building to the URL spec which is what they should be doing.


They are not allowing the trailing / anyway, so what's the difference?


If you type google.com you don't expect FB to replace it to bing.com without telling you.

/path and /path/ are two totally different things in the standards, just like /path and /path2. Sure, in most cases the versions with and without the trailing slash give the same page, but it's not something you can assume.

Same reason why "we don't allow https links" doesn't mean you can just drop the s.


That's not a safe assumption, especially in the presence of a CMS system.

foo.com/bar and foo.com/bar/ (-> foo.com/bar/index.html) can be totally separate URLs.


You're missing his point. It's a safe assumption in the case that the latter gives you an error pointing you to the former. If the latter were a blank/meaningful response that would make sense, but it's not.


You misunderstood the OP; the FB error he describes is not about hitting a URL "path/" that 301s to "path" and FB not detecting it.

In the uri standards, /path and /path/ are just as different as /path and /path2.


You are describing a developer-oriented UI. Silently doing an automagic .replace() on developer-entered information may cause more infuriation and (importantly) bugs than flagging it as invalid. Devs should not be left blissfully unaware of string format restrictions.


If you change a file extension, the file may become unusable! Are you sure?


This, or even "You have used the extension ".gz" at the end of the name. The standard extension is ".tar.gz". You can choose to use the standard extension instead."

Now click "Use .tar.gz" => file now has name foo.tar.tar.gz


mmm.. foo tartar


It always annoys me when git does this:

    $ git submodule update
    You need to run this command from the toplevel of the working tree.
I bet git knows exactly where that is. So go there and do it!


It may know exactly where that is, but it doesn't necessarily know that you weren't trying to only update submodules in the current directory. Most git commands operate only on stuff reachable from the current directory, but `git submodule update` is incapable of being limited like that.


I'm not sure that is true...

You can update only a specific repo if you'd like. In which case it would be possible to limit the submodule command to only submodules existing in the current directory.

Yes, it would need to scale back up to the .gitmodules file, but it is still possible.


Yes, obviously the behavior of `git submodule update` could be rewritten. But the way it works right now, it can't be restricted to just submodules reachable from the current directory.


This might be because it's ambiguous which submodules you want to update if you have nested submodules.

In other words, git knows "exactly where that is" by searching parent directories until it sees a ".git". Unfortunately, submodules also have ".git" in them since they're full git repositories themselves.

git submodules are not the nicest part of git, unfortunately.


OK, so it wouldn't work if you're in a submodule. It could print a warning if there are no submodules. But it would still work perfectly fine in the majority of cases.

For those also annoyed, I use this git alias, which does what I want:

    sup = !sh -c 'cd `git rev-parse --show-toplevel` && git submodule update --init' -


Honestly, git's the only program on this page that I'm fine with it being explicit and pedantic about what it's going to do. There are many situations where it could probably guess what I wanted to do but be wrong...

I mean, christ, I fetch and merge as separate steps.


  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  Permissions 0440 for './id_rsa' are too open.
  It is recommended that your private key files are NOT accessible by others.
  This private key will be ignored.
What they think they know about the user and group setup on my machine doesn't give me a lot of confidence in whatever other unsupported assumptions the software probably makes. (and despite saying "recommended" there's no override)

edit: to be clear, this is SSH's "shitdick excuse" not Heroku's.


Maybe I'm weird, but I personally appreciate software that is fundamental to the security of my computing environment being a little paranoid...


Isn't this a warning from ssh not heroku ?


Sibling responses are addressing the general phenomenon in software.


It is from SSH indeed.


Actually, StrictHostKeyChecking=no overrides it, right? And you can stuff that into your ssh config file, if you prefer that over a chmod.


No, that controls whether you will be allowed to connect to a host when the host key is not the same as the one previously used (MITM detection).


Doesn't in my version. Looks like the behavior is in key_perm_ok in authfile.c and not configurable at all.


chmod 0600 ./id_rsa


If the file really was already in the badguys group and 0440 it's already been compromised so that's just covering it up. I suppose if SSH really had the courage of its convictions it'd automatically upload the key fingerprint to some revoked key blacklist.


You have the ordering backwards: SSH complains now so the problem is fixed before someone gets limited access to the computer.

If your account or computer has already been compromised you're still screwed but this helps considerably on shared computers, NFS, etc. where the access situation is easier to misunderstand.


blacklist the way sudo does http://xkcd.com/838/


on a fresh debian install, the first user (me) wasn't in the sudoers list. I sudo'd, and got reported. The report mail went to root, which was cloned to the first user...


As always, the shining counterexample is TeX: "Missing $ inserted."


Makes me think about the Arch Linux install I got halfway through last night. As I started to fall asleep from the sheer boredom induced by copying and pasting commands, I realized that I had no fucking idea why I was doing it. A fresher kernel wasn't worth it. I hit reset and went with Ubuntu 12.10 instead.


And this is probably for the best, because you probably aren't the right kind of user for the distribution and community.

If you take a look at [1], you will see that the wiki provides a lot more than just commands to type in, but also a rationale and explanation what each component does. Usually there are also links for further reading that should help your understanding of the topic.


Fedora user here... if Arch's forums are any indication of the quality of the distro, then Arch is excellent.

Was looking for some help getting started with Awesome WM as well as sorting out an issue with crap Broadcom wireless card -- both searches led me to Arch, and it was there that I found informative, detailed threads that led to the solutions.


As an Awesome user on Ubuntu, I often end up referring to the ArchWiki. Also for a lot of driver issues. Their wiki is really excellent, I guess because when people are dealing with a lot of raw, newer packages they tend to work out the finer details of issues that people later on benefit from.


What bothered me the most was that I did install Arch over the summer. It was awesome! It wasn't too hard, but more technical than most "consumer" installs. This was due to an EXCELLENT bit of software called AIF (the Arch Install Framework) that offered a ton of usability improvements while maintaining the customizability that many crave.

So, I started recommending it to a bunch of people I know as an excellent development environment. What was odd was that they all reported back that it was incredibly hard to do and manage.

It turns out that just days after I had done this install, Arch completely removed AIF in favor of a bunch of smaller, much less helpful scripts, specifically so that installing wouldn't be as simple (and thus, the thinking went, less flexible).

I was, and still am, totally floored by this. I switched to using Ubuntu Server with my own choice of UI layer (Xmonad for life!) after dealing with that.


I don't use Heroku but, from the help page, it seems the point is to push users to learn the new syntax. I don't know if it's a good idea or not, but I can see how they got there. Without this, they'll never be able to remove the deprecated command.


Why do they need to remove it? It's a simple alias.


Desktop SQL clients get me with this... connect to a database, run a query, go to another window for a while. Come back, try to run another query...

  "Database connection closed. Reconnect?"
Drives me nuts... I ran a query, didn't I?!?


I understand your frustration, but what if the program thinks you're doing something you're not? You may do irreversible damage. I guarantee that you'd be far more pissed if you trashed an entire project than you are about 'shitdick excuses'.


Sure, but:

    heroku console -> heroku slightlydifferentsyntaxforconsole 
is not one of those cases.


    $ git statis
    error. Did you mean status? Y/n
You'd get the best of both worlds.


I want git to do that so badly. It's not like its output is particularly pipe-friendly anyway. Just ask me, don't make me type the whole thing again. It's especially annoying when it's a longer command with a bunch of arguments, or one that couldn't possibly have negative side-effects, like git status or git log.
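For what it's worth, git does have a knob for this: help.autocorrect. (The "prompt" value needs a fairly recent git, 2.34 or so; older versions only accept the numeric form.)

```shell
# Ask before running the corrected command when you typo a subcommand
git config --global help.autocorrect prompt

# Older gits: wait N deciseconds, then run the guessed command automatically
git config --global help.autocorrect 10
```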



Thank you so much.


Why are you typing the whole thing again instead of using the up arrow or Ctrl-P?


zsh can do that:

    > ce somedir
    zsh: correct 'ce' to 'cd' [nyae]? y


Ugh, this is my second least favorite "feature" of zsh. (First is "are you sure you want to delete all of the files in this directory?")


I find it quite handy and it can quite trivially be disabled if you don't like it. Not sure what's not to like.


It's on by default, so you have to disable it every time you set up zsh on a new machine. Yes, it's not a big deal. I just find it annoying.
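For reference, both prompts are plain zsh options (names from the zsh manual), so a couple of lines in ~/.zshrc handles it on each new machine:

```shell
# ~/.zshrc
unsetopt correct       # stop offering to correct command names ("correct 'ce' to 'cd'?")
unsetopt correct_all   # ...and don't try to correct arguments either (harmless if already unset)
setopt rm_star_silent  # skip the "sure you want to delete all the files?" prompt on rm *
```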


Use an alias! I like

  git config --global alias.s 'status --short --branch'


Ok, but please don't alias statis to status. That's a horrendous typo.


The old "DELETE FROM some_table" <enter>

Crap.. I meant "DELETE FROM some_table WHERE id=1"

sighs


The MySQL command line client has an argument "--i-am-a-dummy" that refuses to run UPDATEs and DELETEs without a WHERE clause.

More commands could use options to adjust how big a gun they'll let you fire at your feet...


Such options might be more appealing to users if they were named something like "--extra-safety" or "--require-where".


Humorless users can use "--safe-updates"

http://dev.mysql.com/doc/refman/5.0/en/mysql-tips.html#safe-...


...which is not the default. Fuck them.
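You can at least make it the default yourself, since the client reads option files (assuming the usual ~/.my.cnf location):

```ini
# ~/.my.cnf -- the [mysql] group applies only to the mysql command-line client
[mysql]
safe-updates
```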


I always SELECT first and confirm I got the right result. Then I add the DELETE just before the SELECT and comment out the SELECT line. Never fails!


After doing that on a local db a couple of times, I started just writing the WHERE clause first. Goes like this:

  WHERE id = 1  →  Ctrl-A  →  DELETE FROM ...

Found it's also helpful for writing SELECTs and avoiding the whole "oh crap, now let's load the entire DB" type of moment.


If you're in a GUI where you can run only the highlighted code, I do this:

    select * -- delete
    from sometable
    where id = 1
Then you can run the whole thing to see what would be deleted, and highlight starting at the word delete to actually delete.


I do this too. If I'm saving the file for reuse, I wrap the entire thing in /* */ when I'm done, just in case I accidentally hit "run script" instead of "run statement".


Do that once on a production system in the middle of a shipping run. Lesson learned.

All my deletes written first as selects now, as a sibling comment suggested.


That's what transactions are for. You realize the mistake, type ROLLBACK, and move on.

Oh yeah, you're using a toy database, aren't you.
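A sketch of that workflow (using sqlite3 here only because it makes a self-contained demo; MySQL/InnoDB behaves the same once you START TRANSACTION):

```shell
sqlite3 :memory: <<'SQL'
CREATE TABLE some_table (id INTEGER);
INSERT INTO some_table VALUES (1), (2), (3);
BEGIN;
DELETE FROM some_table;           -- oops, no WHERE clause
SELECT count(*) FROM some_table;  -- prints 0, but only inside our transaction
ROLLBACK;                         -- undo it
SELECT count(*) FROM some_table;  -- prints 3 again
SQL
```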


The solution to that is to make potentially damaging operations reversible, not to throw the problem back in the user's face. In fact, this is a very reliable design smell: any time you find yourself throwing something back in the user's face, stop and think about the problem for a few minutes. 99% of the time, the right solution is to provide more reversibility instead.


cp(1)'s handling of directories.

  ~/foo$ mv bar baz
  ~/foo$ ls
  baz
  ~/foo$ cp baz bar
  cp: omitting directory `baz'
  ~/foo$ ls
  baz


To be fair, `cp baz bar` is the only one of those commands that might take three hours.


That's not true. mv across filesystems degrades to a copy, and can take just as long.


And CoW filesystems are getting more common, so cp's best case can be almost as fast as mv's.


Really? After some quick digging: nope. Tl;dr: not by default, only on Linux, only with btrfs (still marked experimental), and only with the `--reflink` flag.

FreeBSD `cp(1)`: `copy_file()` does a simple `read()`/`write()` loop with a buffer[0] (despite ZFS maybe supporting CoW).

GNU coreutils `cp(1)`: `copy_reg()` calls out to `clone_file()` if and only if `--reflink` was passed[1], and `clone_file()` fails on everything but btrfs[2].

[0]: FreeBSD `copy_file()`: https://github.com/freebsd/freebsd/blob/master/bin/cp/utils....

[1]: coreutils `copy_reg()`: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...

[2]: coreutils `clone_file()`: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...
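For completeness, the opt-in looks like this with GNU cp (I believe newer coreutils releases have since made reflink-when-possible the default, but here you have to ask for it):

```shell
# --reflink=auto attempts a CoW clone and quietly falls back to a normal copy
cp --reflink=auto big.img big-copy.img

# --reflink=always fails loudly instead of falling back (handy for checking
# whether your filesystem actually supports cloning)
cp --reflink=always big.img big-copy2.img
```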


Lotus Notes is hilariously awful in this regard.


> Lotus Notes is hilariously awful

I think it's enough to stop right there. :)


> some shitdick excuse

No reason to associate gays with people who can't program and/or design software competently.



