Nothing in computing is worse than software that knows exactly what you want it to do, then gives some shitdick excuse as to why it's not going to do it in an effort to get you to jump through meaningless hoops.
Going to File->Save As... and trying to save to .jpg brings up a dialog box telling me to exit the current Save As dialog and go back to File->Export to save as .jpg. Asinine.
What I don't get is that you can open any image type and modify it in gimp on the fly. The reason they are separated is that xcf can maintain layers and any other image loses that information. But anyone who uses gimp for a few minutes will realize this, and then you don't need 3 dialogs. Hell, have a warning message in the save as box if you haven't saved a project in xcf.
I think the new behavior is great. First of all because it remembers both where I'm saving and where I've been exporting in my workflow, and second because frankly saving in GIMP's native format and creating a flat lossy JPEG that approximates the work you've been doing are really fucking different things, and I find it comforting that they choose to make that explicit. I can see how, if you open GIMP once a month to make a LOLCAT, this behavior could be annoying.
The comments I've found seem to indicate that the Gimp developers have no intention of changing this back, and that anyone who wants it back to the Gimp 2.6 and earlier approach is not in their "target audience". In other words, they hold their most common users in the highest of contempt.
I think they value developers and people who use their software frequently. As a developer, I love the separation of save and export. They really are separate functions, and the XCF actually saves the export location relative to the original XCF location. It makes working on large projects so much easier.
If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways. If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.
I've used gimp exclusively since 1997 and I hate every bit of 2.8's interface ... from the impossible to find dockable tabs that hide every which way, to the laborious change in workflows, to the new way they've decided to do ranges ... making it really simple to choose numbers like 15 bazillion versus 10 bazillion but making it impossibly difficult to easily go between 1 and 4.
And they STILL only have 8 bits per plane (people were whining about this in the 90s ... when Bill Clinton was president), STILL can't remember sub-pixel rendering steps ... other than improved PSD importing, I don't see the point.
I'm sure they've put a lot of work into this; but not in the right areas.
Their target audience should be highly technical people that need to do image work and want a tool like vim/emacs/zsh etc but for images; learning curves are ok as long as you can combine simple steps in composable tasks to do amazing things.
I say the same things about all my good tools: "I've been using it for 10+ years, I totally suck at it still, and I would never use anything else". You can say that about paintbrushes, driving, violins, maybe even photoshop or excel; but not gimp.
They were going down that road for a while in the 2.2 or so days, but then they did a 180.
> I think they value developers and people who use their software frequently.
I use their software very frequently, and the biggest frustration with this is that it completely screws with my muscle memory for the Gimp and every other application. I continue using 2.6 because 2.8 is just too frustrating.
When working with images so much that you no longer think about the keyboard and mouse, but you think "Save" and it just happens, a change like this is like rewiring your hand so that all the fingers' nerves are reversed, and you have to get permission from your navel before you can lift your pinky.
> If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways.
Gimp has always been the ideal tool to just do a quick edit to something, because it's the tool I know. I've been using it for more than a decade. I don't need some new developers to come along and think they know better than me how I use their software. That's Apple's billion dollar job, and even they get it wrong often enough.
> If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.
Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.
That's something Emacs has really gotten right: it doesn't force me to relearn keybindings that are so deep in my fingers they'll come out my elbows someday.
> Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.
If you have any constructive feedback - or better yet, _code_ - make a polite post on their forum or mailing list. No one here cares about your specific workflow.
Actually, you're the only one here vehemently defending the Gimp's new behavior. The other comments seem to prefer Gimp 2.6's Save behavior. But this is getting unnecessarily petty and personal; it's obvious we disagree, and I'll just stick to 2.6 and stop recommending Gimp to people who would otherwise be happy pirating something more expensive.
> I think they value developers and people who use their software frequently.
I think people who use their software frequently would take offense to the software telling you to fuck off and go do it a different way.
The intent is obvious, both to the user and the developer. Why make people's lives more difficult for such questionable benefit? MS Paint doesn't have this problem. Photoshop doesn't have this problem. The user knows what they want to do, and they could do it before the change. Why are you getting in their way?
It's as if everyone in the Gnome/Gtk/Gimp world wants to be a demanding dictator like Steve Jobs, but doing that and getting it right requires an insane budget, and even after all their effort, I still don't like Apple's stuff.
There have been multiple forks, but they all end up dying. The ideal solution would be an automated fork that applies the UI patch to the latest official sources, but obviously the Gimp developers aren't going to care if they break your patch.
Why? The new workflow is better for everything except for quick edits of files. And the GIMP isn't the right tool for that job. Use a smaller, simpler program for a quick edit.
If you really want to destroy your layers and history (which GIMP saves into XCFs) there is a menu option for that. But you almost certainly don't want to do that.
Simpler program? I usually just end up using Photoshop instead :)
When you save a layered file in an unlayered format, Photoshop just forces the "Save As a Copy" option to be checked and then saves a flattened version while preserving the open one. If you then decide to quit without saving a layered version, you get an "Are you sure you wish to quit without saving document x?" prompt.
The way Photoshop handles it is far superior imho. With gimp, iirc, even if the file is not layered you still have to do export?
And for many edits, I simply don't want to save layers or history. There's just no point if I'm cropping, converting, re-sizing, or quickly fixing something. If I do, then I'll save it as a layered file, and layered files are by far in the minority. I know what I'm doing; I don't need the app to second-guess me, and it seems the new workflow is better for people who don't understand the difference between layered and un-layered files.
From what I recall, wasn't that the behavior that Gimp used to have? It's how I remember it, and I have used only Gimp for the past decade. I was frankly surprised and confused when the new behavior presented itself.
Back in the 2.6 era, yes, this was the behavior. It was changed because, in my opinion, the Gnome developers jumped the shark somewhere around Gnome 3. People make suggestions, they argue some divine vision of how computers work, and a year later they are still hemorrhaging users and have little adoption because of their ironclad positions.
So what smaller, simpler program (other than Gimp 2.6) has all of Gimp's features, but is suitable for a quick edit? Say I have a .png image stored in a version control system, with no need for layers or history, and I want to do a quick perspective clone to paint over something?
Only today did I stumble upon this behaviour for the first time (I was used to Gimp 2.6). But I understood the distinction between "save" and "export" and got used to the ctrl+E shortcut immediately. I think it's good that these are two separate functions, actually.
This is pretty much the sole reason I don't use gimp anymore and have no plans to go back, ever. I used it on OSX under X11 and had no problems with the mental gymnastics of using control instead of command because, well, it wasn't GIMP's fault.
I was so excited when they made it a native app, too.
If Command-E actually brought the window up in front of the rest of the windows instead of hiding it behind the main window, it might have been a different story.
Gimp Save dialog has always sucked in one way or another.
For example, I open a jpg and go to save it as a PNG. I select PNG from the drop down list of available types and type the filename, but unless I also add .PNG to the filename it makes a jpg again.
It doesn't presently prompt to renew — it just says that there's a conflict and doesn't tell you which settings panel might be helpful. Only a very tiny subset of users will actually try to reconfigure their router or make other such changes as the result of this information. Options requiring a prompt before automatic renewal would be great for those users, but it's not relevant in 99%+ of cases.
But that doesn't mean that alongside the warning the computer can't try to get a new IP address by itself, even if it won't fix the problem 100% of the time.
No. If something is wrong with the current network config, don't automatically muck around with it so that I can't troubleshoot the problem myself. It's important to understand the current state to figure out how you got into that broken state; kind of like a breakpoint at the first exception. Automatically renewing is like 'On Error Resume Next'.
That said, I agree that it should suggest renewing the DHCP lease.
Well if the current network config is set to simply use DHCP, then renewing DHCP is a pretty safe move.
If the DHCP server is set to serve a static IP for the MAC, the problem will persist; if the DHCP server is dynamic, it will just do what it's designed to do and assign an unused address, fixing the problem while not changing the network configuration at all.
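For reference, a manual renewal is a one-liner on most Linux boxes. A quick sketch, assuming the client is dhclient and the interface is eth0 (both assumptions; a NetworkManager setup would use nmcli instead):
# Release the current lease, then request a fresh one.
# dhclient and the interface name (eth0) are assumptions; other setups may use
# dhcpcd, or `nmcli connection up <name>` under NetworkManager.
sudo dhclient -r eth0
sudo dhclient eth0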
For good product usability, being proactive and getting 80% of the use cases right is much more important than making no mistakes and shifting the decision (and blame) to the user, especially since most of the time the user has no technical background to make the correct decision. I used to think like you until I read The Inmates Are Running the Asylum[0]. I still do for my personal workflow (that is why I prefer archlinux to ubuntu), but I try to be proactive about error catching in the software I write.
The usual cause of this for most people is that they've rebooted their router and it kept the leases in memory, so it isn't aware it assigned the IP to multiple computers. Really, automatically renewing the lease if you're on DHCP is probably the best behaviour for a home/simple OS. For computers on a network administered by professionals, it should be possible to disable this (and other automatic) behaviour.
Does anyone know what the iPhone/iPad/etc. do in this case?
The iDevices all bounce around getting multiple addresses within the DHCP range. Last time I was administering an office network full of iDevices, I had to increase the DHCP range to account for them bouncing all over the range and using up addresses pretty needlessly.
Most consumers won't ever see it, because one of the only ways to get that error is a manual configuration on at least one client which conflicts with the DHCP range.
Suppose it's the other way around: I started out as a Windows user and got accustomed to using Ctrl+Z to exit.
Now what happens if I switch to Linux (as an inexperienced Linux user) and try to use Ctrl+Z to exit? Yes, I get this somewhat cryptic message:
>>>
[1]+ Stopped python
mike@s:~$
Now, you know that I haven't exited Python at all but have merely backgrounded it, but remember I'm an inexperienced Linux user and don't know about fg and bg and stuff like that yet. So every time I "exit" from Python I've added another background task. Oops.
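For anyone bitten by this: the "exited" interpreters are still there, and the same shell can recover or kill them. A quick sketch:
$ jobs                    # list stopped/backgrounded jobs
[1]+  Stopped                 python
$ fg %1                   # bring the stopped Python session back to the foreground
$ kill %1                 # or, if you really meant to exit, terminate it instead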
They can be cancelled, just not with CTRL+C - that key combo sends SIGINT [1] to the process, which can be caught by a handler, and often is when the writer of the utility thought there was something else that users would want that signal to do instead of quit. For example, in less, SIGINT will stop a long-running text search that's taking too much time.
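To make that concrete, here's a minimal shell sketch of catching SIGINT so that Ctrl-C cancels the current operation rather than killing the program, roughly the behaviour described for less above:
#!/bin/sh
# Minimal sketch: trap SIGINT so Ctrl-C cancels the "search", but the
# program itself keeps running and exits normally afterwards.
interrupted=0
trap 'interrupted=1' INT
echo "Searching... press Ctrl-C to cancel the search (not the program)."
for i in 1 2 3 4 5; do
    sleep 1
    [ "$interrupted" -eq 1 ] && { echo "Search cancelled."; break; }
done
echo "Still alive; back to the prompt."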
It is possible, though arguably it wouldn't necessarily be the best user experience. At least I know I have the habit of smacking ctrl-c several times in a row when something isn't responding as rapidly as I like. Things like ctrl-c probably should not be stateful if one of the potential things it can do is close your program.
As another example, perhaps it would be nice if ctrl-w deleted one word backwards if you were currently editing text in your web browser, but closed your current tab/window otherwise. If the state is always what the user expects when they press ctrl-w it would work fine, but in reality you would have lots of people accidentally closing tabs.
So if the search finished just before you issue ctrl+c, you will close the application instead. Be wary of inconsistencies, even if they are just perceived inconsistencies.
Most Java programs I have to deal with print stack traces to the console even without prompting because somebody thought that the `ex.printStackTrace();` IDE autofix is definitely more than enough to handle an exception.
That's great, but why doesn't ctrl-c do it? I mean, it closes everything else on my system, except man pages, vim and some other programs that use less.
$ bundle
The source :rubygems is deprecated because HTTP requests are insecure.
Please change your source to 'https://rubygems.org' if possible, or 'http://rubygems.org' if not.
Why can't Bundler just use the new URL? Or, better yet, assume https://rubygems.org unless I specify sources in my Gemfile?
Right, and therefore, if you don't want to increment the major version number, you can't break compatibility. I am really not going to get into an argument about this, you're being ridiculous.
> Right, and therefore, if you don't want to increment the major version number, you can't break compatibility.
That's true, but both silly and not what you said before.
There's no good reason to care about version numbers in and of themselves; if you care about them at all, it's because of their semantics: using SemVer doesn't make you not want to break backward compatibility; not wanting to break backward compatibility is independent of the use of SemVer. Of course, if you don't want to break backward compatibility, you won't end up bumping the major version number under SemVer, but you've got the whole direction of cause-and-effect wired backward when you say that SemVer is the reason not to break backward compatibility.
No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates, and so you better be getting something new that's worth the pain, not just a superficial change that happens to break compatibility only as a technicality.
> No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates
SemVer seems pretty much tangential to that, that's a problem with the particular history of the particular project.
If the original claim had been "Bundler's history of major version updates means that they don't want to bump their major version; combined with their commitment to SemVer, that means that any backward incompatible changes are avoided", rather than just that SemVer alone led to people in general seeking to avoid backward-incompatible changes, I wouldn't have challenged it.
End users expect major changes with a major number release (rightly or wrongly), so if you don't have any other than a minor breaking change, they might be annoyed. Of course if you're using semantic versioning you should probably just ignore them and increment anyway till they get used to it.
> End users expect major changes with a major number release (rightly or wrongly)
I think that's highly dependent on the particular market that a product is in, and how its version numbers are used in marketing. Chrome (which neither heavily markets around version numbers nor uses SemVer, but instead increments the major version number with every regularly-scheduled feature release) end-users (whether you are speaking of regular end-users or developers) probably don't have any particular expectations tied to major version bumps.
Microsoft Windows (which doesn't use SemVer, but does heavily market around major versions) end-users probably have much bigger expectations for major versions, as a direct result of the vendor's marketing investments.
A developer-focussed product that consistently uses SemVer and markets around particular new features rather than major version numbers probably won't have much expectation from its target users tied to major release bumps except that they will meet the SemVer definition of a major release bump.
This may be a terrifically silly question, but here goes anyways:
What sensitive data are people passing back and forth to rubygems of all places that needs encryption? Especially in the context of a bundle where you're more likely than not pulling down or updating gems (from a public repository, which contains no sensitive data)? And this need is apparently so pressing that it should be complained about to the user?
That's why everyone who runs a package repository who isn't a shitdick SIGNS THEIR FUCKING PACKAGES. It protects against tampering on the mirrors, too, with the added bonus that you don't need SSL.
Confidentiality is not the only feature of HTTPS. Using HTTPS can also protect from simplest man-in-the-middle attacks by validating the certificate, thus improving data integrity.
It's not just frowned upon. Having a 302 redirect is almost as insecure as not upgrading to https at all.
In browsers this is sort of acceptable because they let you visually confirm that you've been upgraded to https, and are now securely connected to the right server.
In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.
Ruby (on Rails) values convention over configuration. It should have made https obligatory per default and overrideable with a flag per this guideline.
> In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.
This assumes the client doesn't kick up a fuss. The entire point is that Bundler now does kick up a fuss. But it could also be made to follow any redirect first, and only kick up a fuss if the final resource is not https.
That really doesn't provide any security advantage over just using http. The point of https is to prevent mitm and in this case there's nothing preventing somebody from mitming the original http connection and redirecting to an evil https site.
I only want to be in control when my use case differs from the common case. I would wager that at least 95% of Gemfiles could source https://rubygems.org implicitly and automatically.
But you can check, and fail if the user specifies https and it's not present, and give a warning if the user doesn't specify anything or specifies http and OpenSSL is not present.
I still think mdavidn is probably right that 95% or similarly high number has OpenSSL available, and it makes little sense to inconvenience all of those users when you can inconvenience "just" the few who don't instead.
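As a rough illustration of the kind of check meant here (a hypothetical probe, not Bundler's actual logic), testing whether Ruby has OpenSSL support is a one-liner:
# Hypothetical probe: succeed if Ruby can load OpenSSL, otherwise warn/fall back.
ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION' || echo "no OpenSSL available; warn and fall back to http"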
Here's another one. In Facebook app settings, if you enter a url ending with a "/", the error message is "The url can't end with a '/'". Can't help but think about that lazy programmer who preferred to throw an error message rather than fix it. The irony is that it's often way easier and faster to add a .replace() and fix it than to write, test and translate the error message.
If you type google.com you don't expect FB to replace it to bing.com without telling you.
/path and /path/ are two totally different things in the standards, just like /path and /path2. Sure in most cases with and without trailing slash gives the same page; but it's not something you can assume.
Same reason why "we don't allow https links" doesn't mean you can just drop the s.
You're missing his point. It's a safe assumption in the case where the latter gives you an error pointing you to the former. If the latter were a blank/meaningful response, that would make sense, but it's not.
You are describing a developer-oriented UI. Silently doing an automagic .replace() on developer-entered information may cause more infuriation and (importantly) bugs than flagging it as invalid. Devs should not be left blissfully unaware of string format restrictions.
This, or even "You have used the extension ".gz" at the end of the name. The standard extension is ".tar.gz". You can choose to use the standard extension instead."
Now click "Use .tar.gz" => file now has name foo.tar.tar.gz
It may know exactly where that is, but it doesn't necessarily know that you weren't trying to only update submodules in the current directory. Most git commands operate only on stuff reachable from the current directory, but `git submodule update` is incapable of being limited like that.
You can update only a specific repo if you'd like. In which case it would be possible to limit the submodule command to only submodules existing in the current directory.
Yes, it would need to scale back up to the .gitmodules file, but it is still possible.
Yes, obviously the behavior of `git submodule update` could be rewritten. But the way it works right now, it can't be restricted to just submodules reachable from the current directory.
This might be because it's ambiguous which submodules you want to update if you have nested submodules.
In other words, git knows "exactly where that is" by searching parent directories until it sees a ".git". Unfortunately, submodules also have ".git" in them since they're full git repositories themselves.
git submodules are not the nicest part of git, unfortunately.
OK, so it wouldn't work if you're in a submodule. It could print a warning if there are no submodules. But it would still work perfectly fine in the majority of cases.
For those also annoyed, I use this git alias, which does what I want:
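(The alias itself isn't quoted above; one plausible version, purely as a sketch and not necessarily the poster's exact alias, jumps to the repository root before updating:)
# Hypothetical reconstruction: run submodule update from the top of the working tree,
# no matter where in the repository you currently are.
git config --global alias.supall '!cd "$(git rev-parse --show-toplevel)" && git submodule update --init --recursive'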
Honestly, git's the only program on this page that I'm fine with it being explicit and pedantic about what it's going to do. There are many situations where it could probably guess what I wanted to do but be wrong...
I mean, christ, I fetch and merge as separate steps.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0440 for './id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
What they think they know about the user and group setup on my machine doesn't give me a lot of confidence in whatever other unsupported assumptions the software probably makes. (and despite saying "recommended" there's no override)
edit: to be clear, this is SSH's "shitdick excuse" not Heroku's.
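For reference, the fix ssh is asking for is simply to tighten the key's permissions:
# What ssh wants before it will use the key: owner-only access.
chmod 600 ./id_rsa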
If the file really was already in the badguys group and 0440 it's already been compromised so that's just covering it up. I suppose if SSH really had the courage of its convictions it'd automatically upload the key fingerprint to some revoked key blacklist.
You have the ordering backwards: SSH complains now so the problem is fixed before someone gets limited access to the computer.
If your account or computer has already been compromised you're still screwed but this helps considerably on shared computers, NFS, etc. where the access situation is easier to misunderstand.
On a fresh debian install, the first user (me) wasn't in the sudoers list. I sudo'd, and got reported. The report mail went to root, which was cloned to the first user...
Makes me think about the Arch Linux install I got half way through last night. As I started to fall asleep from the sheer boredom induced by copying and pasting commands, I had realized that I had no fucking idea why I was doing it. A fresher kernel wasn't worth it. I hit reset and went Ubuntu 12.10 instead.
And this is probably for the best, because you probably aren't the right kind of user for the distribution and community.
If you take a look at [1], you will see that the wiki provides a lot more than just commands to type in, but also a rationale and explanation what each component does. Usually there are also links for further reading that should help your understanding of the topic.
Fedora user...if Arch's forums are any indication of the quality of the distro, then Arch is excellent.
Was looking for some help getting started with Awesome WM as well as sorting out an issue with crap Broadcom wireless card -- both searches led me to Arch, and it was there that I found informative, detailed threads that led to the solutions.
As an Awesome user on Ubuntu, I often end up referring to the ArchWiki. Also for a lot of driver issues. Their wiki is really excellent, I guess because when people are dealing with a lot of raw, newer packages they tend to work out the finer details of issues that people later on benefit from.
What bothered me the most was that I did install Arch over the summer. It was awesome! It wasn't too hard, but more technical than most "consumer" installs. This was due to an EXCELLENT bit of software called AIF (Arch Install Framework) that offered a ton of usability improvements while maintaining the customizability that many crave.
So, I started recommending it to a bunch of people I know as an excellent development environment. What was odd was that they all reported back that it was incredibly hard to do and manage.
It turns out that just days after I had done this install, Arch completely removed AIF in favor of a bunch of smaller, much less helpful scripts, specifically so that the install wouldn't be as simple (and thus, in their view, less flexible).
I was, and still am, totally floored by this. I switched to using Ubuntu Server with my own choice of UI layer (Xmonad for life!) after dealing with that.
I don't use Heroku but, from the help page, it seems the point is to push users to learn the new syntax. I don't know if it's a good idea or not, but I can see how they got there. Without this, they'll never be able to remove the deprecated command.
I understand your frustration, but what if the program thinks you're doing something you're not? You may do irreversible damage. I guarantee that you'd be far more pissed if you trashed an entire project than you are about 'shitdick excuses'.
I want git to do that so badly. It's not like its output is particularly pipe-friendly anyway. Just ask me, don't make me type the whole thing again. Especially annoying when it's a longer command with a bunch of arguments or one that in no way could have negative side-effects, like git status or git log
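As it happens, git ships a knob close to this: help.autocorrect. Set to a count of tenths of a second, it runs its best guess after that delay instead of just printing it.
# Wait 1.0 second, then automatically run the command git thinks you meant.
git config --global help.autocorrect 10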
I do this too. If I'm saving the file for reuse, I wrap the entire thing in /* */ when I'm done, just in case I accidentally hit run script instead of run statement.
The solution to that is to make potentially damaging operations reversible, not to throw the problem back in the user's face. In fact, this is a very reliable design smell: any time you find yourself throwing something back in the user's face, stop and think about the problem for a few minutes. 99% of the time, the right solution is to provide more reversibility instead.
Really? After some quick digging, nope. Tl;dr: Not by default, and only on Linux, and only with btrfs (still marked experimental) and only with the `--reflink` flag.
FreeBSD `cp(1)`: `copy_file()` does a simple `read()`/`write()` loop with a buffer[0] (despite ZFS maybe supporting CoW).
GNU coreutils `cp(1)`: `copy_reg()` calls out to `clone_file()` if and only if `--reflink` was passed[1], and `clone_file()` fails on everything but btrfs[2].
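For completeness, this is the flag in question, assuming GNU cp on a btrfs filesystem (the file names are placeholders):
# CoW copy with GNU cp: --reflink=always fails rather than silently doing a full copy;
# --reflink=auto does CoW where the filesystem supports it and a normal copy otherwise.
cp --reflink=always big-image.raw clone.raw
cp --reflink=auto   big-image.raw clone.raw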
This is what I hate most about American websites. They all have this neoconservative(?) idea that curse words are fucking horrible. We're all adults here (well, 99% of us), nobody is going to die from seeing a few curse words here and there.
The problem is that this American morality is imposed on everyone internationally, as such a high portion of high-traffic websites are American. And it's not just about curse words, it's this sick censorship of women's breasts, nudity and sex in general.
Communication on the internet is switching to a small number of mostly American social media websites like Facebook. If Facebook bans certain expression of thought or certain types of media/pictures, then that decision has pretty widespread effects. Even worse, when exposed to this kind of thinking, other countries will slowly start to adopt it.
On the other hand, other countries would enforce a different kind of morality more heavily (like censorship related to racism and religion).
I love it when some sites make exceptions, like Tumblr (or Reddit in the past).
I'd say neopuritan, not neoconservative. Neoconservative generally refers to the right-wing movement that advocates military intervention overseas (e.g. Afghanistan, Iraq).
Anyway, a lot of people object to the overuse of the word "fucking" simply because they've grown out of it. I'm all for artfully dropping it in from time to time for well-placed effect, but it can easily become a crutch used by those too lazy to consult their inner thesaurus. It's sort of shorthand for "I really mean it, which makes it more true!" Although it's arguably true that this particular tool takes it around the bend enough that it starts becoming funny again. Wait, I mean it's FUCKING true. :-)
Sometimes people sprinkle 'fucking' or 'shit' or other words around as if it's salt. Then fuckers shit fucking dia-motherfucking-tribes ass if fuckers fucking make fucking money and bitches from fucking using fucking fuck words, and suddenly discourse breaks down.
It's not that the words are bad, but some people take it too far. Especially on the internet, where some people erroneously believe in the anonymity of pseudonyms.
I agree with your sentiment, as curse words should be used sparingly. However, a better solution is "bottom-up" moderation, not a top-down site-wide ban on something. This is especially important in social media websites where subcommunities exist. Different subcommunities should be able to have different rules about these things. (This doesn't apply to YCombinator, though.)
For HN, I just assumed it was because it sets the tone for the discussion. By moderating profanity, it helps keep it out of the discussions and so you get more civil debate on HN than on other sites.
Oh, I also can say and hear "fuck" all day long without blinking, because, like you, I am not American and English is not my native language. However, there are words in Polish which could make me cringe, especially when heard unexpectedly. Profanity is like poetry: it only works in your mother tongue.
I posted this comment about obscenities on another thread recently, but I think it is relevant to this thread as well.
"According to Paul Fussell's Class: A Guide Through the American Status System, aversion to profanity is a middle class thing. The upper class[1] do not use euphemisms for profanity or obscenity. Fussell wrote that Jilly Cooper reported "I once overheard my son regaling his friends: 'Mummy says pardon is a much worse word than fuck.'"
I doubt that many members of the upper class (see Fussell's book for a definition of upper class, it is roughly the tastes of "old money" but not dependent upon actual wealth) read Hacker News. It is likely that those who do not object to obscenities such as the word "fuck" are more socially liberal freethinkers who dislike formality. Those who do object are likely to be members of the middle class who believe (foolishly) that in censoring profanities and vulgarities, they are emulating the upper class."
Yes and when it happened I thought a negative story about a heroku feature was replaced by a positive one, which made it all the more interesting to click and investigate.
It is a well-known software engineering fact that one of the most important things a software project can have is conceptual purity, a strong, central thesis that organizes the entire project and can be used both to understand and build on the project. This project has a fuckload of conceptual purity. My compliments; truly an inspiration to all us aspiring project architects.
Swearing is like typing in all-caps or using an exclamation point. It's usually not necessary but when it fits it really fits. (Hedberg only swears a few times here but it sounds exactly right http://www.youtube.com/watch?v=Y5-46bj8b4w).
Used incorrectly, swearing suggests someone who doesn't have much control over their emotions or vocabulary and lacks range of expression.
Yet this Heroku library, presumably created by someone stubbing their toe on that same problem over and over, is one big exclamation-point, all-caps rant, with every possible line of code and input field on Github (even the license!) filled with rage and satisfaction, and the nice thing is that the library ultimately fixes a problem and makes the solution available to all.
Separately, to anyone thinking this "unprofessional", take a look at Philip Greenspun's definition of a software professional: http://www.youtube.com/watch?v=JsPFdVrbGeE#t=41m20s (incidentally, this entire lecture deserves to be bookmarked and watched).
Edit:
By the standards of Greenspun's definition the author of the library would be considered a consummate professional.
For those without time to watch, here is the link for the presentation he used (though he's an excellent speaker and the presentation adds much more):
I recently started using Tim Pope's fugitive.vim plugin. The first time I used :Gstatus and added a file with one keystroke had me won over. Pure genius if you ask me!
1. git blame
2. move to the blame window and hit 'o' to open the commit
3. discover that the line in question was merely moved not created by this commit, then move to the --- line in the diff and hit 'o' to open the previous version of the file.
4. repeat from step 1
It's a few more steps than `git log -S` but it provides a different kind of flexibility. You can trace all manner of historic code migrations even without a common search term.
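For comparison, the pickaxe form mentioned above looks like this (the search string and path are placeholders):
# Show the commits (with patches) that added or removed the given string.
git log -S'render_widget' --patch -- src/widget.c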
That youtube lecture is confusing two terms. 'Being a professional' is being a member of a profession. "That surgeon is a great professional" is saying "that surgeon is a great member of her profession". 'Being professional' is acting in a way that is suitable for a member of a profession. You don't have to be a professional to be professional.
Also, it shows how divorced he is from his own example when he says 'professionalism to [cosmetic sales] is nice car, clean clothes, no offensive language'... and then says 'but that's not professionalism to a surgeon - they have a lot of money so probably a nice car, but I don't expect them to be clean'. A surgeon!? This guy doesn't think it's part of a surgeon's professionalism to be clean!?
Greenspun first attempts to define what being "professional" does not mean in the context of software engineering (dress nicely, drive a clean car, don't offend), then discusses what makes one surgeon "more professional" than another (competence, innovation, and teaching account for full points), and finally describes how a software engineer would accommodate that standard. The talk does not hinge on the semantics of the term.
I did mishear the comment, but immediately prior he's saying he doesn't expect the surgeon to be well-dressed. Which is wrong in itself - being well-dressed and looking orderly is important for surgeons because it helps put patients at ease and builds rapport with them.
I didn't give the speech more than a few minutes because he seemed quite confused by the term - while the speech might not hang on the semantics, I was thinking 'what can he offer if he can't even keep his context straight'.
Also, taking your summary as a précis, it seems to be missing a fundamental part of acting professional: a measured response when dealing with others, be it clients or other professionals. You can be competent, innovative, and teach and still be a histrionic, self-aggrandising arsehole that creates drama left, right, and centre. 'Being professional' includes downplaying drama that isn't directly related to the core work of the profession (if your lab is about to run out of a critical isotope, that isn't a drama you want to downplay).
I don't use it either. Even the git commits comments are more than enthusiastic: `Provide a fucking help topic`.
Made me laugh for the first time today.
[vagrant@localhost conf]$ hadoop --help
Error: No command named `--help' was found. Perhaps you meant `hadoop -help'
[vagrant@localhost conf]$ hadoop -help
Error: No command named `-help' was found. Perhaps you meant `hadoop help'
[vagrant@localhost conf]$ hadoop help
Exception in thread "main" java.lang.NoClassDefFoundError: help
Caused by: java.lang.ClassNotFoundException: help
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: help. Program will exit.
I solved this a long time ago with a simple `alias hrc-='heroku run console --remote'`. That way I can just type `hrc- production` or `hrc- staging` in the console.
What is poor about this usage of "fucking"? It's an intensifier, appropriately used in this instance. Some people lean on swear words too much, but I don't think this is such a case.
I read his usage of "fuck/ing" as a literary style considering the context and name of the repo. If the amount of swearing here was from commit messages in an unrelated project, then yes, that would be excessive and the author should invest in a thesaurus.
An aside: why do you use "f-bomb" instead of "fuck" when talking about "fuck"? It's not like we don't automatically fill it in when we hear it. See Louis CK's rant about this (nsfw language): http://www.youtube.com/watch?feature=player_detailpage&v...
If you want literary use of "fucking" (there, I said it), then I would invite you and the repo owner to read "The Elements of Fucking Style" by Chris Baker and Jacob Hansen [1]. Maybe then you both would understand how to use the word.
Lastly, to help the situation I've decided to offer a helping hand to fix this problem, a pull request[2]!
Does anyone know if you can rename a repo with a pull request?
Maybe literary was the wrong word for me to use, but I get the sense that it is a purposeful use of the word for stylistic purposes, and that Tim Pope isn't leaning on it as a crutch.
I feel like you're treating your prescriptivism of how the English language should be used as more objective than it really is.
You know this is one of those times where editing the title is not helpful! The title of the project is actually "Heroku Fucking Console." The edit makes me think it's pointing to something official and it's not!
It's more than that, though, it's when a program won't DWYM despite an error message indicating that the programmer knew exactly what was meant, but decided not to do it anyway.
Solution: remove all the error messages for common misspellings of what people want to do, or alternatively never deprecate any user interaction choice ever.
One may make the argument that removing the default behind 'heroku console' was too pure and not pragmatic enough, but going from strictly what you are writing here, this is the logical implication: "don't write error messages suggesting what I meant, and I'll (apparently?) be happier."
I love this. I also miss how the bamboo console would let me enter a ruby command locally and then execute it remotely when I hit enter. I've been meaning to make a gem to replicate this behavior.
I would be interested in this if it weren't for the foul language. This speaks volumes about the author's attitude.
If they're annoyed with something, have they even tried bringing it up with Heroku's support team? And if so, couldn't they have shipped this tool in a way that doesn't make the maintainer look like an arrogant troglodyte?
I never claimed I was deeply offended by swearing. I claimed it will ultimately impact the maintainer's reputation and user base. Clearly you're a narrow minded person that can't see the big picture.
Uh, this is not going to impact Tim Pope's reputation in any way whatsoever given that it is a tiny inconsequential project. He is already an open-source A-lister.
I don't see what fucking language you're talking about. Joke aside, I think this is just part of the author's humor, it has nothing to do with being arrogant.
That fucking language was quite mild. Past versions of Linux (don't know if it's still there) used to have a quote from "Heathers" in a comment: "Fuck me gently with a chainsaw", as well as non-quote swearing like "fuck me plenty" and "fuck me harder".
init/main.c:
/*
* Tell the world that we're going to be the grim
* reaper of innocent orphaned children.
 */
Some software development subjects are best not discussed in front of ordinary people, but it's hard not to make fun of it when you're dealing with a daemon that isn't properly reaping its zombie children...