Nothing in computing is worse than software that knows exactly what you want it to do, then gives some shitdick excuse as to why it's not going to do it in an effort to get you to jump through meaningless hoops.
Going to File->Save As... and trying to save to .jpg brings up a dialog box telling me to exit the current Save As dialog and go back to File->Export to save as .jpg. Asinine.
What I don't get is that you can open any image type and modify it in gimp on the fly. The reason they are separated is that xcf can maintain layers and any other image loses that information. But anyone who uses gimp for a few minutes will realize this, and then you don't need 3 dialogs. Hell, have a warning message in the save as box if you haven't saved a project in xcf.
I think the new behavior is great. First of all because it remembers both where I'm saving and where I've been exporting in my workflow, and second because frankly saving in GIMP's native format and creating a flat lossy JPEG that approximates the work you've been doing are really fucking different things, and I find it comforting that they choose to make that explicit. I can see how, if you open GIMP once a month to make a LOLCAT, this behavior could be annoying.
The comments I've found seem to indicate that the Gimp developers have no intention of changing this back, and that anyone who wants it back to the Gimp 2.6 and earlier approach is not in their "target audience". In other words, they hold their most common users in the highest of contempt.
I think they value developers and people who use their software frequently. As a developer, I love the separation of save and export. They really are separate functions, and the XCF actually saves the export location relative to the original XCF location. It makes working on large projects so much easier.
If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways. If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.
I've used gimp since 1997 exclusively and I hate every bit of 2.8's interface ... from the impossible-to-find dockable tabs that hide every which way, to the laborious change in workflows, to the new way they've decided to do ranges ... making it really simple to choose numbers like 15 bazillion versus 10 bazillion but making it impossibly difficult to easily go between 1 and 4.
And they STILL only have 8 bits per plane (people were whining about this in the 90s ... when Bill Clinton was president), STILL can't remember sub-pixel rendering steps ... other than improved PSD importing, I don't see the point of the release.
I'm sure they've put a lot of work into this; but not in the right areas.
Their target audience should be highly technical people that need to do image work and want a tool like vim/emacs/zsh etc. but for images; learning curves are ok as long as you can combine simple steps into composable tasks to do amazing things.
I say the same things about all my good tools: "I've been using it for 10+ years, I totally suck at it still, and I would never use anything else". You can say that about paintbrushes, driving, violins, maybe even photoshop or excel; but not gimp.
They were going down that road for a while in the 2.2 or so days, but then they did a 180.
> I think they value developers and people who use their software frequently.
I use their software very frequently, and the biggest frustration with this is that it completely screws with my muscle memory for the Gimp and every other application. I continue using 2.6 because 2.8 is just too frustrating.
When working with images so much that you no longer think about the keyboard and mouse, but you think "Save" and it just happens, a change like this is like rewiring your hand so that all the fingers' nerves are reversed, and you have to get permission from your navel before you can lift your pinky.
> If I wanted to just do a quick edit to something, Gimp isn't the ideal tool for that job anyways.
Gimp has always been the ideal tool to just do a quick edit to something, because it's the tool I know. I've been using it for more than a decade. I don't need some new developers to come along and think they know better than me how I use their software. That's Apple's billion dollar job, and even they get it wrong often enough.
> If I wanted to make it so, selecting export instead of save is really fine, especially since it gives the option in the menu to overwrite whatever JPG you just opened.
Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.
That's something Emacs has really gotten right: it doesn't force me to relearn keybindings that are so deep in my fingers they'll come out my elbows someday.
> Clearly you're not working fast enough for your fingers to be ahead of your brain. I don't need options, I need my tools not to talk back to me.
If you have any constructive feedback - or better yet, _code_ - make a polite post on their forum or mailing list. No one here cares about your specific workflow.
Actually, you're the only one here vehemently defending the Gimp's new behavior. The other comments seem to prefer Gimp 2.6's Save behavior. But this is getting unnecessarily petty and personal; it's obvious we disagree, and I'll just stick to 2.6 and stop recommending Gimp to people who would otherwise be happy pirating something more expensive.
>I think they value developers and people who use their software frequently.
I think people who use their software frequently would take offense to the software telling you to fuck off and go do it a different way.
The intent is obvious, both to the user and the developer. Why make people's lives more difficult for such questionable benefit? MS Paint doesn't have this problem. Photoshop doesn't have this problem. The user knows what they want to do, and they could do it before the change. Why are you getting in their way?
It's as if everyone in the Gnome/Gtk/Gimp world wants to be a demanding dictator like Steve Jobs, but doing that and getting it right requires an insane budget, and even after all their effort, I still don't like Apple's stuff.
There have been multiple forks, but they all end up dying. The ideal solution would be an automated fork that applies the UI patch to the latest official sources, but obviously the Gimp developers aren't going to care if they break your patch.
Why? The new workflow is better for everything except for quick edits of files. And the GIMP isn't the right tool for that job. Use a smaller, simpler program for a quick edit.
If you really want to destroy your layers and history (which GIMP saves into XCFs) there is a menu option for that. But you almost certainly don't want to do that.
Simpler program? I usually just end up using Photoshop instead :)
When you save a layered file in an unlayered format, Photoshop just forces the "Save As a Copy" option to be checked and then saves a flattened version while preserving the open one; thus if you then decide to quit without saving a layered version you get an "Are you sure you wish to quit without saving document x?"
The way Photoshop handles it is far superior imho. With Gimp, iirc, even if the file is not layered you still have to export?
And for many edits, I simply don't want to save layers or history. There's just no point if I'm cropping, converting, re-sizing, or quickly fixing something. If I do, then I'll save it as a layered file, and layered is by far in the minority. I know what I'm doing; I don't need the app to second-guess me. And it seems the new workflow is better for people who don't understand the difference between layered and un-layered files.
From what I recall, wasn't that the behavior that Gimp used to have? It's how I remember it, and I have used only Gimp for the past decade. I was frankly surprised and confused when the new behavior presented itself.
Back in the 2.6 era, yes, this was the behavior. It was changed because, in my opinion, the Gnome developers jumped the shark around Gnome 3. People make suggestions, they argue some divine vision of how computers work, and a year later they are still hemorrhaging users and have little adoption because of their ironclad positions.
So what smaller, simpler program (other than Gimp 2.6) has all of Gimp's features, but is suitable for a quick edit? Say I have a .png image stored in a version control system, with no need for layers or history, and I want to do a quick perspective clone to paint over something?
Only today did I stumble upon this behaviour for the first time (I was used to Gimp 2.6). But I understood the distinction between "save" and "export" and got used to the ctrl+E shortcut immediately. I think it's good that these are two separate functions, actually.
This is pretty much the sole reason I don't use gimp anymore and have no plans to go back, ever. I used it on OSX under X11 and had no problems with the mental gymnastics of using control instead of command because, well, it wasn't GIMP's fault.
I was so excited when they made it a native app, too.
If Command-E actually brought the window up in front of the rest of the windows instead of hiding it behind the main window, it might have been a different story.
Gimp Save dialog has always sucked in one way or another.
For example, I open a jpg and go to save it as a PNG. I select PNG from the drop down list of available types and type the filename, but unless I also add .PNG to the filename it makes a jpg again.
It doesn't presently prompt to renew — it just says that there's a conflict and doesn't tell you which settings panel might be helpful. Only a very tiny subset of users will actually try to reconfigure their router or make other such changes as the result of this information. Options requiring a prompt before automatic renewal would be great for those users, but it's not relevant in 99%+ of cases.
But that doesn't mean that alongside the warning the computer can't try to get a new IP address by itself, even if it won't fix the problem 100% of the time.
No. If something is wrong with the current network config, don't automatically muck around with it so that I can't troubleshoot the problem myself. It's important to understand the current state to figure out how you got into that broken state; kind of like a breakpoint at the first exception. Automatically renewing is like 'On Error Resume Next'.
That said, I agree that it should suggest renewing the DHCP lease.
Well if the current network config is set to simply use DHCP, then renewing DHCP is a pretty safe move.
If the DHCP server is set to serve a static IP for the MAC, the problem will persist; if the DHCP server is dynamic, it will just do what it's designed to do and assign an unused address, fixing the problem while not changing the network configuration at all.
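For reference, forcing a renewal by hand is a two-liner on a typical Linux box (a sketch assuming dhclient and an interface named eth0; other platforms have their own equivalents):
sudo dhclient -r eth0   # release the current lease
sudo dhclient eth0      # request a fresh one from the server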
For good product usability, being proactive and getting 80% of the use cases right is much more important than making no mistakes and shifting the decision (and blame) to the user. Especially since most of the time the user has no technical background to make the correct decisions. I used to think like you until I read The Inmates Are Running the Asylum [0]. I still do for my personal workflow (that is why I prefer archlinux to ubuntu), but I try to be proactive in the error catching in the software I write.
The usual cause of this for most people is that they've rebooted their router and it kept the leases in memory, so it isn't aware it assigned the IP to multiple computers. Really, automatically renewing the lease if you're on DHCP is probably the best behaviour for a home/simple OS. For computers on a network administered by professionals, it should be possible to disable this (and other automatic) behaviour.
Does anyone know what the iPhone/iPad/etc. do in this case?
The iDevices all bounce around getting multiple addresses within the DHCP range. Last time I was administering an office network full of iDevices, I had to increase the DHCP range to account for the iDevices bouncing all over the range and using up addresses pretty needlessly.
Most consumers won't ever see it, because one of the only ways to get that error is a manual configuration on at least one client which conflicts with the DHCP range.
Suppose it's the other way around: I started out as a Windows user and got accustomed to using Ctrl+Z to exit.
Now what happens if I switch to Linux (as an inexperienced Linux user) and try to use Ctrl+Z to exit? Yes, I get this somewhat cryptic message:
>>>
[1]+ Stopped python
mike@s:~$
Now, you know that I haven't exited Python at all but have merely backgrounded it, but remember I'm an inexperienced Linux user and don't know about fg and bg and stuff like that yet. So every time I "exit" from Python I've added another background task. Oops.
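For anyone stuck in that state, the escape hatch is the shell's job control:
jobs      # list stopped/backgrounded jobs
fg %1     # bring job 1 (python) back to the foreground, then exit it properly
kill %1   # or just kill the stopped job outright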
They can be cancelled, just not with CTRL+C - that key combo sends SIGINT [1] to the process, which can be caught by a handler, and often is when the writer of the utility thought there was something else that users would want that signal to do instead of quit. For example, in less, SIGINT will stop a long-running text search that's taking too much time.
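A toy sketch of the mechanism (hypothetical script, not less's actual code):
#!/bin/sh
# Trap SIGINT and cancel the current task instead of dying, roughly
# the way less aborts a long-running search when you hit Ctrl+C.
trap 'echo "interrupted: cancelled the current task, still running"' INT
while true; do
    sleep 1
done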
It is possible, though arguably it wouldn't necessarily be the best user experience. At least I know I have the habit of smacking ctrl-c several times in a row when something isn't responding as rapidly as I like. Things like ctrl-c probably should not be stateful if one of the potential things it can do is close your program.
As another example, perhaps it would be nice if ctrl-w deleted one word backwards if you were currently editing text in your web browser, but closed your current tab/window otherwise. If the state is always what the user expects when they press ctrl-w, it would work fine, but in reality you would have lots of people accidentally closing tabs.
So if the search finished just before you issue ctrl+c, you will close the application instead. Be wary of inconsistencies, even if they are just perceived inconsistencies.
Most Java programs I have to deal with print stack traces to the console even without prompting because somebody thought that the `ex.printStackTrace();` IDE autofix is definitely more than enough to handle an exception.
That's great. But why doesn't ctrl-c do it? I mean, it closes everything else on my system, except man pages, vim and some other programs that use less.
$ bundle
The source :rubygems is deprecated because HTTP requests are insecure.
Please change your source to 'https://rubygems.org' if possible, or 'http://rubygems.org' if not.
Why can't Bundler just use the new URL? Or, better yet, assume https://rubygems.org unless I specify sources in my Gemfile?
Right, and therefore, if you don't want to increment the major version number, you can't break compatibility. I am really not going to get into an argument about this, you're being ridiculous.
> Right, and therefore, if you don't want to increment the major version number, you can't break compatibility.
That's true, but both silly and not what you said before.
There's no good reason to care about version numbers in and of themselves; if you care about them at all, it's because of their semantics. Using SemVer doesn't make you not want to break backward compatibility; not wanting to break backward compatibility is independent of the use of SemVer. Of course, if you don't want to break backward compatibility, you won't end up bumping the major version number under SemVer, but you've got the whole direction of cause-and-effect wired backward when you say that SemVer is the reason not to break backward compatibility.
No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates, and so you better be getting something new that's worth the pain, not just a superficial change that happens to break compatibility only as a technicality.
> No one wants a major version bump in Bundler in order to change the primary rubygems URL because historically there is a lot of potential pain in Bundler updates
SemVer seems pretty much tangential to that, that's a problem with the particular history of the particular project.
If the original claim had been "Bundler's history of major version updates means that they don't want to bump their major version; combined with their commitment to SemVer, that means that any backward incompatible changes are avoided", rather than just that SemVer alone led to people in general seeking to avoid backward-incompatible changes, I wouldn't have challenged it.
End users expect major changes with a major number release (rightly or wrongly), so if you don't have any other than a minor breaking change, they might be annoyed. Of course if you're using semantic versioning you should probably just ignore them and increment anyway till they get used to it.
> End users expect major changes with a major number release (rightly or wrongly)
I think that's highly dependent on the particular market that a product is in, and how its version numbers are used in marketing. Chrome (which neither heavily markets around version numbers nor uses SemVer, but instead increments the major version number with every regularly-scheduled feature release) end-users (whether you are speaking of regular end-users or developers) probably don't have any particular expectations tied to major version bumps.
Microsoft Windows (which doesn't use SemVer, but does heavily market around major versions) end-users probably have much bigger expectations for major versions, as a direct result of the vendor's marketing investments.
A developer-focussed product that consistently uses SemVer and markets around particular new features rather than major version numbers probably won't have much expectation from its target users tied to major release bumps except that they will meet the SemVer definition of a major release bump.
This may be a terrifically silly question, but here goes anyways:
What sensitive data are people passing back and forth to rubygems of all places that needs encryption? Especially in the context of a bundle where you're more likely than not pulling down or updating gems (from a public repository, which contains no sensitive data)? And this need is apparently so pressing that it should be complained about to the user?
That's why everyone who runs a package repository who isn't a shitdick SIGNS THEIR FUCKING PACKAGES. It protects against tampering on the mirrors, too, with the added bonus that you don't need SSL.
Confidentiality is not the only feature of HTTPS. Using HTTPS can also protect from simplest man-in-the-middle attacks by validating the certificate, thus improving data integrity.
It's not just frowned upon. Having a 302 redirect is almost as insecure as not upgrading to https at all.
In browsers this is sort of acceptable because they let you visually confirm that you've been upgraded to https, and are now securely connected to the right server.
In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.
Ruby (on Rails) values convention over configuration. It should have made https obligatory by default and overridable with a flag, per this guideline.
> In a text mode app that does the connection behind the scenes and fully automatically, there's no such confirmation or even intervention, so there's no perceptible difference between http and http 302 to https.
This assumes the client doesn't kick up a fuss. The entire point is that Bundler now does kick up a fuss. But it could also be made to follow any redirect first, and only kick up a fuss if the final resource is not https.
That really doesn't provide any security advantage over just using http. The point of https is to prevent mitm and in this case there's nothing preventing somebody from mitming the original http connection and redirecting to an evil https site.
I only want to be in control when my use case differs from the common case. I would wager that at least 95% of Gemfiles could source https://rubygems.org implicitly and automatically.
But you can check, and fail if the user specifies https and it's not present, and give a warning if the user doesn't specify anything or specifies http and OpenSSL is not present.
I still think mdavidn is probably right that 95% or similarly high number has OpenSSL available, and it makes little sense to inconvenience all of those users when you can inconvenience "just" the few who don't instead.
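A back-of-the-envelope version of that check (a sketch in shell; Bundler would do the equivalent in Ruby):
# Does this Ruby have OpenSSL compiled in?
if ruby -ropenssl -e '' 2>/dev/null; then
    echo "OpenSSL present: default to https://rubygems.org"
else
    echo "warning: no OpenSSL, falling back to http://rubygems.org" >&2
fi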
Here's another one: in Facebook's app settings, if you enter a url ending with a "/", the error message is "The url can't end with a '/'". Can't help but think about the lazy programmer who preferred to shoot an error message rather than fixing it. The irony is that it's often way easier and faster to add a .replace() and fix it than to write, test and translate the error message.
If you type google.com you don't expect FB to replace it to bing.com without telling you.
/path and /path/ are two totally different things in the standards, just like /path and /path2. Sure in most cases with and without trailing slash gives the same page; but it's not something you can assume.
Same reason why "we don't allow https links" doesn't mean you can just drop the s.
You're missing his point. It's a safe assumption in the case that the latter gives you an error pointing you to the former. If the latter were a blank or meaningful response, that would make sense, but it's not.
You are describing a developer-oriented UI. Silently doing an automagic .replace() on developer-entered information may cause more infuriation and (importantly) bugs than flagging it as invalid. Devs should not be left blissfully unaware of string format restrictions.
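Both positions, sketched in shell (hypothetical, nobody's production code):
url="http://example.com/path/"
# Option A: silently normalize, as the grandparent suggests.
url="${url%/}"    # strips a single trailing slash
# Option B: reject it and make the developer fix it, as Facebook does.
case "$url" in
  */) echo "The url can't end with a '/'" >&2 ;;
esac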
This, or even "You have used the extension ".gz" at the end of the name. The standard extension is ".tar.gz". You can choose to use the standard extension instead."
Now click "Use .tar.gz" => file now has name foo.tar.tar.gz
It may know exactly where that is, but it doesn't necessarily know that you weren't trying to only update submodules in the current directory. Most git commands operate only on stuff reachable from the current directory, but `git submodule update` is incapable of being limited like that.
You can update only a specific repo if you'd like. In which case it would be possible to limit the submodule command to only submodules existing in the current directory.
Yes, it would need to walk back up to the .gitmodules file, but it is still possible.
Yes, obviously the behavior of `git submodule update` could be rewritten. But the way it works right now, it can't be restricted to just submodules reachable from the current directory.
This might be because it's ambiguous which submodules you want to update if you have nested submodules.
In other words, git knows "exactly where that is" by searching parent directories until it sees a ".git". Unfortunately, submodules also have ".git" in them since they're full git repositories themselves.
git submodules are not the nicest part of git, unfortunately.
OK, so it wouldn't work if you're in a submodule. It could print a warning if there are no submodules. But it would still work perfectly fine in the majority of cases.
For those also annoyed, I use this git alias, which does what I want:
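Something along these lines (a sketch; the alias name is made up):
# Shell aliases run from the top of the working tree, so this works
# from any subdirectory (though still not from inside a submodule):
git config --global alias.sup '!git submodule update --init --recursive'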
Honestly, git's the only program on this page that I'm fine with it being explicit and pedantic about what it's going to do. There are many situations where it could probably guess what I wanted to do but be wrong...
I mean, christ, I fetch and merge as separate steps.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0440 for './id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
What they think they know about the user and group setup on my machine doesn't give me a lot of confidence in whatever other unsupported assumptions the software probably makes. (and despite saying "recommended" there's no override)
edit: to be clear, this is SSH's "shitdick excuse" not Heroku's.
If the file really was already in the badguys group and 0440 it's already been compromised so that's just covering it up. I suppose if SSH really had the courage of its convictions it'd automatically upload the key fingerprint to some revoked key blacklist.
You have the ordering backwards: SSH complains now so the problem is fixed before someone gets limited access to the computer.
If your account or computer has already been compromised you're still screwed but this helps considerably on shared computers, NFS, etc. where the access situation is easier to misunderstand.
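For completeness, the fix the warning is fishing for:
chmod 600 ./id_rsa    # owner read/write only; ssh stops ignoring the key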
On a fresh Debian install, the first user (me) wasn't in the sudoers list. I sudo'd, and got reported. The report mail went to root, which was aliased to the first user...
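The usual fix, from a root shell (the username here is hypothetical):
# Debian's default sudoers grants the "sudo" group full rights;
# membership takes effect at the next login.
usermod -aG sudo mike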
Makes me think about the Arch Linux install I got half way through last night. As I started to fall asleep from the sheer boredom induced by copying and pasting commands, I had realized that I had no fucking idea why I was doing it. A fresher kernel wasn't worth it. I hit reset and went Ubuntu 12.10 instead.
And this is probably for the best, because you probably aren't the right kind of user for the distribution and community.
If you take a look at [1], you will see that the wiki provides a lot more than just commands to type in, but also a rationale and explanation what each component does. Usually there are also links for further reading that should help your understanding of the topic.
Fedora user...if Arch's forums are any indication of the quality of the distro, then Arch is excellent.
Was looking for some help getting started with Awesome WM as well as sorting out an issue with crap Broadcom wireless card -- both searches led me to Arch, and it was there that I found informative, detailed threads that led to the solutions.
As an Awesome user on Ubuntu, I often end up referring to the ArchWiki. Also for a lot of driver issues. Their wiki is really excellent, I guess because when people are dealing with a lot of raw, newer packages they tend to work out the finer details of issues that people later on benefit from.
What bothered me the most was that I did install Arch over the summer. It was awesome! It wasn't too hard, but more technical than most "consumer" installs. This was due to an EXCELLENT bit of software called AIF (the Arch Install Framework) that offered a ton of usability improvements while maintaining the customizability that many crave.
So, I started recommending it to a bunch of people I know as an excellent development environment. What was odd was that they all reported back that it was incredibly hard to do and manage.
It turns out that just days after I had done this install, Arch completely removed AIF in favor of a bunch of smaller, much less helpful scripts, specifically so that the install wouldn't be as simple (and thus, in their view, less flexible).
I was, and still am, totally floored by this. I switched to using Ubuntu Server with my own choice of UI layer (Xmonad for life!) after dealing with that.
I don't use Heroku but, from the help page, it seems the point is to push users to learn the new syntax. I don't know if it's a good idea or not, but I can see how they got there. Without this, they'll never be able to remove the deprecated command.
I understand your frustration, but what if the program thinks you're doing something you're not? You may do irreversible damage. I guarantee that you'd be far more pissed if you trashed an entire project than you are about 'shitdick excuses'.
I want git to do that so badly. It's not like its output is particularly pipe-friendly anyway. Just ask me, don't make me type the whole thing again. Especially annoying when it's a longer command with a bunch of arguments, or one that in no way could have negative side-effects, like git status or git log.
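As it happens, git has a knob that gets most of the way there:
# If git finds exactly one plausible command, run it automatically
# after a delay (the value is in tenths of a second):
git config --global help.autocorrect 10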
I do this too,
If I'm saving the file for reuse, I wrap the entire thing in /* */ when I'm done just in case I accidentally hit run script instead of run statement.
The solution to that is to make potentially damaging operations reversible, not to throw the problem back in the user's face. In fact, this is a very reliable design smell: any time you find yourself throwing something back in the user's face, stop and think about the problem for a few minutes. 99% of the time, the right solution is to provide more reversibility instead.
Really? After some quick digging, nope. Tl;dr: Not by default, and only on Linux, and only with btrfs (still marked experimental) and only with the `--reflink` flag.
FreeBSD `cp(1)`: `copy_file()` does a simple `read()`/`write()` loop with a buffer[0] (despite ZFS maybe supporting CoW).
GNU coreutils `cp(1)`: `copy_reg()` calls out to `clone_file()` if and only if `--reflink` was passed[1], and `clone_file()` fails on everything but btrfs[2].
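For the record, the CoW copy looks like this on a btrfs mount with GNU cp:
cp --reflink=always big.img clone.img   # CoW clone; errors out if the fs can't do it
cp --reflink=auto big.img clone.img     # falls back to a plain copy elsewhere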