Pennsylvania is the 33rd-largest state. It is definitely not a large state by western standards. I guess if you consider NYC the center of the universe then maybe it's the Midwest, but as a Midwesterner (from Minneapolis) I'm not really feeling that.
I think puls meant that Pennsylvania is a big state in the sense that Philadelphia is considered to be a part of the east coast while Pittsburgh (home of Carnegie Mellon) is often considered to be a part of the midwest.
In that sense, Pennsylvania is a big state. Perhaps "wide" is a better descriptor.
Pittsburgh is kind of a buffer between the east coast and the midwest. Architecturally, it feels more like an east coast city, but culturally it feels more midwestern.
Well, see, this kind of depends on your definition of "Midwest". I grew up a fair distance west of Denver, and to me, the dividing line between East and Midwest is Chicago. The "real" west begins at Denver. This neatly divides the country into approximately thirds.
But to say that Ohio and western PA are "Midwest" because they're west of the Appalachians... bah. You easterners need to get out more. And if you think steel town culture makes you part of the Midwest, well... come west of Chicago and look around. You don't see many steel towns out here.
I mean, I suppose that's the labels that you grew up on, and it's how the locals label themselves, so on that level it's hard to argue with. But anything that includes "West" as part of the name (or even "Mid") that includes Pittsburgh... look at a map of the country, and then explain to me how that makes any sense at all.
This is so telling: GitHub had a blog post yesterday about pull requests and now Atlassian does today with a very similar title. The one from GitHub was about social dynamics and how to work together better, while the one from Atlassian is about technical minutiae.
Having used both GitHub and Stash, the difference in focus between the two companies comes across plainly, and these two blog posts only back it up.
Atlassian are talking about a real problem here, one which they appear to have solved. Working code is usually superior to social convention (rebase, etc.); that's why we're using git (or possibly other DVCSs) instead of RCS, etc.
Not sure I agree. In either case, you need both a social convention and an expected response from the executed command, because both the "double dot" and the "triple dot" logic are needed to truly understand the implications of a complex pull request.
In the case of Atlassian's solution, the "merged" social convention means the reviewer first has to use and personally verify the "triple dot" findings, confirming that the change to the branch is, in isolation, as expected, before proceeding through the "double dot" flow that now characterizes the pull request feature.
For Github, it's simply the inverse.
Either way, both are relatively easy to achieve with the command line or a diff tool like Meld or LiClipse.
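For what it's worth, the distinction is easy to see from the command line. Here's a minimal throwaway-repo sketch (branch and file names are made up) contrasting the two range notations: double dot for "commits on the feature branch that aren't on the base", triple dot with diff for "the feature's change in isolation, measured from the merge base":

```shell
# Build a throwaway repo where a base branch and a feature branch have diverged.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
base=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version

echo base > file.txt
git add file.txt
git commit -qm "base"

git checkout -qb feature
echo feature-change >> file.txt
git commit -qam "feature work"

git checkout -q "$base"
echo unrelated > other.txt
git add other.txt
git commit -qm "base branch moved on"

# Double dot with log: commits reachable from feature but not from the base.
git log --oneline "$base"..feature

# Triple dot with diff: feature versus the merge base, i.e. the change
# in isolation, ignoring whatever the base branch did after the fork point.
git diff "$base"...feature --stat
```

The triple-dot diff only mentions file.txt, even though the base branch has since grown other.txt; that's exactly the "change in isolation" view a reviewer wants.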
Having an upstream can be useful for CI testing and for deployments with tools like Capistrano. Plus it allows you to pull down your code from other environments if needed and acts as an additional backup, so there are potentially lots of reasons to have one.
Because even though it's personal stuff, it's shared between many computers. For people that live in VCSs all day, it's just more convenient (history, branches, etc.) than e.g. storing it on Dropbox or similar.
Cloud access? I have a similar problem where my various machines are behind NATs/firewall (personal computer, workstation...). It could just be more convenient to put it in the cloud rather than have to poke port forwarding or VPN rules between the machines.
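As a sketch of that setup: a bare repository on any host all your machines can reach plays the role of the "cloud" hub, so the NATed machines never have to talk to each other directly. Here the hub is just a local temp directory to keep it self-contained; over SSH the same commands work with a `user@host:path` remote URL (names and paths below are made up):

```shell
set -e
# The "cloud" side: a bare repository that only needs to be reachable
# from each machine (here a temp dir; in practice e.g. user@host:dotfiles.git).
hub=$(mktemp -d)/dotfiles.git
git init -q --bare "$hub"

# Machine A: create the working repo and push to the hub.
a=$(mktemp -d)/dotfiles
git init -q "$a"
cd "$a"
git config user.email you@example.com
git config user.name "You"
echo 'alias ll="ls -l"' > aliases
git add aliases
git commit -qm "initial dotfiles"
git remote add origin "$hub"
git push -q -u origin HEAD

# Machine B (behind its own NAT/firewall): just clone from the hub.
b=$(mktemp -d)/dotfiles
git clone -q "$hub" "$b"
cat "$b/aliases"
```

Neither "machine" needs a route to the other, only to the hub, which is the whole appeal over poking port-forwarding or VPN holes.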
In my case, OS X has the latest git, while Windows is stuck on 1.9, and my git folders get corrupted if I do a push on OS X and then simply do a status on Windows. I guess something has changed between 1.x and 2.x, but this kind of corruption is really not something that should ever happen in a VCS. My solution is to use GitLab instead of Dropbox-synced local repos.
I'm not aware of any changes between 1.x and 2.x that would corrupt a repo... I use a mix on various systems, some even still have git 1.7.x on them. The only problem I run into is when you have some script that depends on a newer git feature added in a newer version...
My money would be on Dropbox busting your repo somehow. I don't think Dropbox is an ideal solution for pushing/pulling git repos from.
I think the integration with JIRA (which is arguably best-in-breed for moderately heavy-duty issue tracking) is quite a compelling argument for going with an all-Atlassian setup; you can even use their SourceTree product as a git GUI.
On the other hand, GitHub's stuff generally does feel nicer to use and better and more thoughtfully UX'd.
Bitbucket provides free private repos, and for smaller development teams and/or companies not wanting to deal with the cost of Github (which does admittedly grow exponentially the more repos you require), Bitbucket is a fine choice.
When we were making the decision at my company, we went with Github because the dev team cared about having the little green squares show up on the "activity" chart for their accounts... I know, petty, but it's something, and since most of us do FOSS projects, it's a status thing.
It used to be that any commit that made it into a repo's master branch showed up as green squares, even if it was a private repo (it just didn't show details of the repo to public users). But now those don't show up to the public; only the user themselves sees them while logged in... so if we were making the decision today, I'd probably lean towards Bitbucket.
For a team, pricing is different. 50 repos is $100 a month, 125 repos is $200 a month ($2,400 a year). Granted, it's not literally "exponential"; I was using a figure of speech to say the pricing becomes significant.
For an internal-dev team which generates a great deal of new repos throughout the year (one-off scripts/programs for different departments, etc...), this adds up very quickly.
If you reach that 126th repo, it jumps to $450 monthly or $5,400 a year. At those prices you get questions from Accounting about why we aren't hosting this internally...
GitHub Enterprise is an appliance VM you run internally. It's charged per seat. The last enterprise-y company I was at used it primarily because you keep your code inside the firewall, but also because the pricing model is more aligned with the usage in that environment, as you suggest.
It can depend highly on your needs. We have a small team but a large number of private repositories. Github bases its prices on number of repositories, which makes them incredibly expensive for us. Bitbucket bases its prices on team size, which made them cheap for us.
We use BitBucket since we're a small shop from a developer count standpoint but we have a ton of small repos plus we migrated over a ton of legacy svn repos. GitHub gets expensive real quick when you are in that situation.
For anyone doing 'client work' rather than developing a single/few products, GitHub's pricing is almost prohibitively expensive.
We have a little over a hundred repositories. This puts us in the $200/mo plan for GitHub (125-repository limit).
Atlassian prices per-user. Our small three person dev team costs us nothing. We'll reach the next pricing tier when we hit our 6th developer, at which point it will cost us $10/mo and will remain that cost up to 10 developers.
BitBucket's top plan is $200/mo. That gets you unlimited repositories and users compared to GitHub's 125 repositories.
In the middle tier, GitHub charges $50/mo for 20 repositories whereas BitBucket charges $50/mo for 50 users.
If you have few repositories but many users, GitHub's pricing is advantageous. If you have many repositories but few users, BitBucket makes way more sense.
You'd think more start-ups/mid-size ones would use Bitbucket since they have free private repos. I'm guessing the social allure of Github and its superior repo/pull request UI is what trumps Bitbucket to that end.
The differences aren't as superficial as you suggest, in my experience. GitHub is just far more polished. Take commit history as an example: bitbucket makes it uncommonly awkward to step through a series of commit diffs for a given file, whilst github makes it a bit easier (although still not perfect - am I the only one who wants this feature?). Nicer looking doesn't always mean more usable, but there's often a correlation and, in this case, it definitely bears out.
Having said all that, I'm currently using bitbucket for a private work repo because I'm cheap :)
Unsurprisingly, this is very much a personal choice. I vastly prefer BitBucket's issue tracker. I love their side-by-side diff (somewhat recently added to GitHub). I love that they don't try to force this 72 char commit message on me and seem to handle it rather nicely via hover text. And I very much liked that they didn't support emoji and inline GIFs (since implemented, lamentably).
The more GitHub seems to adjust itself, the less I enjoy it. The more BitBucket seems to try to copy GitHub, the less I enjoy it. At the end of the day, I probably want GitHub circa 2008. The point where they really became opinionated is, to me, the inflection point for whether the things they pushed were truly more usable.
That's not to say your view of things is wrong for you. But I don't think it's clear that "in this case, it definitely bears out".
> bitbucket makes it uncommonly awkward to step through a series of commit diffs for a given file, whilst github makes it a bit easier (although still not perfect - am I the only one who wants this feature?)
I want it too :) A time-machine style forward & back, showing changes to a file.
I've been coding full-time in Swift for the last three months and at this point our source base is about 10K lines of code. Given this experience, if I had to do it all over again, I'd probably choose Swift again.
- Swift is a far better language than Objective-C. It's much safer, the type system is great, and the functional features are a joy to use.
- Everything largely works as advertised; even for a super new language, the amount of total brokenness is minimal.
- The current compiler's error messages are frequently bad to the point of being useless. Try to mutate an immutable dictionary? You get a type mismatch when you could get an error about immutability.
- The compiler is a lot slower than I think they mean for it to be.
- The debugger takes several seconds to evaluate an expression compared to almost instantaneous evaluation in Objective-C.
- 8 MB of standard library in your app binary.
- "SourceKitService crashed" messages in Xcode flashing on the screen at 30 hertz.
But at the end of the day, if you're hiring an outside company to do this, why does it matter to you what language it's written in? Shouldn't they be able to use their best tools?
>But at the end of the day, if you're hiring an outside company to do this, why does it matter to you what language it's written in? Shouldn't they be able to use their best tools?
Unless the owner wants to be forever wedded to the outsourcing company, the language selection is a valid concern. For instance, they may want to take over updates when iOS 9 comes out and plan on being proficient in Swift by then.
The owner should weigh why they wanted Swift in the first place vs. the consultant's recommendation and then decide from there.
Same place we were five years ago: nobody cares what API you're programming to as long as your apps are good.
No, seriously. The iPhone SDK came out in 2008. Google Maps came out in 2005, ushering in the modern era of web apps. Mac OS X came out in 2001 and .NET came out in 2002, representing the major desktop platforms we know today.
The web as a platform is no different than any other platform in that it's just another platform with its own strengths and weaknesses. It just moves a whole lot slower than the rest, which is why we're still asking this question after this many years.
APIs and platforms come and go. Developers always have had and always will have choices about which ones to pick for developing their apps against. These choices have some impact on how easy it is to build various types of apps, but at the end of the day, the only thing that matters is how well your app serves the needs of those using it.
Microwave tech was awesome and cost effective until the first time the satellite got out of alignment and burned half the city down. I've been terrified of the idea ever since: you just can't make a weapon like that fail-safe, and if you do manage it, then your throughput is too low to make it worth it.
What you could possibly do is use not a single emitter but loads of them for each collector, so that a single satellite getting out of alignment would be harmless; many emitters would need to drift out of alignment in the same direction for the beam to be harmful.