No syntax highlighting on diffs, horrible defaults (closing the development branch when you merge into master???), not being able to make a PR after you make one commit to a branch without refreshing and losing your message, inconsistent code formatting that is just horribly broken in general, weekly downtime that's not reflected on their status page, having to manually press a button to see updated diffs after updating a branch, no support for signed commits, API support lacking in the weirdest places, random failures in commit webhooks, etc etc.
God I hate it.
As for Bitbucket.... It works, but I can't think of a whole lot it does that GitLab doesn't do better.
I've done test runs with other tracking software, and I can't really find anything better. Every tool sucks in its own way. Do you have a recommendation on something that you've found that's better than Jira?
At my previous startup job we evaluated JetBrains YouTrack and GitLab issues. Both were fairly competent looking, both had nicer interfaces, but nothing has the same feature set or ecosystem as Jira. GitLab now has multi-project boards and help desk support, so it's actually getting to be pretty useful in its own right for issue tracking.
(I think you can turn most of the annoyances off. But it leaves a bad impression. They also seem to like PHP.)
Autocorrect is such a prankster
Why's that a horrible default? It doesn't happen to match the workflow you use? It matches the one I use. So I can understand it's not ideal for you but they can't suit everyone with a binary default so what makes it so 'horrible'?
So if you merged feature -> develop -> master and remembered to delete feature it would delete develop by default as well...
You're in the minority here. For the majority of software projects, branches like `develop` are usually considered "long living" and are not "merged in" when arriving at `master`. That's why we have things like Git Flow.
Also an action that defaults to destroying something is user-hostile, so that's why it's a big deal and thus why GP wrote their comment.
And anyway Git Flow still has feature branches that you close on merging!
I migrated everything off BitBucket to Gitlab. I'm forced to keep a Github account because it's expected. As soon as enough people move away from Github to allow me to drop it, I'll migrate those too.
Oh, I beg to disagree. The last 8 months have been much better, but in 2015-2017 it was like 3-5x per year that I had to tell my boss that we can't deploy because Bitbucket wasn't triggering the CI server.
edit: come to think of it, I switched timezones from California to Asia. I haven't run into as many problems because no one is awake to break something :).
Syntax highlighting on diffs is not a useless feature.
We keep an ifttt webhook into this page, very helpful to resolve the WTF chorus when something is busted. Props to Atlassian for transparency: please keep this going.
We use confluence, bb, and jira. The good news is that triad is very nicely integrated: you can easily crosslink issues between them. There is definitely room for improvement on uptimes though.
As one data point, several weeks ago it would take Bitbucket over 70 seconds to respond to a `git ls-remote` on a repo containing about 200 branches. Usually this takes ~10 seconds, but it caused all kinds of headaches with Jenkins. On top of this, pushes were incredibly slow.
Status page was all green.
They definitely need more alerts pertaining to slow API responses on the git interfaces. I still give them props for usually fessing up to outages. Some places (cough aws) will be out for hours before they admit to it -- not gaming any numbers, nope never.
Because Bitbucket Server (on prem) has been fantastic for us and is continuously improved. From what I know, it's a different codebase though (was once known as Stash)
The included Bitbucket Pipelines is also very, very cool. At 30 developers, you pay a $1-per-month cost; GitLab is at $4 minimum.
$4 buys you 4 meals in India.
But the real reason is that startup funding rounds are generally a quarter the size, at an equivalent stage, compared to the US.
Every bit counts.
This is a bit offtopic: (OP here) the reason why I was even reading Atlassian's terms is to understand what kind of language is used in enterprise self-hosted software because I need one for my own product. I really like GitLab's open core model (as long as the core is still perfectly usable), and I was wondering if I could ask you some questions about it as I'm looking to adopt something very similar? Could I contact you over email about this?
Please have a https://about.gitlab.com/handbook/ea/#pick-your-brain-meetin... (public video) so others can benefit as well.
Please reference the url of this response in your email to her.
One of my customers even raised the issue with them, saying it would cause harm to the company if it came to be known, but they dodgeball’ed it with legal.
Basically, GitLab is a competitor to GitHub.
Atlassian always wanted to compete with those but will never succeed.
Just like they had to admit that HipChat and Stride are worse than Slack and they'd never get to the point that someone would want to use their chat tools, they will have to admit that Bitbucket + Bamboo will never be able to compete with GitLab or GitHub.
In my opinion the only really good things are Jira and Confluence.
There is no standalone tool that does the Jira stuff as good as they do.
There has never been an enterprise wiki as usable and consistent as Confluence. Even though I have many pain points with it, I don't know of anything better for a wiki solution that has to be used by everyone from management to development (devs complain about it the most).
There’s actually a lot of enterprise wiki software out there that is comparable or better. One example from a previous job: http://twiki.org
Oh, and hamburger menus hide the key features you DO use on every page view.
At least to a first order, palm-reading, internet comment approximation. ;)
Modern Java, IBM
C# .NET, Microsoft
I was a huge fan, but these days I just want to use GitHub.
If only your team needs to access it, set the instance to open permissions (i.e., most rights set to “Everyone”) and then control access using the network or with a proxy in front. This took the instance I run at $dayjob from infuriatingly slow to no slower than any other web application.
—Source: 5 years and counting Jira admin
But fine-grained permissions are a fairly regularly cited reason for using Atlassian’s stuff over other (simpler...) offerings. This sounds a lot like “don’t try the fancy stuff because it’s unusably slow”.
Basically, anytime you hit a ticket's page, Jira has to scan your account's group memberships and compare that against a litany of permissions to determine whether you can even see the ticket and what actions you can take with each of its fields. This is done to avoid showing you UI elements for things you can't do. If a given permission is set to "everyone", the check simply isn't done (kinda the equivalent of replacing a call to the user directory with a "return true;")
I'm not talking entirely about reducing security here. I mean that if everyone in your team is a member of a certain security group, and only your team touches your Jira, set that permission to "everyone" rather than using that security group - the lookup is completely unnecessary in this case.
Basically, you want as few permissions as possible to ensure the level of security you actually need. Often this isn't done, people go a bit crazy with permissions schemes trying to segregate this and that.
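A minimal sketch of the short-circuit described above (all names here are hypothetical illustrations, not Jira's actual code): when a permission is granted to "everyone", the expensive directory lookup is never made.

```python
# Hypothetical sketch of the permission short-circuit; none of
# this is Jira's real implementation.

EVERYONE = "everyone"

# Stand-in for the user directory (the expensive external lookup).
DIRECTORY = {"alice": {"team-a"}}

def user_groups(user):
    # In real life this would be a slow call out to LDAP/Crowd/etc.
    return DIRECTORY.get(user, set())

def has_permission(user, grant):
    if grant == EVERYONE:
        return True  # short-circuit: the directory is never consulted
    return grant in user_groups(user)
```

Note that `has_permission("anyone-at-all", EVERYONE)` returns immediately without touching the directory, which is the whole point: one cheap string comparison instead of a group-membership scan per field.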
I was just going through theirs (and other) terms and conditions to understand what kind of legalese goes into these and I was appalled at this. While these terms aren't effective until after 01-Nov-2018, I tried searching online to see if there was any discussion about how stupid this is, and this unanswered forum post came up.
I have no idea how this is still considered acceptable.
I was curious whether or not the CRFA could be overridden by contract terms, but the FTC claims quite the contrary: https://www.ftc.gov/tips-advice/business-center/guidance/con...
The whole point of it is to override and render invalid/unenforceable terms which try to forbid posting honest reviews of companies.
Yep, Bitbucket as well. In some of the repo config screens you get a save button; in others you don't. No consistency at all.
Like all enterprise software vendors their customers are the ones who buy it, not the ones who use it. In almost all cases the person signing the cheques will never experience any of the issues themselves.
This happened at least two years ago. Every update to Jira makes it slower and less usable. I have given them this feedback many times, and I suspect I am not alone, but they simply do not care. Until they see subscriptions drop, they will do nothing; they are like Rational Software or CA when they hit the big time and started buying up all their competition.
You and I must use a very different JIRA.
But then they're never going to look at the UI for Jira either because they're too busy for that.
(Slack is another good example. Slack is "just" an IRC client with better emoji/GIF support. But I'll be damned if Microsoft Teams doesn't _really suck_, even with the benefit of knowing how Slack does everything. Good software is hard and takes work.)
Trello was the counterexample: it implemented less than 20% of Jira's features and was all the more useful for it. (Thus Atlassian bought it and has started ruining it with bloat.)
"We don't plan to offer a hosted version of Tettra in the immediate future. We believe that secure web services are the new standard for business applications and take security seriously."
It has to work without an internet connection at all. That includes installation, updates, and help pages. People who care about security have air-gapped networks. To update software, we burn it to a DVD and then walk it over to the secure network.
I have a hard time understanding how anybody would tolerate their business secrets being on your servers. That is really weird to me.
It's also just slow.
Now, there are other reasons you would want on-prem other than air gapped networks, but that’s not the discussion we’re having.
Edit: oh, this guy is a troll. Just read his other comments. Sigh.
As for trolling, "Assume good faith." is in the site guidelines:
https://news.ycombinator.com/newsguidelines.html FYI, I'm not trolling, and any appearance of it is due to cultural differences.
I'm not sure how you can have used the UI and hold this opinion. I feel like I've been greeted by the same basic bugs time and time again, like clicking on an issue and it not opening until the second click, or it opening the previously viewed issue.
It's funny that publishing benchmarks is being banned. The claim that web apps are too complex for benchmarks to be reasonably communicated is laughable.
There is a 15-year-old bug open to reduce the flood of emails Jira can create from updates. JRASERVER-1369 could be its own community.
With minimal Java/JVM tuning experience, it seems likely the cloud product is aggressively cached and resource throttled. Once a page or filter has loaded, it generally loads quicker.
An on-premise install of JIRA can be staggeringly faster than the cloud version.
One solution might be for some clever person to first crawl an entire Jira or Confluence site, and then continually ping all the pages to keep the system performing better.
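A rough sketch of such a keep-warm pinger. This is a hypothetical illustration: the fetch function is injected rather than hard-coded (a real version would wrap something like `urllib.request.urlopen`, and would first crawl the site to discover its pages).

```python
import time

def warm(urls, fetch):
    """Fetch every page once so server-side caches stay hot.

    `fetch` is any callable taking a URL and returning a status;
    injecting it keeps this sketch testable without a network.
    """
    return [(url, fetch(url)) for url in urls]

def keep_warm(urls, fetch, interval_s=300):
    """Re-warm the cache indefinitely, pausing between passes."""
    while True:
        warm(urls, fetch)
        time.sleep(interval_s)
```

Whether this is worth doing (versus just paying for the on-premise version) is another question, but it matches the "once loaded, it loads quicker" observation above.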
It’s slow, has annoying UX, and is very inconsistent.
But for some reason businesses like it, it is what it is and as an employee I have little choice so I have to bear with it.
> Cloud was full buy in to AWS...
> everyone is using nodejs and your language is now banned...
> Add internal politics... hire as many people as possible and terminated anyone raising valid concerns...
> monolith service now becomes highly distributed...
As some comments mentioned before, there is perfectly good software written in Node.js (Trello) that is blazingly fast. Similarly, there are plenty of good, performant multi-tenant applications on AWS. Anything that has been used for a long time (Linux, Java, protocols, etc.) is always going to carry tech debt, and there is nothing wrong with that.
While there may be a grain of truth in some corners of what is written, overall it comes across as emotionally negative venting, in places bordering on indirect propaganda against generic stuff (sorry).
Yes, competent engineers can work around these problem given time (e.g. pivoting from thread-level to process-level parallelism), so it wouldn't be fair to entirely blame nodejs for the problems, but that wasn't the vibe I got from GP.
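For illustration only (in Python rather than Node), a minimal sketch of that pivot: moving CPU-bound work from the single-threaded event loop out to worker processes. The function name and workload here are made up.

```python
from multiprocessing import Pool

def cpu_bound(n):
    # Stand-in for CPU-heavy work that would block a single-threaded
    # event loop -- the kind of task you move to another process.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Farm four chunks of work out to four worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [10_000] * 4)
```

The trade-off is the usual one: process-level parallelism sidesteps a single-threaded runtime's limits but adds serialization and IPC overhead, which is exactly the kind of engineering-time tax the GP seems to be complaining about.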
With load tests that short, the JVM is still just warming up.
On a server-class JVM, a method must by default be executed 10,000 times before it is JIT-compiled.
There is also a question of how stable the compilation profile is and how likely deoptimizations are but that's a different story.
I wonder how typical is this nowadays in IT companies and departments all around the world.
I'm aware of a nodejs script that is basically packed up with the node runtime and run on Windows as a "native application," which interacts with the filesystem, windows registry, and more. The packer is years old and nobody knows how it works.
It is everything you expect of such a program. From both a user and developer perspective.
It is deployed on very expensive, very important equipment in pharma labs and hospitals.
I've even stated in writing on one system that I refuse to receive "emergency" calls on holidays, weekends, or nights to fix a particular system if it ever goes down at that time. Management refusal to plan ahead or heed warnings does not constitute an emergency on my part.
What languages were banned? This is a very interesting data point.
One reason is that you made the business decision to attempt customer lock-in. You supported wikimedia-style text markup in your wiki (in addition to the GUI) so that people could migrate to your stuff, and then you took out that feature so that people would have trouble leaving. I'm sure that makes sense to an MBA, but I have been discouraging use of your products all throughout a large company.
The other reason is that yes, all your stuff is slow as fuck. OMG it is slow. The Java grows to consume gigabytes for no damn reason, and it munches CPU time, and generally it sucks pretty hard.
Golang might be right for you. I think it's the fastest choice available for development teams that can't handle stuff like pointers. On the other hand, I think you would still manage to sort-of-leak memory by hanging on to references that you really don't need.
I think you could fix the slowness problem by requiring all development and testing to be done on computers that are slower than the ones your customers use. Get an old Pentium II with 256 MiB of RAM... which is still overkill for the task at hand. Remember, back in the day we ran stuff like your products on computers with 8 MiB or less and a 486 or less. You can live with a Pentium II and 256 MiB of RAM, and the resulting performance of your software will delight your customers.
For example, maybe there's a push against everyone using all their own favourites everywhere. I can see a strong argument for, "please stop writing in Foo. We use Java, Go, Python, and C++. Pick the right one of those for the job at hand. We all benefit from using a common set of tools."
I'm not denying what you're saying. Just feeling a healthy skepticism that they "ban" languages without a good-faith objective.
This was a while back, but not too far back.
I took a significant pay rise elsewhere and got to work with some other ex-Atlassian colleagues.
I would have serious reservations about working for a company that bans publishing benchmarks of its software.
...until November, when these outrageous terms come into force?
It also seems somewhat delusional (or perhaps "misguided", if I were feeling charitable) to try to hire in a thread bashing Atlassian for these downright scummy terms, and indeed for having tools that are simply awful to work with.
I miss it sorely when working with Github Issues.
It’s a great product. The speed and ease of use is amazing.
Which I guess makes sense, Pivotal as a whole is aiming more towards capital-E Enterprise.
Our headline products, for reference: https://pivotal.io/products
Link to the UN Universal Declaration of Human Rights, Article 19:
"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
So it's a basic human right to be able to express how fast or slow Atlassian's products are.
Further, the First Amendment to the United States Constitution:
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances."
Kinda shocked to see all the negative posts on here. How can a product fall so much?
Jira's always been hard to get going with, and it's still way better than pretty much everything else (once configured), but they are really bad at actually making it easy to use.
I would have thought terms like these would prompt people to release benchmarks for the sole purpose of generating bad PR if the company actually took action.
Oh and https://twitter.com/HackerNewsOnion/status/98160924222131814...
I'd propose that they are hiring cheap workers just to keep the ship afloat, rather than to radically improve it.
The other aspect of the founders believing in the importance of being able to hire talent from overseas is actually at the higher levels - the local talent pool of managers with 10+ years experience running large SaaS orgs is pretty small, given the industry here is fledgling. We’ve developed our own leaders internally, but there’s no substitute for bringing in an external hire to develop the next generation of leaders.
I've always wondered how, or whether, it is possible to place arbitrary restrictions on software use.
I also wonder if a clause like this would be binding on tech journalists who run a benchmark, because they don't really agree to the license when they are testing the software.
> Except as otherwise expressly permitted in these Terms, you will not [...] (c) use the Cloud Products for the benefit of any third party;
What does that even mean? "Benefit" is such loose language. Can I not use JIRA to build anything that 'benefits' my customers? Can someone with experience working on such terms shed some light on this?
At one point I was writing test automation against their JIRA Cloud offering, because they didn't provide an analog to their authentication API in the JIRA on-prem version.
To get the tests to pass, I had to create a jiraRetryFixture and when that didn't work I wrote a preflight check which would just skip those tests if it wasn't available.
At Monolist (https://monolist.co), we’re building a streamlined task experience that integrates deeply with Jira (and Confluence) Cloud specifically so you don’t have to deal with these painful UIs.
Hard to find a decent alternative to confluence for an internal wiki, particularly one with a good ACL system. We need certain customer details locked to just the people servicing those accounts.
Build a bare bones system just like Git itself that keeps track of the data and ACLs, and let the silicon valley startup guys make fancy web interfaces and cloud packaging to make it palatable to middle management.
Like Git the core can be moved around and interacted with in the terminal so you're not dependent on any one vendor, and if you don't like any of their GUIs you can just work on the console.
One reason we decided to keep this clause in the Caddy EULA (which I should clarify here only applies to official binaries, not the open source, Apache-built binaries you can make yourself) is because we found out that very few people are expert enough to benchmark correctly. I've read a dozen Caddy benchmarks, for example, that turned out to be based on false assumptions or had hidden factors or were simply not reproducible (and not just by me).
Benchmarking requires expertise that, it turns out, very few people have. I don't think I even have enough skills to do it correctly and meaningfully.
Also, web servers are complex enough (in terms of both configuration and all the layers involved with networking stacks) that one correct benchmark is not generally useful to the next person.
Spreading wrong performance information can hurt a business. It's not that there's anything to hide or any desire to take away your freedom -- and I would normally be one to assume the worst from any large company -- it's just business: they don't want the risk of bad PR based on a possibly false premise, especially when that information tends to only create negative hype rather than actually being useful.
Anyway, this link doesn't seem like news. Just usual HN hype.
If a tree falls in the forest, does it make a sound?
The title is not misleading. Being able to perform benchmarks but not publish them is de facto banning benchmarks, period. Hiding behind the idea that benchmarking is hard, therefore it should be banned because it might be fake news, is ludicrous.
It is also not standard software EULA licensing, unless you think Oracle's practices are somehow industry standard and good for everyone.
The better way for a company to handle this concern, if they feel it's important, is to proactively run and release benchmarks including commentary on the results, together with everything necessary for anyone to reproduce their results. Even better if they fund a trustworthy neutral third party to do this instead, with proper disclosure of the funding.
They can then respond very effectively to bad PR about badly done benchmarks. Unless their performance is actually bad, of course.
Clauses like these make me think you think there is. Which is perhaps a bigger red flag than the attempted censorship in itself.
> Spreading wrong performance information can hurt a business.
Firstly, I'd suggest you should perhaps focus on trying to educate your users on how to make better performance tests - if they are bad at making benchmarks then they are likely bad at running your server as well.
Secondly, boohoo. Not spreading unflattering but correct performance information might not hurt your business but will hurt your customers.
Lastly, you are curtailing speech which isn't ethical and due to that I'm pretty ambivalent if any hurt visited your business. Imagine if everyone did that about everything.
> requires expertise
I'm sure every despot anywhere has used a variation of your argument: something about economics being a complex offshoot of mathematics that requires expertise to handle, so you'd best not share any overly rushed opinions.
> Apache-built binaries
Which people can benchmark to their heart's content, and I'd hope the clause irritates many people into publishing their own benchmarks.
Also, I gather from your comment in support of these restrictions that they are more than boilerplate and that you will pursue offenders.
Even if the review is done incorrectly, it's a data-point on a misconception your users have, and it gives you a chance to respond accordingly.
The company I work for has to deal with performance complaints all the time, some from very public and loud entities in our market. A company deciding to move away from our product signals very clearly across the market, and it shows in our renewal numbers -- yet we've never considered saying "don't benchmark us" in our contract.
Every single time these complaints kick up, we treat it as a chance to prove that we know our stuff, we reach out and offer to assist with re-evaluating, and explain our position, and most of the time, it works. It also shows a big public commitment to helping instead of just hiding behind our Legal Team, and it shows we know what we're doing.
We're not a huge company by any means in terms of actual persons able to act, but we still make it work without the need to put out such terms. If your customers are frequently doing something to make your product less than ideal, then you have an education issue that needs to be resolved.
Spreading correct performance information can hurt Atlassian's business. That's why they don't want us to.
Ultimately, though I appreciate you sharing your perspective, I personally will be steering clear of caddy and your other work.
Do you believe that clause in the Caddy EULA is enforceable in the United States under the Consumer Review Fairness Act?
Very important and often overlooked point.
But I wonder, why not forbid public dissemination of inaccurate, non-reproducible benchmarks?
Wouldn't that be libel? (IANAL)
That's not to say that anti-benchmarking licenses are the right solution, of course. I can just sympathize with why Atlassian wants one.