
If we did it with male cows, it would be quite literally a “bull-market”.

Best guess: NewsCorp is pushing for it because Meta is a direct competitor.

Granted, many 16-year-olds are not using social media for news, but it does get them used to social media as their go-to site early on.


Anecdata: I have no issues with the simple git workflow I’ve used at the previous 3 companies I’ve worked for. As long as communication is solid across teams/people, merge conflicts and rebasing aren’t enough of a pain to justify introducing an unproven tool to an org.

My thought would be that most people are happy and it’s the <5% of people who complain the loudest that are heard.


> to introduce an unproven tool to an org

Your org doesn’t have to adopt the tool. Nobody on my team needs to know or care that I use jj instead of git. The only people who do know or care have themselves switched when I showed it off to them.

I’ve worked too many places where I’ve helped fix too many coworkers’ broken git repos to believe in a simple git workflow. Basically everyone uses the same fetch/branch/commit/merge-to-main approach and people still constantly run into problems. None of the people with some claimed simple workflow are doing anything meaningfully different than what everyone else is doing.

It’s just astonishingly easy to internalize all the fixes and band-aids we’ve adopted to smooth the sharp edges, and to forget how often we have to work around them.


But I don’t see the complaints that were claimed. I help people as well. They learn, and we move on. If they don’t learn then we have another problem to solve.

I will admit there is the occasional call where I have to get someone out of a twisted pretzel, but those are few and far between.


You don’t deal with rebase conflicts? You don’t ever unstash to the wrong branch? Or to the right branch, but it was changed and now it’s applied uncleanly? You don’t ever want to go fix a previous commit? You don’t ever need to do linear work that gets merged one piece at a time into the trunk?

All of these things (and others) can be worked around with varying levels of annoyance ranging from just living with it to new commands that help out to changing your workflow to completely avoiding some workflows. But in my experience nearly everyone deals with those annoyances on a semi-regular basis.
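For example, the "fix a previous commit" case alone usually means a band-aid like fixup commits plus autosquash. A self-contained sketch in a scratch repo (file names and messages are illustrative):

```shell
set -e
# Scratch repo so the sketch is self-contained.
repo=$(mktemp -d); cd "$repo"
git init -q && git config user.email a@b.c && git config user.name demo
echo one > a.txt && git add a.txt && git commit -qm "first"
echo two > b.txt && git add b.txt && git commit -qm "second"

# Oops: "first" needs another line. Record the fix as a fixup commit...
echo more >> a.txt && git add a.txt
git commit -q --fixup "$(git rev-parse HEAD~1)"

# ...then replay history so the fixup is squashed back into "first".
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --oneline
```

It works, but it is exactly the kind of workaround that gets internalized until it feels like no workaround at all.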


With a proper workflow and communication, only the first one (rebase conflicts) is a semi-regular occurrence (once a month). The others I have never had to do while working on a team. We establish a clear way of working and everyone abides by it.

And before you say this doesn’t work with a bigger team: my team is 8 and the org is 50 on the same codebase. At “google scale”, I understand it might be a different story.

If someone has to go outside that workflow to fix a previous commit on main, they submit a PR on top of latest.

Again, I don’t see the major complications here. It seems to me to be fixing a communication issue in an org more than anything.


I get what you are saying about the nicer workflows for those cases.

However, why would the contributors to jj not just try to make git better by addressing these weaknesses?

I’m not being glib; it just puzzles me when a new OSS tool comes out that does the same thing as another tool, but a bit differently. I would have thought that there’s a way to have a git plug-in, or even a way to contribute to git, to address the issues outlined.

Yes, I know that some code bases are too far gone for major enhancements…


> However, why would the contributors to jj not just try to make git better by addressing these weaknesses?

Because the git workflow causes a lot of the difficulty, and fixing it involves introducing a new concept: mutable changesets with a unique ID, built on top of immutable commits.

jj isn’t just some bugfixes or a few porcelain improvements, it deeply changes the way you approach things. And there’s a pretty decent amount of unlearning of subtly-broken concepts you need to do.
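A rough sketch of how that feels day to day (an illustrative jj session, not a runnable script; the change ID is a placeholder and the flags are from memory):

```
$ jj new -m "add feature"   # the working copy is itself a commit; no staging, no stash
  ... edit files; every snapshot amends the working-copy commit ...
$ jj new                    # start the next change on top
$ jj edit <change-id>       # jump back into the earlier change to fix it
  ... descendants are rebased automatically as you edit ...
```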

None of that could ever make its way into git.


In some sense they already are making git better since that’s the data format that the tool supports.

Also, some of the UI improvements or even protocol improvements might make it into git eventually.

I’m skeptical about it being worthwhile to switch to a new backend. Maybe it would work if it’s just local, but a lot of IDE plugins and other tools would need to be written, and git has lots of inertia.


> why would the contributors to jj not just try to make git better by addressing these weaknesses?

Because jj isn't being built to improve git, but to allow using git at Google to work on Piper-backed code instead of hg/fig; adding support for the abstractions needed to back both git and Piper was probably seen as free complexity on git's side.


This is slightly incorrect. I started it because I believed in the idea of modeling the working copy as a commit, but it's true that I made the storage pluggable from the beginning because I wanted to be able to convince my team at Google that we should use it internally too.


I thought it was simply a response to git5/git-multi getting deprecated once fig became popular.

I'm a bit surprised, then; that's one of the things that has kept me from trying it, as it feels weird not to have that much control over what gets into the commit. Using magit doesn't help, as it's just way too good once you get used to it.


Think of it this way: they are trying to make git better by addressing fundamental weaknesses in git, but some of those weaknesses require a fundamentally different UI paradigm and as such aren’t practical to upstream, hence a new project.


> I’m not being glib; it just puzzles me when a new OSS tool comes out that does the same thing as another tool, but a bit differently. I would have thought that there’s a way to have a git plug-in, or even a way to contribute to git, to address the issues outlined.

You would have thought? Plugins? Git doesn’t have plugins, to my knowledge. Hooks are not plugins, for security reasons: you can’t distribute hooks, have people install them, and call that an extension.

Then, assuming they could just make Git better: are they interested in supporting all of Git plus their new work? Why would they be?
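For what it’s worth, the closest thing git has to a plugin mechanism: any executable named `git-<name>` on `$PATH` becomes a `git <name>` subcommand. That’s ad-hoc distribution, not an extension API. A sketch (the `git-hello` command is hypothetical):

```shell
# Stage a hypothetical custom subcommand in a scratch directory.
dir=$(mktemp -d)
cat > "$dir/git-hello" <<'EOF'
#!/bin/sh
echo "hello from a git subcommand"
EOF
chmod +x "$dir/git-hello"

# git resolves unknown subcommands by searching $PATH for git-<name>.
PATH="$dir:$PATH" git hello   # prints: hello from a git subcommand
```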


If anything, lazygit has shown that more natural workflows on top of git are possible, as demonstrated in https://www.youtube.com/watch?v=CPLdltN7wgE


Most likely an opening offer to be negotiated down to 3-5% would be my guess.


This looks very strange:

"Automattic will have full audit rights."

"WordPress.org and Automattic will have full audit rights, including access to employee records and time-tracking."


Well, it's basically saying: just give us the money. The "audit rights" only apply if they're donating time, so they want that to be the most difficult and intrusive option -- i.e., it's only there for PR.

The choice is, "some money and we'll look the other way; or open all your books, and donate 8% of your workforce to wordpress"


The 'audit rights' are written at the end of 2(a) and apply to just giving money. It's not just time.


You're right, I should've focused on the "including access to employee records and time-tracking" aspect.


Isn't that necessary, given the request for a revenue percentage?

Could automattic know the exact numbers in a different way, since WPEngine is not a public company?


Agree with this. Is this correct?


It's there in the Fee section…

I get that regulatory bodies should have access under certain conditions, but letting a competitor do this? Makes zero sense.


Surely this would be a violation of HIPAA, privacy law, and any number of state laws.

You can't just give out confidential employee records to third parties.


None of these parties are subject to HIPAA.


Employee health records are often stored in third party systems that are subject to HIPAA.

Point is that Automattic would have full access to this as well.


Those providers may be subject to it.

Attempts to go fishing in such records would be pretty unlikely to succeed; it'd be an unenforceable request, contrary to public policy, with no relevance to such an audit. It would be correctly and easily fought.


Having advised a company that applied to YC with traction, proven MRR, and a serious team behind it, I can’t say this is certainly the case. But having seen a number of companies coming out of the latest batch, it smells like YC has been drinking from the punch bowl that is Hacker News’ front-page hype train.


This is crazy. And then I hear about solid products/companies that get little or no funding at all because they have no “AI”, and it baffles the mind.


Funders are betting on founders. They see somebody willing to take shortcuts and ride hype trains, take good-faith projects and pass them off as their own innovation, and they see the kind of sociopathic organization that has the potential to "become the next Uber."

They aren't funding the viability of the product; they are funding a take of the future earnings of the morally bankrupt, or the righteous will of the hopelessly naive. In either case, it's all about how effectively they can affix a leash.

Rent-seeking over value-creation.


YC filters for companies with gorillion-dollar missions / solving gorillion-dollar problems.

A mission like "I have an AI; input is a business plan, output is an entire IT infrastructure" is what YC sees here. I.e., programming as a field (quite a cost center) is no longer needed, ironically. Because of that, the bar is low. Honestly, if anyone is reading this: if you find a very clever, novel way to go after this, surely you would be funded.


Very nice demo!

When you ran it the first time, it took a while to load up. Do subsequent runs go faster?

And what cloud provider are you all using under the hood? We work in a specific sector that excludes us from using certain cloud providers (e.g., AWS) at my company.


You are correct! After the first request, the image will be on a machine and cached for future use, which makes subsequent container startups much faster. We also route requests to machines where the image is already cached, and dedupe content between images, to make startups faster.

We are running on top of AWS; however, we can run on top of any cloud provider, and we are also working on letting you use your own cloud. Happy to hear more about your use case and see if we can help you at all - email me at michael@cerebrium.ai.

PS: I will state that vLLM has shockingly slow load times into VRAM, which we are resolving.


My anecdotal experience tells me that it never works in a high-scale product environment. I’ve managed and led 2 teams that maintained legacy systems with hex-arch, and we had to move DBs in both. We ended up rewriting most of the application, as it was not suitable for the new DB schema and requirements.


Thanks for sharing. It matches my experience.

After many years of a lean team serving high scale traffic (> 1 million monthly active users per engineer), most abstractions between customer and data seem to turn into performance liabilities. Any major changes to either client behavior or data model are very likely to require changes to the other side.

There's a lot to be said for just embracing the DB and putting it front and center. A favorite system we built was basically just Client -> RPC -> SQL. One client screen == one sql query.
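A minimal sketch of that shape, with the sqlite3 CLI standing in for the real DB and a shell function standing in for the RPC layer (the schema, table, and function names are all hypothetical; real code would use bound parameters rather than string interpolation):

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
               INSERT INTO orders VALUES (1,'ada',9.5),(2,'ada',20.0),(3,'bob',5.0);"

# One "RPC" per client screen; each handler is exactly one SQL query.
orders_screen() {
  sqlite3 "$db" "SELECT id, total FROM orders WHERE customer='$1' ORDER BY id;"
}

orders_screen ada   # the screen's data is just this query's result set
```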


Wonder how this will work for remote companies. Will they post a super-wide salary band for positions ranging across Europe? Or will it be geographically defined?

