rileymat2's comments | Hacker News

Isn't there a third option?

Where those statements are available to the plaintiff to evaluate for evidence, but the underlying communication is not allowed to be presented to the jury, because it is prejudicial, until it is established that the x posed a real, foreseeable issue?


I am not entirely sure what any of this has to do with the situation in Russia. Is IQ correlated with a desire for personal risk? Is independent thinking correlated with the collective action needed to overturn an entrenched power?

The one thing that gives me pause is that I have seen stages of mastery where the base stage is repetition and adherence to rules, to internalize them before understanding them and knowing when to break them.

If much of our industry is new, evangelizing these rules as harder and faster than they really are makes a lot of sense to get people ready for the next stage. Then they learn the context and caveats over time.


That made me want to look up a link about Shu Ha Ri. Turns out it's actually been made popular in some corners of software development already. E.g. https://martinfowler.com/bliki/ShuHaRi.html

That implies they were created with best practices in the first place.

If not, then what was created with best practices in the first place?

If we can agree that most large, financially successful software projects are of questionable quality, then either

- they used best practices and yet they still suck, OR

- they did not use best practices, but are wildly successful anyway.

So no matter how you look at it, software best practices just haven't panned out.


"All hardware sucks, all software sucks."

The name “best practices” kind of implies that they actually are practiced somewhere. So it’s different from theoretical abstract ideas “how we should write software”, which maybe nobody follows.

Without the encapsulation of a function, won’t the code around the common block depend on the details of the block in ways that cause coupling, making the common block hard to change without detailed analysis of all usages?

I like what you are saying, I think, but am stuck on this internal coupling.


It will share nuance with non-hygienic macros, yes. The difference here is that (1) unlike macros which hide what’s going on, the code is always expanded and can be patched locally with the visual indication of an edit, and (2) the changes to the origin block aren’t automatically propagated, you simply see +-patch clutter everywhere, which is actionable but not mandatory.

If you want to patch the origin without cluttering other locations, just move it away from there and put another copy into where it was, and edit.

The key idea is to still have the same copied blocks of code. The code is physically repeated at each location. You can erase the “block <name> {” parts from the code and nothing will change.

But instead of being lost in the trees, these blocks get tagged, so you can track their state, analyze them, and make decisions in a convenient, systematic way. It’s an analysis tool, not a footgun. No change propagates automatically, so the coupling problem is no bigger than the one you would already have with the duplicated-code approach.

You can even gradually block-ize existing code. See a common snippet again? Wrap it in “block <myname> {…}” and start devtime-tracking it together with similar snippets. Don’t change anything; just take it into real account.
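The devtime tracking described above could be sketched as a small scanner: find every “block <name> {…}” marker, group the tagged copies by name, and report which names have drifted apart. The marker syntax, the regex, and the sample snippets are all assumptions for illustration, not an existing tool (and the naive brace matching would not survive nested braces).

```python
# Hypothetical sketch: scan sources for "block <name> { ... }" markers,
# group tagged copies by name, and flag names whose copies have diverged.
import re
from collections import defaultdict

# Assumed marker syntax; non-greedy match up to the first "}" (no nesting).
BLOCK_RE = re.compile(r"block\s+(\w+)\s*\{(.*?)\}", re.DOTALL)

def collect_blocks(sources):
    """Map each block name to the list of (filename, body) copies found."""
    copies = defaultdict(list)
    for filename, text in sources.items():
        for name, body in BLOCK_RE.findall(text):
            copies[name].append((filename, body.strip()))
    return copies

def report_drift(copies):
    """Return names whose copies are no longer textually identical."""
    drifted = []
    for name, instances in copies.items():
        bodies = {body for _, body in instances}
        if len(bodies) > 1:
            drifted.append(name)
    return drifted

# Illustrative sources: two identical copies and one locally patched copy.
sources = {
    "a.c": "block checksum { sum += b; sum %= 255; }",
    "b.c": "block checksum { sum += b; sum %= 255; }",
    "c.c": "block checksum { sum += b; }",
}
print(report_drift(collect_blocks(sources)))  # -> ['checksum']
```

Nothing is rewritten by the tool; the report is the “actionable but not mandatory” clutter mentioned above.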


The main reasonable criticism would be that it obscures the things you missed from naive audits while still leaving them accessible to an attacker. So you hide the issue from the "good guys" while not barring much entry by the "bad guys". I have seen this pattern emerge many times, because what is obscure to you may not be obscure to someone else. So it /causes/ you to miss things.

Facts about a book have never been copyrightable; for instance, the odds of one word following another.

Facts in a book, like a cookbook recipe, are also not generally copyrightable. By convention we try not to rip them off, but that does not come from copyright.
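The "odds of one word following another" mentioned above are just bigram statistics, which can be sketched in a few lines. The toy text is illustrative only; nothing here is a legal claim.

```python
# Bigram counts over a text are statistical facts derived from it,
# distinct from the expression of the text itself.
from collections import Counter

text = "the cat sat on the mat and the cat ran"
words = text.split()
bigrams = Counter(zip(words, words[1:]))

# Probability that "cat" follows "the":
the_total = sum(c for (w1, _), c in bigrams.items() if w1 == "the")
p = bigrams[("the", "cat")] / the_total
print(round(p, 3))  # -> 0.667
```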


> But the judge found a lack of specifics about what Zuckerberg did wrong, and said “control of corporate activity alone is insufficient” to establish liability. Her decision does not affect related claims against Meta itself.

It is unclear to me how your position is different; it would seem that any fair law would have the same aspect, where you would have to prove specifics. So without specifics, hold the company liable; with specifics, hold the individuals.


This conversation is confusing. Without the FDA, isn't everything allowed by default, and don't you get something far worse, like the current supplement industry?


The regulatory challenge is that the FDA has to combine three related but separate concepts:

1. Manufacturing quality/ingredient accuracy (is the product what it says on the tin)

2. Safety

3. Efficacy

Medicines must pass all three; supplements don't have to meet any.


The FDA and DEA should be concerned mainly with the contents matching the box, and not with medical claims of effectiveness.


In your opinion, should any government agency monitor the truth of claims, or is this all outsourced to private things like Consumer Reports? Is it class action lawsuits?

And in the case of drug effectiveness, isn't this a very expensive endeavor, where the primary source of funding would be the companies themselves, biasing the results?

In this case we had companies happily selling us ineffective drugs, not because the FDA wanted it, but because they did not reject it. In a world without the FDA, what entity rejects?


I may be misunderstanding, but wouldn't you want the particular microservice you are working on independent enough to develop locally, then deploy into the remote environment to test the integration? (I don't work at this scale)


Yes, exactly.

I just also like to have the option to run a service locally and connect to either cloud instances (test) or local instances, depending on what I am troubleshooting/testing. Much better than debugging on prod, which may still be required at some point, but hopefully not often.
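The switch described above can be as simple as reading each dependency endpoint from the environment, defaulting to the shared cloud/test instances, so pointing the local service at a local instance is a one-variable override. The variable names and URLs below are illustrative assumptions, not from any real deployment.

```python
# Minimal sketch: dependency URLs come from environment variables,
# falling back to shared cloud/test instances when not overridden.
import os

CLOUD_DEFAULTS = {
    "ORDERS_URL": "https://orders.test.example.com",
    "BILLING_URL": "https://billing.test.example.com",
}

def dependency_urls(env=os.environ):
    """Resolve each dependency, preferring an explicit override."""
    return {key: env.get(key, default) for key, default in CLOUD_DEFAULTS.items()}

# Troubleshooting billing locally: export BILLING_URL=http://localhost:8081
urls = dependency_urls({"BILLING_URL": "http://localhost:8081"})
print(urls["BILLING_URL"])  # -> http://localhost:8081
print(urls["ORDERS_URL"])   # -> https://orders.test.example.com
```

The same binary then runs unchanged against test, local, or any mix of the two.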


This is how you get to "I wrote to the spec, it's your problem that clicking the button doesn't do the thing". Huge feedback loops. When you run it "locally" enough you can do your integration stuff before you even ask for review.


> This is how you get to "I wrote to the spec, it's your problem that clicking the button doesn't do the thing".

No, not really. You only find yourself in that spot if you completely failed to do any semblance of integration testing, or any acceptance test whatsoever.

That's not a microservices problem. That's a you problem.

You talk about feedback loops. Other than automated tests, what do you believe those are?


That doesn't seem possible to me. If you have a feature that involves 10 teams and 10 services, nothing will actually work until the 10th change is made (assuming everything was done perfectly).


If you have one team per service, yes. In many companies, you may have one team and 10 services though. I wish I was making this up.


Invariably this is an ideal and does not match up in reality. I work at a ~50-employee company, and we have layers of dependencies between at least 6 or 7 various microservices. I can see this adding up in complexity as the product scales.


> Invariably this is an ideal and does not match up in reality.

No, this does indeed match reality. At least for those who work with microservices. This is microservices 101. It's baffling how this is even being argued.

We have industry behemoths building their whole development experience around this fact. Look at Microsoft. They even went to the extent of supporting Connected Services in Visual Studio 2022. Why on earth do you believe one of the most basic traits of backend development is unreal?

> I work at ~50 ish employee company and we have layers of dependencies between at least 6 or 7 various microservices.

Irrelevant. Each service has dependencies and consumers. When you need to run an instance of one of those services locally, you point it to its dependencies and you unplug it from its consumers. Done. This is not rocket science.


You can't compare Microsoft to your run-of-the-mill small (or even large) software shop, though. Maybe on HN, most people work on these amazingly designed systems, but in my experience most tech out there is shit and has no proper design or architecture beyond "we're doing microservices because everyone is".


> You can't compare Microsoft to your run-of-the-mill small (or even large) software shop, though.

True. Your run-of-the-mill shop should have a simpler and more straightforward system.

But you seem to want the reverse.


No one wants the reverse. I would love it if my microservices were perfectly isolated little boxes with known inputs and outputs! That would make my life easier. But I don’t have ownership over the planning process, and our salesperson already told our customer we’d have the new feature they asked for, which no one on the engineering team knew about, delivered by next sprint. It would be nice if my company planned things well! But they don’t.


So then it’s bad engineering being wagged by Sales, not some expected sane choice.


Just s/Sales/Management/, because sales actually can't wag the development process. But yeah.


Most of the time it's bad engineering caused by other engineers.


> You can't compare Microsoft to your run-of-the-mill small (or even large) software shop, though.

I'm talking about how Microsoft added support for connected services to Visual Studio. It's literally a tool that anyone in the world can use. They added the feature to address existing customer needs.


Apart from the fact that not everyone uses Visual Studio, "connected services" appears to be something by which you can connect to existing cloud-based services.

How does that solve the problem of a mess of interconnected services where you may have to change 3 or more of them simultaneously in order to implement a change?


> Apart from the fact that not everyone uses Visual Studio, "connected services" appears to be something by which you can connect to existing cloud-based services.

Yes. That's the point.

> How does that solve the problem of a mess of interconnected services (...)

I don't think you got the point.

The whole point is that you only need to connect your local deployment to services that are up and running. There is absolutely no need to launch a set of ad-hoc self-contained services to run a service locally and work on it. That is the whole point.


Your whole argument boils down to "don't write shit software" which yeah, fair, but in the real world, the company that you just joined has shit code that evolved over 10 years and has accumulated all sorts of legacy cruft. The idea that there is "absolutely no need to launch a set of ad-hoc self-contained services to run a service locally and work on it" just doesn't match the reality of most places I've worked at. You either got very lucky or you didn't work on complex enough systems.


> Your whole argument boils down to "don't write shit software" (...)

No. My whole argument is open your eyes, and look at what you're doing. Make it make sense.

Does it make sense to launch 50 instances locally to be able to work on a service? No. That's a stupid way of going about a problem.

What would make sense? Launch the services you need to change, of course. Whatever you need to debug, that's what you need to run locally. Everything else you consume from a cloud environment that's up and running.

That's it. Simple.

If there's something preventing you from doing just that, then that's an artificial constraint that you created for yourself, and thus that you need to fix. We're talking about things like auth. Once you fix that, go back to square one.


Just FYI, you come across as extremely antagonistic in the way you're conveying your message. The underlying tone seems to be "you're stupid".


Shouldn't all systems be tested end-to-end, regardless of whether they are microservices or not?


You don’t have to test them all end to end before merging a PR. You should have multiple stable pre-prod environments for e2e testing. But if most changes fail e2e testing, then your SDLC is broken before then, and that should be fixed first. You need better designs, better collaboration, better local tests, and code reviews.


> You don’t have to test them all end to end before merging a PR.

You have to test the changes you want to push. That's the whole basis of CI/CD. The question is at which stage you are OK with seeing your pipeline break.

If you accept that you can block your whole pipeline by merging a bad PR then that's ok.

In the meantime, it is customary to configure pipelines to run unit tests, integration tests, and sometimes even contract tests when creating a feature branch. Some platforms even provide high-level support for spinning up a sandbox environment as part of their pipeline infrastructure.


So basically a distributed monolith :P


I'm sure companies with well designed and properly isolated services exist but... in my time spent at several companies, "microservices" invariably degenerate to distributed monoliths.


If the microservice has dependencies on other services it is not a microservice.


According to whom? How do those microservices get anything done if they just live in their own isolated world where they can't depend on (call out to) any other microservice?


Would anyone care to explain the reasoning behind their down votes?


Does connecting to a message queue or database count as a dependency?

Why not break a microservice into a series of microservices? It's microservices all the way down.


Only if you cannot change one service without changing the other simultaneously. It's fine to have evolving messages on the queue but they have to be backwards compatible with any existing subscribers, because you cannot expect all subscribers to update at the same time. Unless you have a distributed monolith in a monorepo, but at least be honest about it.

Multiple services connecting to the same database has been considered a bad idea for a long time. I don't necessarily agree, but I have no experience in that department. It does mean more of your business logic lives in the database (rules, triggers, etc).
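The backward-compatibility rule above can be sketched concretely: a producer may add new optional fields to a queued message, but existing subscribers must keep working, so consumers read only the fields they know and default anything missing. The event shape and field names below are hypothetical.

```python
# Sketch of a tolerant consumer: required fields are read directly,
# later-added fields get defaults, and unknown fields are ignored.
import json

def handle_order_event(raw):
    """v1 consumer: tolerates fields it has never heard of."""
    event = json.loads(raw)
    order_id = event["order_id"]             # required since v1
    currency = event.get("currency", "USD")  # added in v2; default keeps v1 producers valid
    return order_id, currency

v1_message = json.dumps({"order_id": 42})
v2_message = json.dumps({"order_id": 43, "currency": "EUR", "priority": "high"})

print(handle_order_event(v1_message))  # -> (42, 'USD')
print(handle_order_event(v2_message))  # -> (43, 'EUR')
```

The old consumer and the new producer can then deploy in either order, which is the whole point of not requiring simultaneous changes.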


> Only if you cannot change one service without changing the other simultaneously.

Not true at all.

You're conflating the need for distributed transactions with the definition of microservices. That's not it.

> Multiple services connecting to the same database has been considered a bad idea for a long time.

Not the same thing at all. Microservices do have the database per service pattern, and even the database instance per service instance pattern, but shared database pattern is also something that exists in the real world. That's not what makes a microservice a microservice.


> If the microservice has dependencies on other services it is not a microservice.

You should read up on microservices, because that's definitely not what they are, nor anything resembling one of their traits.


Yes, the real world trumps theory, hence my question.


Reminds me of a favorite quote: "What's the difference between theory and practice (reality)? In theory, they're the same."
