tinco's comments

It's the interpretation of an expert, which to me is preferable to the marketing website. Just from the marketing website, someone who isn't up to date can't tell what's new, what's good, and what's just being fluffed up.

LINQ came out as part of a set of features that addressed the comforts of languages like Ruby. I don't know if they considered Ruby a threat at the time, but they put a bunch of marketing power behind the release of LINQ. The way I understand it, as someone who jumped from C# to Ruby right around the time LINQ came out, is that it's a DSL for composing higher-order functions, as well as a library of higher-order functions.

I always liked how the C# team took inspiration from other language ecosystems. Usually they do it with a lot more taste than the C++ committee. I suppose the declarative LINQ syntax gives the compiler some freedom to optimize, but I feel Ruby's do syntax makes higher-order functions shine in a way that's only surpassed by functional languages like Haskell or Lisp.
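
For anyone who hasn't seen the two side by side, here's a small sketch of the comparison, with made-up Order data; this is the kind of higher-order pipeline LINQ expresses with Where/Select, written with Ruby's block syntax (the LINQ equivalent is in the comment):

    # Made-up data for illustration.
    Order = Struct.new(:total, :customer_name)
    orders = [Order.new(250, "Ada"), Order.new(40, "Grace")]

    # LINQ: orders.Where(o => o.Total > 100).Select(o => o.CustomerName)
    names = orders.select { |o| o.total > 100 }
                  .map(&:customer_name)

    # The do..end form is where blocks shine, once they span multiple lines:
    orders.each do |order|
      puts "#{order.customer_name} spent #{order.total}"
    end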


CLT is in no way similar to the product InventWood is working on, besides using the same base material and, I guess, requiring some pressing. You can think of CLT as being like plywood, but for beams instead of sheets.

That said, CLT has 2 major advantages over regular wood:

1. It is more dimensionally stable: it expands and contracts less, and more uniformly.

2. It is cheaper at larger dimensions. E.g., a 0.5m x 0.5m x 20m solid wooden beam would take decades to grow, and even then not reliably, but you can easily manufacture CLT beams with those dimensions out of trees less than 10 years old.

Those two advantages don't address the limiting factors in the construction of cars or airplanes, so CLT is not super relevant to them.


Dimensional stability isn't irrelevant to planes. Thermal cycling is inevitable when going from the ground to cruising altitude and back.

Certainly would be better to laminate IF you're using wood. But it's still got a long road ahead.

Aluminum still has a higher strength-to-weight ratio, which is everything in aero. Also, I'm not finding any information on cyclic strain behavior; dimensional stability is only part of that.

Edit: There could be room for this in experimental aircraft. Once we tease out all the failure modes and properly characterize cyclic behavior, of course.


One thing I didn't quite realize is that they accidentally spend significant amounts of money. At a party I talked to someone who was like a financial coach for super wealthy people and he helps them clean up their shit. The way we might accidentally have a running $50/mo subscription for something we don't use any longer, they might have a $500/mo club membership they never go to. Or prebooked vacation rentals they don't visit, or maintenance on cars they don't use, etc.

If you have over $100m in net worth and get 5% returns, you get over $400k/month in income on capital. At that point it might not be worth your time and attention to save $50 or even $500/mo if it takes any effort.
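
Back-of-the-envelope, in Ruby for concreteness (assuming the 5% is a simple annual return):

    net_worth = 100_000_000
    annual    = net_worth * 0.05  # => 5_000_000.0 per year
    monthly   = annual / 12       # => ~416_667 per month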

Or you could pay an advisor a thousand to clean that up, then donate the excess to charity and get a tax break. Win-win.

Exactly, to use the same math from the linked comment, it's less like comparing $500/mo and $50/mo and more like comparing $500/mo for the high net worth folks to $0.05/mo for someone making a more average salary.

You could have a single part-time minimum wage job and you're not going to waste your time worrying about $0.05/mo. 60¢ a year? Please.


This is it really. Being wealthy is the ability to live financially inefficiently without concern. That is very liberating.

They are optimizing for the scarce resource, which is time, since they have money in abundance.

Interesting. I feel that it worked OK for the first couple of questions, but then I got annoyed that it kept asking questions even though I felt we had established my answer.

Perhaps there should be a visual indication of completing a topic. If it had said "1/3 topics covered" after it was satisfied with my answer and before it shifted the conversation I might have answered a couple more questions. I was tempted to not hit the submit survey button because I felt I was abandoning it.


Thanks! Very valuable feedback. I'll try to think of a prompt that is less insistent and able to change gears.

I can put your mind at rest: I've been coding Ruby for 17 years now, and I've never seen two gems or even two files define the same global constant in a way that wasn't intentional. It's fair to say it's not at all pervasive.

That said, I do think there's a use for this. First of all, it would allow fancy platforms like RoR to make more effective use of namespaces. Right now you always need to specify the fully qualified name of a constant whenever you're defining it, which is just not aesthetically pleasing.
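
To illustrate the aesthetics point (Billing::Invoice is a made-up constant):

    # The nested form repeats the namespace as indentation:
    module Billing
      class Invoice
      end
    end

    # The compact form spells out the fully qualified name, but it only
    # works if Billing is already defined, and it changes constant
    # lookup inside the body:
    class Billing::Invoice
    end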

Another potentially useful place for this is in migrations. If you could just move the old implementation of a thing into a subdirectory and then load it into a namespace, you'd make references to it a lot more explicit, and you'd give the replacement architecture full freedom within its root namespace.
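
A rough sketch of that migration story, assuming the proposed namespace API ends up looking roughly like Namespace.new plus a per-namespace require (all names here are hypothetical):

    # Hypothetical: load the legacy code in isolation instead of globally.
    legacy = Namespace.new
    legacy.require("legacy/inspections")  # old code defines Inspections in here

    old_report = legacy::Inspections::Report.new  # explicit reference to the old world
    new_report = Inspections::Report.new          # the new architecture owns the global name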

Just to say, it's not only bad behavior that would be enabled by this feature. I definitely agree that gems not sticking to their own modules would be a very bad thing indeed.


The services that are marketed as VPN providers are actually selling a very restricted form of VPN: they create a tiny VPN between you and some node in their fleet, and then you route all your traffic through that node.

It would be more correct to call such a provider a secure (two-way) proxy service (and in the past people did), but for some reason they went with VPN and that stuck.
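
For a sense of how small that "VPN" really is, here's roughly what the client side of such a service reduces to, sketched as a WireGuard config (keys, addresses, and the endpoint are placeholders): a single peer, with a default route sending everything through it.

    # Hypothetical client config for a commercial "VPN" service.
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.2.0.2/32

    [Peer]
    PublicKey = <provider-exit-node-public-key>
    Endpoint = node1.example-vpn.com:51820
    # Default route: send all traffic through this one node.
    AllowedIPs = 0.0.0.0/0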

Mycoria is basically the textbook definition of a VPN.


Typically, when people say open source, they mean that the source code can be used, modified, and made public for any purpose. There is an organization called the OSI that maintains a ratified list of licenses that are compatible with the ideals of the open source movement. Although the OSI has been compromised by the big cloud providers and no longer serves the public interest, the list can still be relied on as a good sign that the license you're looking at is open source.


It's dual-licensed in a way, but neither license is open source. The OSI messed up in not coming up with an answer to the SSPL, and now ambitious projects that would traditionally have gone with an open source license like the AGPL are foregoing open source entirely and just slapping a sustainable use license on instead.

So yeah, you can use n8n for free, but that doesn't make it open source. It's a source-available license.


What's the catch?

You have to pay after some revenue threshold?


One of my more painful design mistakes happened in this sort of way, when designing a system for recording inspections. I interviewed multiple inspectors and came up with a representation that was a little more elaborate than I would have hoped, but one that I believed at least captured all the information.

Then the company progressed, we eventually got to market fit, and for two years the team and I were dealing with this increasingly burdensome complexity that we were not reaping any rewards from. Then one day we had enough, and a colleague redesigned the system to ditch the extra complexity, resulting in a much more elegant design that was easier to reason about.

That bliss continued for less than a year, until some customers asked for a particular report that we needed to generate based on a structure of the information that was no longer present. We had to redesign it again; the migration was super painful and required a temporary workaround that added an extra branch to literally every piece of code that touched the inspection system.

In retrospect, I still don't know how I could have convinced the team that the complexity was something we needed when no customer required it for 3 years. Especially when the colleagues who took over that system from me had gained much more experience and expertise in the domain than I had since I had designed the original.

It would probably have been better if I had recorded the requirement that prompted the complexity, but not actually implemented it, as no customer had an actual need for it at the time. Then we would not have had to deal with the complexity for the first three years, and could have evolved the product when the need arose.


This seems like a business problem more than a design issue. Systems need to evolve alongside the business they support. Starting out with a simple design and evolving it over time to something more nuanced is a feature. Your colleague was right, and you were also right; except for the part where all nuances of the ideal solution need to be present on day 1.

The clients you have on day one are often very different from the ones you’ll have a few years in. Even if they’re the same organisations, their business, expectations, and tolerance for complexity likely have changed. And the volume of historical data can also be a factor.

A pattern I've seen repeatedly in practice:

1. A new system that addresses an urgent need feels refreshing, especially if it's simple.

2. Over time (1, 3, 10 years? depending on industry), edge cases and gaps start appearing. Workarounds begin to pile up for scenarios the original system wasn't built to handle.

3. Existing customers start expecting these workarounds to be replaced with proper solutions. Meanwhile, new customers (no longer the early adopter type) have less patience for rough edges.

The result is increasing complexity. If that complexity is handled well, the business scales and can support growing product demands.

If not… I'm sure many around here have experience with where that leads (to borrow from Tolstoy: “All happy families are alike; each unhappy family is unhappy in its own way.”).

At the same time a market niche may open for a competitor that uses a simpler approach; goto step 1.

The flip side, and this is key: capturing all nuances on day 1 will cause complexity issues that most businesses at this stage are not equipped to handle yet. And this is why I believe it is mostly a business problem.


Thinking aloud here...

I like your idea of capturing some requirement(s) that motivated the extra complexity, and retaining those requirements in a place they'll be seen and consulted when any new release planning and architecture happens.

This seems related to something I do when scoping an MVP or release: work through the requirements breakdown, and mark the things we considered but decided not to do in that release (the "NON-Reqs"). Keeping them in the document gets everyone who looks at it up to speed, so we don't keep rehashing, and it also makes it very clear to people that this thing someone told them the system would do is definitely not happening (a very-very-very common miscommunication in some companies).

But if a NON-Req suggests some future growth affordances that I think are important to try to include in the new architecture work now, to avoid or reduce very expensive refactoring/rewrites/rearchitecture in the near future, maybe some of those NON-Reqs should be moved to bullet points in a requirements section like "Architecture Growth Affordances", and become medium-priority requirements not to preclude in the architecture. There they can be triaged, prioritized, and traced like any other requirements.

I like that idea a bit, but there are a few problems:

* Someone might get confused and think you're promising to deliver that in a future release.

* Someone might assign blame if, when future release planning comes around, you say that feature will take a lot of work, but they thought you had already done most of the work for it in the previous release.

* You'll need everyone involved to find the right balance of how much to think about this release, and how much to keep in mind the ability to do future releases. By default, people will have a lot of trouble with both kinds of thinking.

* A lot of these architectural decisions with growth in mind will happen after the requirements are locked in, and making frequent changes to that set of requirements is a very confusing precedent to set. (For example, most people won't take the commitment to the requirements seriously, if they think things can just be tossed into it later as needed, or if they think you're just doing incompetent theatre. You want them to get the idea "anything that isn't committed to in the requirements, isn't happening in the release; so everyone think and agree hard together now, so that we'll all be working effectively and efficiently towards the same viable release". Only after everyone gets that, do you introduce the flexibility.)

Maybe those requirements should go to a place for "anticipated future requirements" (but one that isn't just a backlog of "candidate requirements"), so that it can be referenced for traceability when you make an architectural decision with future options in mind?

Or, the fallback (which I've used countless times) is to document with the architecture (embedded in the code, or in other canonical arch docs) the rationale for the decisions. Then, when someone goes to change the architecture, that information is hopefully somewhere they'll see. (This assumes that the person changing the architecture is a smart professional.)


