
Journalists (and I'd argue that even MKBHD falls under that) definitely have a responsibility in their reporting.

Apart from the hypocratic oath, there are also the 4 pillars of journalism, one of which is "minimize harm". Bringing that one up feels like a recent trend whenever people see legitimate criticism that they can't really argue with, and it's an easy one to bring up, since in any truthful reporting about people who have been lying, there is at least the liar being "harmed".


If anything, MKBHD is minimizing harm, by discouraging people from wasting their money on a very poor product.

The first pillar is to seek truth and report it.

> and it's an easy one to bring up, since in any truthful reporting about people who have been lying, there is at least the liar being "harmed".

Heh. "Hypocratic"… Appropriate.


I think the same goes for on-foot navigation.

Anecdotally, I find that my friends who just willy-nilly rotate their Google Maps are the ones with the worst sense of direction. Of course you are going to get lost after one wrong turn if you throw out the frame of reference that would help you reorient yourself!


"I think the same goes for on-foot navigation."

This is the absolute opposite. You orient your map to north (generally magnetic north when using a magnetic compass), i.e. you rotate the map itself to match the terrain. If you instead keep the map "north up" relative to yourself regardless of which way you're facing, your headings will be all wrong.


"Art" already puts more value on live performances and scarcity. Popular music consumption is largely removed from that.

Yeah, the last two live mega-tours (Taylor Swift and Beyoncé) have a tad more personality than the average artist's, but the usual stuff you'd hear on the radio might as well be AI-generated and performed live by animatronics, and a significant chunk of the audience wouldn't care or even notice.


I think the first place we will see large-scale AI music replace 'real' music is in the 'lo-fi' background music that just about every bar, restaurant and shop has going in a loop. Instead of having various playlists, the restaurant owner can just choose between some styles, moods and tempos in an app, and the AI will autogenerate an endless stream of background music that matches the vibe they are going for.

See also vocaloid 'performers' such as Hatsune Miku or IA in Japan.

> it's hard to find concrete evidence though

It's hard to find concrete evidence for anything in SEO. However, from what I've seen (and as is also shown by some big websites that index just fine), at least Google seems to handle client-side rendering just fine and has been doing so for 5+ years.


For my part, I've had exactly the opposite experience. Without SSR or equivalent, you often have to wait weeks for your content to be indexed on Google, and the quality of the results varies. I'm talking about sites of at least several hundred/thousand pages.

Same. There's a lot of concrete info out there the more time one spends in SEO, openly seeing how non-tech people achieve it, and most of the time (not surprisingly) it's WordPress, which renders server-side.

The GitHub implementation of git archive does its best to be deterministic. Some reproducible build systems, e.g. Bazel, rely heavily on that.

GitHub had a bug early last year[0] that broke that determinism, and it caused a huge uproar. So, through a mixture of promises to individual projects and the sheer number of projects relying on it, GitHub's git archive has ossified into being deterministic (unless they want to lose a lot of goodwill among developers).

[0]: https://github.com/orgs/community/discussions/45830
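
For illustration, this is roughly what that reliance looks like in a Bazel WORKSPACE file (the repository name, tag, and placeholder checksum here are made up). If GitHub ever regenerated the tarball with different bytes, the pinned sha256 would no longer match and the build would break:

    # WORKSPACE -- a minimal sketch; names, tag and checksum are hypothetical.
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "example_dep",
        # GitHub's auto-generated "git archive" tarball for a tag.
        urls = ["https://github.com/example-org/example-dep/archive/refs/tags/v1.2.3.tar.gz"],
        # Pinned hash of the tarball bytes. This only stays valid as long as
        # GitHub keeps producing a byte-identical archive for the same tag.
        sha256 = "<sha256 of the archive goes here>",
    )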


Related to that, I know that the SINE foundation[0] has been investigating using zero knowledge proofs for benchmarking.

It also looks like they recently released a tool specifically for privacy-preserving benchmarking[1]. (I haven't looked into the contents of the repo itself to check whether they are actually using zero knowledge proofs.)

[0]: https://sine.foundation

[1]: https://github.com/sine-fdn/sine-benchmark


As far as my understanding goes, there are multiple parts to what you are bringing up.

In your simplified example of a 2D plane, yeah, you are right, I think.

However, that's why we have more than two dimensions. E.g. with just a few more dimensions, we could have one dimension per attribute ("lion-ness", "bear-ness", "tiger-ness") of the document, so going higher on the "lion-ness" dimension doesn't have any impact on the other two, and the conflict is resolved.

Of course in practice we can't spare one dimension per animal, and that's where embedding models come in, which take care of mapping a string of text into a high-dimensional vector space, and in the process compress topics, context, etc. If such a model were constructed badly, we may actually end up with the kind of bug you described, where a mention of multiple topics produces a not-so-optimal embedding.
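
As a toy illustration of the dimension-per-attribute intuition (hand-made vectors, not a real embedding model):

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Toy 3-dimensional space: [lion-ness, bear-ness, tiger-ness]
    lion_doc  = np.array([1.0, 0.0, 0.0])
    bear_doc  = np.array([0.0, 1.0, 0.0])
    mixed_doc = np.array([1.0, 1.0, 0.0])  # mentions both lions and bears

    lion_query = np.array([1.0, 0.0, 0.0])

    print(cosine(lion_query, lion_doc))   # 1.0   -- perfect match
    print(cosine(lion_query, mixed_doc))  # ~0.71 -- still clearly related
    print(cosine(lion_query, bear_doc))   # 0.0   -- orthogonal, no conflict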

Another part that's also relevant in practice is that embedding vectors can be very sensitive (depending on the model used) to how you "chunk" (= divide) the input string. E.g. if you create chunks where the sentences about a bear, a tiger, and a lion are each separated, you will of course get clearer embeddings for the individual animals. If you chunk all of them together, you will probably get an "apex-predator"-like embedding that performs better if you are doing vector search on a more generalized level. That can indeed pose a challenge if you'd like to implement a generalized search that can handle both individual word retrieval and concept-based retrieval.
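
A rough sketch of the two chunking strategies, assuming the sentence-transformers library (any embedding model would do; the model name here is just a common small choice):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    text = ("The lion stalks the savanna. "
            "The bear fishes in the river. "
            "The tiger prowls through the jungle.")

    # Strategy 1: one chunk per sentence -> three sharp, animal-specific
    # vectors, better for retrieving the individual animals.
    sentence_vectors = model.encode([s for s in text.split(". ") if s])

    # Strategy 2: the whole text as one chunk -> a single blended,
    # "apex-predator"-like vector, better for concept-level queries.
    document_vector = model.encode(text)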


But if I only have 3 feature vectors, aren't they always in a 2D plane, regardless of their dimensionality?

I don't get why anyone would use this. You are completely replicating the API that Kubernetes exposes, without adding any benefit, so this just adds an unnecessary layer of abstraction (that doesn't abstract anything) that increases brittleness.

You say that it "reduces time to prepare deployment"? How? What would be a before and after scenario where this actually saves time?


I kinda get why this is useful: Helm has a poor library/component story, so people just duplicate the chart templates for every service, which itself became a problem… I do think Helm should not be so popular without a solid component/library design.


But how does this do anything to improve the situation on that front? It's not like this creates new meta-structures that could actually fix that, like e.g. a "proper" package manager.

If you wanted to reuse a service, you would just put it inside its own first-class chart, in which you'd write the templates directly rather than going through this layer of indirection, and then copy-paste the small usage portion into your parent chart.
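
For reference, Helm already supports exactly that first-class reuse via chart dependencies; a minimal sketch (the chart name and versions are made up, and the Bitnami repository is just a common example):

    # Chart.yaml of the parent chart
    apiVersion: v2
    name: my-app
    version: 0.1.0
    dependencies:
      - name: postgresql
        version: "12.x.x"
        repository: "https://charts.bitnami.com/bitnami"

The subchart's values can then be overridden under a postgresql: key in the parent's values.yaml.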


I really hate where the industry is at this point... Kubernetes is nice but the API is incredibly broad and complicated, overkill for the vast majority of applications. I liked Heroku much better for simple web development.

Helm is terrible. It's a bit like bash for Kubernetes (string-based templating, really??), instead of something strongly typed. This leads to text & obtuse YAML being the way to deploy complex applications to k8s, which leads to bad packaging.
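
To illustrate: a typical Helm template splices values into YAML as plain text, with helpers like nindent needed just to keep the output syntactically valid, and nothing type-checks the result (a minimal sketch; the values keys are hypothetical):

    # templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        metadata:
          labels:
            {{- toYaml .Values.podLabels | nindent 8 }}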

Enforcing some kind of convention in Helm configs is a necessary evil. I see this project as a "contract" on Helm configs, much like what Bitnami is doing (their Helm charts all look alike). It's about as good as the days when bash scripts all had a similar case/esac/getopts block to parse arguments.


Helm is terrible, except for everything else.

> Kubernetes is nice but the API is incredibly broad and complicated, overkill for the vast majority of applications. I liked Heroku much better for simple web development.

These are not comparable. You're welcome to use any provider's managed offerings -- Google Cloud Run, AWS Fargate, fly.io, etc.

I'm really sick of people hating on Kubernetes. It's complicated, sure, but the thing it abstracts is far more complicated. When it comes to orchestrating resources across systems, nothing comes close.


> Helm is terrible, except for everything else.

100%.

> I'm really sick of people hating on Kubernetes. It's complicated, sure, but the thing it abstracts is far more complicated. When it comes to orchestrating resources across systems, nothing comes close.

Agreed, and it's elegant at the core. It's just A LOT to take in for most developers. It solves a much bigger problem than Heroku did, but most web devs would just need a simple overlay over a managed k8s offering that doesn't expose all the k8s interfaces.


I kind of get this, but you see this same level of complaints about build tooling as well. Gradle, for instance, is well known for being difficult to understand and work with. Part of the problem is that generic build tooling is necessarily complex, but part of the problem is also that the UX around Gradle is just terrible (lots of ways to make your code difficult to understand, for instance) and could stand some significant improvement and better abstractions. Both of these things can be simultaneously true.


100% agree. The default chart generated by helm create is itself too bloated in my opinion! Just take the basic k8s YAML files and parametrize what needs to be parametrized!


Without Helm or a proper language you might end up parametrizing whole blocks of indented YAML via environment variables and sed.
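
i.e. something like this (a sketch; the file name and variable names are made up), which works right up until a value needs to contain a YAML block itself:

    # deployment.tmpl.yaml contains plain YAML with ${REPLICAS}, ${IMAGE_TAG}, ...
    export REPLICAS=3 IMAGE_TAG=v1.2.3
    envsubst < deployment.tmpl.yaml | kubectl apply -f -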


Deploying software on single machines was too risky, because all of the operations and configurations had to happen in one place, and if that system went down, you'd be offline, so we built specialized hardware and VM platforms to allow each piece of the service to be deployed on a specialized platform that could be run by dedicated experts.

But deploying systems to specialized VMs and load balancers and networking devices had too many disparate pieces and was impossible to unify, and developers didn't want to have to talk to all of those experts to get the configurations they needed, and anyway, resources were wasted on all these specialized pieces of hardware, so the Kubernetes API was created to encapsulate all the pieces into straightforward and consistent APIs any developer could understand that allowed services to be binpacked to achieve maximum efficiency.

But the Kubernetes API was too complicated with so many complicated interrelated concepts in one big monolith of an API, and developers did not want to have to think about how to configure all the individual pieces and dependencies, so people created Helm charts to allow that one dude on the team who understood the infrastructure to hide the complexity.

But Helm charts were too obfuscated, and no one could tell what was going on inside them, and reliability suffered and it became too risky to use black boxes to configure your deployment, so now there's a universal chart that exposes every Kubernetes API option for easy visibility.

But obviously, this new universal chart has far too many options, when all I want to do is deploy my application without having to think about every detail. So I'm looking forward to the upcoming packaging system that wraps this universal chart into something with less visible complexity.


> the worse is 0 years of experience starting a company

I think that's a crucial part of point 4. I'm not sure actual founding experience is necessary (as long as the company structure/cap table looks good, if going the VC route). However, a managing director position or something similar would definitely be something I'd look for nowadays, as a lot of people seem to underestimate what that role entails and/or don't cope well with having the final responsibility in an organizational structure.


Ex-enterprise MDs have been among the worst fucking startup founders I've ever encountered. A close second only to startup idea bros.


Things become difficult at scale regardless of mono- or multirepo. You also have to build dedicated tooling if you heavily lean into splitting things into a lot of repositories, in order to align and propagate changes throughout them.


Sure, but polyrepos don't break with scale in the same way monorepos do. You only need additional tooling when you are trying to coordinate homogeneity at a scale larger than your manual capability. Autonomous services typically don't have the kind of coupling-without-cohesion that people naturally find necessary in a monorepo, and you can build cooperative and coexisting products without that kind of coupling.

When I read the white papers by Google or Uber on their monorepos, when I see what my company is building, it is just a custom VCS. Everything that was thrown away initially gets rebuilt over time. A way to identify subprojects/subrepositories. A way to check out or index a limited number of subprojects/subrepositories. A way to define ownership over that subproject/subrepository. A way for that subproject/subrepository to define its own CI. A way to only build and deploy a subproject/subrepository. Custom build systems. Custom IDEs.

The entirety of code on the planet is a polyrepo, and we don't have the problems dealing with that scale that we would have if we stuffed it all into one repo, as this Debian monorepo shows. Independence of lifecycle is important, and as a monorepo scales up, people rediscover that importance bit by bit.

