
> why specifically are people allowed to build on humanity's past achievements, but not AI?

Let’s untangle this.

1. Humanity’s achievements are achievements by individuals, who are motivated by desires like recognition, wealth, personal security, altruism, self-actualisation.

2. “AI” does not build on that body of work. A chatbot has no free will or agency; there is no concept of “allowing” it to do something—there is engineering it and operating a tool, and both are done by humans.

3. Some humans today engineer and operate tools, for which (at least in the most prominent and widely-used cases) they generally charge money, yet which essentially proxy the above-mentioned original work by other humans.

4. Those humans engineer and operate said tools without asking the people who created said work, in a way that does not benefit or acknowledge them, thus robbing people of many of the motivations mentioned in point 1, and arguably in circumvention of legal IP protections that exist in order to encourage said work.


That's a long-winded way of spelling out a Dog in the Manger approach to society, coupled with huge entitlement issues:

There is something valuable to others, that I neither built nor designed, but because I might have touched it once and left a paw print, I feel hurt no one wants to pay me rent for the valuable thing, and because of that, I want to destroy it so no one can have it.

Point 2) operates on a spectrum: there are plenty of cases where human work has no agency or free will behind it - in fact, it's very common in industrialized societies.

RE 3), "engineers" and "operators" are distinct; "engineers" make money because they provide something of immense value - something that exists only because of the collective result of 1), but to which any individual contribution is of no importance. The value comes from the amount and diversity and from how it all is processed. "Operators" usually pay "engineers" for access, and then they may or may not use it to provide some value to others, or themselves.

In the most basic case, "engineers" are OpenAI, and "operators" are everyone using ChatGPT app (both free and paid tiers).

RE 4) That's the sense of entitlement right there. Motivations from point 1. have already been satisfied; the value delivered by GenAI is a new thing, a form of reprocessing to access a new kind of value that was not possible to extract before, and that is not accessible to any individual creator, because (again) it comes from sheer bulk and diversity of works, not from any individual one.

IMO, individual creators have a point about AI competing with them for their jobs. But that's an argument against deployment and about what the "operators" do with it; it's not an argument against training.


Your dog quote seems to mean the opposite of the rest of your comment. We’re talking about the destruction of the idea of creative work as the product of specific living and breathing (at least at some point in time) humans, by companies with market caps the size of countries wishing they could charge everyone rent for access to something someone else made without licensing it.

> human work has no agency or free will behind it

There is one case where it is sort of true, but crucially 1) agency still exists, it is just restricted, and 2) it is called “slavery”. I felt like your comment not only equates a human being with freedom/agency (whether restricted or not) to a software tool, it also equates “I have to do my job because it pays money” with having brutally been robbed of freedom, which really underplays how bad the latter is.

> something that exists only because of collective result of 1), but any individual contribution to it is of no importance

Collective result, which consists of individual works. That argument appears to be “if we steal enough, then it is okay”.

> Motivations from point 1. have already been satisfied

We exist over time, not only in the past. A swath of motivations for doing any original work is going away for upcoming original work on which chatbots etc. are built.

> that is not accessible to any individual creator, because (again) it comes from sheer bulk and diversity of works, not from any individual one

Yes, humans can be inspired and build upon a huge diversity of works, like what has been happening for as long as humanity existed (you may have heard the phrase “everything is a remix”). If you talk to me and I previously read Kant then who knows, whatever you create may have been inspired by Kant.

Libraries have done a great job at amplifying that ability. Search engines put massive datasets at your fingertips while maintaining attribution (you are always directed to the source), connecting authors and readers, and sometimes even offering ways of earning money (of course, profiting off it as well; the beauty of capitalism). I am sure there are plenty of other examples of ML models in various fields that achieved great results yet were trained on appropriately licensed work. In other words, none of this justifies theft.


From my understanding, Paper and the like are good for Minecraft servers focused around specific mini-games (rather than freeform building), and are the only sensible choice for servers with many people (or not that many people, but really underpowered hardware).

However, they may be a problem if players are sensitive to possible non-vanilla behaviour (as you mentioned, and it’s not limited to cheaty duping). Thankfully, spinning up a server with a selection of performance mods is very easy these days. Various tricks like pre-generating chunks in advance also help.
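For example, with a pre-generation tool like Chunky (naming one is my assumption; exact commands vary by tool and version), it can be as little as a few console commands:

    # hypothetical Chunky session: pre-generate a 2000-block radius around spawn
    /chunky center 0 0
    /chunky radius 2000
    /chunky start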


It's kinda nuts. The upstream Mojang server binary starts to groan if you have >4-5 players on the same server doing stuff. They've really been dropping the ball on optimization in recent years.

Paper is good enough for anyone but very technical players pushing the limits of redstone tick timing logic, entity behavior, chunk loading mechanics, etc. These don't matter even for advanced players doing normal things.


I actually had to splurge on 2 vCPUs on DigitalOcean to avoid "skipping ticks", and it does sound pretty nuts to me. We play with max 3 players. I would expect a server with such a load to be able to run on a slightly tuned-up toaster.

It is not cheap in the cloud, either. I had to use some beefy variety of EC2 medium instance for 4 players or so, with a simple dashboard for starting it up and terminating it, I think using spot instance pricing. Otherwise it cost a pretty penny. At that point I did not use any performance mods, though.

To be fair, with the power of most people's laptops and phones now, I think we tend to lose track of just how little "1 CPU" is if you're not just running, like, a small web app.

Wasn't it always like this? There's a lot going on in the game, especially if generating new chunks, and it's in Java.

It was not always like this. You used to comfortably be able to handle 70+ players in a single server before Paper existed (my memory of this is from before like 2015). You'd need to allocate a lot more memory than normal, like 8 gigs instead of the normal suggestion of 1 or 2, but it could handle it without regular lag.

Monitoring and metric collection make a lot of sense when you run a production system, or a personal but critical system.

Promoting a telemetry solution when it comes to a hobby server, which you host for yourself and which can’t bankrupt you by running up a massive AWS bill, doesn’t seem to make much sense when simply bottling it up in Docker and being able to restart or recreate at will is enough (mount volumes for logs and persistent data, back it up, and you’re good).
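As a rough sketch, assuming the community-maintained itzg/minecraft-server image (there is no official one; adjust names and variables to taste), the whole thing fits in a single small compose file:

    # docker-compose.yml — minimal sketch using the community itzg/minecraft-server image
    services:
      minecraft:
        image: itzg/minecraft-server
        restart: unless-stopped      # the "restart or recreate at will" part
        environment:
          EULA: "TRUE"               # required by the image
        ports:
          - "25565:25565"            # default Minecraft port
        volumes:
          - ./data:/data             # world, logs, configs — back this directory up

Then "docker compose up -d" starts it, and backing up ./data plus re-running the same file recreates it anywhere.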

With games like Minecraft in particular there’s value in being able to have multiple servers with different worlds, perhaps different mods, etc. If you decide not to have more servers because they are snowflakes you do not have time to set up monitoring for, then you rob yourself and your players of the opportunity to have more fun.

Furthermore, containerizing it allows you to upgrade quickly as new game versions come out, by simply spinning up a new container with your preexisting world as a test, and you get basic system resource usage monitoring built in.

What I think could be a more interesting exercise is a dashboard for friends or family that lets them manage the lifetime and configuration of their respective containers.


Implementing proper monitoring in a toy system doesn't prepare you to do it in a massive critical system, but at least you may have learned something in the process, and noticed things that at a big scale may not be as evident.

In any case, the fun starts when the system has more interdependent components.


I think there is value in learning which pattern is good to apply in which scenario, and I will argue that in this case the best pattern is “servers are cattle”.

One of the stretch goals for me writing this article was indeed to show between the lines how Prometheus Exporters, the OpenTelemetry Collector and Systemd can all work together. That is a very reusable skill for monitoring workloads running outside containers on Linux VMs or hosts.
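To give a flavour (a sketch only, not the exact configuration from the article; the exporter port and the backend endpoint are placeholders), the Collector side boils down to:

    # OpenTelemetry Collector config (sketch): scrape a Prometheus exporter,
    # forward the metrics over OTLP to whichever backend you use
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: minecraft
              static_configs:
                - targets: ["localhost:9150"]   # placeholder: the Minecraft exporter
    exporters:
      otlphttp:
        endpoint: https://otlp.example.com      # placeholder backend endpoint
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [otlphttp]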

The goal of this article is to show you how to integrate with this service from just about anything. It's an ad that was fun to make as a hobby project. I doubt the goal was ever to set up a fully integrated Minecraft monitoring pipeline. At best, this is an employee at this company deciding to show off the flexibility of their product by integrating it with a random piece of kit they like.

Luckily, all of the interesting components are existing third party libraries so if you don't want to use their SaaS service, you can build your own Minecraft dashboard pretty easily.


I am indeed an employee of Dash0. The setup for the telemetry collector will work with anything that accepts OTLP, and with minor adjustments, the data can be sent elsewhere in other formats too, as the OpenTelemetry Collector is very flexible in that regard.

Alerting is specific to Dash0. I know of no other monitoring solution that lets you run real PromQL on logs. But there will be similar ways of accomplishing the same alerting logic.


Have you never just built something for fun?

Do you mean something like launching k3s on smartphones https://blog.denv.it/posts/pmos-k3s-cluster/?

I have built a panel like the one I mentioned for fun with friends!

The goal of my comment was to highlight opportunities for more fun and less what seems like toil.

Furthermore, this is an article about a telemetry solution posted on a site of that telemetry solution. They make money from this.


One person's toil is another person's fun.

And sometimes a person is paid to pretend toil is fun. We are talking about spending hours setting up telemetry instead of playing a game.

Not everyone is into gaming. I'd rather code on my side projects than use my console. Or people tweak and customize their Linux installation instead of doing work on it. Some people like to work on their cars; driving is a small part of it.

I agree, and I am just as guilty of procrastination. However, the author is not really procrastinating—he gets paid for this. Me, I do in fact procrastinate on setting up Minecraft server infra in the cloud. Maybe that’s precisely why the solution to this problem strikes me as inadequate:

> So, the Minecraft server should work reliably and, if it goes down, I should know well before they do

How are metrics helpful? There is so much fun that could be had in setting up an actually resilient system instead.

Why worry over metrics and alerts when you could orchestrate an infrastructure that grants you the superpower of being able to spin up a server with a copy of the world on a whim instead (or even a system that auto-starts one whenever there is demand)?


You are somehow very negative about this piece and do not seem to understand that your definition of fun is not universal.

As you said "There is so much fun that could be had in setting up an actually resilient system instead.", maybe the author has more fun setting up alerts and metrics instead of a resilient system like you do?

The truth is that in most real-world scenarios getting alerts and metrics is much more important than building a fully resilient system (expensive, maybe overengineering for an early stage, etc.).

> However, the author is not really procrastinating—he gets paid for this.

As the first sentence in the blog post says, "One of the secret pleasures of life is to be paid for things you would do for free.", which I can very much understand as I often work or play with things I could use at work in my free time.


> The truth is that in most real-world scenarios getting alerts and metrics is much more important than building a fully resilient system (expensive, maybe overengineering for an early stage, etc.).

Funny, because I have the opposite opinion. Build for failure first; if it’s critical/production then also monitor, but if an earthquake takes down an EC2 zone and you have no ability to spin it up exactly the way it was then the avalanche of alerts and metrics falling off a cliff[0] isn’t exactly going to help you (or your mental well-being).

Generally speaking, if you build for failure first, then monitoring becomes much more useful and actionable; and simultaneously it becomes much less important for a hobby project.

[0] That’s assuming you gather them from a different zone that wasn’t affected by the same downtime in the first place; speaking of which, how are you monitoring your monitors? And so on.


This thread isn't going anywhere. If your startup hasn't found paying customers, there's no need to build earthquake-resilient software. For most businesses that are not billion-dollar companies, there isn't either.

Of course, for engineers that's a nice challenge, but that's the reason why engineers without business sense have a hard time building their own companies: prioritizing perfect code and overengineered infrastructure over talking to customers or building the business.


I don’t think running a container, which takes one command and one small YAML file, is either overengineering or difficult.

> As you said "There is so much fun that could be had in setting up an actually resilient system instead.", maybe the author has more fun setting up alerts and metrics instead of a resilient system like you do?

Adding the backup for the world files, on top of Systemd already bringing back a crashing server, makes the setup rather resilient. Sure, there are infinitely more things that can go wrong, but with swiftly decreasing likelihood.

> The truth is that in most real-world scenarios getting alerts and metrics is much more important than building a fully resilient system (expensive, maybe overengineering for an early stage, etc.).

This, very much this.

> > However, the author is not really procrastinating—he gets paid for this.
>
> As the first sentence in the blog post says, "One of the secret pleasures of life is to be paid for things you would do for free.", which I can very much understand as I often work or play with things I could use at work in my free time.

Yes :-)


> How are metrics helpful? There is so much fun that could be had in setting up an actually resilient system instead.

Metrics are the means to an end of alerting. And with alerting, I mean getting pinged on my phone when something important breaks. Like, you know, the server going down.
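In plain Prometheus terms that is a single rule (a sketch; the exact syntax depends on the backend you point it at):

    # Prometheus-style alerting rule (sketch)
    groups:
      - name: minecraft
        rules:
          - alert: MinecraftServerDown
            expr: up{job="minecraft"} == 0   # scrape target unreachable
            for: 2m                          # avoid paging on a blip
            labels:
              severity: page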

> Why worry over metrics and alerts when you could orchestrate an infrastructure that grants you the superpower of being able to spin up a server with a copy of the world on a whim instead (or even a system that auto-starts one whenever there is demand)?

As somebody who has run cloud and enterprise software for almost two decades now, I can bet that needs monitoring too. The more moving parts there are, the more things go wrong. The more things go wrong, and the more you care that they get fixed, the more monitoring you need :-)


Do you really need to be urgently made aware that it’s down, if the system could simply spin up a new container and keep going as it were? You could still see that it had to do it, and investigate if in the mood, but the matter of first importance is taken care of for you.

> As somebody who has run cloud and enterprise software for almost two decades now, I can bet that needs monitoring too

To be clear, I strongly believe that if you run anything seriously in production, you must monitor it—but first you need to be able to spin it back up with minimal effort. It may take a while to get there if you just inherited a rusty legacy snowflake monolith that no one dares to breathe the wrong way near, but if you are starting anew it is a bad mistake to not have that down first considering how straightforward it is nowadays.

Then, for hobby projects of low criticality (because people in this thread mistakenly assume I mean any personal project, I have to reiterate: nothing controlling points of ingress into your house or the like), you may find that once you have the latter, the former becomes optional and not really that interesting anymore.


I swear I had a lot of fun doing the setup.

I am also a massive observability nerd, so YMMV :-)


I believe you! Just due to your affiliation I wanted to highlight to any newbie SREs in the audience that perhaps there is a better way. I still think my approach is better, but we can do things differently.

Indeed, if there were “official” container images out there, I might have instead run the server on Google Cloud Run or AWS App Runner, without having to take care of the Linux underneath. Or an Amazon ECS task. I don’t have a Kubernetes cluster, but I will at some point make a version of this blog post to run it on K8s :-)

I’ve recently added telemetry to some “toy” apps at my house because a power outage or other unforeseen issue has caused things like my Siri-enabled garage doors to stop working. Now I get alerts through Grafana and Telegram basically for free, which comes in handy.

A garage door is a security concern.

For a game, a solution that simply restarts the container if it’s down solves the issue. You can mount game logs in a volume if you want, and you can see resource usage in container host dashboard. What value do detailed system metrics bring?

Furthermore, you don’t care what software you run to make your garage door system Siri-enabled, as long as it does its job and is not vulnerable; whereas with a game that adds new gameplay features multiple times per month, you do want to update it frequently. Babysitting a snowflake server makes it way more difficult than it should be.


I am currently planning to add monitoring to some toy apps I host on a Raspberry Pi cluster. The intent is that this might save me time and stress further down the road. If a new version makes performance worse, I want to see that in the data. If resource needs go up, I want to know that before it's time to move, so that I can plan without any kind of scheduling stress. (I also want to do this in part as an exercise, which is part of the motivation for the cluster and most things I build that run on it. But don't tell anyone!)

Am I misguided?


Well, as far as I’m concerned, if they are toy apps, why stress? If they are going to go into production at some point, then sure; but that certainly is not happening with a family game server.

Family game server going down can be very stressful, especially if you have kids.

Also, I've had phone tech support sessions with family that were more stressful than calls with large banks who were worried about losing very large amounts of money in case of an outage. A different kind of stressful, but nonetheless...


> Family game server going down can be very stressful, especially if you have kids.

Telemetry does not address this, though. Shoving it into a container and assigning it a simple “restart if down” rule does. Minecraft is a flaky beast, if you run snapshots and/or mods. Metrics or not, often “start again” is all you need.

Furthermore, this is a game that adds new gameplay features multiple times per month. If you do not update it frequently and your kid misses out on a new mob, you run into the same stress. Containerizing it makes the upgrade very straightforward, and once you run a couple of containerized instances… Do you not struggle to see the value of detailed system monitoring?


> Telemetry does not address this, though. Shoving it into a container and assigning it a simple “restart if down” rule does.

A Systemd unit as shown in [1] does it too, without using containers and with fewer moving parts than containers. I use containers every day at $work. I have been using containers since before Docker was a thing. In this case, they are entirely overkill: Systemd units already use the important things like cgroups.
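The gist of such a unit, as a sketch with placeholder paths and limits (the real one is in [1] below):

    # /etc/systemd/system/minecraft.service — sketch; paths and limits are placeholders
    [Unit]
    Description=Minecraft server
    After=network-online.target

    [Service]
    User=minecraft
    WorkingDirectory=/opt/minecraft
    ExecStart=/usr/bin/java -Xmx2G -jar server.jar nogui
    Restart=on-failure          # bring back a crashing server
    MemoryMax=3G                # cgroup-backed resource limit

    [Install]
    WantedBy=multi-user.target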

For the upgrade: it depends. You do need a container image regardless, and I have not seen official ones. Upgrading servers in Minecraft requires upgrading clients to match, and my kids prefer to play rather than upgrade. (Unless a biome is released. Then it must be immediately available to them.) But then again, I just need to download the binary with a cURL call. And if the configurations change, Docker won't help me there one bit anyhow.

[1] https://github.com/dash0hq/minecraft-server/blob/main/drople...


There are no official ones (Microsoft profits from operating its own servers, why would it make things any easier), but there are community-maintained images.

I found that the vanilla server is insufficient, and the ability to declaratively define mods, the seed, OP players, etc. through the container environment is very important for iterative evolution; but of course this is individual.
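For illustration, assuming one of those community images (itzg/minecraft-server is a popular one; the variable names below are that image’s, and the seed value is made up), the environment block of the container definition can look like:

    # environment-driven server definition (sketch, itzg/minecraft-server image)
    environment:
      EULA: "TRUE"
      VERSION: "1.21"            # pin the game version
      TYPE: "FABRIC"             # server type/loader; PAPER etc. also supported
      SEED: "-1412583731547412"  # world seed (hypothetical value)
      OPS: "alice,bob"           # OP players
      MEMORY: "2G"

Bumping VERSION and recreating the container is then the whole upgrade procedure.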


Indeed.

My personal definition of a nanosecond is the time that passes between the Minecraft server having a hiccup and the first scream piercing the air.

The printer not printing is DEFCON 1 material.


Seeing what computers are doing is good, actually. Period.

This is a real-time game. What the computer is doing is directly in front of your eyes.

I know I sound like a freak to you, but you sound like a deranged freak to me too. Who would opt for ignorance? Who would opt not to have data? Who would opt not to see more? It's insanity to me to resist enrichment so.

Limiting yourself to only naive senses is a wild proposition to me. The scientific mindset compels us to see further: it is a wild privilege to see more, to build and use tools that expand how we can see.


I don’t think you’re deranged. I do think this is a post 1) about using telemetry in (to me) excessive ways that defeat the point of the thing being measured, and 2) published on the website of a company that sells said telemetry solution.

Furthermore, to me useless or excessive data is very much a reality (if you do not agree that it is a possibility and a thing that happens, we clearly have no way of understanding each other), and by my criteria this use scenario involves just that sort of data.


To be fair, the setup of the article works with most modern observability solutions, in some cases just by replacing the endpoint and authentication token. Turning telemetry processing into a sort of utility is one of the great things that OpenTelemetry did. Now, among vendors, we compete on delivering insights from the telemetry, as opposed to just collecting it. If you are interested, I wrote about it a while back [1].

About excessive telemetry: that depends on what you want to achieve. Using facilities in the OpenTelemetry Collector like [2], you can easily drop all telemetry you have no use for. At the risk of tooting my own horn, we actually provide super easy ways of doing the same dropping, at no charge whatsoever to the end user, in Dash0 [3].
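A sketch of such a drop rule, using the filter processor’s metric-name matching (see [2] for the full set of options; the metric name here is a placeholder):

    # OpenTelemetry Collector filter processor (sketch):
    # drop metric families you have no use for before they leave the host
    processors:
      filter/drop-unused:
        metrics:
          exclude:
            match_type: regexp
            metric_names:
              - jvm_.*    # placeholder: drop JVM internals, for example
    # remember to add filter/drop-unused to the metrics pipeline's processors list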

[1] https://thenewstack.io/is-otel-the-last-observability-agent-...

[2] https://github.com/open-telemetry/opentelemetry-collector-co...

[3] https://www.dash0.com/changelog/spam-filters


Setting up telemetry is really easy if you’ve done it before and it’s a learning opportunity if you haven’t.

I have Dockerfiles from 10 years ago for Grafana and a time-series DB so basically you learn it once and you can bang out basic telemetry infra in an hour afterwards.

And I still actually use InfluxDB and Grafana for my hobby stuff. My current Dockerfiles just look like my old ones…


What happens if Grafana or InfluxDB is down? Who monitors the monitors?

For this, I have the impression that https://github.com/dirien/minectl might be very close to what you are thinking. I did not try it, but I took the Minecraft Exporter from it and used it in the setup.

— “Why punks are correct and old wise philosophers are wrong” is a trap in the same way “have you stopped beating your wife?” is. Assuming that punks and philosophers (inevitably painted as old and wise men) are disjoint sets is a newbie blunder or an attempt at clickbait by putting together two words that would seem vaguely at odds to a philosophically unsophisticated mind.

Of course, philosophers can be punk (Nietzsche, Marx); punks can absolutely be old and/or wise; all philosophers were young at some point, some of them (Hume, Plato) wrote famous works before they were “old and wise”; a system of thought can be considered philosophy, and varieties of punk at their best are systems of thought; and, very much like philosophy, there is no “punk” as a single homogenous system of thought.

— “If I am free, why do I have to defend my actions against a specific body of people and doctrines?” You don’t, in the sense of some specific body of people. You do, in the sense that you do not exist in vacuum: self-awareness automatically requires the other that the self can be set apart from. What you call your “oppressors” are both 1) parts of the same whole that you are part of and 2) bring yourself as an entity into being in the first place. It doesn’t mean that you can’t be oppressed and shouldn’t work to address that; but it does mean that yes, being a human automatically means having to exist among other humans and act in ways that are not only concerning yourself. I’d be a bit normative and say you probably shouldn’t see everyone else as “oppressing” you just because you have to consider how your actions affect them.


It’s one thing to take an acronym and “demote” it to a common noun if it’s being used often by the general public (not unlike how proper nouns become common nouns); it’s another thing to randomly pretend that a regular noun is an acronym. I’m looking at you, photographers shouting “RAW” in all caps whenever the topic comes up. “WASM” rubs me the wrong way for the same reason.

I admit to being guilty of this and mimicking whatever form I encounter first, but then I switch once I look it up. I don’t quite understand why anyone would do otherwise.


Like any map, the inheritance pattern is bad, except when it works. It’s a strategic capability to be able to guess well which is which in a given context.

My first foray into serious programming was by way of Django, which made a choice of representing content structure as classes in the codebase. It underwent the usual evolution of supporting inheritance, then mixins, etc. Today I’d probably have mixed feelings about conflating software architecture with subject domain so blatantly: of course it could never represent the fundamental truth. However, I also know that 1) fundamental truth is not losslessly representable anyway (the map cannot be the territory), 2) the only software that is perfectly isolated from imperfections of real world is software that is useless, and 3) Django was easy to understand, easy to build with, and effectively fit the purpose.

Any map (requirement, spec, abstraction, pattern) is both a blessing that allows software to be useful over longer time, and a curse that leads to its obsolescence. A good one is better at the former than the latter.


That by using and/or carrying a camera you stop being present in the moment, and that it is somehow mutually exclusive with making memories, is somewhat of a mischaracterisation.

If you are serious about photography with existing light, the act of using the camera and even merely carrying it forces you to, for lack of a better description, see things as they are. It opens your eyes, quiets your chattering brain, stops you from ruminating. It prevents you from being annoyed at the small things and keeps you in the flow.

Once you realize that a particular combination of countless factors—weather, air quality, time of day, time of year, place, angle of view, human or other subjects (or, as I often prefer it, lack thereof)—can create a vanishingly rare, one of a kind image, you just can’t help being in the moment and seeing those things; and once you have got it (captured light and developed it), it can be there with you as an additional memory trigger for years to come.


> can create a vanishingly rare, one of a kind image

Early in my photography journey, I captured some photos of a stunning sunset. “WOW, this is what I’ve been missing” I thought to myself, and also thought I could capture similar photographs regularly.

It took me some time and experience to realize just how rare or unique many conditions actually are. I’ve gone back to the same spot for years and have never seen a sunset quite like that early one.

But photography did really clue me in to what I’d been missing. The subtle and continuous change that is always happening and is never not happening. It expanded my perspective and opened my awareness. Even when I’m not carrying a camera now, I see things I never would have noticed before.


It took me many years of carrying the camera to get close to what you describe and realize how even changes in air quality and humidity can create a very different view at different times. There are many factors that no post-processing can help, even if you have the finest raw from your sensor. Processing is important, I strongly believe, but only insofar as it amplifies the scene and your impression of it.

The days of carrying a camera coincided with lots of exploration and memories (and many good shots and bad), and the days when I could not be bothered tended to be moody and gloomy. I don’t think the causation is strictly one way or the other, but for people like me (and maybe yourself) when there is a right frame of mind then the camera at least contributes to a virtuous circle, if not acts as the source of it.

More recently, without a dedicated camera (which got stolen) I find it more difficult to snap into this appreciation of reality. Phone camera’s main issue is that it’s extremely slow when it comes to a changing real-life scene (I lost light many times while opening Halide and waiting for it to become operable), and that it is part of a distracting multi-purpose device in other regards.


> when there is a right frame of mind then the camera at least contributes to a virtuous circle, if not acts as the source of it

100% this has been the case for me. I think the key is to not go looking for shots, but to immerse myself in the environment and then click the shutter button when something in the environment inspires me to do so.

Sorry to hear about the stolen camera. This recently happened to me as well, and one of my top priorities right now is replacing it. Thankfully I have some backup options in the meantime.

There’s definitely something completely different about shooting on a dedicated camera vs. the phone. Hope you’re able to get your hands on another camera soon. The used market on mpb/keh is really strong.


Absolutely. To add to this, the fact the author doesn’t take photos in their home town is a surprise as well.

You understand where you live better than visitors. As a photographer you know what’s interesting about a place. You see things that everyone else walks past and doesn’t notice. While everyone else walks away with generic tourist shots of Chapel Hill, you can capture snapshots of daily life over years that really showcase your friends and neighbors and town, and in so doing create a body of work that’s entirely unique to you and your perspective.

It’s an odd thing to be proud of—a photographer not capturing the place they know best.


For me so far it is more difficult to snap into appreciation of surroundings in a place where I have lived for a while[0], but even still it is markedly easier to do while carrying a camera and mentally prepared to use it.

[0] Maybe I haven’t found the right place yet, who knows.


There’s a cheap trick to make sure a website that claims to do everything client-side actually does everything client-side:

1. Open the site in an incognito window.

2. Turn off your Internet.

3. Do what you’ve got to do.

4. Close browser window.

As a bonus, and this makes it better than just flipping the offline switch in developer tools, if you turn off Internet in a way that keeps the browser thinking it’s online, you can also peek at whether any network requests are made (for pathological cases where the app does everything locally but phones home anyway).


Recently, the browser has become this great unifying environment where we can build complex cross-platform experiences available to anyone on demand and not locked into any walled garden. Just off the top of my head:

— WebCodecs. You don’t need ffmpeg; encode in the browser.

— Web Audio. An advanced modular synth graph in the browser.

— WebRTC. P2P communication between browsers. Calls, collaboration, etc.

— WebGPU. Run shaders in the browser.

— File System API & File System Access extensions. Read/write very large files without having to put the entire contents in RAM.

All of this required a significant amount of resources to spec and implement. With 80% of funding cut, I struggle to see how it can be maintained. It would be sad to see this rot with bugs.


> we can build complex cross-platform experiences

Sounds good for the developer but as a user who gives a shit? I miss my native desktop applications! They were faster and used less memory!


In this case, good for the developer is good for the user. As users, we get 1) tons more of those (since they’re easier to build, with one codebase that runs on literally every modern computing device, from phones to tablets to laptops) 2) without being locked into a walled garden 3) fairly securely (heavily sandboxed, unlike desktop apps).

The sheer number of cool things that got posted on HN in recent years leveraging these APIs.


Sounds good for the user, but as a developer who gives a shit? I'm not porting my webapps to your tinker-toy OS for marginal return on my investment. That's redundant work that I'm not being compensated for, it doesn't matter how lickable the buttons are or how much of your 16gb of RAM I'm wasting.

> I'm not porting my webapps to your tinker-toy OS for marginal return on my investment.

Even more so if there is no return in the first place. Fun toy exploration-style projects, or something to scratch an itch. Remember the GUI for the ffmpeg filter graph (that also encodes the video client-side), remember retro music trackers… The kind of stuff people can and do build in single-digit days is perhaps the best thing left about this otherwise bleak post-small-web, post-2.0 age of the Web.


I observed a clean experiment that showed a friend’s Google Pixel phone listening to us and adjusting news stories on the Google app’s home screen.

However:

— IIRC the phone was unlocked,

— this only affected the news feed, and

— this was 5–6 years ago.

We 1) noted how the Google app shows some selection of news after opening, 2) talked clearly for a minute about a very random and conspicuous topic in the presence of the unlocked phone, and 3) demonstrated that the Google app showed an article relevant to the topic within a few minutes. The article was a few days old, too, so it was clearly boosted over more recent stories.

The only reason it could be something other than the phone microphone is if I was misled by my friend steering us towards a predefined topic. However, that would require some extensive preparation (to rule out the story appearing in the first step) and would be very atypical for that person.

I recall seeing an article about Google admitting this and changing their policy to stop, but can’t seem to find it now. I imagine it was bad publicity, though to my friend it was a feature to see personalized content.


This was a coincidence.

That’s why it’s something you observed one time 5-6 years ago, not something that happens repeatedly in a testable way.


Isn’t it more likely it’s not a coincidence though?


How often does someone look at their phone over 5-6 years?

Having one incident where you’re talking about something and then you also see that something on your phone, out of 2000 days of using a phone, is definitely more likely to be a coincidence.


How often do you think this person did experiments? It is a study with n=1, but the unrelated metric of how many times something else happens does not influence the likelihood of a false positive.


Only did it once. The likelihood of coincidence is low, because the topic was specific and unusual.

Here’s something relevant in Google’s current support KB[0], where the combination of the following further supports that the experiment did not have to be staged (emphasis mine):

> Web & App Activity saves your searches and activity from other Google services in your Google Account. You may get more personalized experiences, like: <…> Content recommendations

> When Web & App Activity is on, you can include audio recordings from your interactions with Google Search, Assistant, and Maps as part of your activity.

Let’s now go back to the experiment. Given that the phone was unlocked, voice activity was enabled, and the Google app or search widget was on the Pixel’s screen (I am certain at least the latter was true) during the experiment, could talking near the phone be counted as “interaction”? If the answer is “yes”, then it seems very reasonable for us[1] to expect, per that KB, that the app would listen more actively than what’s required for assistant activation, and that recorded snippets would count as your “activity” designed to affect content recommendations (including the article feed the Google app showed us on its main screen).

No tinfoil hat required.

***

Note that it does not mention ads among personalized experiences[2], and we had not observed any change in the ads either. I didn’t see what exactly counts as “interaction” or whether this blazing-fast content personalization used to include ads previously, but in line with the “move fast” culture of mid-2010s Silicon Valley it could well have been much more lax at some point. If so, I do not envy all the people who have observed it only to be gaslit and mocked by peers and media.

***

As to the article I was vaguely remembering in my original comment, the above makes me think that it was merely about the change of the default to opt-in, which it is as of today:

> This voice and audio activity setting is off unless you choose to turn it on.

[0] https://support.google.com/websearch/answer/54068?hl=en&co=G...

[1] Us tech people; this might not at all align with the intuition of other people.

[2] I rather suspect that ToS and possibly some other KB article would indicate that your activity would, in fact, affect your interest profile and by extension ads, but probably in a much less obvious and more gradual fashion.

