MobiusHorizons's comments

I read the "accidentally" as applying to "identifying", not "post", although I agree the sentence structure would suggest "accidentally" modifies "post", which makes a lot less sense.

I was originally pretty skeptical of the Lomi as well after seeing this very same video. But my friend got us one and we have been using it for a while now. Sure, it has the same parts as a bread maker, and it mostly just dries out and grinds the organic material into more useful sizes, exactly like he says. But when you add the enzymes and run its dirt cycle, it does actually produce meaningfully good compost, with a much smaller footprint than a garden composting setup. I'm not sure I'd pay to buy one new, but it's not a scam.

Just remember that any positive effect you might achieve through a lifetime of composting is grossly negated by the production, usage, and inevitable trip to the landfill of this device. Startups like these are part of the problem, not the solution.

That could be true, but maybe OP's goal isn't making a positive impact on the global scale.

Just throw out your scraps and buy compost then; it'll be cheaper and easier. The city already transforms bio waste into gas and compost anyway, and far more efficiently than you can at home, given the scale.

This is another "I'm doing my part" gimmick that solves literally nothing when you look under the hood.


You're not wrong. I think people just like to feel like they're doing their part, even if it's not actually a net benefit.

Our city has no bio waste collection. We make all our own dried fruit and eat mostly fresh from the market (so little to no plastic for our veggies), but we produce an immense amount of organic waste.

Only if you're using bad math and discounting the impact of the retail compost and fertilizer offset by these devices.

These devices reach impact parity in a few years (2 to 10 depending on the electronics and how much they're used).


It allows us to put all the organic and bioplastic waste for a big family with pets, including most bones once we have cooked stock from them, into a compost heap in the city.

We tried composting before, but the volume of organic waste we produced was too much. Having to dispose of a lot of our waste in the general trash (no organic waste pickup runs in our neighborhood) meant animals ripped our curbside bags open.

I am not a degrowther to save the planet either, so a company putting compostable products in place of plastic ones seems like good economic activity.


There are also services that can do the sheet metal bending for you if you have the CAD design. Of course shipping can get pricey, but I think it's not prohibitively expensive.


I believe the only product that currently ships with Fuchsia is the Google Nest Hub. I could also imagine it running on meeting room hardware for Google Meet, although I don't believe that is true today. I would imagine this is largely a defense-in-depth type of security measure, where it limits the blast radius of vulnerabilities in services. Beyond that, it is not hard to imagine use-cases that would benefit from running less-trusted code, especially if that code comes from third parties like an app store or some sort of company-specific add-on.


not really, unless your word processor outputs significant portions of text for you


No, it just does the layout, not the text. Much like LLMs do the code tokens but not the goal. Abstraction in all things, and automation of the lower levels does not devalue the higher levels (that is, a traffic sign works the same regardless if hand painted or commercially printed).


Really not sure where this tangent is going but looking at hypothetical text it can be pretty obvious that various word processors were used.

Like if an email has the same font/etc. as a Google Doc, then it's pretty obvious they wrote it in a doc and then copy-pasted it.


They were making the point that using a word processor as a tool isn't the same as having an LLM write something and retaining the LLM-generated changes such that it is immediately obvious the bulk of the code came from an LLM; they weren't claiming it's impossible to discern whether someone used a word processor.


What do you mean? I don't write code with AI because figuring out how to code things is not what takes a lot of time in my job. That's the fun bit, why would I automate it? What takes time is coming to alignment with the team on what problem we are solving, figuring out what is possible, and aligning on a solution based on the available tradeoffs. Is there some way AI is supposed to be able to help me with that?


I once fully spelunked such a Java call stack to convert some code to golang. It was amazing: there were like 5 layers of indirection over some code that actually did what I wanted, but I had to fully trace it, keeping arguments from the full call stack in mind, to figure this out, because several of the layers of indirection had the potential of doing substantially more work with much more complex dependency graphs. I ended up with a single Go file with two functions (one that reproduced the actually interesting Java code, and one that called it the way it would have been called across all the layers of indirection). It was less than 100 lines and _much_ easier to understand.
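The shape of the result might look something like this. It's a hedged sketch with invented names (the original code isn't shown in the thread): one function holding the few lines of logic that were buried under the indirection, and one function calling it the way the old Java stack ultimately did.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeKey stands in for the "actually interesting" code that was
// buried under the layers of factories and delegates: a few lines of
// real work on its inputs.
func normalizeKey(raw, prefix string) string {
	key := strings.ToLower(strings.TrimSpace(raw))
	return prefix + ":" + key
}

// lookupKey reproduces how the full call stack ended up invoking that
// logic once the arguments were traced through every layer: in the
// end, the indirection only ever supplied one fixed prefix.
func lookupKey(raw string) string {
	return normalizeKey(raw, "cache")
}

func main() {
	fmt.Println(lookupKey("  UserID-42 ")) // prints "cache:userid-42"
}
```

The payoff of flattening is visible even in a toy: the fixed choices the indirection could have varied (but never did) become plain arguments you can read at the call site.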


> instead it presents what look like rules that you shouldn't be breaking if you want to be a good programmer.

I see this a lot, especially among more junior programmers. I think it likely stems from insecurity about taking responsibility for making decisions that could be wrong. It makes sense, but I can’t help but feel it is failing to take responsibility for the decisions that are the job of engineering. Engineering is ultimately about choosing the appropriate tradeoffs for a specific situation. If there were a universally “best” solution or an easy rule to follow, nobody would need engineers.


I always think of this as the programmer version of "No one ever got fired for choosing IBM," a phrase that was common about executives back in the day. Do the thing where you can just point to "experts" and blame them.


That is a helpful comparison. I guess it is risk aversion at the core. At some point risk aversion becomes abdication of decision making to others, which seems broken in roles that are specifically hired for making decisions, but that’s even more true of executives.


I completely agree with you; mind you, I feel the same way about the people the original comment was talking about. They are paid the big bucks to decide how to spend money to optimize the company.


Ha, I just bought a VT420 a couple of weeks ago. I just finished hacking together a converter that gets USB keyboards working well enough (in the last hour, actually). The next job is to connect it up as a login terminal for my FreeBSD machine.


I love those old terminals! I remember using them during late nights in college...


Perhaps instead of `successor` you could say `rust fork` or `alternative`. Successor implies the original is dead, deprecated, or no longer going to be used, which is very far from the truth.

