foxbyte's comments

Check out SuperCollider and Clojure for more on the tech behind Overtone. They offer great resources for audio programming and live coding!


Consider exploring LXC for a more mature alternative to Docker. It's not confined to the OCI ecosystem and offers a higher degree of isolation for development environments.


I don't know if I'd use the word "mature" (Docker is quite mature after all), but as a long-time Docker user I did jump into lxc/lxd and can confidently say its "system container" approach is better for dev environments.

I used to use WSL 2 or a VM, but on a Linux host LXD is a really nice workflow. I can create a fully isolated Linux instance with all my dev tooling on it (optionally scripting/automating the install of all my tools), and with nested containers enabled I can even run _Docker in that LXD container_, so that when I type `docker ps` on that instance and `docker ps` on my host, they each have their own set of containers running. For example, I have ElasticSearch and Redis instances running in the dev box but Syncthing on the host.

Then I can use snapshots to back up the dev environment, or blow it away completely once it gathers too much cruft, all without affecting my host. It's really great.
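For anyone curious, here's a rough sketch of that workflow with the stock LXD CLI (commands from memory, and the container/snapshot names are just placeholders, so double-check against your LXD version):

    lxc launch ubuntu:22.04 devbox -c security.nesting=true   # nesting lets Docker run inside
    lxc exec devbox -- bash                                    # shell into the dev container
    lxc snapshot devbox clean                                  # snapshot before experimenting
    lxc restore devbox clean                                   # roll back when it gets crufty
    lxc delete --force devbox                                  # or blow it away entirely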

Pair it with VS Code's Remote SSH and you have a very feature-rich setup with little effort.


LXC doesn't have a Dockerfile concept though.

I think a Dockerfile as a recipe for an environment is pretty elegant.

I've used LXC with Proxmox, and managing what's in a container is kind of like being a sysadmin.


Docker is cattle, LXC is pets.

Docker packages applications, and a container should really only have one app inside it. LXC is similar to a VM in that it presents itself as a stand-alone machine but uses the host kernel; it's unlike a VM in that it's not as isolated, since it shares the host kernel.

So with LXC I agree, managing it is like being a sysadmin as that’s what it’s designed to be.

I use Docker for things that are stateless, maybe throwaway, or just to test an app quickly. I use LXC for things I want to run multiple services inside, things that are more stateful; typically where people would plumb a bunch of Docker images together, I'll use LXC instead. The advantage in Proxmox is that I can tell Proxmox to back up my LXC nightly, as it's treated similarly to a VM.

For making LXC feel less like needing to be a sysadmin, you can use Nix to build your LXC images and import them into Proxmox. Your LXC container becomes declarative and not too dissimilar to using a Dockerfile, except it's a far more powerful Dockerfile. What I've done is create a bare minimal NixOS LXC with some basic config and use that as a template, then edit '/etc/nixos/configuration.nix' inside the LXC on first boot. However, as it's just NixOS, you can build and push the config remotely, use NixOps, etc.

It's a really good workflow using NixOS with LXC. However, it took me a while to get it working, as the docs are a bit thin and split between an old and a new version, with the new version skipping steps mentioned in the old that you still need to do, e.g. changing the tty to /dev/console to get a shell inside the Proxmox console.
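To give a flavour, here is a minimal sketch of what such a configuration.nix can look like. This is from memory rather than the official docs, so treat the module path and the package list as assumptions:

    { modulesPath, pkgs, ... }:
    {
      # nixpkgs ships a module for running NixOS as a Proxmox LXC guest
      imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];

      services.openssh.enable = true;
      environment.systemPackages = with pkgs; [ git htop ];

      system.stateVersion = "23.05";
    }

Build that into a tarball (e.g. with nixos-generators), upload it as a CT template in Proxmox, and every container cloned from it is described entirely by that one file.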


I have never really looked into LXC. How strong are the security guarantees? Presumably less isolated than a real VM, but with significantly better performance?

I have started to run more and more software inside a VM for better security isolation, but the loss of performance is pretty discouraging. For things that are probably fine, I might be willing to trade some theoretical security benefits.


There is an LXC provider for Vagrant, which gets you the one-file concept, with the benefit that not everyone on the project has to use LXC; they just need a provider that works on their host for the specified base box.

This is Vagrant's real superpower.
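For example (assuming the community vagrant-lxc plugin, and a base box that actually ships an LXC build), the same Vagrantfile comes up with whatever provider each person has on hand:

    vagrant plugin install vagrant-lxc     # install the LXC provider plugin
    vagrant up --provider=lxc              # a teammate might run --provider=virtualbox instead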


The idea of prototyping to uncover 'unknown unknowns' resonates with me; it's like a reconnaissance mission before the actual project. This could indeed save a lot of time and effort in the long run...


Just never let management see the prototype or they'll consider it done and tell you to move on to something else.


In my experience, this is what actually happens. A developer makes a low-quality, low-effort prototype under the assumption it won’t ship. Someone sees it. It gets shipped. Everyone loses.


Sabotage your prototypes then. Have it reset its state every 5 minutes or deliberately leak memory.


Sneaky move... but then, in the eyes of some people, you might well appear not competent enough to put together a working system, so make sure you know your audience.


Ya, this basically.

It helps if you have tests, because when prototyping you can write things in a way which breaks tests and ideally breaks other features. It works enough that you can validate the one idea. That way it will never pass CI.


They are cunning. You cannot sabotage it enough to fool them.


Reconnaissance mission is a great analogy. I think of it as scouting in Age of Empires.

Besides discovering unknown unknowns, it also helps in conveying the design/idea to the rest of the team.


Another war analogy was the Byzantine defense. Their policy was to avoid decisive, large-scale battles where a lot can go wrong at once. If an enemy army is making an incursion, first try to weaken or deter it with lighter forces that also gather intel. If those can't stop the threat, form and advance a larger army with a better-planned supply chain to stop it deeper in the territory.

Which actually doesn't work so well in AoE. The game doesn't have a concept of army supplies, so you usually just want a huge unstoppable army you can send anywhere.


"tracer bullet" that shoots through all key parts of the system (agile terminology, I think).


It's interesting to consider the hype around XML and MongoDB. But it's also crucial to remember that each tool has its pros and cons and we need to choose wisely based on our project's needs. Hype isn't always a reliable indicator of utility ...


Yeah but don't worry too much about changing your habits, just continue to be selective and critical in your consumption. Remember that it's all about finding a balance between staying informed and maintaining your personal preferences, right?


I can understand your frustration with the article, but let's approach it with an open mind. While the use of a "chatgpt detector" may have its limitations, it's essential to appreciate the researchers' effort in exploring new methods. The study may not be perfect, but it contributes to the ongoing conversation about the risks of using AI in AI training. Irony aside, let's keep the discussion going and encourage further research to improve our understanding of this complex field.


So polite it hurts. I wonder if in the future people on the internet will leave deliberately offensive posts to show that they are human.


It's crazy, isn't it?

I don't know what feels worse for me - that whenever I read a mannered, well-structured and somewhat verbose comment, I now suspect it wasn't authored by a human - or that, as I quickly realized, my own writing style feels eerily similar to ChatGPT output.


If it helps, your response here doesn't feel similar to ChatGPT output.


Thanks. I've already noticed that I've started to unconsciously adjust my writing style to avoid that feeling of similarity to ChatGPT.

That said, compared to typical comments on-line (even on this site), using paragraphs, proper capitalization, correct punctuation, and avoiding typos already gets you more than half of the way to writing like ChatGPT...


@dang are we ever going to do anything about this? You almost can't read a comment section in a thread about AI without this crap now.


I mean, what rule do you actually want here?

ChatGPT has been RLHFed into a pretty distinctive style, but there's no reason to think a better LLM wouldn't have a more natural style. If AGI is possible, then HN will end up with AI users who contribute on an equal basis to the modal HN user, and then shortly after that, more equal. Should all AI be banned? Should you have to present a birth certificate to create an account?


> Should you have to present a birth certificate to create an account?

I actually honestly believe that the era of "open registration" forums and discussion places is going to come to a close, largely due to GNN.

It's not going to become a problem until the hardware and walltime costs of training models and running them come down. You'll know it's a problem when every 10th post on 4chan is a model pretending to be a human that is of a gentle but unyielding political persuasion of some sort.

I don't know what the end pattern will be, but it'll likely be a combination of things:

- large platforms, like reddit or facebook, where individual communities "vibe check" posts out.

or

- some sort of barrier to entry, such as a small amount of money (the so-called "idiot tax": if you're an idiot, you get banned, and you have to pay again)

- some sort of (manual!) positive reputation system for discussion boards, sort of like how peering works

- some sort of federation technology where you apply and subscribe to federation networks

I don't think we'll really be able to predict what the future looks like right now (it's not even widely recognized as a problem). And since this is HN, I'll add: I don't think there's any serious money to be made running reputation or IDV, unless you've already started. And if it becomes a serious enough problem, players like ID.me/equifax/bureau will be the situation for "serious" networks (linkedin, facebook, chat, etc).


I wonder how far off we are from an AI with a forged birth certificate.


This highlights the need for error tracking in AI training. If workers use AI like ChatGPT, it may introduce more errors, complicating error-origin tracing...


I agree, the shift from Java's state machines or "command" system to Lua's coroutines does seem to make the code more intuitive and readable for beginners. Lua's built-in coroutine functionality can be a game-changer for FIRST teams dealing with the complexity of autonomous code. It would be fascinating to see how this technique could be applied to other areas of programming where tasks need to be paused and resumed. It's a testament to the versatility of coroutines.
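As a toy, self-contained illustration (no real FRC/robot APIs; the drive/turn/shoot steps are just prints and the sensor checks are tick counters), a coroutine lets the routine read top to bottom while still handing control back to the main loop every tick:

    local ticks = 0
    local function atTarget() return ticks > 5 end    -- stand-in for a sensor check
    local function turnDone() return ticks > 10 end

    local auto = coroutine.create(function()
      print("drive forward")
      while not atTarget() do coroutine.yield() end   -- pause until the condition holds
      print("turn 90 degrees")
      while not turnDone() do coroutine.yield() end
      print("shoot")
    end)

    -- main loop: resume the routine once per control tick
    while coroutine.status(auto) ~= "dead" do
      ticks = ticks + 1
      coroutine.resume(auto)
    end

The equivalent state machine would need an explicit state enum and a switch over it; here the "state" is simply wherever the coroutine is suspended.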


This complements the idea that problem-solving is the essence of intelligence. It's not always about speed, but about the quality of decisions made, especially in complex situations.


It's particularly interesting that Tree Borrows treats all mutable references as two-phase borrows, which seems to have unexpected benefits and could make the model more permissive.
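For anyone unfamiliar, the classic thing two-phase borrows allow (a minimal Rust sketch, not an example taken from the Tree Borrows work itself) is calling a method that takes `&mut self` while the argument still reads the same value:

    fn main() {
        let mut v = vec![1, 2, 3];
        // Desugars roughly to Vec::push(&mut v, Vec::len(&v)): the &mut is
        // created before the argument is evaluated, but it stays "reserved"
        // (no writes through it yet), so the read in v.len() is still fine.
        v.push(v.len());
        assert_eq!(v, [1, 2, 3, 3]);
    }

Giving every fresh &mut that same reserved phase is what reportedly makes the model more permissive.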

