
Exactly: if your templating logic accidentally produces a syntax error, now you can't log in over SSH. There's much less chance of that scenario with include directories. This applies to infrastructure-as-code scenarios, changes made by third-party packages, updates of ssh, manual one-off changes, etc.


If any logic produces a syntax error anywhere in the sshd_config include chain, ssh is broken all the same. And you will have templating logic in automated configuration one way or another, at least to handle different DNS names/IPs.

I don't grok this argument at all. It feels like everyone in this thread is comparing against that "regular [bad] detergent". A templating system will be as good, as error-prone to change, and as modular as you make it, just like any program.

It applies only to local patchers (e.g. certbot's nginx plugin) and manual changes, but that's exactly what's out of scope for templating and configuration automation. So it can't be better, because the two approaches are mutually exclusive.

Edit to clarify: I don't disagree with the foo.d approach in general. I just don't get the argument that it plays any positive role in an automation setting, when in fact you may step on a landmine by writing only your own foo.d/00-my. Your DC might have put some crap into foo.d/{00,99}-cloud, so you have to erase and re-create the whole foo.d anyway. Or at least lock yourself into a specific cloud.


It's still possible to break the config with a syntax error, but there are fewer kinds of syntax errors possible when you aren't writing into the middle of an existing block of syntax. For example, there's no chance that you unintentionally close an existing open block due to incorrect nesting of options or anything like that.

Plus, if you are writing into the middle of an existing file, there's a chance you could corrupt other parts of the file besides the part you intended to write. For example, if you have an auto-generated section that you intend to update occasionally, you need to make sure you only delete and recreate the auto-generated parts and don't touch any hand-written parts, which could involve complicated logic with sentinel comments, etc. Then you need to make sure that users who edit the file in the future don't break your logic.

In addition, it's harder to test your automation code when you're writing into an existing file, because there are more edge cases to deal with regarding the surrounding context.
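
For example, with a drop-in directory your automation only ever owns its own file, and you can validate the merged result before reloading (a minimal sketch; the filename and options here are just illustrative):

    # /etc/ssh/sshd_config.d/60-automation.conf -- the only file the tool touches
    PasswordAuthentication no
    AllowUsers deploy

    # check the merged config before reloading (service name varies by distro)
    sshd -t && systemctl reload sshd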


Templating doesn't write in the middle. Writing in the middle is a medieval relic of manual configuration helpers. Automated config generation simply outputs a new file every time you run "build" and then somehow syncs it to the target server. All "user" changes go into templates, not outputs. What you're talking about can exist, but it's a mutable hell that isn't reproducible and thus can't be part of a reliable infrastructure.
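
For example, a build step can be as dumb as this (a sketch using Jinja2; paths and variables are made up):

    # build.py: regenerate the entire output file from a template on every run
    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader("templates"))
    output = env.get_template("sshd_config.j2").render(port=22, allow_users="deploy")

    with open("build/sshd_config", "w") as f:
        f.write(output)
    # then sync build/ to the target host and run "sshd -t" there before reloading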

If this is not how modern devops/fleet management works, I withdraw my questions, because it's even less useful than my scripts.


If the manufacturer wanted to conduct a supply chain attack on you, they wouldn't need secure boot to do it. They could just design an implant of their own using proprietary technology.

So why does the presence of secure boot as a user-controlled feature affect that risk calculation?


Because manufacturers aren't trying to add surreptitious implants. They're trying to prevent you from installing operating systems other than the one they get a bulk discount for forcing you to have.


Whatever the intent, the point stands: why would they need secure boot to do that? They could just do it with proprietary controls. So how does the existence of secure boot as a user-controlled feature affect that risk?


The specific proprietary controls you're referring to are called "secure boot".


I think that is a uselessly reductive interpretation of what secure boot is, because you could apply the same logic to any security technology. Why should we allow login passwords or user permissions or disk encryption, since those could be used as lock-out technologies by manufacturers if they just ship them with defaults you can't control?

Manufacturers don't need any user-facing standardized controls to implement lockouts. So the possibility of a feature being used as a lockout shouldn't be a justification for taking away the option of having a user-controlled security feature. Taking it away from users isn't going to stop manufacturers from doing it anyway with proprietary technologies instead.


Because manufacturers do use secure boot to prevent you from changing your OS, and don't use fingerprint recognition to prevent you from selling your device to someone else. If they did the latter, that would also be bad, but they don't.


They aren't doing that either. It's a tiresome point of FUD that comes up in every thread on secure boot.


Showing that you're willing to pay extra for green products (or products that respect animal welfare, etc) creates a competitive environment in which companies can compete on who provides the most green per dollar. Even if those marked up products are all just greenwashed today, it still creates a market opportunity for new companies to come in and outcompete today's greenwashers with products that deliver better green per dollar in the future.


Getting the right version of PyTorch installed to have the correct kind of acceleration on each different platform you support has been a long-standing headache across many Python dependency management tools, not just uv. For example, here's the bug in poetry regarding this issue: https://github.com/python-poetry/poetry/issues/6409

As I understand it, recent versions of PyTorch have made this process somewhat easier, so maybe it's worth another try.


uv actually handles the issues described there very well (the uv docs have a page showing a few ways to do it). The issue for me is that uv has massive amnesia about which one was selected, so you end up thrashing packages because of that. uv is very fast at thrashing, though, so it's not as bad as if poetry were thrashing.
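
For reference, the pattern from the uv docs looks roughly like this in pyproject.toml (the index names and markers here are illustrative):

    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu", marker = "sys_platform == 'darwin'" },
      { index = "pytorch-cu124", marker = "sys_platform == 'linux'" },
    ]

    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true

    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true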


I end up going to the torch website, where they have a nice little UI: I click what I have and it gives me the pip line to use.
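
e.g. it hands you something like this (the exact index URL depends on your CUDA version):

    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121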


That's fine if you are just trying to get it running on your machine specifically, but the problems come in when you want to support multiple different combinations of OS and compute platform in your project.


I could see this information from the website being encoded in some form on PyPI, such that it could be updated to support various platforms.


On NVIDIA Jetson systems, I always end up compiling torchvision, while torch always comes as a wheel. It seems so random.


> There are other ways to use the help function, but before we dive into those I'd like to address the *, and / symbols shown in the output above.

Where is this addressed? Is there a section missing here?


I am posting this because I always forget which one it is and searching for it is damn near impossible

https://peps.python.org/pep-0457/#syntax-and-semantics

"/" marks the boundary between "positional only" and "mixed" and then "*" does the same for "mixed" and "kwargs only"


Nice to see they're finally supporting dual external displays even out of clamshell mode on the Air!


Aside: What's that mouse pictured in the second pic?


Correct, it's an MX Master 3S in gray


Would like to know that as well. At least GPT doesn't recognize it. Maybe it's a custom build, too?


Was wondering this as well. Looks a bit like a Logitech MX 3. Hard to find pictures from the same angle.


Wow, this is exactly what I've been looking for: lightweight, supports CONNECT and CONNECT-UDP with all three HTTP versions, and supports encrypted client hello. Thanks for the recommendation!


Thank you for the feedback! Some notes:

> The obvious feature gap I see is that it should be possible to provide both the encoder and decoder with a common "context" or prompt to preload the LLM with.

There's no option for this in the CLI interface, but you can modify the context in `src/textcoder/coding.py` by changing the `_INITIAL_CONVERSATION` variable. Right now it's configured to ask the LLM to output text in the style of a tweet.
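
Conceptually it's just a normal chat-message list, something like this (simplified shape only; see coding.py for the real value):

    # illustrative -- the actual variable in src/textcoder/coding.py differs
    _INITIAL_CONVERSATION = [
        {"role": "user", "content": "Write your responses in the style of a tweet."},
    ]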

> You may want to encode the user's data backwards so that it's easier for users to make arbitrary changes to their message to escape bad covertext output.

A random 16-byte value is prepended to each message to serve as the KDF salt and the AES-GCM-SIV nonce. This has the additional benefit that every output is always different, even if they encode the same message. So if the covertext is bad, you can just re-run it to get something different.
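
In outline the scheme looks something like this (a simplified sketch, not the actual project code; the KDF choice and parameters here are assumptions, using the `cryptography` package's AES-GCM-SIV support from recent releases):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    salt = os.urandom(16)  # fresh random value per message
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=salt,
               info=b"textcoder").derive(b"shared password")
    # AES-GCM-SIV takes a 12-byte nonce, so reuse part of the salt for it
    ciphertext = AESGCMSIV(key).encrypt(salt[:12], b"secret message", None)
    payload = salt + ciphertext  # the salt travels with the message, so the
                                 # receiver can re-derive the same key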

> Might also be fun to offer to use an LLM as a compressor for the text before encrypting.

A similar project, https://github.com/harvardnlp/NeuralSteganography, does exactly that and achieves very compact results. However, I was a bit wary about compressing before encrypting, given the possible security risks associated with that pattern (compressed length can leak information about the plaintext, as in CRIME/BREACH-style attacks).


Too late for the prize money, but here's the solution I used: https://www.deepbounty.ai/share/73baadb3-b7d2-4d1d-9f34-8474...

It took 4 or 5 attempts to work around the different instructions -- seeing the reasoning made it much easier.

