Safe pockets in ambient unsafety do have benefits, though. For example, some code has a higher likelihood of containing undefined behavior (code that manipulates pointers and offsets directly, parsing code, code that deals with complex lifetimes and interconnected graphs, etc.), so converting just that code to safe code would have a high ROI.
And once you get to the point where a large chunk of code is in safe pockets, any bugs that smell of undefined behavior only require you to look at the code outside the safe pockets, and that remaining code hopefully shrinks over time.
There are also studies showing that newly written code tends to contain more undefined behavior than older, battle-tested code, so writing new code in safe pockets has a lot of benefit there too.
An article like this would be better if it didn't gloss over the different bash startup files and when they get loaded.
Many times when I help someone debug their shell, the root cause is not understanding what each of those files is for.
Adding an echo to the wrong one, without checking whether the shell is interactive, can break all sorts of things, scp for example, and in weird and non-obvious ways.
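For example, the usual fix is a guard near the top of ~/.bashrc, roughly like this (the echo is just a stand-in for whatever interactive-only output or setup you have):

    # ~/.bashrc (sketch)
    # Anything above this guard runs for *every* bash that reads .bashrc,
    # including the non-interactive shells spawned on the remote end by scp/rsync/ssh.
    case $- in
        *i*) ;;        # interactive: keep going
        *)   return ;; # non-interactive: bail out before producing any output
    esac

    echo "welcome back"   # interactive-only stuff lives below the guard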
It's outright wrong, since it forgets about ~/.bash_login.
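(For the record, the INVOCATION section of the bash man page says a login shell reads /etc/profile and then only the first of three per-user files that exists. Roughly, as an illustration only, since bash does this internally:)

    # rough sketch of bash's login-shell startup order
    [ -r /etc/profile ] && . /etc/profile
    for f in ~/.bash_profile ~/.bash_login ~/.profile; do
        if [ -r "$f" ]; then
            . "$f"
            break   # only the first one that exists and is readable gets read
        fi
    done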
The startup files are mostly comprehensible if you understand the following oddities (a quick way to check what kind of shell you're in is sketched after the list):
* Some pseudo-terminal programs always start the shell as interactive; other pseudo-terminal programs always start the shell as non-interactive.
* Shells started from SSH are different than all other shells, regardless of interactivity.
* Shells other than bash are broken-by-design and really should not be used if you have any choice, but in the real world you do have to know at least a little about how to deal with their brokenness.
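A quick, bash-specific way to check which kind of shell you're currently sitting in (other shells need different incantations):

    # am I interactive? ($- contains "i" in interactive shells)
    case $- in *i*) echo interactive ;; *) echo non-interactive ;; esac

    # am I a login shell? (bash records this in a shopt flag)
    shopt -q login_shell && echo login || echo non-login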
> The shell man pages don’t really help either. They use terms like “login shell” and “interactive shell” without really defining or differentiating them.
They're defined; the man page just doesn't go into the weird details. From the man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
(not mentioned: the login(1) program does the dash thing when you log into a VT (as opposed to graphically, which has a whole nother complicated dance); login(1) is traditionally spawned by getty(8) which is in turn spawned by init(8). Some terminal emulators aggressively spawn login sessions; this is wrong but a common workaround for people who failed to configure their shell correctly, or who aren't using bash and thus don't have all the configurability)
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option, whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state.
(or in other words: an interactive shell is one that outputs a prompt and waits for you to input something, as opposed to shells that are used to execute a file/pipe/string)
Further down, and often forgotten:
Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the historical remote shell daemon, usually rshd, or the secure shell daemon sshd.
(this goes on to explain how .bashrc is loaded even when it normally wouldn't be)
These are orthogonal, and can be combined in various ways (there are also a few other possibilities mentioned in this part of the man page, but they're much less relevant):
Login shells are normally interactive, whether started sanely by login(1) or by unreasonable terminals.
Non-interactive login shells are rare and weird and sometimes declared "unsupported" by config-file authors, but they do work if you're careful. (A common error is printing a pseudo-motd from /etc/profile, which breaks non-interactive logins if you don't explicitly check for them; these days a lot of that should really be done by PAM instead, for better reliability.)
Interactive non-login shells are what you normally create unless your terminal is dumb, or what you always create if you start bash within bash.
Non-interactive non-login shells are what scripts use (example invocations for all four combinations are sketched below).
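For example, each of these invocations (run from an existing interactive shell) produces one of the four combinations:

    bash -l          # interactive login shell: profile files read, then a prompt
    bash             # interactive non-login shell: ~/.bashrc read
    bash -l -c true  # non-interactive login shell: profile files read, no prompt
    bash -c true     # non-interactive non-login shell: normally reads no startup
                     # files at all (unless $BASH_ENV is set, or the sshd/rshd
                     # detection quoted above kicks in)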
The SSH special logic is ignored for the common case where you're doing an interactive login shell. But if you are doing a non-interactive non-login shell, it gives you a chance to fix your PATHs, since no other startup files have been read. I think those are the only two possibilities with a normal SSH configuration (and in unusual configurations you are probably being restrictive); other shells would be created as a child of the initial shell. (Remember that SSH feeds its arguments to the shell directly; it does not quote them sanely!)
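To make the non-interactive non-login SSH case (and the quoting gotcha) concrete, a sketch; example.host and my-tool are made-up names, and the ~/.bashrc behavior assumes a build of bash with the sshd detection described above:

    # `ssh host cmd` runs cmd in a non-interactive, non-login remote shell;
    # because of the sshd detection, ~/.bashrc *is* read, so PATH fixes placed
    # before the interactivity guard take effect:
    ssh example.host 'echo $PATH'        # reflects PATH exports from remote ~/.bashrc
    ssh example.host 'my-tool --version' # works if ~/.bashrc put my-tool on PATH

    # the quoting gotcha: ssh joins its arguments with spaces and hands the
    # result to the remote shell, so local quoting is lost
    ssh example.host ls "my dir"      # remote shell sees: ls my dir   (two args)
    ssh example.host 'ls "my dir"'    # remote shell sees: ls "my dir" (one arg)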
Fill-in-the-middle. If your cursor is in the middle of a file instead of at the end, the LLM will consider text after the cursor in addition to the text before it. Some LLMs can only look before the cursor; for coding, ones that can FIM work better (for me at least).
I take a slightly different approach - I usually have AI assist in writing a script that does the task I want to do, instead of AI doing the task directly. I find it is much easier for me to verify the script does what I want and then run it myself to get guaranteed good output, vs verifying the AI output if it did the task directly.
I mean if I'm going to proof-read the full task output from the AI, I might as well do the task by hand... but proof-reading a script is much quicker and easier.
I don’t think that's how my HOA works. I live in a high rise; I believe the HOA owns the common areas but grants exclusive use of certain parts to owners/tenants.
> Security: Compiler binaries can contain malware and backdoors that insert viruses into programs they compile. Malicious code in a compiler can even recognize its own source code and propagate itself. Recompiling a compiler with itself therefore does not eliminate the threat. The only compiler that can truly be trusted is one that you've bootstrapped from scratch.
It is a laudable goal, but without using from-scratch hardware and either running the bootstrap on bare metal or on a from-scratch OS, I think "truly be trusted" isn't quite reachable with an approach that only handles user-space program execution.
Indeed! An eventual goal of Onramp is to bootstrap in freestanding so we can boot directly into the VM without an OS. This eliminates all binaries except for the firmware of the machine. The stage0/live-bootstrap team has already accomplished this so we know it's possible. Eliminating firmware is platform-dependent and mostly outside the scope of Onramp but it's certainly something I'd like to do as a related bootstrap project.
A modern UEFI is probably a million lines of code, so there's a huge firmware trust surface there. One way to eliminate this would be to bootstrap on much simpler hardware. A rosco_m68k [1] is one example; it requires no third-party firmware at all aside from the non-programmable microcode of the processor. (A Motorola 68010 is thousands of times slower than a modern processor so the bootstrap would take days, but that's fine, I can wait!)
Of course there's still the issue of trusting that the data isn't modified on its way into the machine. For example, you have to trust the tools you're using to flash EEPROM chips, or if you're using an SD card reader you have to trust its firmware. You also have to trust that your chips are legit: that the Motorola 68010 isn't a modern fake that emulates it while compromising it somehow. If you had the resources you'd probably want to x-ray the whole board at a minimum to make sure the chips are real. As for trusting ROM, I have some crazy ideas on how to get data into the machine in a trustable way, but I'm not quite ready to embarrass myself by saying them out loud yet :)
Memory safety and the borrow checker are useful even in the absence of dynamic memory allocation. This still doesn't bring Rust and Ada to the same place, but it is important to clarify that piece.