Hacker News | zkms's favorites

Special Circumstances, LLC | Remote | Intern / Contract | Front-End Engineer

What if org-mode was an actual language?

What if Jupyter notebooks had legible source code?

What if Donald Knuth was right about literate programming being the future of the profession?

Special Circumstances† is looking for a talented front-end engineer with a full-stack mentality to build tools for building tools. Points for experience with Lua(JIT), ZeroMQ, SQLite, and Parsing Expression Grammars. The browser-facing part of the project is greenfield; we're hoping to use a minimalist VDOM framework, but will consider Vue or React if you're persuasive. ClojureScript, TypeScript, Elm? Talk me into it.

If you know your way around langsec, that's a plus.

The position is contract with the potential for full-time employment. All the software you write will be released under a FOSS license. We're located all over the world; if this sounds like you, reach out! sam@special-circumstanc.es, put [hn] in the subject line.

†An intelligence agency from the post-scarcity future, sent back in time to bootstrap itself into existence.


Hey Jesse,

I'm also formerly homeless, and now the cofounder of https://LambdaSchool.com (YC S17). We have a nonprofit fund designed specifically to help out homeless software engineers looking for a job.

I'd love to put you in a cheap Airbnb or apartment and hook you up with Lambda School's career services team, who will help review your resume, get your GitHub and portfolio in shape, practice interviewing, and help you land a job. We've already seen multiple success stories (for example David https://www.kron4.com/news/bay-area/homeless-man-handing-out..., who is now making almost $100k/yr).

Generous donors have allowed us to continue to grow the fund after Lambda School's original $50,000 contribution.

I know how hard it is to focus on getting a job when you're just trying to survive, so let's try to eliminate that distraction. Hit me up at austen@lambdaschool.com and let's help you focus and get to where you need to be.

PS if anyone on HN wants to donate to the fund it's here - https://www.gofundme.com/lambda-perpetual-access-fund. Generally we can help someone who knows how to write code go from homeless to hired with a couple thousand dollars all-in.


Can people here please stop posting that ZFS needs ECC memory. Every filesystem, whether FAT, NTFS, or ext4, runs more safely with ECC memory. ZFS is actually one of the few that is still relatively safe even without ECC memory. Source: Matthew Ahrens himself: https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...

In an old discussion regarding ECC/ZFS (in particular, whether hitting bad RAM while scrubbing could corrupt more and more data), user XorNot kindly took a look at the ZFS source and wrote:

"In fact I'm looking at the RAID-Z code right now. This scenario would be literally impossible because the code keeps everything read from the disk in memory in separate buffers - i.e. reconstructed data and bad data do not occupy or reuse the same memory space, and are concurrently allocated. The parity data is itself checksummed, as ZFS assumes it might be reading bad parity by default."

His full comment can be found here:

https://news.ycombinator.com/item?id=8294434


Read data -> compute checksum -> check against on-disk checksum -> rebuild from parity -> repeat.
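That pipeline can be sketched in a few lines of Python (purely illustrative; the function names and the use of SHA-256 are my assumptions here, not the actual ZFS implementation, which uses fletcher4 or sha256 per dataset):

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for the block checksum; SHA-256 is an assumption, not
    # necessarily what a given ZFS dataset is configured to use.
    return hashlib.sha256(data).digest()

def read_block(raw: bytes, stored_sum: bytes, reconstructions) -> bytes:
    """Verify a block against its on-disk checksum; on mismatch, try each
    candidate rebuilt from parity until one verifies."""
    if checksum(raw) == stored_sum:
        return raw
    for candidate in reconstructions:
        # Each candidate lives in its own buffer; the bad read is untouched.
        if checksum(candidate) == stored_sum:
            return candidate
    raise IOError("unrecoverable block: no reconstruction matched checksum")
```

The key property is that a reconstruction is only accepted once it independently verifies against the stored checksum, so a bad rebuild can't silently replace good data.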

But this also wouldn't be a random failure. ECC RAM with a consistent defect would give you the same issue. It would also proceed to destroy your regular filesystem/data/disk by messing up inode numbers, pointers, etc.

Your scenario would require an absurd number of precision defects: ZFS would have to always be handed the exact same block of memory, where it always stores only the data currently being compared, and then only ever uses that memory location to store the rebuilt data. And then also probably something to do with wiping out the parity blocks in a specific order.

This is a weird precision scenario. That's not a random defect (a bit-flip, which is what ECC fixes) - that's a systematic error. And I'd be amazed if the odds of it were higher than the odds of a hash collision, given the number of things you're requiring to go absolutely right to generate it.

EDIT: In fact I'm looking at the RAID-Z code right now. This scenario would be literally impossible because the code keeps everything read from the disk in memory in separate buffers - i.e. reconstructed data and bad data do not occupy or reuse the same memory space, and are concurrently allocated. The parity data is itself checksummed, as ZFS assumes it might be reading bad parity by default.
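The separate-buffers point can be illustrated with a toy single-parity reconstruction (a sketch of the idea only, not the actual RAID-Z code; names are hypothetical):

```python
def reconstruct_column(surviving_columns, parity: bytes) -> bytes:
    """Rebuild one missing RAID-Z1-style data column by XOR.

    The result goes into a freshly allocated buffer: the reconstructed
    data never reuses or overwrites the buffers holding the (possibly
    bad) data that was read from disk.
    """
    out = bytearray(parity)  # new buffer, seeded with the parity bytes
    for col in surviving_columns:
        for i, b in enumerate(col):
            out[i] ^= b
    return bytes(out)
```

Because the rebuilt column lands in its own buffer and is then checksummed, corrupt reads and reconstructed data can be compared without one clobbering the other.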

