ape4's comments | Hacker News

Would a practical approach be to parse the source into clang's AST format, then let clang produce the actual executable?

You'd more likely want to emit LLVM IR rather than try to match clang's internal AST. That's essentially what most new language projects do now (Rust, Swift, Zig all use LLVM as their backend). You get optimization passes and codegen for multiple architectures for free, and the IR is well-documented. The tradeoff is you skip learning about the backend, which is arguably the most interesting part.

Right that's what I meant ... the .ll file. Thanks
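For what it's worth, clang will take a hand-written .ll file directly as input. A minimal sketch (the filename and the exit code 42 are just markers, not anything from the thread):

```shell
# A tiny LLVM IR module: main() returns 42.
cat > hello.ll <<'EOF'
define i32 @main() {
  ret i32 42
}
EOF

# clang recognizes .ll input as LLVM IR and runs its normal
# optimization and codegen passes on it, then links as usual.
if command -v clang >/dev/null 2>&1; then
  clang hello.ll -o hello
  status=0
  ./hello || status=$?
  echo "exit status: $status"
fi
```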

Why don't they charge by the gigabyte?

Because approximately no one wants that. Anyone who does already uses S3 etc.

They do; it's called B2 and is another of their products.

I use them for the B2 bucket-style storage where this happens. It's expensive per gig compared to the cost of a working personal unlimited desktop account. I like to visit their reddit page occasionally and it's a constant stream of desktop client woes and stories of restore problems, and any time B2 is mentioned it's like "but muh 50 terabytes" lol

It's cheaper if you have multiple computers with normal amounts of data though. My whole family is on my B2 account (Duplicati backing up eight computers each to a separate bucket), and it's $10/month.

I guess some of the great symphonies don't count as "intellectual"?

I also nominate the invention of Clippy the friendly assistant.


Good on El Reg for doing some actual hands on fact finding.

I'd like to see some side-by-side before and after photos

I imagine it would be harder to make a design product that doesn't use pixels, but it needs to be done, because that's the right way to write CSS.

The music was trying too hard ;)

I don’t know, I like it.

You could make a local `man` page.

I have noticed I can make a less sharp sound with my bike bell by ringing it a certain way. I use this to let pedestrians know I am coming but that they don't have to jump out of the way.

If I recall correctly:

    dd if=/dev/urandom of=/home/myrandomfile bs=1 count=N

I just use fallocate to create a 1GB or 2GB file, depending on the total storage size. It has saved me twice now. I had a nasty issue with a docker container log quickly filling up the 1GB space before I could even identify the problem, causing the shell to break down and commands to fail. After that, I started creating a 2GB file.

Interesting! I did `fallocate -l 1G myfile` - very fast. It's all zeros, but it probably won't be compressed by the filesystem since it's created with the fallocate() system call.
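A minimal sketch of that reserve-file workflow (the path and size are just examples):

```shell
# Pre-allocate 1 GiB of emergency headroom. fallocate reserves the
# blocks without writing any data, so this returns almost instantly.
fallocate -l 1G /tmp/myreservefile

# The full size is allocated up front.
stat -c '%s' /tmp/myreservefile    # 1073741824

# Later, when the disk fills up and commands start failing:
# rm /tmp/myreservefile
```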

If you want to do it really quickly

    openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$(date '+%s')" < /dev/zero | dd of=/home/myrandomfile bs=1M count=1024
Almost all CPUs have native AES instructions, so you'll be able to produce pseudorandom junk really fast. Even my old system produces it at about 3 Gb/s, much faster than urandom can go.

That's very cool. Sadly, running that exact command produces an incomplete file and the error "error writing output file". It suggests adding iflag=fullblock (to dd). Running that makes a file of the correct size, but it still gives "error writing output file". I suppose that occurs because dd breaks the pipe.

Weird, I could have sworn that used to work, maybe I wrote the notes down wrong.

Easiest alternative I guess is to pipe through head. It still grumbles, but it does work

    openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$(date '+%s')" < /dev/zero | head -c 10M > foo

Fwiw you can also do this with

    head -c 1G /dev/urandom > /home/myrandomfile
And not have to remember dd's bizarre snowflake command syntax.

My choice has always been `shred`:

  $ sudo truncate --size 1G /emergency-space
  $ sudo shred /emergency-space

I find it widely available, even in tiny distros.

Very cool. Reading the man page I see, by default, shred does 3 iterations. Adding --iterations=1 makes it random enough for this purpose and faster.
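Put together, that looks like this (a 16 MiB file here so the sketch runs quickly; the path is hypothetical):

```shell
# Create a sparse placeholder of the desired size, then overwrite it
# with a single pass of pseudorandom data. shred leaves the file size
# unchanged.
truncate --size 16M /tmp/shredded-space
shred --iterations=1 /tmp/shredded-space

stat -c '%s' /tmp/shredded-space    # 16777216
```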

Terrific, thank you!

bs=1 is a recipe for waiting far longer than you have to, because of the overhead of the system calls. Better: bs=N count=1.

That’s also not great if you’re trying to make a 10 gigabyte file. In that case, use bs=1M and count=SizeInMB.
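Sketch (16 MiB here so it finishes quickly; scale count up for bigger files, e.g. count=10240 for 10 GiB):

```shell
# One 1 MiB read/write per iteration instead of one syscall per byte.
dd if=/dev/urandom of=/tmp/myrandomfile bs=1M count=16

stat -c '%s' /tmp/myrandomfile    # 16777216
```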

Modern computers are crazily overengineered...

Most current desktops (smaller than your usual server) won't have any problem with the GP's command. Yours is still better, of course.

