Could it get better than that?
We're ready for the revolution!
To all you naysayers: we'll see who gets the last laugh 10 years from now...
Then you'll modify it to be, "Have you heard of anyone running BrainFuck in production other than Nick P's BrainFuck-as-a-Service platform of questionable longevity?"
You know what, since we're on HN, let's just apply to the next round of YC. They'd be crazy not to fund the first BFaaS in human history!
I already have a logo in mind. A cloud surrounding a brain, and inside the brain is "the finger".
Fast-forward a bit and it becomes a post-mortem, a legacy system, or gets acquired by Novell as their entry into AI. They promise it will be the success of NetWare all over again. The audience at the conference pauses at the ambiguity of that statement, unsure whether to cheer or charge out the door.
But it looks like there is a legitimate and interesting shift towards running stuff on FPGAs. Google, FB, and MS are all doing something with ML hardware acceleration.
I actually attended a talk a few weeks back by Microsoft's Doug Burger. He has been leading a team that has created a low-latency FPGA network to accelerate stuff within MS. The eventual goal is to allow customers to take advantage of this distributed FPGA fabric to run custom firmware.
He said that FPGAs now run several of Bing's core search algorithms and Azure has some stuff running on FPGAs too. I forgot the exact performance gains, but it was somewhere around 2x for Bing with extremely stable response time even at insane server loads.
One interesting factoid is that they were able to translate Wikipedia in its entirety from English to Russian using 90% of the currently deployed FPGAs in around 100 ms. Insane stuff.
Second, this jumps out at me: "The first issue is that the problem instances where the comparison is being done are basically for the problem of simulating the D-Wave machine itself. There were $150 million dollars that went into designing this special-purpose hardware for this D-Wave machine and making it as fast as possible. So in some sense, it’s no surprise that this special-purpose hardware could get a constant-factor speedup over a classical computer for the problem of simulating itself."
Actually gives me an idea. Instead of a comparison to BF competitors, I could just compare a massively-parallel BF CPU to 256 interpreters communicating with each other through IPC on a general-purpose computer. I'd show the CPU performed many times better. It's the closest thing I can think of to how D-Wave is doing its benchmarking. The difference is that $150 million is in neither my bank account nor my transaction history.
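For a feel of what that classical baseline could look like, here's a minimal sketch, not any real benchmark: N BF interpreters as separate processes, chained with pipes so each node's ',' reads from its left neighbor and '.' writes to its right. The node count, the per-node program, and the port wiring are all made up for illustration.

    /* Hypothetical "classical baseline": a chain of BF interpreter processes
     * talking over pipes. Plain POSIX C; nothing here is anyone's real code. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NODES 8    /* 256 in the thought experiment; 8 keeps the demo cheap */
    #define TAPE  64   /* tiny tape per interpreter */

    /* Run one BF program, with fd_in/fd_out acting as the ','/'.' ports. */
    static void bf_run(const char *prog, int fd_in, int fd_out)
    {
        unsigned char tape[TAPE] = {0};
        int ptr = 0;
        for (const char *pc = prog; *pc; pc++) {
            switch (*pc) {
            case '>': ptr = (ptr + 1) % TAPE;        break;
            case '<': ptr = (ptr + TAPE - 1) % TAPE; break;
            case '+': tape[ptr]++;                   break;
            case '-': tape[ptr]--;                   break;
            case '.': write(fd_out, &tape[ptr], 1);  break;
            case ',': if (read(fd_in, &tape[ptr], 1) != 1) tape[ptr] = 0; break;
            case '[':                                /* skip to matching ']' if cell is 0 */
                if (!tape[ptr]) {
                    for (int d = 1; d; ) { pc++; if (*pc == '[') d++; if (*pc == ']') d--; }
                }
                break;
            case ']':                                /* jump back to matching '[' if cell != 0 */
                if (tape[ptr]) {
                    for (int d = 1; d; ) { pc--; if (*pc == ']') d++; if (*pc == '[') d--; }
                }
                break;
            }
        }
    }

    int main(void)
    {
        const char *node_prog = ",+.";   /* each node: read a byte, add one, pass it on */
        int pipes[NODES + 1][2];
        for (int i = 0; i <= NODES; i++) pipe(pipes[i]);

        for (int i = 0; i < NODES; i++) {
            if (fork() == 0) {           /* child i: left pipe in, right pipe out */
                bf_run(node_prog, pipes[i][0], pipes[i + 1][1]);
                _exit(0);
            }
        }
        unsigned char v = 0;
        write(pipes[0][1], &v, 1);       /* inject a 0 at the head of the chain */
        read(pipes[NODES][0], &v, 1);    /* collect the result at the tail */
        printf("value after %d nodes: %d\n", NODES, v);
        for (int i = 0; i < NODES; i++) wait(NULL);
        return 0;
    }

Wrap that in a timer, compare against whatever the massively-parallel CPU does per chain hop, and you've got a benchmark exactly as fair as D-Wave's.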
"One interesting factoid is that they were able to translate Wikipedia in its entirety from English to Russian using 90% of the currently deployed FPGAs in around 100 ms. Insane stuff."
Didn't know about that project. Pretty cool. Yeah, the FPGA projects have been doing all kinds of stuff like that going back to at least the 90's from my reading. The speedups could be over fifty-fold. Some claimed three digits. Other programs that are harder to parallelize & reduce... which is basically what they do on FPGAs... might see under a 100% speedup, a tiny speedup, or even a loss if it's a sequential algorithm up against an ultra-optimized, sequential CPU like Intel's.

The latest work, which I believe started with 90's projects, is software that automatically synthesizes FPGA logic from the fast path of an application written in a high-level language, then glues it back into the regular application running on a regular CPU. You can't get the speedup of an actual hardware design, but it makes the boost easier to obtain if the problem supports good synthesis. Tensilica is another example: a company whose Xtensa CPU is customized... from the CPU to the compilation toolchain... to fit your application. Container people compile and deliver containers. Tensilica compiles and delivers apps with a custom CPU.
That's a great expansion, it being undocumented. Worked wonders for Microsoft's strategy of lock-in. We're going to have to make sure the CPU and platform's APIs are considered copyrighted on top of that, so they might not be... legally allowed... to clone or port it without paying us. We could collect royalties on a BrainFuck CPU for our lifetime plus an arbitrary number of years decided by Congress reps collecting bribes.
I'm not sure it's worthy of a Bond villain but it's a start. ;)
Unfortunately, you can't outrun the hidden assumptions behind software's correct functioning forever. There will come a time, much like Y2K for COBOL, when their critical database will just lose everything if they don't make an internal change that requires understanding all the Rust, BrainFuck, Verilog, and analog components I used "because I was learning analog at the time."
I can't predict what they will do facing such a situation. I can tell you to start a distillery of fine Scotch that sends flyers to them around the time of the feasibility study. You'll make a killing. :)
Honestly, it'd probably be worth the effort to just write the Rust up front. The kind of technical debt you'd land in otherwise... euurgh.
That's actually the point of my plan. :P
Seems legitimate to me. Seriously, why not explore the extremes? That's where we often learn the most.
I suspect many here think this is a joke, but although it does have its funny side, this definitely warrants the research IMHO
Surely, phrases such as "We observed the following Instructions per Second (IPC)" and "assuming a perfect 1 instruction per second on the general purpose processor" should have activated some neurons in somebody's brain, even assuming it got distorted by doing this research?
Anyways, can we have a version of TIS-100 where you write brainfuck with added message sending primitives instead of the TIS-100 asm? Of course, each BF interpreter would be working with only, say, 2 or 3 cells of RAM.
I want to see either that, or a TIS-100 asm compiler for GreenArray chips. Or both.
Moore and your GreenArray chips: be afraid! Something even more incomprehensible is coming after your market share!
In some makeshift syntax and with one instruction too many, I got:
>, < = inc, dec al
+, - = inc, dec [al]
. = out output_port, [al]
, = in [al], input_port
[ = Label: cmp [al], #0
jz (Labelend + sizeof (jmp instruction))
] = Labelend: jmp Label
I therefore suspect a 3-bit encoding would be more in the spirit of BrainFuck. :)
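If it helps to see the whole mapping in one place, here's a rough sketch of a one-pass translator that emits it. The mnemonics mirror the makeshift syntax above rather than any real assembler, the labels stand in for the "Labelend + sizeof(jmp)" arithmetic, and the sample program is just for illustration.

    /* Toy BF-to-pseudo-asm emitter matching the mapping above. Not a real
     * assembler syntax; labels replace the "Labelend + sizeof(jmp)" trick. */
    #include <stdio.h>

    int main(void)
    {
        const char *bf = "++[->+<]";     /* example: move cell 0 into cell 1 */
        int label = 0, stack[64], sp = 0;

        for (const char *pc = bf; *pc; pc++) {
            switch (*pc) {
            case '>': puts("      inc al");                break;
            case '<': puts("      dec al");                break;
            case '+': puts("      inc [al]");              break;
            case '-': puts("      dec [al]");              break;
            case '.': puts("      out output_port, [al]"); break;
            case ',': puts("      in  [al], input_port");  break;
            case '[':
                stack[sp++] = label;
                printf("L%d:   cmp [al], #0\n", label);
                printf("      jz  L%d_end\n", label);      /* skip loop when cell is 0 */
                label++;
                break;
            case ']': {
                int l = stack[--sp];
                printf("      jmp L%d\n", l);
                printf("L%d_end:\n", l);                   /* = Labelend + sizeof(jmp) */
                break;
            }
            }
        }
        return 0;
    }

Encode those eight cases in 3 bits each and you've got the instruction format for the CPU.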
Fun fact: systemd now has an entire command devoted to setting locale, something typically done through simple text files.
Lennart, just stop writing executables that do almost nothing. Despite what you may think, we don't all have infinite disk space.
Not much, you could fit a thousand copies of this in 336 MB.
- 197 copies of nethack
- 10% of DarkPlaces
- 36% of GCC
- 44% of ZSH
- 30 copies of CPython (just the executable, of course: that's all you counted)
- 10% of ZSNES
- 1.36 copies of XScreenSaver
- 42% of Teeworlds
All of these are more valuable uses of my precious disk space, because all of them actually do something: They actually give me capabilities I previously didn't have.
So, you get a whole OS with some utilities in between 1/3 and 1/2 of the SystemD binary. Also worth noting Oberon includes safety checks, too. ;)
However, from what I've seen, it's a bit too verbose for my tastes (the language, not the OS).
So, that's a quick review. The main one you'll find is the older system, while A2 Bluebottle is the latest. Runs fast, too.
Yeah, it does look really cool, just verbose. And you say it's easy to implement....
Maybe I can get it running on my gameboy?
I kid. Sort of.
It is a compiler and OS. Easy means a different thing in this context versus average usage in software. I'd say vastly easier than trying to understand GCC or Linux. How about that? Also, the original version was done by 2 people and some change. Each port of the compiler was done by 1-2 people in a relatively short time. Mostly students with basic knowledge of CompSci. It helps that it's well-documented.
So, it's not as easy as throwing together a web app, but it can't be ridiculously hard if you take it a piece at a time. The use I had for it, other than learning or pleasure, would be for subversion-resistant, verified-to-assembly builds. It's super easy to learn Oberon, and the OS itself is straightforward. People could code it up in a local language, the compiler too, compile those (or hand-translate to ASM), and bootstrap into a trusted environment. That can be used to produce the rest, with compilers built on top in a memory-safe language that handles foreign code more safely. Better, there are no patent suits or anything on Wirth-based tech like the ones .NET or Java might get you.
Other than Oberon system, Modula-3 (not Wirth) and Component Pascal (Wirth et al) are most worthwhile to check out in terms of practical languages. BlackBox Component Builder is still in active use with Component Pascal, esp in Russia and Europe. They love it over there since it's got OOP & GUI with Oberon simplicity & safety.
Then why not finish off with PICL, the language/compiler for the PIC16? Includes the uploader. All in <700 lines of Oberon code. The best part is the amazing tutorials/documentation. Some amazing finds at Prof. Wirth's personal site. https://www.inf.ethz.ch/personal/wirth/
This has to be the most jam-packed tutorial you could ever hope for:
Once again, Oberon looks neat. I am usually not a fan of the Wirthian languages, so I might not enjoy it in the same way I enjoy, say, Python, or Scheme, but it looks interesting.
You and I appear to be more similar than I thought on these kinds of things, haha.
"Oberon looks neat. I am usually not a fan of the Wirthian languages, so I might not enjoy it in the same way I enjoy, say, Python, or Scheme, but it looks interesting."
In your case, rebooting PreScheme to do a small OS like Oberon or a clone of it might be a better take. There are already books on quickly putting together a Scheme compiler. The PreScheme and VLISP papers are pretty detailed. Include some safety features from Clay (C-like) and Carp (Lisp) with Scheme's macros and simplicity. Mock up the assembly language in it so you can code & test that in Scheme too, with an extraction process to the real thing.
That combo seems like it would work better for you, plus it results in at least one deliverable: a PreScheme implementation for system programming whose code gets compiled with GCC or LLVM. You might find that useful over time, especially if you made the syntax compatible with one of your typical Scheme implementations to port those libraries over easily. Split it between GC'd code and non-GC'd code like Modula-3 did.
Speaking of which, I should probably consider doing that project I was thinking of doing for a while: port SCSH to some other scheme implementations which are actually alive.
So yeah, the trick with doing a PreScheme project is that I'd first have to build a PreScheme compiler (with added safety, etc.) and then I'd have to build one that can run on bare metal. The former ought to be possible, especially if I'm targeting C or LLVM (some bits might get a bit rough, but most of Scheme is pretty easy to implement). The latter would perhaps be possible, but making use of it would likely require a more in-depth knowledge of the hardware than I currently possess.
You know, I was joking about porting Oberon to the gameboy, but the GBA is a really well-defined piece of hardware with readily available tooling (which I actually have, because you need it for LSDJ), and a good deal less complexity than a modern x86 machine...
Man, why can't I just write videogames, like the normal people?
Well, my current side project is writing a version of the Tafl-inspired board game Thud that you can play over a network, so I guess I'm already doing that...
That's what they did originally. Should work.
" but the GBA is a really well-defined piece of hardware"
Then do a PreScheme wrapper on it supporting inline assembly or calls to assembly for performance-critical routines. See if you can make the primitives on the bottom close to how the hardware works for efficiency. The AI research of the 80's indicates it might be able to handle a board game.
I checked with ldd; it doesn't have any other systemd dependencies as far as I can see.
I'm saying that relative to disk space today, which is on the order of hundreds of gigabytes, the disk usage is not a concern.
And yes, disk usage is a concern. Because those kilobytes add up fast, and I've only got so much space.
And finally, the point isn't merely that it takes up disk: the point is that it's worthless. The old solution to that problem, having init set locale on boot from a config file, worked fine. I don't mind disk use that much. But I do mind pointless software.
All code has bugs: to minimize bugs, write less code.
EDIT: Read Godot. That was hilarious but I kept seeing too much reality in it.
There's quite a bit of reality in the article, but not quite enough to depress me. Maybe you've seen more of this sort of thing than I have.
On a daily basis, the worst I have to deal with is SystemD, which is creatively incompetent at best, and downright insane at worst. But it's got a pretty face, and the draw of easy-to-understand unit files (over, say, SysVinit), so people don't realize the depths of the madness that lies beneath.
Something that can readily be observed in action with systemd.
Reading through it, you can tell the author wrote it with a knowing smile.
(I followed up an interview with a link to a brainfuck interpreter I did in Haskell: https://github.com/serprex/bfhs. I got a 2nd interview, but no offer.)