Personally, I wouldn't tie any part of the name to the underlying implementation language; we make software for users, and the source is usually not something users have to deal with. It sounds good just as it is, in my opinion.
Yeah, I agree with this; I don't care to emphasize the Rust thing. I actually had a previous incarnation of this project that I called 'room', but that's too generic.
Also, I don't care to have 'lambda' in the name, since the point here is that this is really a net-new implementation of the MOO runtime, not derived/forked from the LambdaMOO sources (though compatible, at least to start).
In fact, MOO itself is not necessarily something to emphasize either, because the idea (after 1.0) is to support other language runtimes running in the same shared persistent authoring environment. MOO support is just the initial 'hook'.
And I wouldn't emphasize the OO thing either. My intent is to explore a more logic/relational-oriented model going forward. And "OO" kind of means something different now than it did back when MOO started, before things ossified into the class-based dispatch model of Java/C++, etc.
It might just be that I need something entirely arbitrary and brand-ish.
And in the end, it feels good and fresh. Just like any other (meaningful) skill, decent social skills have a significant effect on one's awareness of their surroundings, and good social skills bring invaluable experiences. I struggled with this for the better part of my life, and for a long time I ignored the need to take action and improve. I've since discovered the subtle beauty of simply striking up a conversation with a completely new person; of course our perspectives and opinions might differ, but that's where I see the beauty. Without it, the world wouldn't be so dynamic, I think.
Such a transparent and clear post, so first of all: thank you! Based on my experience with proprietary IoT devices and protocols, vendors seem to be rather careless when it comes to potential security vulnerabilities and exploits in their protocols or device firmware. As I understand it, it's now all on us consumers to deliberately report findings to regulatory authorities and the relevant lawyers and hope everything comes together.
I'm not a mathematician, but I enjoy maths both as an anchor to an objective reality and as an art of expression (or rather, compression of axiomatic ideas into tight and elegant equations). A shift in the framework of thinking can radically change the viewpoints we have adopted so far (if not the other way around), though it often comes off as controversial.
It reminds me of my old high-school days, when I was intrigued by unconventional ways of doing maths; I often had the habit of mixing many different domains into one derivation (which people frequently interpreted as nonsense). I realize that even if I were "right", whether it actually is right or not depends on a subjective, mutual, shared convention.
> They feel strongly drawn to C's virtues, often explicitly in contrast to other languages
Can someone point me to the set of virtues being talked about here? Simplicity? Value semantics and an imperative style, with few abstractions over what the computer and OS are actually doing? Manual control over almost everything? Are such "virtues" safe and dignified in a universal context of computer programming? Isn't it a bit dangerous to let such virtues take over in modern programming, which deals with heavy abstraction layers precisely because of the complexity of modern hardware?
Last time I checked, Blink couldn't run dynamic executables (ELFs that run with the dynamic interpreter); they would segfault. How has it improved since then?
Blink really isn't intended to run the binaries that come with your distro, because you're already able to run them. Blink is intended to let you transplant x86-Linux executables onto non-x86-Linux systems.
For example, I like to build programs on my x86 Alpine Linux machine and then scp them onto my M1 MacBook, Raspberry Pi, FreeBSD, and Windows machines. Blink lets me run them once I do that. Copying program files only makes sense if the programs are static. Linux distro executables can't be distributed anywhere other than the distros that created them, unless a tool like Docker is used to recreate the whole distro.
Blink is for people who want to be able to distribute Linux software without having to be in the Linux Distributor business too.
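As a concrete (if minimal) sketch of that workflow, here's a tiny C program plus the kind of commands involved; the build/run invocations in the comments are illustrative assumptions, not Blink documentation:

```c
/* arch_probe.c -- minimal sketch of the "build once on x86 Linux, run
 * everywhere under Blink" idea described above.
 *
 * Hypothetical steps (exact commands/flags may differ):
 *     cc -static -Os -o arch_probe arch_probe.c   # on an x86-64 Alpine box
 *     scp arch_probe user@arm-machine:            # copy the static binary
 *     blink ./arch_probe                          # run it under emulation
 */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__)
    puts("compiled for x86_64");   /* still prints this under Blink on ARM,
                                      because Blink emulates the x86-64 ISA */
#elif defined(__aarch64__)
    puts("compiled for aarch64");
#else
    puts("compiled for some other architecture");
#endif
    return 0;
}
```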
If the binary is linked against an old enough version of glibc, it should run on most non-exotic Linux distributions, shouldn't it?
At least, I feel like I'm regularly installing things from people who are not in the Linux Distributor business and that are not huge Go binaries either (though admittedly, those seem to be trendy as well :) )
Yes... you can technically spin up a RHEL5 VM and compile your code using GCC 4.2.1. That's what I think the Python community still does for manylinux1. The problem with that is that GCC has gotten so much better in the last fifteen years that you might as well be programming with one hand. We shouldn't need to do that. There's nothing wrong with modern GCC 9+, which can produce backwards-compatible binaries just fine. The problem is the policy decisions made by the Glibc developers. So if you can avoid Glibc and dynamic linking, then distributing to many distros and operating systems while using modern tools becomes easy.
Those binaries might also just be statically linked with MUSL (at least, that's how I build my distro-agnostic Linux command-line tools). Same idea as Go binaries, though (except that I haven't noticed such binaries being particularly big compared to binaries that link against glibc - of course you need some 'meat' in the tool, not just a plain 'hello world', but the difference is measured in kilobytes, not megabytes).
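For anyone curious what that looks like in practice, here is a minimal sketch; the toolchain commands in the comments are assumptions about a typical setup (package names and wrapper scripts vary by distro):

```c
/* hello_static.c -- sketch of the "statically link against musl" approach.
 *
 * On Alpine (whose system libc is already musl), something like:
 *     gcc -static -Os -o hello_static hello_static.c
 * On a glibc-based distro with a musl toolchain installed, typically:
 *     musl-gcc -static -Os -o hello_static hello_static.c
 * Afterwards `ldd ./hello_static` should report that it is not a dynamic
 * executable, i.e. the binary carries no glibc version requirements.
 */
#include <stdio.h>

int main(void) {
    printf("statically linked: no libc .so needed at runtime\n");
    return 0;
}
```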
Automatic memory management is indeed the first thing one needs to consider when writing performance-critical software; it's the first item on my checklist. But
> in-memory storage of databases
Doesn't that sound a bit expensive, needing large-capacity memory? Granted, the cost of read/write I/O is far lower for in-memory analysis, but is such a trade-off worth it?
Excellent observation/question :D It depends; sometimes it's worth it, and sometimes it isn't (as always with trade-offs). Graphs are a bit specific because most traversals and expensive graph analytics like PageRank touch the whole graph (even multiple times) -> the entire graph will end up in memory anyway -> so why not keep it in memory for faster performance?
But for a vast dataset, the hardware cost might be too much. I think we are aware of the trade-off. We'll probably provide a disk-first storage option at some point because that's definitely a valid setup (sometimes the only possible setup). Of course, we'll invest time in making it as performant as possible.
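To make the "touches the whole graph, multiple times" point concrete, here is a toy PageRank power iteration in C (purely illustrative, nothing to do with Memgraph's actual implementation); the hot loop scans every edge on every iteration, which is exactly the access pattern that rewards a memory-resident graph:

```c
/* pagerank_sketch.c -- toy illustration of why whole-graph analytics
 * favor keeping the graph in memory: every iteration walks the complete
 * edge list, and the loop typically runs many times before converging.
 */
#include <stdio.h>

typedef struct { int src, dst; } Edge;

int main(void) {
    enum { N = 4, ITERS = 20 };
    const double d = 0.85;                    /* damping factor */
    /* tiny hard-coded graph: 0->1, 0->2, 1->2, 2->0, 3->2 */
    Edge edges[] = {{0,1},{0,2},{1,2},{2,0},{3,2}};
    int m = sizeof edges / sizeof edges[0];

    int outdeg[N] = {0};
    for (int e = 0; e < m; e++) outdeg[edges[e].src]++;

    double rank[N], next[N];
    for (int v = 0; v < N; v++) rank[v] = 1.0 / N;

    for (int it = 0; it < ITERS; it++) {
        for (int v = 0; v < N; v++) next[v] = (1.0 - d) / N;
        /* the hot loop: touches every edge, every single iteration */
        for (int e = 0; e < m; e++)
            next[edges[e].dst] += d * rank[edges[e].src] / outdeg[edges[e].src];
        for (int v = 0; v < N; v++) rank[v] = next[v];
    }

    for (int v = 0; v < N; v++) printf("node %d: %.4f\n", v, rank[v]);
    return 0;
}
```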
If a large graph needs to be read multiple times, then sure, memory bandwidth will yield the best possible performance for this kind of workload, like running PageRank (and going further with optimization techniques for memory allocation and management will boost performance even more).
So to my understanding (and a novice one at that), the graph should be stored on disk first, and initializing the objects would be a one-time copy into volatile memory. But I wonder: memory regions are more likely to suffer faults and become corrupted, so would a graph stored in memory be completely lost? (Unless the results are being saved to disk at specific intervals?) Does that make any sense?
I'm not sure I understand the part about corruption. How would data in memory become corrupted?
The way Memgraph currently works is that it stores data in memory and asynchronously writes data to disk in small chunks called deltas; later, these chunks are deleted and replaced with a snapshot of the whole graph (there is also a sync option, but that's slower in terms of committing a transaction and letting the user know the data is written; RocksDB works similarly, for example). All the disk-related machinery is purely for durability (recovery after the Memgraph process restarts), and all interactions with the disk happen automatically in the background, both during standard runtime and at startup.
> it stores data in memory and asynchronously writes data to disk in small chunks called deltas; later, these chunks are deleted and replaced with a snapshot of the whole graph
Thanks, that fairly answers my question about the recoverability of in-memory graphs.
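For readers following along, here is a rough, single-threaded C sketch of that "delta log now, snapshot later" durability pattern. It is emphatically not Memgraph's code: the file names, record format, and flat key/value store are invented for illustration, error handling is omitted, and the real system writes deltas asynchronously against a graph store:

```c
/* durability_sketch.c -- simplified sketch of the durability scheme
 * described above: mutate memory, append a delta record to a log, and
 * periodically replace the accumulated deltas with a full snapshot.
 */
#include <stdio.h>

#define NKEYS 8

static int store[NKEYS];                 /* the "in-memory database" */

/* append one delta record (key, new value) to the delta log */
static void log_delta(FILE *wal, int key, int value) {
    fprintf(wal, "SET %d %d\n", key, value);
    fflush(wal);                         /* the sync option would also fsync here */
}

/* write a full snapshot of current memory state, then truncate the log */
static FILE *compact_to_snapshot(FILE *wal) {
    FILE *snap = fopen("snapshot.db", "w");
    for (int k = 0; k < NKEYS; k++)
        fprintf(snap, "%d %d\n", k, store[k]);
    fclose(snap);

    fclose(wal);                         /* deltas are now redundant...   */
    return fopen("deltas.wal", "w");     /* ...so the log can be emptied  */
}

int main(void) {
    FILE *wal = fopen("deltas.wal", "a");

    /* normal operation: mutate memory first, then record the delta */
    for (int i = 0; i < 20; i++) {
        int key = i % NKEYS, value = i * 10;
        store[key] = value;
        log_delta(wal, key, value);

        if ((i + 1) % 8 == 0)            /* every so often, compact */
            wal = compact_to_snapshot(wal);
    }

    fclose(wal);
    /* recovery (not shown) would load snapshot.db, then replay deltas.wal */
    return 0;
}
```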
Rather than "evil", I would call it a little too much confidence in statistical models running on black-box computers without checking for edge cases of false inference (which are always plausible).