
Impending kOS (2014) - tosh
http://archive.vector.org.uk/art10501320
======
jnordwick
"As the code base improved, kdb+ releases became faster – and smaller."

This is what I always strive for. Although I've only met Arthur a couple of
times, he has been a huge influence on me as a developer. Besides some shared
personal interests, I've taken to heart his development philosophy that
simplicity and consistency produce performance and malleability (the ability
to constantly add, subtract, or rewrite).

He's one of the top 3 developers I try to model myself after. While he's a god
among men, his products (A+, K, KDB, Q, and now Shakti) make mortals into gods
among men too, just by learning to think with and use his tools. I'm sure my
co-workers get annoyed at my code reviews ("delete this", "rewrite the first
with that, then delete the second", "delete all this and return this", ...)
where I consistently call for deleting and remodeling code to be as minimal as
needed. I'm brutal about it - and since I learned it from K, it is partially
Arthur's fault (along with another dev I look up to). Maybe now they know
where I picked part of it up.

[https://shakti.com/](https://shakti.com/) is his new distributed database
project that is still somewhat hush-hush (and he's not the kind of person who
plays buzzword bingo, so I assume he has something special planned). I cannot
wait to see how it turns out. I don't know if he's ever produced a bad
product.

~~~
gmueckl
Remember that as a reviewer, you always have an advantage over the coder: as
the coder, you need to crack the problem and find an algorithmic solution.
It's easy to have a couple of false starts with this. These do stick around
through iterations and changes. Eventually, you submit something that works,
but you overlook that the code could probably be less convoluted in places,
because you know the solution and it all looks perfectly reasonable to you.

The reviewer who gets to read the code gets to walk the shortcut to the
solution after the trail was blazed for them: they get to think purely about
the working solution without the intermediate steps. And they naturally will
find places where the original developer was thinking in terms of their failed
earlier attempts.

I bet that if I gave you a non-trivial problem to solve in a reasonable amount
of time I could do the same kind of review on your code, forcing you to
shorten it. And I _will_ find something. Guaranteed. That is just how these
iterations work.

~~~
jodrellblank
_I bet that [..] I could do the same [..] And I will find something.
Guaranteed._

Why are you framing this as though it's adversarial and the parent comment is
stating some kind of superiority, and you're here to turn it around and put
them in their place?

Isn't teamwork all about doing better than one person working alone?

~~~
gmueckl
My take on the parent comment is that it has an adversarial tone and a feeling
of superiority to it. And I don't like that. I agree that in the end it is not
a competition; I didn't think I would have to point that out.

In fact, I am in the middle of fixing up code that I wrote as novel research
because the powers that be want it in a production system. The code review I
got was detailed and - frankly - embarrassing at times. All the uncertainties
and meanderings of research had me change pieces of code over and over and
over again, to the point where remnants of old approaches no longer stood out
to me, and where I had at times built on top of bad decisions because they
were meant as temporary hacks and experiments but worked well enough and
stayed in. By the time I submitted the code for review, I had become blind to
these mistakes. I know how to write good and clean code and I get through code
reviews easily _when I know what I am doing_. Not this time, though.

------
userbinator
APL and the culture surrounding it is what you get if you assume that
programming languages are, like human languages, meant to be learned and fully
exploited, unlike the lowest-common-denominator dumbing down that's more
common in mainstream programming languages. I find it very interesting how
much people can learn if they try.

~~~
smikhanov
What in particular stops you from learning and fully exploiting the C
programming language?

~~~
userbinator
The notion that code should be dumbed down --- it's not as prevalent in C as
it is in C# and Java, but definitely still there.

------
anonu
Wish there was an open source version of kdb and q.

Kdb is fantastic as an app server framework. The q language adds just enough
syntax help on top of cryptic APL to make it useful to a mass audience.

We need a free and open project that replicates this functionality.

~~~
3xblah
I don't like the fact that it is closed source but not having source code does
not stop me from utilising it on a personal basis. I think it makes sense to
use it in this capacity and learn the language as best one can in the event
that an open source option besides kona does someday become available. k is
one of the very few closed source programs I use.

It is interesting how the closed source issue is consistently raised on HN
with respect to k, when there are numerous other examples of closed source
software used by HN readers where I never see this issue raised. In those
cases it does not seem to be important. For some reason, however, the
availability of source code seems to be important to some folks in the case of
k.

I am curious: what would you do with the source code? Is it an issue of
verifying what the software is doing, or perhaps being able to modify the
software or make "fixes"? Is it something else? Maybe you simply want a free
option for commercial use. If the latter, please disregard my curiosity.

For me, the issue I have with k being closed source has always been ports,
i.e., limited OS support. They used to have a FreeBSD port many years ago, but
today there is no support for OSes other than Windows/Solaris/Linux - no
NetBSD, for example.

~~~
adrianN
I'd never use a closed source programming language because it makes me
dependent on the vendor. With FOSS, if the original company goes away I can
pay someone else to keep the software running on a modern computer. With
closed source software I don't have that option.

~~~
3xblah
Not disagreeing with you, but in addition to the more recent versions, I run
an old version of k from around 2000, and it keeps running even with the
latest kernels and hardware, thanks to the availability of legacy Debian
libraries.

~~~
adrianN
At some point in my career I was forced to use a closed source compiler from
the early nineties. With each new version of Windows it got harder to keep it
running. We also discovered bugs in the compiler and had to rewrite our code
to avoid them. I quit that job eventually, but I imagine that in one or two
more Windows iterations the developers will be forced to compile inside a VM.

------
adrianN
I too wish we could get rid of all the bloat and use the marvelous machines at
our fingertips to their full potential. But I seriously doubt that you can
write a bloat-free OS that runs on multiple brands of laptops, with a
bloat-free web browser that lets me read most websites, with a small team in a
reasonable amount of time, no matter how brilliant the people involved are.

~~~
AnIdiotOnTheNet
As others have mentioned, a bloat-free web browser is impossible currently
because the web itself is far too bloated.

However, I'm not convinced that our OSes can't be much, much simpler. You can,
almost reasonably, still use DOS as a daily driver. I have very fond memories
of DOS because it was simple. Even as late as the early 2000s, if you wanted
something like a MAME cabinet you'd often use DOS. You just plopped some
application and data folders onto a disk with like 4 system files and you were
pretty much done.

Where is today's DOS? I suppose it's Linux, which is orders of magnitude more
complicated (especially once you start talking about a GUI) and even Linus
admits it has become quite bloated. I personally think we can do a lot better
and I've been toying with the idea of putting something together because I'm
just so sick of modern computing's bullshit.

PS: If anyone is aware of an organized effort towards this goal that already
exists, please let me know.

~~~
cturner
Interested. I have been stumbling around on this for years. Old comment,
[https://news.ycombinator.com/item?id=6972431](https://news.ycombinator.com/item?id=6972431)

I like your rule-of-thumb that you should not need a package manager.

Muratori has a theory that USB has made the OS scene complex
([https://www.youtube.com/watch?v=kZRE7HIO3vk&t=1350s](https://www.youtube.com/watch?v=kZRE7HIO3vk&t=1350s)).

Two other things - the browser and TCP/IP.

Goal: a system where you can fit the whole stack in your head, yet participate
in a networked world.

Approach A. Outline a reference hardware platform. Port NetBSD or minix3 or
plan9. Then simplify: e.g., if Unix, get rid of users and groups, get rid of
X, get rid of package mgmt, get rid of NFS. You could bootstrap this in qemu.

Approach B. Establish an alternate browser. Something like gopher, but with
async events between the 'page' and the user. This allows for chat and form
manipulation. You can send non textual content through this async link -
video, audio. Codecs are a trap, they lead to pkg management and complexity.
How to stop proliferation?

~~~
AnIdiotOnTheNet
I think Casey hit the nail on the head here. We used to have hardware with
standardized ABIs, some of which had extensions (like Tandy graphics compared
to CGA). That made it relatively trivial to make OSes, which is essentially
what PC booter games did. DOS itself barely abstracted anything at all,
because you didn't need drivers for standardized hardware ABIs.

Unfortunately the only way to deal with that is to give up on the idea of
running on a majority of existing PC hardware and target only a small subset
of commodity hardware. Ideally it would all be open, to ensure it isn't going
anywhere for a while, but sadly there is no such system.

I'm glad to see I'm not the only one who thinks Users and Groups are the wrong
abstraction for systems like this.

------
traverseda
Here is a text editor written in k

    
    
    \l view.k

    /key(return back ^delete)
    kx:kr:{u,:,(j,j+#x;*k_a);e[k]x};kb:{$[=/k;J j-1 0;];kx""};cd:{J j+!2;kx""}

    /edit(undo cut paste)
    e:{a::?[a;x;y];J(*x)+#y};cz:{$[#u;e/_`u;]};cx:{kx cc`};cv:{kx@9'`}

    /save
    cs:{n::$[#n;n;0'""]1:a;r::a}

[http://kparc.com/$/](http://kparc.com/$/)

It's not something I find appealing, personally. Then again I am a python guy,
which is about as far from this as you can get.

~~~
all2
[http://kparc.com/lisp.txt](http://kparc.com/lisp.txt)

Is he saying k is a lisp? I'm sure there are important differences here, but I
don't know enough to say what they are. (There is a question implied there.)

~~~
Volt
It's a comparison to Lisp.

------
pvg
Previously (in 2014)
[https://news.ycombinator.com/item?id=8475809](https://news.ycombinator.com/item?id=8475809)

------
dchest
Found this demo of kOS
[https://www.youtube.com/watch?v=kTrOg19gzP4](https://www.youtube.com/watch?v=kTrOg19gzP4)

------
undecidabot
For those who want to see it in action, there's actually a talk about kOS on
YouTube:
[https://www.youtube.com/watch?v=kTrOg19gzP4](https://www.youtube.com/watch?v=kTrOg19gzP4).

The talk is by the fourth member, Geo, who also frequents HN (geocar).

------
riffraff
so, what happened with kOS since 2014?

~~~
jnordwick
[https://shakti.com/](https://shakti.com/)

the new project. Arthur has a tendency to turn mid-stream sometimes. I'm sure
much of that code was reused in this new DB he's working on (DBs and OSes are
quite similar).

~~~
edwintorok
I thought this refers to a RISC-V processor
([https://shakti.org.in/](https://shakti.org.in/)), which would've been quite
a turn indeed.

------
andrewflnr
Wow, that's... terrifying? I feel vaguely terrified. It makes me feel like a
big dinosaur about to be supplanted by tiny birds and mammals.

------
aasasd
I'm guessing kOS will be released at about the same time as the first article
about K that actually says how it achieves the performance.

~~~
tluyben2
Besides Whitney being very good at finding and sidestepping bottlenecks (like
writing his own memory management because the OS one is too slow), the entire
interpreter, db engine and 3rd party software fits into L1 or at least L2
cache. There is a marketing blurb saying as much:
[https://kx.com/discover/in-memory-computing/](https://kx.com/discover/in-memory-computing/).
But when you read about and try out Whitney's software, you will see that
terseness in every way does make things faster.

~~~
aasasd
I've heard about the cache thing, but that doesn't explain much. How is it
different from machine code or VMs? Does it remove the need to read program's
_data_? How much time does a non-K program usually spend moving the code from
memory? Why wouldn't generations of programmers make the same optimizations―or
just copy them from K? Or what are the tradeoffs, considering that K programs
are essentially bytecode?

Also, there are plenty of memory allocators to choose from, and every runtime
that doesn't use the libc has to ship its own (if it does memory management,
of course). The OS only gives you big chunks to use as you please, and you
can't sidestep the syscall. So that's not a super novel thought.

~~~
tluyben2
> I've heard about the cache thing

It's very optimized for in-memory use. Assuming the db is all in working
memory, the code operating on the data is in L1/L2, and the data suits the
column store (kdb+), you usually pull related chunks into the Lx data cache on
which you filter. I would say k is fast for the use case where memory-based
column storage is a good fit: relatively small data sets that live in memory,
with operations that fit column stores well. I would bet almost all k
programmers (including myself) mostly have experience with those datasets, or
have shoehorned less suitable cases into that shape because it did not perform
otherwise.
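The column-store point can be sketched in plain C (hypothetical `price_col` and `count_over` names, not kdb+ internals): filtering on one field scans a single contiguous array, which the hardware prefetcher streams through the data cache, instead of striding over whole rows.

    
    
        /* Sketch: why a column store filters fast. A row layout drags every
           field through cache; a column layout touches one contiguous array. */
        #include <assert.h>
        #include <stddef.h>
    
        #define N 8
    
        /* row-oriented layout: a filter on price still loads sym and size */
        struct trade { int sym; double price; long size; };
    
        /* column-oriented layout (kdb+-style): one contiguous array per field */
        static double price_col[N] = {9, 11, 10, 15, 8, 12, 20, 7};
    
        /* count rows with price > thresh by scanning one column sequentially */
        static size_t count_over(const double *col, size_t n, double thresh) {
            size_t c = 0;
            for (size_t i = 0; i < n; i++)
                if (col[i] > thresh) c++;
            return c;
        }
    
        int main(void) {
            assert(count_over(price_col, N, 10.0) == 4);
            return 0;
        }
    
    

The predicate loop reads 8 bytes per row instead of a whole `struct trade`, which is the cache-residency effect described above.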

There is obviously no magic here, besides the incredibly small binaries for so
much functionality - incredible, at least, by current standards and to many
'new' programmers. That helps with always staying in cache, versus almost
every other piece of software you will use these days, where there are
continuous chunks of instruction data going to and from cache. That's why C
(and maybe Rust some time in the future, but the binaries were too large when
I last checked) is still important if you really want to get to that point.

Of course, in that 200-something KB k/kdb+/q binary there is no room for much
optimizing, which is why it absolutely falls down for cases it was not
optimized for. And when you use a lot of k, you know how to shoehorn basically
anything into those cases (and do it automatically).

It is not magic by any means, but it comes closer to magic for many, as it is
as alien as assembly compared to JS or C# or something like that, I guess. And
the speed comes from keeping to your niches.

> Also, there are plenty of memory allocators

As I understand it, he did it because he did not find the Windows memory
allocator efficient enough specifically; I'm not sure why he didn't check
other allocators - I guess because he was optimizing a runtime, he just rolled
his own instead.
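A minimal sketch in C of the roll-your-own idea (hypothetical `arena_alloc`, not Whitney's actual allocator): grab one big block up front - the "big chunks" the OS hands out - then serve pieces with a bump pointer, and free everything by resetting a single offset.

    
    
        /* Bump (arena) allocator sketch: one big block, no per-object
           bookkeeping, O(1) alloc, and "free" is resetting one offset. */
        #include <assert.h>
        #include <stddef.h>
    
        static unsigned char arena[1 << 16]; /* one big chunk up front */
        static size_t used;
    
        static void *arena_alloc(size_t n) {
            n = (n + 7) & ~(size_t)7;        /* round up to 8-byte alignment */
            if (used + n > sizeof arena) return NULL;
            void *p = arena + used;
            used += n;
            return p;
        }
    
        static void arena_reset(void) { used = 0; }
    
        int main(void) {
            int *a = arena_alloc(sizeof *a);        /* 4 -> rounded to 8 */
            int *b = arena_alloc(100 * sizeof *b);  /* 400, already aligned */
            assert(a && b && used == 408);
            arena_reset();                           /* frees everything */
            assert(used == 0);
            return 0;
        }
    
    

The win over a general-purpose malloc is no free lists, headers, or locks on the hot path; the cost is that individual objects can't be freed, which suits an interpreter's request-scoped workspaces.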

------
officemonkey
The original MacWrite fit on a 400 KB floppy.

~~~
pvg
WordStar 1 fits in about the size of this page with a few comments and that
gargantuan 7k favicon.

