Glide | https://glideapps.com | Senior Platform Engineer, Data | Remote | Full Time

Glide is building a simpler, faster way for anyone to build custom software for their business without any technical or design skills. Tens of thousands of non-technical people use Glide to build apps to power their businesses, organizations, personal projects, and more.

Glide is looking for a Senior Platform Engineer to help evolve our data sources strategy, become more efficient with our infrastructure utilization as we scale, and achieve a high level of operational excellence for availability, security, and performance.

The ideal candidate has a specialty in provisioning and managing large-scale, persistent data sources (e.g., PostgreSQL, Apache Kafka, NATS) in a variety of cloud environments. They are comfortable evolving legacy services already in production using progressive rollout techniques, and they leverage observability and appropriate testing to build confidence in a service.

Full description and application at: https://glideapps.com/jobs/senior-platform


Glide | Principal Front-end Engineer | Remote | Full-time

Glide is creating a simpler, faster way for anyone to build custom software for their business, without any technical or design skills. Tens of thousands of non-technical people use Glide to build apps to power their businesses, organizations, personal projects, and more. As customers create apps to visualize, interact with, take action on, and analyze their growing data sets, we must keep this big picture in mind as we improve various levels of our engineering stack.

We're looking for a Principal Frontend Engineer to help evolve the Glide product from a single large React-based application to a modern distributed system. We have a large backlog of features and need to establish the next generation of our platform so we can serve our rapidly growing customer base.

The ideal candidate has experience planning, leading, and executing on large architectural efforts and has front-end design opinions forged from real world experience. They are highly productive engineers in their own right, but also level up their team with their technical leadership, clear communication, and mentorship.

More info: https://www.glideapps.com/jobs/principal-frontend


Glide | Senior Frontend Engineer | Remote | Full-time

Glide's mission is to put the power, beauty, and magic of software development into the hands of a billion new creators. We're doing this by making software dramatically easier to build, with a spreadsheet-like programming model underneath easily configurable UI components.

We're looking for a Senior Frontend Engineer to help us push the limits of our Glide builder UI and glideapps.com.

You'll spend your time

- Creating amazing UI experiences across Glide Apps, Glide Pages, and glideapps.com

- Developing the Glide 'builder' environment where users create their apps.

This job might be perfect for you if...

- You have at least four years of experience as a software engineer

- You are passionate about software quality and correctness

- You're a great communicator

- You have a love and respect for customer-centric, design-driven software development

Glide's frontend is built in TypeScript and React.

https://www.glideapps.com/jobs/senior-frontend-engineer


At Glide we've set up Replay for a few power users who report a lot of bugs. It works when you can onboard them and explain what's important for a usable replay, in particular that they should do as little as possible to show the bug, and make it clear what the bug is. My guess is that the replays we'd get from regular users would be much less helpful, though I'd love to try.


I've been using Replay for the past few months, and I miss it so much now whenever I can't use it for some reason (usually because I'm debugging backend stuff).

The main value propositions of Replay are:

* you can record a bug once and then debug it as often as you want, with exactly the same results

* you can share that recording with somebody else who might have more knowledge about the bug

* you can time travel within the recording when debugging

All three of these are game-changers.


But look at 95% and 99% latency numbers - Rust is on top. And surprisingly (to me) Go totally tanks in 99% latency.


It's not that surprising -- Go is designed for speed through simplicity, and a low-pause-latency GC is anything but simple.


In 2021, Go is supposed to have GC pauses on the order of several ms at worst, not 40 ms. So it kind of is surprising; something seems to be broken there. I'm wondering whether this is a limitation caused by forcing single-core operation -- the runtime might not be designed for that.

EDIT: Someone else noted (https://news.ycombinator.com/item?id=27085507) a discussion on Reddit where <5 ms latency was achieved in 99.9% of cases, so perhaps this is indeed a subpar result.


When you allocate while the GC is running, it will penalize you for it (literally sleep your thread) - hence the garbage tail latency. The solution is to not allocate, which is what the gogo library strives for.
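
To make "don't allocate" concrete, here's a minimal Go sketch of the usual trick (the pool, buffer size, and handle function are made up for illustration): reuse buffers in the hot path so a concurrent collection has little fresh garbage to assist on.

    package main

    import "sync"

    // Hypothetical hot path: reuse buffers instead of allocating per call,
    // so a concurrent GC has little new garbage to make us assist on.
    var bufPool = sync.Pool{
        New: func() any { return make([]byte, 0, 4096) },
    }

    func handle(msg []byte) {
        buf := bufPool.Get().([]byte)[:0] // reuse a previously allocated buffer
        buf = append(buf, msg...)         // ...encode/decode using buf...
        bufPool.Put(buf)                  // hand the buffer back for the next call
    }

    func main() {
        handle([]byte("example"))
    }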


Most of the phases of GC can be done in parallel, so allocation should stop only for a really small amount of time. For comparison, Java's low-latency GC, ZGC (which optimizes for latency but may reduce throughput), can achieve worst-case pause times under 1 ms - at that point the OS scheduler costs more than the GC.


I’m talking about the Go garbage collector specifically, which does mark-and-sweep and will deliberately slow down functions doing frequent allocations in order to catch up. This is different from a stop-the-world kind of scenario, e.g. Shenandoah.


Iirc, the official protobuf module for Go still uses reflection in the generated code as opposed to fully generating the encoding and decoding code, so maybe that creates additional garbage or performance issues or lock contention. I think I remember there being an alternative module that fully generates the code, and it would be interesting to see that in the table as well.


Why is it surprising that Go tanks in 99% latency? That's what I would've expected.


You would? I wouldn't. It definitely looks like a somewhat pathological case to me, at least in 2021. Maybe five years earlier the number would have been appropriate, but there seems to be something wrong with Go slowing down this much at that small a heap. I'm wondering if it was tuned at all.


Because Golang was touted as a systems programming language (a better C).


Go is generally slower than Java and C#. Many languages claim C-level performance, but this is just false advertising.


I was about to say because of GC, but doesn't Java also have a GC?


Java also has a JIT compiler, which might eliminate object allocations at runtime via escape analysis.


It also has better GCs


I'm the original author. Thank you for your explanations! I added a link to this post to the README.

I believe I did all the stack juggling because I wanted to write it "the Forth way", or maybe the "pure stack-based way", and using variables seemed like cheating.

I certainly won't be pursuing this further (I wrote it 20 years ago as a programming exercise), but I hope somebody will learn from your exposition :-)


I'm glad they were enjoyable!

I think "using variables seemed like cheating" was a lot of my motivation, too, and it led me into a great deal of mischief. Despite what I thought at first, I think "the Forth way" does use variables pretty often, although I guess different people's "Forth way" is different. But consider Chuck Moore's Forth Way:

> A Forth word should not have more than one or two arguments. This stack which people have so much trouble manipulating should never be more than three or four deep. ... But as to stack parameters, the stacks should be shallow. On the i21 we have an on-chip stack 18 deep. This size was chosen as a number effectively infinite.

> The words that manipulate that stack are DUP, DROP and OVER period. There's no ..., well SWAP is very convenient and you want it, but it isn't a machine instruction. But no PICK[,] no ROLL, none of the complex operators to let you index down into the stack. This is the only part of the stack, these first two elements, that you have any business worrying about.

> The others are on the stack because you put them there and you are going to use them later after the stack falls back to their position. They are not there because [you're] using them now. You don't want too many of those things on the stack because you are going to forget what they are.

> So people who draw stack diagrams or pictures of things on the stack should immediately realize that they are doing something wrong. Even the little parameter pictures that are so popular. You know if you are defining a word and then you put in a comment showing what the stack effects are and it indicates F and x and y

    F ( x - y )
> I used to appreciate this back in the days when I let my stacks get too complicated, but no more. We don't need this kind of information.

http://www.ultratechnology.com/1xforth.htm

I was trying to find the "sheesh, just use a variable" quote I seem to remember from him, but I can't find it. Maybe I'm inadvertently attributing my own ideas to him. But if you look at his code (there are some excerpts in http://www.ultratechnology.com/fsc98.htm and http://www.ultratechnology.com/tape1.htm) you'll see he's pretty sparing with stack operations and uses variables (in memory) pretty regularly.

Certainly my recommendation here—start with statements and expressions, use lots of variables—differs from, say, Jeff Fox's recommendation. And I'm pretty sure Jeff Fox was a better Forth programmer than I am. And I think it's common that, with enough thought, you can find a better way to design the code that reduces the amount of state you have to keep at memory addresses. But I think a programmer already experienced in another language is much more likely to shoot herself in the foot in Forth by using too many stack operations and too many values on the stack, than by using too many variables, so I think it's probably a better learning path.

(FWIW, I think the advice to not write stack comments is probably bad advice, even though Chuck Moore was and probably is a much better Forth programmer than I am.)

Also, though, and I feel like I should have emphasized this more from the outset, I have never shipped code to users in Forth. In fact, I don't even have any personal utility programs written in Forth. That's because I still find Forth hard to read, write, and debug, despite being fascinated with it for 25 years. I think my motivation for writing the above readprint.fs code was to sort of see how terrible I was at writing Forth (answer: it took me 2 hours to write 30 lines of code, so still pretty terrible, but at least I did manage to write a working parser.) So please take my opinions on the matter with a grain of salt.


The Lisp adage about '<something, something> 100 functions with 12 data types vs <something, something>' seems to relate to treating the argument stack as the tuple, so that you can have multiple functions that take the var-arg union of the shape of the stack. I don't think I am explaining it very well, but I think this is the gist of how one constructs algebras or monoids over the shape of the stack.


Perlis's epigram: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."

It's not strictly about Lisp; Perlis was fond of Lisp but his true love was APL. But you could use it to advocate JSON, bytestream shell pipelines, or even TCP/IP. Or a flat byte-addressable memory, I suppose, like Forth or amd64.

Are you thinking of something like the static typing system of Christopher Diggins's "Cat" language, or its children Kitten and Mlatu?


Heh, the kragen[hashmap] comes through. Thanks for the quote.

While those are interesting, probably in the same way that Shen is interesting, I was thinking more along the lines that the stack is an open-ended product type (I made that up) and that operations on the stack are like a zipper, map, fold, product. That there is a projectional aspect to the stack, its expansion and contraction and shape over time.

The engines that do protein folding feel like they have similarities.

I still haven't grokked your whole description of your Lisp reader; I'll have to sleep on it. Is it related in structure to the META II meta compiler or parser combinators?


That sounds like the insight cdiggins based Cat's type system on, but I'm not entirely sure in part because I don't really understand Cat's type system. As an example, though, for the code

    popop = { pop pop }
Cat infers the parametrically polymorphic typing judgment

    popop : ('t0 't1 't2 -> 't2)
where 't2 is the type of the rest of the stack, I guess, and juxtaposition of types is a sort of product operation (noncommutative, but I think associative, and thus perhaps a monoid).

Not sure what you mean about "projectional" — do you mean that "pop" is a "projection" in the relational sense, in that it maps, for example, the stack 3 8 1 (with the top on the left) and the stack 7 8 1 to the same resulting stack state 8 1?

I don't know anything about protein folding.

> I still haven't grokked your whole description of your Lisp reader, I'll have to sleep on it. Is it related in structure to the METAII meta compiler or parser combinators?

Well, I didn't really write much of a description of the reader. It's a recursive-descent predictive parser, same as Probst's reader, and probably most Lisp readers. READ sets up the input pointers and calls (READ), which is ((READ)), which discriminates between lists and atoms (which are numbers) by looking for a "(". If it doesn't find one, it calls READ-NUM to read a number. But if it does, it calls READ-TAIL, which recursively reads the list contents (by calling (READ)) until it finds a ")", then returns back up, consing up the list as it goes. Probst's code works the same way, with the correspondences READ ↔ LISP-LOAD-FROM-STRING, (READ) ↔ LISP-READ-LISP, ((READ)) ↔ _LISP-READ-LISP, READ-NUM ↔ LISP-READ-TOKEN/-NUMBER/-SYMBOL, and READ-TAIL ↔ LISP-READ-LIST.
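
In case the shape is easier to see in a more conventional notation, here's a rough Go sketch of the same recursive-descent structure (the tokenizer and names are mine, invented for illustration; they're not Probst's and not a translation of my Forth):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // read() dispatches on "(" the way ((READ)) does, readNum() plays the role
    // of READ-NUM, and readTail() recurses until ")" like READ-TAIL, consing up
    // the list on the way back out. Error handling is omitted.
    type parser struct {
        toks []string
        pos  int
    }

    func (p *parser) peek() string { return p.toks[p.pos] }
    func (p *parser) next() string { t := p.toks[p.pos]; p.pos++; return t }

    func (p *parser) read() any { // (READ) / ((READ))
        if p.peek() == "(" {
            p.next()
            return p.readTail()
        }
        return p.readNum()
    }

    func (p *parser) readTail() []any { // READ-TAIL
        if p.peek() == ")" {
            p.next()
            return nil
        }
        head := p.read()
        return append([]any{head}, p.readTail()...)
    }

    func (p *parser) readNum() any { // READ-NUM
        n, _ := strconv.Atoi(p.next())
        return n
    }

    func main() {
        // Tokens are whitespace-separated for simplicity.
        p := &parser{toks: strings.Fields("( 1 ( 2 3 ) 4 )")}
        fmt.Println(p.read()) // prints [1 [2 3] 4]
    }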

META II has the similarity that it generates recursive-descent parsers with predictive parsing, but the dissimilarity that it's a domain-specific language for writing parsers. Parser combinators are a technique for embedding any domain-specific parsing language in a general-purpose host language, regardless of what parsing algorithm is used, though Packrat may be the most common choice, and Packrat has certain similarities to recursive-descent parsing.

Is that helpful?


You could follow Lisp's car/caar/caaar cdr/cddr/cdddr cdadar/cadadar/caaddaar naming conventions in Forth ("waiting for the other shoe to drop") or PostScript ("waiting for the other shoe to pop"):

Forth:

    : droop drop drop ;
    : drooop drop drop drop ;
    : droooop drop drop drop drop ;
PostScript:

    /poop { pop pop } def
    /pooop { pop pop pop } def
    /poooop { pop pop pop pop } def


This is interesting from the standpoint of the different mentality of Forth coders. If they really needed a fast removal of 4 stack items they would first code what you did.

Then after things were working they might replace droooop with something like this:

(Forth assembler pseudo-code follows)

     code droooop   sp 4 cells addi, next, endcode
One instruction to move the CPU stack pointer. Unthinkable to touch the stack pointer in most other environments but Toto we're not in Kansas. :)

*Next, is the traditional name of the "return to Forth" function in threaded Forth. A return instruction would be used in a native-code Forth. Carnal knowledge of the internals is required and used by Forth coders.


In a lot of FORTH implementations, constant numbers like 0, 1, -1 and others are hard coded, not just for speed but also for space: a call to a code word only takes one cell, instead of using two cells with a LIT [value].
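
To make the space accounting concrete, here's a toy token-threaded sketch in Go (nothing like a real FORTH's dictionary layout, just the cell counting): a dedicated word for 0 compiles to one cell, while LIT plus its inline value takes two.

    package main

    import "fmt"

    // Toy token-threaded interpreter: each cell of a compiled definition is an
    // opcode, except that LIT is followed by an inline literal value cell.
    type cell int

    const (
        opLit  cell = iota // two cells: LIT + value
        opZero             // one cell: hard-coded constant 0
        opAdd
        opPrint
        opExit
    )

    func run(code []cell) {
        var stack []cell
        for ip := 0; ; ip++ {
            switch code[ip] {
            case opLit:
                ip++
                stack = append(stack, code[ip])
            case opZero:
                stack = append(stack, 0)
            case opAdd:
                n := len(stack)
                stack = append(stack[:n-2], stack[n-2]+stack[n-1])
            case opPrint:
                fmt.Println(stack[len(stack)-1])
                stack = stack[:len(stack)-1]
            case opExit:
                return
            }
        }
    }

    func main() {
        // "0 5 + ." compiled with a dedicated 0 word: 6 cells...
        run([]cell{opZero, opLit, 5, opAdd, opPrint, opExit})
        // ...versus 7 cells when 0 has to be compiled as LIT 0.
        run([]cell{opLit, 0, opLit, 5, opAdd, opPrint, opExit})
    }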

Here's some Bresenham line code I first wrote in FORTH (Mitch Bradley's SunForth on a 68k Sun-2 with a cg2 graphics board), then translated to 68k code (using the rpn FORTH assembler) -- the FORTH code is commented out before the corresponding assembly code:

https://donhopkins.com/home/archive/forth/cg/line.f

The cg2 board wasn't directly memory mapped -- it had a really weird way of selecting and accessing one row and one column at a time, which was kinda convenient for drawing lines and curves, and ropping blocks around, but nowhere near as convenient as direct memory-mapped access would have been.

https://donhopkins.com/home/archive/forth/cg/cg.f

I had to reverse engineer it from the map of the registers in this helpful include file:

https://donhopkins.com/home/archive/forth/cg/cg2reg.h

    * To save on virtual address space, we don't map the plane or pixel mode
    * memory (first two megabytes).  However, when calling mmap the user has
    * to add 2M to the desired offset anyway (goofy, huh?).


I never had a cgtwo, myself (because I never used a Sun2). Why do you suppose they used that weird "bank"-switching scheme? It occupied 4MiB of address space anyway!

Why were you in 640×480? I thought the cgtwo was 1152×900 like God intended, and that's what the .h says too.

Was the reason it was important to save virtual address space that the 68010 ignored the upper 8 bits of virtual addresses? (Helllo, pointer typetag...)

The assembler looks pretty pleasant, though all the 68k operand size suf-, uh, prefixes make the code a bit LONGer than it could be. In gas I really miss having a macro system that can express nested control structures (so I guess I should quit my bitching and write one and use it). I suppose the tests for the IF and WHILE are limited to <, =, <>, >, 0<, 0>, and 0<>?

I'm curious what you think of my analogy upthread between stack-manipulation words and goto. Does it reflect your experience? I'd forgotten you'd done a bunch of Forth stuff.


The wonderful thing about FORTH assemblers is that you have the full power of FORTH to write macros and code generation code with!

That particular assembler had structured control flow like if, while, etc.

It might have actually been a cgone, since the device name was /dev/cgone0. But the header file said cg2. Whatever it was, it was quite slow!

Years later, John Gilmore mentioned that he wrote that .h file with the C structures/unions that mapped out all the device registers.

I bought a copy of Aviator by Curt Priem and Bruce Factor, which ran on my SS2 pizzabox's GX "LEGO: Low End Graphics Option" SBus card (an 8-bit color + 2-bit monochrome overlay plane graphics accelerator):

https://techmonitor.ai/techonology/aviator_15_for_sun_networ...

>AVIATOR 1.5 FOR SUN NETWORKS OPENS UP GRAPHICS WORKSTATION GAMES MARKET. By CBR Staff Writer, 08 Jul 1991.

Not sure why the memory mapping was so weird -- but at least it wasn't as bizarre as the Apple ][! It did have some weird undocumented limitations, like you could only write to the colormap during vertical retrace (which I discovered the hard way -- it just didn't work, for no apparent reason, except for the occasional times when it did kinda work).

Here's a reference to the cgone device that sounds about right:

http://ftp.uni-bayreuth.de/Digital/alphaserver/archive/magic...

    * SUN120  A Sun Microsystems workstation, model Sun2/120 with
    *   a separate colorboard (/dev/cgone0) and the
    *   Sun optical mouse.  Also works on some old Sun1s with
    *   the 'Sun2 brain transplant'.
http://www.sunhelp.org/faq/FrameBufferHistory.html

    Frame Buffer History Lesson 

    Last Updated: 24th November 1998

    cg1/bw1: device name : "/dev/cgoneX" "/dev/bwoneX"
    The color and monochrome framebuffer of sun100u. 
    It is not a crime not knowing anything about these. (and this was 7 years ago!)


Interesting, thanks!


If you were to implement such a PostScript-based programming language for a Racal PDP-11 clone (or an HP calculator, or a P-code machine), whether in NoCal or in SoCal, I think you'd have to call it FECAL.

If you really wanted to provide such a set of operators for, say, the top N stack items, you could give them systematic names with some distinctive scheme; limiting such stack operators to the top 3, for example, with no more than 2 extra results, you could provide the operators x→ (drop), x→x (nop), x→xx (bad Mexican beer), x→xxx (dupup), xy→ (2drop), xy→x (nip), xy→y (drop again, for consistency), xy→xx (nip dup), xy→xy (nop), xy→yx (exch†), xy→yy (drop dup), xy→xxx (nip dup dup), xy→xxy (dup again), xy→xyx (tuck), xy→xyy, xy→yxx, xy→yxy, xy→yyx, xy→yyy, xy→xxxx, xy→xxxy, xy→xxyx, xy→xxyy, xy→xyxx, xy→xyxy, xy→xyyx, xy→xyyy, xy→yxxx, xy→yxxy, xy→yxyx, xy→yxyy, xy→yyxx, xy→yyxy, xy→yyyx, xy→yyyy, xyz→, xyz→x, xyz→y, xyz→z, xyz→xx, xyz→xy (condescending answer on Stack Overflow), xyz→xz, xyz→yx, xyz→yy, xyz→yz, xyz→zx, xyz→zy, xyz→zz, xyz→xxx (programmers over 18 only), xyz→xxy (Klinefelter syndrome), xyz→xxz, xyz→xyx, xyz→xyy (Jacobs syndrome), xyz→xyz, xyz→xzx, xyz→xzy, xyz→xzz, xyz→yxx, xyz→yxy, xyz→yxz, xyz→yyx, xyz→yyy (bargaining, denial), xyz→yyz, xyz→yzx, xyz→yzy, xyz→yzz, xyz→zxx, xyz→zxy, xyz→zxz, xyz→zyx, xyz→zyy, xyz→zyz, xyz→zzx, xyz→zzy, xyz→zzz, xyz→xxxx, xyz→xxxy, xyz→xxxz, xyz→xxyx, xyz→xxyy, xyz→xxyz, xyz→xxzx, xyz→xxzy, xyz→xxzz, xyz→xyxx, xyz→xyxy, xyz→xyxz, xyz→xyyx, xyz→xyyy, xyz→xyyz, xyz→xyzx, xyz→xyzy, xyz→xyzz, xyz→xzxx, xyz→xzxy, xyz→xzxz, xyz→xzyx, xyz→xzyy, xyz→xzyz, xyz→xzzx, xyz→xzzy, xyz→xzzz, xyz→yxxx, xyz→yxxy, xyz→yxxz, xyz→yxyx, xyz→yxyy, xyz→yxyz, xyz→yxzx, xyz→yxzy, xyz→yxzz, xyz→yyxx, xyz→yyxy, xyz→yyxz, xyz→yyyx, xyz→yyyy, xyz→yyyz, xyz→yyzx, xyz→yyzy, xyz→yyzz, xyz→yzxx, xyz→yzxy, xyz→yzxz, xyz→yzyx, xyz→yzyy, xyz→yzyz, xyz→yzzx, xyz→yzzy, xyz→yzzz, xyz→zxxx, xyz→zxxy, xyz→zxxz, xyz→zxyx, xyz→zxyy, xyz→zxyz, xyz→zxzx, xyz→zxzy, xyz→zxzz, xyz→zyxx, xyz→zyxy, xyz→zyxz, xyz→zyyx, xyz→zyyy, xyz→zyyz, xyz→zyzx, xyz→zyzy, xyz→zyzz, xyz→zzxx, xyz→zzxy, xyz→zzxz, xyz→zzyx, xyz→zzyy, xyz→zzyz, xyz→zzzx, xyz→zzzy, xyz→zzzz (sleep 4), xyz→xxxxx, xyz→xxxxy, xyz→xxxxz, xyz→xxxyx, xyz→xxxyy, xyz→xxxyz, xyz→xxxzx, xyz→xxxzy, xyz→xxxzz, xyz→xxyxx, xyz→xxyxy, xyz→xxyxz, xyz→xxyyx, xyz→xxyyy, xyz→xxyyz, xyz→xxyzx, xyz→xxyzy, xyz→xxyzz, xyz→xxzxx, xyz→xxzxy, xyz→xxzxz, xyz→xxzyx, xyz→xxzyy, xyz→xxzyz, xyz→xxzzx, xyz→xxzzy, xyz→xxzzz, xyz→xyxxx, xyz→xyxxy, xyz→xyxxz, xyz→xyxyx, xyz→xyxyy, xyz→xyxyz, xyz→xyxzx, xyz→xyxzy, xyz→xyxzz, xyz→xyyxx, xyz→xyyxy, xyz→xyyxz, xyz→xyyyx, xyz→xyyyy, xyz→xyyyz, xyz→xyyzx, xyz→xyyzy, xyz→xyyzz, xyz→xyzxx, xyz→xyzxy, xyz→xyzxz, xyz→xyzyx, xyz→xyzyy, xyz→xyzyz, xyz→xyzzx, xyz→xyzzy (Nothing happens), xyz→xyzzz, xyz→xzxxx, xyz→xzxxy, xyz→xzxxz, xyz→xzxyx, xyz→xzxyy, xyz→xzxyz, xyz→xzxzx, xyz→xzxzy, xyz→xzxzz, xyz→xzyxx, xyz→xzyxy, xyz→xzyxz, xyz→xzyyx, xyz→xzyyy, xyz→xzyyz, xyz→xzyzx, xyz→xzyzy, xyz→xzyzz, xyz→xzzxx, xyz→xzzxy, xyz→xzzxz, xyz→xzzyx, xyz→xzzyy, xyz→xzzyz, xyz→xzzzx, xyz→xzzzy, xyz→xzzzz, xyz→yxxxx, xyz→yxxxy, xyz→yxxxz, xyz→yxxyx, xyz→yxxyy, xyz→yxxyz, xyz→yxxzx, xyz→yxxzy, xyz→yxxzz, xyz→yxyxx, xyz→yxyxy, xyz→yxyxz, xyz→yxyyx, xyz→yxyyy, xyz→yxyyz, xyz→yxyzx, xyz→yxyzy, xyz→yxyzz, xyz→yxzxx, xyz→yxzxy, xyz→yxzxz, xyz→yxzyx, xyz→yxzyy, xyz→yxzyz, xyz→yxzzx, xyz→yxzzy, xyz→yxzzz, xyz→yyxxx, xyz→yyxxy, xyz→yyxxz, xyz→yyxyx, xyz→yyxyy, xyz→yyxyz, xyz→yyxzx, xyz→yyxzy, xyz→yyxzz, xyz→yyyxx, xyz→yyyxy, xyz→yyyxz, xyz→yyyyx, xyz→yyyyy, xyz→yyyyz, xyz→yyyzx, xyz→yyyzy, xyz→yyyzz, xyz→yyzxx, xyz→yyzxy, xyz→yyzxz, xyz→yyzyx, xyz→yyzyy, xyz→yyzyz, xyz→yyzzx, xyz→yyzzy, xyz→yyzzz, xyz→yzxxx, xyz→yzxxy, xyz→yzxxz, xyz→yzxyx, 
xyz→yzxyy, xyz→yzxyz, xyz→yzxzx, xyz→yzxzy, xyz→yzxzz, xyz→yzyxx, xyz→yzyxy, xyz→yzyxz, xyz→yzyyx, xyz→yzyyy, xyz→yzyyz, xyz→yzyzx, xyz→yzyzy, xyz→yzyzz, xyz→yzzxx, xyz→yzzxy, xyz→yzzxz, xyz→yzzyx, xyz→yzzyy, xyz→yzzyz, xyz→yzzzx, xyz→yzzzy, xyz→yzzzz, xyz→zxxxx, xyz→zxxxy, xyz→zxxxz, xyz→zxxyx, xyz→zxxyy, xyz→zxxyz, xyz→zxxzx, xyz→zxxzy, xyz→zxxzz, xyz→zxyxx, xyz→zxyxy, xyz→zxyxz, xyz→zxyyx, xyz→zxyyy, xyz→zxyyz, xyz→zxyzx, xyz→zxyzy, xyz→zxyzz, xyz→zxzxx, xyz→zxzxy, xyz→zxzxz, xyz→zxzyx, xyz→zxzyy, xyz→zxzyz, xyz→zxzzx, xyz→zxzzy, xyz→zxzzz, xyz→zyxxx, xyz→zyxxy, xyz→zyxxz, xyz→zyxyx, xyz→zyxyy, xyz→zyxyz, xyz→zyxzx, xyz→zyxzy, xyz→zyxzz, xyz→zyyxx, xyz→zyyxy, xyz→zyyxz, xyz→zyyyx, xyz→zyyyy, xyz→zyyyz, xyz→zyyzx, xyz→zyyzy, xyz→zyyzz, xyz→zyzxx, xyz→zyzxy, xyz→zyzxz, xyz→zyzyx, xyz→zyzyy, xyz→zyzyz, xyz→zyzzx, xyz→zyzzy, xyz→zyzzz, xyz→zzxxx, xyz→zzxxy, xyz→zzxxz, xyz→zzxyx, xyz→zzxyy, xyz→zzxyz, xyz→zzxzx, xyz→zzxzy, xyz→zzxzz, xyz→zzyxx, xyz→zzyxy, xyz→zzyxz, xyz→zzyyx, xyz→zzyyy, xyz→zzyyz, xyz→zzyzx (Soda Springs), xyz→zzyzy, xyz→zzyzz, xyz→zzzxx, xyz→zzzxy, xyz→zzzxz, xyz→zzzyx, xyz→zzzyy, xyz→zzzyz, xyz→zzzzx, xyz→zzzzy, and xyz→zzzzz. You could certainly argue about the utility of many of these operators individually, not to say their mental risk as attractive nuisances, but their mnemonic value is indisputable.

______

† Where do you get an old PostScript printer? At an exch meet.


I like to perform no-ops that keep the top of the stack fresh:

: NOP DUP SWAP DROP ;

Also:

Q: How many Northern Californians does it take to change a lightbulb?

A: Hella!!!

Q: How many Southern Californians does it take to change a lightbulb?

A: Totally!!!

https://escholarship.org/uc/item/6492j904

http://people.duke.edu/~eec10/hellanorcal.pdf

Hella Nor Cal or Totally So Cal?: The Perceptual Dialectology of California

Mary Bucholtz, Nancy Bermudez, Victor Fung, Lisa Edwards and Rosalva Vargas. Journal of English Linguistics 2007; 35; 325. DOI: 10.1177/0075424207307780

>Abstract

>This study provides the first detailed account of perceptual dialectology within California (as well as one of the first accounts of perceptual dialectology within any single state). Quantitative analysis of a map-labeling task carried out in Southern California reveals that California’s most salient linguistic boundary is between the northern and southern regions of the state. Whereas studies of the perceptual dialectology of the United States as a whole have focused almost exclusively on regional dialect differences, respondents associated particular regions of California less with distinctive dialects than with differences in language (English versus Spanish), slang use, and social groups. The diverse sociolinguistic situation of California is reflected in the emphasis both on highly salient social groups thought to be stereotypical of California by residents and nonresidents alike (e.g., surfers) and on groups that, though prominent in the cultural landscape of the state, remain largely unrecognized by outsiders (e.g., hicks).

Extra credit question:

Can you locate the isogloss designating the "101" / "The 101" line?

https://en.wikipedia.org/wiki/Isogloss

https://www.youtube.com/watch?v=zIklKPzND20&ab_channel=Satur...


Don, does your operand stack ever get that... not-so-fresh feeling? Well, what do you do? I use OVER SWAP NIP.

OVER SWAP NIP. Trusted by more hackers.


xyz→yyz (Rush the stack)


A modern-day hacker

Mean mean code

Today's Chuck Sawyer

Mean mean node

.

Though his stack is not for rent

Don't push dynamic environment

His compiler, a quiet defense

Riding out the stream of events —

The socket


> Is that helpful?

Absolutely. It made my day. Thank you.


Very


Author here. I never benchmarked it, but probably not fast. This was just a fun programming exercise.


Glide (YC W19) | Multiple roles | REMOTE WORLD-WIDE | Full-time | https://glideapps.com

At Glide we believe that software development should be dramatically easier. We're starting by making it possible to build mobile apps from spreadsheets, without writing any code. If you want to help us bring software development to the masses, please apply. We don't care which languages or frameworks you're most familiar with—if you're passionate and willing to learn, we have no doubts that you'll be productive in our stack in no time.

We're currently hiring for multiple roles:

- Senior Backend Engineer: Connect Glide to more data sources and improve the performance of our data sync engine. https://www.glideapps.com/jobs/senior-software-engineer

- Senior QA Engineer: Build our testing capabilities and help ensure that apps never break. https://www.glideapps.com/jobs/senior-qa-engineer

- Senior Frontend Engineer: Help us develop the Glide 'builder' environment, where users build their apps. https://www.glideapps.com/jobs/senior-frontend-engineer

