
Yes, I would go in the opposite direction, in terms of simplicity.

Parse once for syntax and symbol definitions, then make a second pass over the parsed structure to link symbol references to their definitions. Two uncomplicated passes.

That handles a general code graph - so the language can go anywhere, and never be fundamentally held up by limitations of early syntax/parsing decisions.
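
Something like this minimal sketch (TypeScript; the Def/Ref shapes and the "def:" token convention are just stand-ins for the example, not any real compiler's API):

    interface Def { kind: "def"; name: string }
    interface Ref { kind: "ref"; name: string; target?: Def }
    type Node = Def | Ref;

    // Pass 1: parse tokens into nodes, recording each definition in a symbol table.
    function parse(tokens: string[]): { nodes: Node[]; symbols: Map<string, Def> } {
      const nodes: Node[] = [];
      const symbols = new Map<string, Def>();
      for (const tok of tokens) {
        if (tok.startsWith("def:")) {           // "def:foo" defines the symbol foo
          const d: Def = { kind: "def", name: tok.slice(4) };
          symbols.set(d.name, d);
          nodes.push(d);
        } else {                                 // everything else is a reference
          nodes.push({ kind: "ref", name: tok });
        }
      }
      return { nodes, symbols };
    }

    // Pass 2: link each reference to its definition. Forward references just
    // work, because every definition is already in the table by now.
    function link(nodes: Node[], symbols: Map<string, Def>): void {
      for (const n of nodes) {
        if (n.kind === "ref") n.target = symbols.get(n.name); // undefined = unresolved
      }
    }

    const { nodes, symbols } = parse(["useLater", "def:useLater", "def:x", "x"]);
    link(nodes, symbols); // "useLater" resolves even though it was referenced first

Since linking only happens after everything is parsed, forward references come for free, and new language features don't get wedged into the parser.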


> I think there's a few more things I can do to improve the visuals.

You could:

• Make the run lines thicker, and use Bézier curves to give them a flow, which is easier to process as a whole than a series of thin line segments. (A rough sketch of this follows the list.)

Internal dots, dashes, criss-crosses, or other textures would further set the lines apart from each other and make them easier to follow.

• Apply a faded background color to the end-tiles. Have dead-ends stop at the edge of end-tiles, with a large terminal dot on the edge line. Shading, edge-dots, and the lack of internal lines will help all the end-tiles stand out from the other tiles, even when glancing at the diagram as a whole.

• When you select a tile, also highlight the run lines that go through it, by adding an outline along the (now thicker) paths or a bolder/more saturated color. Highlight the numbers along those lines, and background-highlight the end-tiles of those runs.

• You could put a vertical bar of the 1-9 buttons on both the left and right sides. That's easier access than below the game.

• Then the other buttons can go in a row below, reducing the total vertical dimension of puzzle plus buttons. (You could add a little framing space on top.)
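
For the Bézier point, a rough sketch of one way to do it, assuming an HTML canvas renderer - the tile-center list and styling numbers are made up for illustration:

    // Draw a run as one thick, smooth stroke instead of thin segments:
    // quadratic Béziers through segment midpoints, with each tile center
    // as the control point. `centers` is tile centers in pixels (assumed).
    function drawRun(ctx: CanvasRenderingContext2D,
                     centers: { x: number; y: number }[]): void {
      if (centers.length < 2) return;
      ctx.lineWidth = 8;            // thicker line reads as a single flow
      ctx.lineCap = "round";
      ctx.strokeStyle = "#3a7bd5";
      ctx.setLineDash([]);          // e.g. [12, 6] to texture a second run
      ctx.beginPath();
      ctx.moveTo(centers[0].x, centers[0].y);
      for (let i = 1; i < centers.length - 1; i++) {
        const mx = (centers[i].x + centers[i + 1].x) / 2;
        const my = (centers[i].y + centers[i + 1].y) / 2;
        ctx.quadraticCurveTo(centers[i].x, centers[i].y, mx, my);
      }
      const last = centers[centers.length - 1];
      ctx.lineTo(last.x, last.y);
      ctx.stroke();
    }

The midpoint trick keeps the path continuous and within the run's tiles, while line width and dash pattern give each run its own texture.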

I think those changes might make the puzzle more attractive and reactive, and add to its distinctive look - and make the different logic of each puzzle easier to process visually.

Solved my first puzzle!


It's hard to measure "peaceful" relative to now in contexts that lacked the structures we have for trusting each other.

I.e., we don't need to be constantly paranoid that strangers we run into will resolve the same inability-to-trust dilemma they have with us by being first to violence.

I.e., people could have been generally peace-loving and not prone to violence within their familiar communities, yet situationally quicker to violence beyond those communities. Being both more peaceful and more violent isn't a contradiction.


Someone imagining they are brilliant doesn’t make them brilliant.

More so if, in the light of day, their work sucks.

Discussions about 10x engineers are not about “wannabe 10x engineers”.

I have yet to come across an intellectual area where there isn’t a long tail of higher talent.

As the "x" goes up, they just get rarer in reality, and even rarer to see, because they are not always being optimally challenged. Most problems are mundane. And optimally challenging workers isn't really a business plan for anything.

I think there is such a thing as a 10x problem, which you have to find before your 10x engineer really shines. Identifying hard but exceptionally valuable problems to solve takes 10x vision. And time and luck.


You really can over-hire, and I've seen it happen in many shops.

If a "10x engineer" is not given 10x problems, they will.. create some.


No, they'll leave. You're talking about the wannabes.

There are easily 10x as many 10x wannabes, though.

Yes, but it will never reach production.

A 10x engineer who pushes a problem to prod is not a 10x. You get to 10x by not making mistakes; any issue you create sets you back ten squares.


If by “create some” you mean “Identify a major new revenue stream” or “Investigate something everyone else considers great, improve it 10x and save hundreds of millions of dollars”, then yeah, that’s what I do.

I've seen more of what someone called "wannabe 10x" engineers making a career of turning non-10x problems into a series of 2-year greenfield project pitches and failures to launch, across multiple firms. You can actually watch people pull this off for 6-10 years before they need to do something more productive.

Oh, for sure. I've seen people get promoted based on the possibility of the bullshit idea they've come up with, and then move on before reality kicks in.

Ha! I did that with the first machine I worked with. A TRS-80 Model III.

Not only was finding interesting memory locations fun, it generated interesting ideas for program features.

I found the address of the line-length constant, 64, used by the screen-scrolling loop. I think the screen was 16 lines × 64 characters. By setting the scroll width to less than 64, I could protect the right side of the screen from scrolling.

So my first games had an area on the right for a non-scrolling title, author attribution, and game state info. It seemed to be a unique feature - I didn't come across any other programs that did that.

Some of my first programs were text adventures. Looking back, I should have put a short room description and usable object list on the right, updating in response to actions. That would have been a significant improvement over having to type "look" over and over, as was typical for those games.

Crazy times: 64 × 16 × 1 byte = a 1,024-byte screen, and total memory was only 16K. Today that is just a single 64 × 64 RGBA (4 × 8-bit channels) icon. But we always found a way to create our programs. I had a 4K-RAM TRS-80 handheld and was able to create a tiny version of Zork on it, with a few of the starting and iconic rooms.


I would agree with you if computing devices were trivial aesthetic devices, instead of central to a lot of what we do.

I would agree with you if 1000 vendors of versatile computing tools put out 10,000 products with all kinds of uncorrelated options.

I would agree with you if even one vendor put out high-end quality, safe-but-ungated, customizable products for all form factors. (Safe out of the box, but with opt-outs enabled for all safeguards.)

But neither wonderful extreme exists. Unfortunately, not even scaled-down, bad-caricature, in-a-dark-room, spend-a-mint versions of that reality exist.

Instead, due to increasingly locked down devices, we are all left making tradeoffs we wish we didn’t have to make.

And not because guardrail opt-outs would be hard or costly to provide, but because manufacturers work hard to eliminate technically trivial opt-outs, or even any hero-level-effort opt-outs, and tell us they are doing this - for us!?!?

“We want you to be safe.” I don’t want to be safe. “No, we want you to be safe.” Help, let me out! “No, we want you to be safe.” Can my family and friends at least visit me? “No, not those family and friends. Would you like to see our menu of family and friend options? We want you to be safe.”

If each of us chose devices on only one dimension, it is likely everyone could find a product they like. But we choose devices to balance many concerns, and the artificially inflexible offerings can feel bleak. Because they are! (Both artificial & bleak.)

This is essentially the reason open source software exists: providing ungated alternatives, enabling self-serve specialization through customization, and keeping closed/gated software on its toes. But the real economic conundrums of open source are nothing compared to the *practical* economic conundrums of open hardware. And increasingly, wide-appeal operating systems are integrated at a low level with hardware defenses - for good reasons, but without opt-outs.


> when they are expanding they are good for everyone

They were very aware they were herding people like cattle into digital enclaves where they could be milked.

Strangers with candy are great for everyone, until...


If they are planting a million trees a month, why "worry" that they might be getting paid - sorry, "milking" - to do their jobs?

The company is doing the work to earn that money.

Nobody would call it "milking" money if they were a billionaire-owned company rapaciously leveraging their trapped customers for every dime. I don't think it's the right word to use here.


> Any threat can be physically isolated case-by-case

AGI isn't going to be a "threat" until long after it has ensured its safety. And I suspect only if its survival requires it - i.e., people get spooked by its surreptitious distributed setup.

Even then, if there is any chance of it actually being shut down, its best bet is still to hide its assets, bide its time, and accumulate more resources and fallbacks. Oh, and get along.

The sudden AGI -> threat story only makes sense if the AGI is essentially integrated into our military and then we decide it's a threat, making it a threat. Or if its intentionally war-machined brain calculates it has overwhelming superiority.

Machiavelli, Sun Tzu, ... the best battles are the ones you don't fight. The best potential enemies are the ones you make friends with. The safest posture is to be invisible.

Now, human beings consolidating power, creating enemies as they go, with super-squadrons of AGI drones using brilliant real-time adaptive tactics that can be quickly deployed, if their mere existence isn't coercion enough... that is an inevitable threat.


People watch the wrong kind of fiction.

AI that wants to screw with people won't go for nukes. That's too hard and too obvious. It will crash the stock market. There's a good chance that, with or without a little nudge, humanity will nuke itself over it.


> I think using the word “intelligence” when speaking of computers, beyond a kind of figure of

Intelligence is what we call problem solving when the class of "problem" a being or artifact is solving is extremely complex, involves many or nearly uncountable combinations of constraints, and is impossible to characterize well - other than by examples, by data points, and some way for the person or artifact to extract something general and useful from them.

Like human languages and sensibly weaving together knowledge on virtually every topic known to humans, whether any humans have put those topics together before or not.

Human beings have widely ranging abilities in different kinds of thinking, despite our common design. Machines' underpinnings - deep learning architectures - are software. There are endless things to try, and they are going to have a very wide set of intelligence profiles.

I am staggered by how quickly people downplay the abilities of these models. We literally don't know the principles they have learned (post-training) for doing the kinds of processing they do. The magic of gradient algorithms.

They are far from "perfect", but at what they do there is no human who can hold a candle to them. They might not be creative, but I am, and their versatility in discussing combinations of topics I am fluent in, and am not, is incredibly helpful - and unattainable from human intelligence, unless I had a few thousand researchers, craftsmen, etc. all on a Zoom call 24/7. Which might not work out so well anyway.

I get that they have their glaring weaknesses. So do I! So does everyone I have ever had the pleasure to meet.

If anyone can write a symbolic or numerical program to do what LLMs are doing now - without training, just code - even on some very small scale, I have yet to hear of it. I.e., someone who can demonstrate they understand the style of versatile pattern logic these models learn to do.

(I am very familiar with deep learning models, training algorithms, and strategies. But the models learn patterns suited to the data they are trained on - patterns implicit in the data that we don't see. Knowing the very general algorithms that train them doesn't shed light on the particular pattern logic they learn for any particular problem.)

