Hacker News | lucideer's comments

Has anyone done a comparative breakdown between Radicle & git-ssb?

This looks really nice & polished - love the web ui - but I know there's a pretty active network & community on SSB.


I had not heard of SSB or git-ssb, but this gives me a lot of pause...

>This seems to work well: the SSB network thrives off of being a group of kind, respectful folks who don't push to each other's master branch. :)

https://github.com/hackergrrl/git-ssb-intro?tab=readme-ov-fi...

Can't imagine that working at any real level of popularity, but maybe that just goes in the "if you have to solve that problem, it's a good problem to have" bucket.


It seems odd at first until you realise that it's a host setting & a convention, rather than an inherent limitation.

Familiar "centralised" Git hosts like GitHub come bundled with their own protocol-specific permissions models. E.g. if you were to imagine a team working from a simple .git directory hosted on a local SMB/whatever LAN share, the permissions model would be file permissions on the networked filesystem.

Blocking users from committing to an "owned" branch on SSB would require implementing an ACL-tracking & ownership model attached to namespaces (likely branch prefixes here, by convention) - very doable, nothing about the protocol prohibits it architecturally. So if it ever becomes one of those "good problems", it's not a design decision that's set in stone.
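To make that concrete, a minimal sketch of such a convention-based ownership check (everything here is hypothetical - no such model exists in git-ssb today): branches are namespaced as `<owner>/<name>`, the owner controls their own prefix, and an ACL can grant extra feed IDs push access to a namespace.

```typescript
// Hypothetical ownership model: branches are namespaced as "<owner>/<name>",
// and an ACL maps namespace prefixes to additional allowed feed IDs.
type Acl = Record<string, string[]> // namespace prefix -> allowed feed IDs

function canPush(pusherId: string, branch: string, acl: Acl): boolean {
  const slash = branch.indexOf('/')
  if (slash === -1) return true // unnamespaced branches stay free-for-all
  const namespace = branch.slice(0, slash)
  if (namespace === pusherId) return true // owners control their own prefix
  return (acl[namespace] ?? []).includes(pusherId)
}
```

Nothing in this needs protocol changes - it's purely a policy layer each replica could apply when deciding which feeds' commits to merge.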


> The age of Smalltalk is over. It's now the age of ML [...] The languages that dominated the 2000s (Ruby, Python, Javascript, PHP) were all more or less derived from Smalltalk [...] The new languages (Rust, Scala, Swift, Kotlin, etc.) are ML family languages.

What drew people to a lot of your examples (Javascript, PHP especially) was the runtimes rather than the language features. Your example sets aren't just demarcated by Smalltalk-ishness/ML-ness but more by runtime type (Scala & Kotlin are odd examples given the VM but for the most part your latter examples have build-time compilation while the former have plaintext interpreters).

We're definitely in a tools-heavy era of programming where, unless you're writing bash scripts, even very basic applications in interpreted languages are layered with a slew of transpilation/compilation/etc., but generally speaking I still don't see the majority of people moving wholesale away from plaintext runtime interpreters. Where are the ML-ish competitors in that space?


Good point.

The nitpicky take: a lot of these languages come with a repl / console. E.g. the most recent version of Scala builds Scala CLI[1] into the language. You can run `scala repl` and just type code into it, or run `scala SomeFile.scala` and it will compile and run `SomeFile.scala`. There is special syntax for writing dependencies so that a single file can pull in the libraries it needs.

The 5head thought leader take: the traditional model for typed languages has two phases: compile time and run time. Types exist at compile time. This is inadequate for many applications, particularly interactive ones. E.g. a data scientist doesn't know the shape (type) of the data until it is loaded. It should be possible to infer the type from the data once it is loaded and make that type available to the rest of the program. We know how to do this (it's called staging) but it's just not available in the vast majority of languages. Staging, and metaprogramming in general, is perhaps the next great innovation in programming languages (which will take us from ML to Lisp).
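TypeScript has no true staging, but the flavour of the idea can be approximated: derive a schema (a runtime value describing the data's shape) after loading, then let it drive later processing. This is a rough sketch of the concept only - real staging would let that schema become a static type available to the compiler.

```typescript
// Approximating staging's motivation: the shape of the data isn't known
// until it's loaded, so we infer a schema value at runtime. With true
// staging this schema could be promoted to a compile-time type.
type ColumnType = 'number' | 'string'
type Schema = Record<string, ColumnType>

function inferSchema(rows: Record<string, unknown>[]): Schema {
  const schema: Schema = {}
  for (const [key, value] of Object.entries(rows[0] ?? {})) {
    schema[key] = typeof value === 'number' ? 'number' : 'string'
  }
  return schema
}

const rows = [{ id: 1, name: 'ada' }, { id: 2, name: 'grace' }]
// inferSchema(rows) -> { id: 'number', name: 'string' }
```

The gap the comment describes is exactly here: `Schema` stays a runtime value, and the rest of the program can't get compile-time checking against it without staging or metaprogramming support in the language.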

In general, the challenge for these new languages is to reach "down" into the simpler scriptier applications, instead of the "serious" programming they are usually built for.

[1]: https://scala-cli.virtuslab.org/


Greppability is really a proxy metric here - these changes all have other benefits even if you never grep (mostly readability tbh).

    const getTableName = (addressType: 'shipping' | 'billing') => {
        return `${addressType}_addresses`
    }
This is a simplified example but in a longer function, readability of the `return` lines would be improved as the reader wouldn't have to reference the union type (which may or may not be defined in the signature). The rewrite is also safer as it errors out if a runtime `addressType` value doesn't match the union type (above code would not throw an error, just return an indeterminate value which would cause undefined behaviour).
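The safer rewrite being discussed looks roughly like this (a sketch of the pattern, not necessarily the article's exact code): each table name appears verbatim, so it's greppable, and an unexpected runtime value fails loudly instead of silently producing `"foo_addresses"`.

```typescript
// Explicit mapping: every table name is a greppable literal, and a value
// outside the union throws instead of returning an indeterminate string.
const getTableName = (addressType: 'shipping' | 'billing') => {
  switch (addressType) {
    case 'shipping':
      return 'shipping_addresses'
    case 'billing':
      return 'billing_addresses'
    default:
      throw new Error(`unknown address type: ${addressType}`)
  }
}
```

The `default` branch is unreachable for well-typed callers, but it catches untyped runtime input (e.g. parsed JSON) that the template-literal version would happily interpolate.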

"Flat is better than nested" also greatly improves readability in both examples: whether reading the i18n line or reading the classname at definition / call site, it's more readable when the name contains its full context.
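A sketch of the flat-vs-nested point with i18n keys (hypothetical keys, not the article's actual codebase): the flat key is a single string that both greps and reads with full context, while the nested key never appears verbatim anywhere.

```typescript
// Nested: the full key "checkout.address.title" exists nowhere in the
// source verbatim - grepping for it finds nothing, and reading any one
// level gives you only partial context.
const nested = { checkout: { address: { title: 'Address' } } }
const nestedLookup = nested.checkout.address.title

// Flat: one greppable string carrying its full context at the call site.
const flat: Record<string, string> = {
  'checkout.address.title': 'Address',
}
const flatLookup = flat['checkout.address.title']
```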


I don't think you can count something as blogspam unless it's rehashing something in the same medium as itself. Video to text is fine imo.


spam is advertising; blogspam is a blog that appears to be about something but isn't really (and as a result is very thin, regurgitated, etc.) - it's an attempt to lure you into a relationship with a predator

this particular article is not a good replacement for the video - it adds nothing and subtracts some things - but the video also doesn't say everything you'd want to hear either. The article could have used a small animated GIF like one might see in a Wikipedia article. It would be very nice to see which simple patterns, when overlaid, would give you >> and << from different angles.


> If people keeps creating their own extensions of Markdown it's because there's a need

This is a general misconception about why people create. People don't always create to fill a need - in most cases, motivation to create is intrinsic rather than extrinsic.

As for having a comprehensive markup language - every product & creative avenue in every field in the world needs to contend with balancing complexity & expressiveness with accessibility & simplicity. Markdown errs heavily on the latter side for the writer but comes with natural trade-offs (e.g. it leans heavily on the complexity side for parser-developers) - that push-and-pull will always motivate folk to try & strive for the magical best of both worlds.


People often create to fulfill a need. The need might be intrinsic -- the need to create something.

But back to the point: Agreed, you described the existing tension well. At one end of the spectrum you have systems like LaTeX and DITA; at the opposite end, Markdown. I don't think Markdown should ever be extended. If anything, it should be encapsulated.


No hello is very reasonable, but only to a point. It's a specific adaptation to asynchronous communication that kicked off in the IRC days where channel idling was common - async is a radically different form of comms to in-person & this etiquette aids in adapting to those differences.

But it's important to remember that it is an adaptation for a specific comms medium & applying it too broadly may really just be a way of shirking socialisation. That's fine if you're most productive as an engineer working alone on your fully-self-contained owned project, but in most cases collaboration is beneficial. Collaboration introduces communication inefficiencies but it's a known trade-off.

Extending this barrier-to-entry to other things like calls (verbal comms) & meetings (in-person) especially can lead to significant inaccessibility, exclusion & siloing. It's worth stepping back & looking at the problems you may be trying to solve here: e.g. too many meetings, or meetings running long. These are problems that frankly this does nothing to solve whatsoever; you'll just end up with managers setting boilerplate agendas for the same "too many long meetings", & meanwhile some of the peers you may need a valuable short meet with will be too put off by your requirements to contact you at all.


Agree with the sentiment of your comment in isolation but when I went to the article to see the quoted line in context, the author isn't saying anything of the sort.

They're not taking a narrow definition of knowledge & extrapolating that once one has that specific knowledge it explains everything. Instead they're broadening the definition of "hacker" (& also invoking the idea of continuous interrogation) to describe an approach to always seeking & finding "how the world works" in any given context.


It's there three times, which is partly what triggered the comment, as it came across as something of a theme.

Certainly this article is less dogmatic than many, but I still got the sense that the author was using effects like causes.

The less lazy version is to do with treating the metric as the measure. Sure, the quant revolution is in full swing, but it's a terrible way to gauge the success (or failure) of society, and is perhaps a better metric for describing detrimental human activity.

That said, I fully appreciate the author's efforts to break with convention, but I felt that the points made actually give credence to the system that, in my view, is actively corrupting the values that might get us out of this mess.


If the above is what you need, there aren't strong reasons not to use React or similar. But for most things that will lead to an interest in "Vanilla JSX", this line of thinking is premature optimization - the advantages of vdom are extant but enormously overstated. JSX has many other advantages.

It's also not even an either-or. I've worked on a codebase that did both: React was loaded for some views & others were served with a lightweight JSX renderer.
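For illustration, a lightweight JSX renderer can be little more than a jsx factory building output directly, with no vdom or diffing step. A minimal string-rendering sketch (hypothetical, not the codebase mentioned above) - with TypeScript you'd point the compiler's `jsxFactory` option at `h`:

```typescript
// A minimal JSX factory rendering straight to an HTML string: no vdom,
// no reconciliation - JSX syntax compiles to plain h(...) calls.
type Child = string | number | null

function h(
  tag: string,
  props: Record<string, string> | null,
  ...children: Child[]
): string {
  const attrs = Object.entries(props ?? {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('')
  const body = children.filter((c) => c != null).join('')
  return `<${tag}${attrs}>${body}</${tag}>`
}

// Equivalent of: <ul class="items"><li>one</li></ul>
const html = h('ul', { class: 'items' }, h('li', null, 'one'))
```

A real version would also escape attribute/text values and handle component functions, but the point stands: server-rendered or mostly-static views don't need React to use JSX.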


> I never liked this term. Why is the term "Open Source" used to refer to publicly available information?

I assume your line of thinking is that you associate "Open Source" with software freedom (warm fuzzy feelings) & dislike that being tainted by stalkers & military espionage. Leaving aside that OSINT pre-dates the software term, I think it's quite fitting given the context of the very capitalist-/corporate-friendly "Open Source" licensing trend subsuming the original corporate-unfriendly/copyleft "Free Software" movement. The former enables the military-industrial complex by taking advantage of publicly available data, the latter enables the corporate world by taking advantage of publicly available code.


The problem with this is that OSINT has been established as a term at a government level (likely before Open Source Software came about?), so any push to change it will be a greater feat than the inverse.

