It’s a catchy term, but loaded. Copyright protects only original expression, not ideas and information. So if a computer algorithm reads the former and outputs the latter, arguably copyright isn’t involved at all.
There are plenty of good counterarguments to this as well, when you consider the effects of automation and scale. I’m definitely interested in seeing how the jurisprudence develops as these cases go through the courts.
I came here wondering whether there was a specific reason for MLX being behind this model, but (thankfully, thinking of openness) it has nothing to do with the original model.
CommonCrawl is composed of copyrighted content too. You gain copyright on your work automatically the moment you create it, including this very comment.
One could argue that using copyrighted content in LLMs, much like reposting, should fall under fair use. This is also Microsoft's claim in the GitHub Copilot lawsuits. It's up to the court to decide though. (IANAL)
Yep! Docs is using our editor BlockNote (https://www.blocknotejs.org) which builds upon Prosemirror (and we're also proud to be sponsors of Marijn from Prosemirror who's done an amazing job, indeed)
AssemblyScript (for WASM) and Huawei's ArkTS (for mobile apps) already exist in this landscape. However, they are too specific in their use cases and have never gained public attention.
Do a n00b a favour... would you ever run wasm outside of a client browser? Are you suggesting that wasm is a viable platform for local services or commands?
Or do you mean that there's a use case for a compilation in the browser?
In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet for surpassing vanilla JS, but even then it's not a guarantee, especially once you add JS interop overhead. It sort of depends on the specific thing you are doing.
I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even reach parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.
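For concreteness, here's roughly what the two toolchains being compared look like from the source side; the build commands and the exported function are an illustrative sketch (flags from memory, not from any particular project):

    //go:build js && wasm

    // Minimal sketch: expose a Go function to JS via syscall/js.
    // Standard toolchain:  GOOS=js GOARCH=wasm go build -o main.wasm
    // TinyGo (LLVM-based): tinygo build -o main.wasm -target wasm
    package main

    import "syscall/js"

    // sum is the hot loop we want running in WASM rather than JS.
    func sum(n int) int {
        total := 0
        for i := 0; i < n; i++ {
            total += i
        }
        return total
    }

    func main() {
        // Register a JS-callable wrapper; arguments arrive as js.Value.
        js.Global().Set("goSum", js.FuncOf(func(this js.Value, args []js.Value) any {
            return sum(args[0].Int())
        }))
        select {} // block so the exported function stays available
    }

Same source, two very different code generators, which is where the gap between them shows up.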
I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.
The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.
These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.
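To make the interior-pointer point concrete, a small illustration in plain, safe Go (nothing backend-specific here, just the kind of pointer Go programs take for granted):

    package main

    import "fmt"

    type point struct {
        x, y int
    }

    func main() {
        pts := make([]point, 4)

        // Both of these are interior pointers: addresses into the middle
        // of a heap object (a slice element, a struct field). Go's GC has
        // to keep the whole containing object alive through them.
        elemPtr := &pts[2]
        fieldPtr := &pts[2].y

        *fieldPtr = 7
        fmt.Println(elemPtr.y) // 7

        // A WasmGC reference, by contrast, only names a whole GC-managed
        // object; there's no way to say "pointer to field y of element 2".
    }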
Go exposes raw pointers to the programmer, so from your description I think those semantics are too rudimentary to implement Go's semantics; there would need to be a WasmGC 2.0 to make this work.
It sounds like it would be a great fit for e.g. Lua though.
That's not the base language, it's an unsafe superset. There's no reason why a Wasm-GC backend for Golang should be expected to support that by default.
If it is part of the language reference it is part of the language.
Back when language reference books were printed, or when we used ISO-standardized languages, what was on paper was the language.
We are only discussing semantics: whether something is a hardcoded primitive or made available via the standard library, especially in the case of blessed packages like unsafe, which aren't really implemented as ordinary packages but are rather magical types known to the compiler.
Which is nothing new: since the 1960s there have been systems languages with some way to mark code as unsafe; the C lineage of languages is the one that decided to ignore this approach.
The standard library uses unsafe for syscalls, for higher-performance primitives like strings.Builder, etc., so its support is mandatory to run any non-trivial Go program.
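For a sense of what that looks like, the core trick behind strings.Builder's zero-copy String() is a []byte-to-string reinterpretation along these lines (a sketch of the technique, not the actual stdlib source; unsafe.String/unsafe.SliceData need Go 1.20+):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // bytesToString reinterprets a []byte as a string without copying.
    // Only safe if the byte slice is never mutated afterwards.
    func bytesToString(b []byte) string {
        return unsafe.String(unsafe.SliceData(b), len(b))
    }

    func main() {
        buf := []byte("hello, wasm")
        fmt.Println(bytesToString(buf)) // no allocation, no copy
    }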
For a while the GOOS=nacl port and the Google App Engine ports of Go disallowed unsafe pointer manipulation too, so there is some precedent. Throughout some of the ecosystem you can see pieces of "nounsafe" build tag support (e.g. in easyjson).
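Those splits usually look something like this (a sketch; the package and file names are made up, the build-tag mechanism is the point):

    // file: conv_unsafe.go, compiled by default
    //go:build !nounsafe

    package conv

    import "unsafe"

    // BytesToString is the fast, zero-copy path.
    func BytesToString(b []byte) string {
        return unsafe.String(unsafe.SliceData(b), len(b))
    }

    // file: conv_safe.go, selected with `go build -tags nounsafe`
    //go:build nounsafe

    package conv

    // BytesToString falls back to a plain, copying conversion.
    func BytesToString(b []byte) string {
        return string(b)
    }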
Most programming languages that offer unsafe code, either as a language keyword or as a meta package (unsafe/SYSTEM/UNSAFE, whatever the name), have a similar option; that doesn't make it any less of a feature.
Somehow I don't think Wasm-GC is going to support bare metal syscalls anytime soon. That stuff all has to be rewritten anyway if you want to target WASM.
It's not just system calls. E.g. reflection package uses unsafe too: https://github.com/golang/go/blob/master/src/reflect/value.g... .
Many packages from the Go standard library use unsafe one way or another, so it's not fair to say that the unsafe package is separate from the rest of the language.
The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.
I did some perf benchmarks a few years ago on some JS code vs C code compiled to WASM using clang and running on V8 vs the same C code compiled to x64 using clang.
The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call-optimized recursive code; some cases were slower than the JS version.
What surprised me was that the JS version had similar performance to the x64 version compiled with -O3 in some of my benchmarks (like float64 performance).
This was a while ago though when WASM support had just landed in browsers, so probably things got better now.
Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.
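Roughly what that boundary traffic looks like with the standard toolchain's syscall/js (an illustrative sketch; the point is that every string and buffer crossing is a copy):

    //go:build js && wasm

    package main

    import "syscall/js"

    func main() {
        // Strings: js.ValueOf copies the Go string into a new JS string,
        // and .String() copies a JS string back into Go's linear memory.
        greeting := js.ValueOf("hello from Go") // Go -> JS copy
        _ = greeting.String()                   // JS -> Go copy

        // Buffers (file data etc.) go through explicit copies into and
        // out of a Uint8Array on the JS side.
        data := make([]byte, 1<<20)
        u8 := js.Global().Get("Uint8Array").New(len(data))
        js.CopyBytesToJS(u8, data) // Go -> JS copy
        js.CopyBytesToGo(data, u8) // JS -> Go copy

        select {} // keep the module alive for callbacks
    }

The fewer of those round trips you make per request or frame, the more of the WASM-side speedup you actually keep.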