Mojo is now available on Mac (modular.com)
177 points by tosh on Oct 19, 2023 | hide | past | favorite | 160 comments


They do very much plan to open source Mojo.

From Chris Lattner when asked about the date that will happen:

I would tell you if I knew. Our priority is to build the "right thing" not build a demo and get stuck with the wrong thing. My wild guess is that the language will be very usable for a lot of things in 18 months, but don't hold me to that.

It WILL be open-sourced, because Modular on its own isn't big enough to build the whole ecosystem.


Hard needle to thread. I understand the risk of getting community momentum in a direction you don't want to go, but I'm also doubtful that much of the audience they want is interested in a closed source programming language. Hoping they can thread it because I think it's a good idea and Swift is pretty nice.


I think they're just playing a larger distribution game to get people locked into Mojo and then eventually pay for their Modular engine hosting services, much the same as most open-core, VC-backed startups.


I see what you're saying, but it's pretty different from an open core analytics product or something. This is a language/dev environment and the bar is a lot higher because 1) I know it's going to be a big investment to learn the language and ecosystem 2) I'm deeply locked-in technically by writing my stuff in this language 3) the competitors are open source and at this point I'm not worried about python, numpy, typescript etc going away. My feedback is basically that despite this project looking cool, I will not be writing any Mojo until it's open source.


> They do very much plan to open source Mojo.

They do very much claim to plan. There's a bit of difference.


Chris has a pretty solid reputation. I doubt he’d throw that away.

Sometimes it’s okay to trust people in this space. To me, this is one of those times.


What if it’s less his choice and more about convincing investors that it makes sense?


That shouldn't be hard, I think. My confident guess is that there will be multiple backends for the language: an open source bare-bones one, and one which integrates with their "value add" highly scalable backend, with optimization for distributing work across GPU farms or even custom hardware.


They should just open source it now, but closed to contributions (no PR).

That’s how SQLite operates and it works well.


Not a bad idea at all.


So, it's basically the same approach they took with Swift. It makes sense.


It doesn’t. Free software is an ideology, not a license. You can’t be said to believe in software freedoms if you release nonfree software.

An analogy would be that you don’t believe in human rights if only 10% of your business operations violate them.

Software freedoms are important. Don’t release proprietary software. Doing so is bad and makes the world worse.


It's hard to fund development without releasing something, and at the same time, releasing it prematurely runs the risk of entrenching undesirable features/behaviors/etc that stifle development of something better.

Releasing it closed source allows them to point to something that justifies further fund raising and allowing them to control the direction of development while getting user feedback.

This isn't some esoteric language where if mojo vanished you'd lose your whole project and have to rewrite it into something else. It's python with some syntax sprinkled on top, so vendor lockin isn't a huge concern.


They didn't open source Swift out of the gate because they wanted to guide its development according to the features and plans they had for the language. Wanting to have a firm grasp of the language's core features and implementation is reasonable. After all, I believe Lattner was unhappy with some early directions they took with Swift, but it was too late to change anything because libraries and such had already been written. I may be fuzzy on that, but that's what I seem to recall.

Also, I am not an absolutist where free software is concerned. It doesn't make the world worse to release proprietary software. Especially if you have the stated goal of eventually open sourcing it.


Mojo claims a 90,000x speedup over python for matrix multiplication. No one does matrix multiplication with for loops. What is the speedup over numpy?


Here's the same benchmark with np.matmul instead of native python (on M2 MBP)

    Python             4.216 GFLOPS
    Naive:             6.400 GFLOPS            1.52x faster than Python
    Vectorized:       22.232 GFLOPS            5.27x faster than Python
    Parallelized:     52.591 GFLOPS           12.47x faster than Python
    Tiled:            60.888 GFLOPS           14.44x faster than Python
    Unrolled:         62.514 GFLOPS           14.83x faster than Python
    Accumulated:     506.209 GFLOPS          120.07x faster than Python
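
(For anyone who wants to reproduce this: the numpy side of the measurement can be sketched roughly as below. `matmul_gflops` is a made-up helper, and the absolute numbers depend heavily on the BLAS backend and thread count.)

```python
import time
import numpy as np

def matmul_gflops(n: int, repeats: int = 10) -> float:
    """Estimate np.matmul throughput for an n x n x n multiply."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    np.matmul(a, b)  # warm-up: lets BLAS initialize its thread pool
    start = time.perf_counter()
    for _ in range(repeats):
        np.matmul(a, b)
    elapsed = (time.perf_counter() - start) / repeats
    # A dense n x n matmul does 2*n^3 floating-point operations.
    return 2 * n**3 / elapsed / 1e9

print(f"Python (numpy): {matmul_gflops(128):8.3f} GFLOPS")
```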


Does that use Apple Accelerate? Depending on the matrix size, that seems a bit low; even the M1 Pro can easily reach 2.2 TFLOPS.


What is your BLAS backend?


Yeah this is confusing for me: I'm not an expert in numpy * but I had assumed that it would do most of those things - vectorize, unroll, etc. - either when compiled or through whatever backend it's using. I understand that numpy's routines are fixed and that mojo might have more flexibility, but for straight-up matrix multiplication I'd be very surprised if it's really leaving that much performance on the table. Although I can appreciate that if it depends on which BLAS backend has been installed, that is a barrier to getting fast performance by default.

* For context, I do have some experience experimenting with the gcc/intel compiler options that are available for linear algebra, and even outside of BLAS, compiling with -O3 -ffast-math -funroll-loops etc. does a lot of that, and for simple loops as in matrix-vector multiplication, compilers can easily vectorize. I'm very curious if there is something I don't know about that will result in a speedup. See e.g. https://gist.github.com/rbitr/3b86154f78a0f0832e8bd171615236... for some basic playing around


Even OpenBLAS (the default iiuc) does all of that and more to optimize for different levels of the cache hierarchy: https://www.cs.utexas.edu/~flame/pubs/GotoTOMS_revision.pdf

I'm not sure where/how they'd be squeezing out more performance unless its better compilation/compatibility with Apple Silicon intrinsics.

Edit: ..Is Mojo using more than 1 core? I'm not sure I understand their syntax and if they are parallel constructs.

Edit2: Yeah Mojo seems to be parallelizing, so the comparison really isn't fair. The np.config posted elsewhere shows that OpenBLAS is only compiled with MAX_THREADS=3 support, and it's not clear what their OPENBLAS_NUM_THREADS/OMP_NUM_THREADS was set to at runtime.
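
For what it's worth, pinning the BLAS thread count for an apples-to-apples run can look like the sketch below; the variables only take effect if they're set before numpy loads its BLAS library:

```python
import os

# OpenBLAS reads its thread-count variables when the shared library is
# loaded, so these must be set before numpy is imported anywhere in the
# process.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np

np.show_config()  # prints which BLAS backend this numpy build links against
```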


I'm not super familiar with Mac but I also notice that numpy here is using openblas64. I had thought the go-to was the Accelerate framework? Or is that part of it somehow? If so it would be interesting to see how that impacts performance. Of course it's all kind of an argument for something like Mojo that gives better performance out of the box. Also an argument for why Mojo would be way more interesting if it was open source.


Just whatever you get by default with pip install numpy... Changing the benchmark to run with a 1024x1024x1024 matrix instead of a 128x128x128 does speed up numpy significantly though

    Python           119.189 GFLOPS
    Naive:             6.275 GFLOPS            0.05x faster than Python
    Vectorized:       22.259 GFLOPS            0.19x faster than Python
    Parallelized:     50.258 GFLOPS            0.42x faster than Python
    Tiled:            59.692 GFLOPS            0.50x faster than Python
    Unrolled:         62.165 GFLOPS            0.52x faster than Python
    Accumulated:     565.240 GFLOPS            4.74x faster than Python
np.__config__:

    Build Dependencies:
      blas:
        detection method: pkgconfig
        found: true
        include directory: /opt/arm64-builds/include
        lib directory: /opt/arm64-builds/lib
        name: openblas64
        openblas configuration: USE_64BITINT=1 DYNAMIC_ARCH=1 DYNAMIC_OLDER= NO_CBLAS=
          NO_LAPACK= NO_LAPACKE= NO_AFFINITY=1 USE_OPENMP= SANDYBRIDGE MAX_THREADS=3
        pc file directory: /usr/local/lib/pkgconfig
        version: 0.3.23.dev
      lapack:
        detection method: internal
        found: true
        include directory: unknown
        lib directory: unknown
        name: dep4364960240
        openblas configuration: unknown
        pc file directory: unknown
        version: 1.26.1
    Compilers:
      c:
        commands: cc
        linker: ld64
        name: clang
        version: 14.0.0
      c++:
        commands: c++
        linker: ld64
        name: clang
        version: 14.0.0
      cython:
        commands: cython
        linker: cython
        name: cython
        version: 3.0.3
    Machine Information:
      build:
        cpu: aarch64
        endian: little
        family: aarch64
        system: darwin
      host:
        cpu: aarch64
        endian: little
        family: aarch64
        system: darwin
    Python Information:
      path: /private/var/folders/76/zy5ktkns50v6gt5g8r0sf6sc0000gn/T/cibw-run-27utctq_/cp310-macosx_arm64/build/venv/bin/python
      version: '3.10'
    SIMD Extensions:
      baseline:
      - NEON
      - NEON_FP16
      - NEON_VFPV4
      - ASIMD
      found:
      - ASIMDHP
      not found:
      - ASIMDFHM


If you are looking for improved performance, you will always go with NumPy + vectorization. That's what is important. So I don't know what is the argument here, am I missing something?


> No one does matrix multiplication with for loops

Because it's slow as dirt, right? Isn't the point they're trying to make that one could with Mojo?


If you take a look at the optimized Mojo code doing the matrix multiply [1], it takes an expert to understand. It’s not just some simple for-loops in Mojo they’re comparing against.

[1] https://github.com/modularml/mojo/blob/5ce18c47a27c0c4123de1...


Slower and less convenient. If I have two matrices and want to get a dot product I’d rather just call np.dot than zip and iterate.


It'll definitely be faster (less latency for a single matmul) than numpy since you're using all the cores (and, from the throughput measurements, fairly efficiently). Better algorithms (like Strassen's) don't take over till the sizes involved are much greater than this benchmark is using.

In a single-threaded comparison to numpy (or just measuring total throughput -- many applications have lots of slightly smaller matmuls they can do, which makes it trivial to parallelize without having to parallelize each matmul, and throughput increases slightly when you do so), though, the details start to matter. Numpy is bad with small dimensions (it hasn't optimized for them at all really, and the overhead of moving data from a Python context to a Numpy context starts to dominate), and performance can vary 10-50x just based on whether you've set up an optimized BLAS library for it to link to or not. Mojo side-steps some of that because it provides the fast primitives you need in the language itself and doesn't present the opportunity to execute more slowly. Single-core Mojo shouldn't be meaningfully faster than a properly installed single-core Numpy on large matmuls, and the given implementation should be meaningfully slower on large enough problems.
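
The small-dimension overhead is easy to observe directly; a rough sketch (`time_matmul` is a made-up helper, and absolute numbers are machine-dependent):

```python
import time
import numpy as np

def time_matmul(n, calls):
    """Average seconds per call for an n x n matmul."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    for _ in range(calls):
        a @ b
    return (time.perf_counter() - start) / calls

# For tiny matrices, the fixed Python/numpy dispatch cost dominates the
# arithmetic; for larger ones the O(n^3) work dwarfs it.
print(f"4x4:     {time_matmul(4, 10_000) * 1e6:9.1f} us/call")
print(f"256x256: {time_matmul(256, 50) * 1e6:9.1f} us/call")
```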

I don't really care for the benchmark though. It's potentially okay at showing how easy it can be to write fast code, but it comes across as being presented to show how much faster mojo is than Python. The latter is misleading for at least a couple reasons:

- By some magic, my $300 old dev laptop (swift 3) is 6x faster than their brand new m2 pro max with a vanilla Python triple for loop. Is Mojo adding some overhead as it runs that benchmark? Is something wrong with their Python installation?

- Many of the optimizations they applied in Mojo apply just as well in vanilla Python. Tiling, parallelization, ... Some of those have a higher ratio of improvement in Python than Mojo (depending on some fiddly GC details) because their purpose is to cut back on cache/page/... misses enough to make the problem compute-bound rather than IO-bound, and the Python representation takes enough extra space that the benefits accumulate faster. The serialization overhead for stdlib parallelization is dwarfed by the matmul cost, so it doesn't wind up mattering much that you have slow copies all over the place, and you really do find yourself bound by the interpreter's rate of interpreting.

Like, it's still faster than Vanilla Python by a lot, and it's neat that the code is so easy to write, but 90k isn't the speedup I'd headline with.

To be fair, I think their point is just that you can write fast code easily in Mojo, and matmul is something easy to understand, so it makes a good case study. The optimization primitives are fairly intuitive, so presumably you should be able to apply the same naive approach (code it, slap on optimization primitives) and get decent speedups in less well studied domains.


So it seems to be behind a sign-up. Is this just a temporary measure to have some notion of "control" while it's still being developed, or is this how they're positioning it? Is the project interesting enough that people will jump through hoops to get it? I didn't sign up, but I'm hazarding a guess that it's binary-only also?


Has anyone here actually used mojo? I’m curious to hear people’s experiences who don’t have a vested interest in its success :)


I tried it out and compared it to C++ at the last release. Here was what I found: https://github.com/dsharlet/mojo_comments

Some of the issues I pointed out there are pretty low hanging fruit for LLVM, so they may have improved a lot in more current releases.


I skimmed your post and I wonder if mojo is focusing on such small 512x512 matrices? What is your thinking on generalizing your results for larger matrices?


I think for a compiler it makes sense to focus on small matrix multiplies, which are a building block of larger matrix multiplies anyways. Small matrix multiplies emphasize the compiler/code generation quality. Even vanilla python overhead might be insignificant when gluing small-ish matrix multiplies together to do a big multiply.
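
That building-block claim is easy to check in plain numpy; a minimal sketch (`blocked_matmul` is a made-up helper, assuming square matrices whose size is a multiple of the tile size):

```python
import numpy as np

def blocked_matmul(a, b, tile=64):
    """Build a large matmul out of small tile-sized multiplies."""
    n = a.shape[0]  # assumes square matrices, n divisible by tile
    c = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                # each update is itself a small tile x tile matmul
                c[i:i+tile, j:j+tile] += a[i:i+tile, k:k+tile] @ b[k:k+tile, j:j+tile]
    return c

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
assert np.allclose(blocked_matmul(a, b), a @ b)
```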


for anyone else wondering this is apparently a programming language?


A programming language created by Chris Lattner, creator of LLVM and Swift. Mojo uses the MLIR infrastructure of LLVM.


I don't know why providing basic context is so hard for some submitters here.


The site could also do a much better job. It doesn't mention what Mojo is at all on the linked page.


It is a closed source programming language, yes.


Compiled using electricity from coal power, yes.


Except for having to follow their "manual installation" directions, everything is working great on my Apple Silicon Mac.

I have had some frustration with Mojo in that their standard libraries, as far as I have found, lack a lot of general programming support, like file I/O, etc. Are we meant to import the Python package and just use that?

That said, I love the idea of Mojo.


Yes, you are meant to import the Python package and just use that. Mojo is a Python superset. It speeds up math but then relies on CPython for grunt work.


Thank you. I will adjust to that.


Does “90,000x speed up over pure python” with matrix multiplication mean vs using numpy arrays in python?


No, plain python code: https://github.com/modularml/mojo/blob/5ce18c47a27c0c4123de1...

I'd like to know how fast numpy is here, but they didn't compare... which is weird because that's what almost everyone would use.


It’s not just weird, it’s an intentionally dishonest evaluation to build hype around their language.

If you ask your average Python programmer what “pure Python” means they’d think numpy is included.

Their Mojo code just does the same optimizations numpy certainly has.


My experience is that most Python programmers would consider "pure Python" to basically be using the standard library.

There's also the issue that when doing complicated work with NumPy, you start looping and revert back to slow (pure) Python, because you are crossing the NumPy/Python interface. Tools like Cython, Numba, and probably Mojo help to solve this.
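
As a rough illustration of that boundary cost (exact timings vary by machine, but the shape of the result doesn't):

```python
import numpy as np

x = np.random.rand(100_000)

# Looping in Python hands a boxed scalar back to the interpreter for
# every element of x, so this runs at interpreter speed.
total_slow = 0.0
for v in x:
    total_slow += v * v

# Staying inside numpy keeps the whole reduction in compiled code.
total_fast = float(np.dot(x, x))

# Same result (up to float rounding), wildly different runtimes.
```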


For scientific users like they're targeting, `numpy` effectively _is_ part of the standard library.


Pure Python means written wholly in Python to most Python programmers in my experience. For some Python programmers this excludes the 3rd party libraries they care about.


The python/numpy boundary is a problem, which is sort of the value proposition of efforts like mojo or julia - no argument.

But nobody doing any significant numeric work would ever consider a "pure" python matrix multiply meaningful. It's disingenuous to present that as your comparison, at least without also including the way it's actually done in practice.

Honestly, it undermines their presentation with anyone involved in the area.


The difference and the whole sales pitch is based on that detail - that you write everything in one language; there is no switch between a high-level and a low-level language, everything is written in one language.


AI developers expect these sorts of comparisons to be taken against accelerated code anyways. Modern Python NN frameworks don't spend significant time in the Python interpreter, so that's not where users would expect the bottleneck to be for their comparison.

It's like claiming that your code is faster than MATLAB's for loops. Why write gemm yourself?

Even if this language is a good idea, I worry that they don't seem to understand the audience.


But that is basically the core value proposition of mojo. You don't need to drop down into another language to do low-level, high-performance kernels; you can do it all in mojo. Scalable in terms of target domains (somewhat similar in that regard to swift).


> It’s not just weird, it’s an intentionally dishonest evaluation to build hype around their language.

Not too dissimilar to Swift, which was launched as being faster than C/C++.


I skimmed the linked code. Is there enough info there to run the same benchmark with numpy? And then if we find out that numpy is also 90000X faster then it's fair to conclude that numpy and Mojo have the same performance on this task? I'm a benchmarking n00b, go easy on me. Just trying to see if my basic understanding is correct...


I think it is just pure python

I don't know about mojo, but they seem to import the python module for this benchmark. https://github.com/modularml/mojo/blob/5ce18c47a27c0c4123de1...

And for some reason they don't compare against NumPy https://github.com/modularml/mojo/blob/5ce18c47a27c0c4123de1...

What would be more interesting is to use NumPy and do the vectorized operations.


I did just that, it's 120x faster than numpy. See my comment to OP.


I really don't understand why they do this. I was really excited about the compilation and type-checking features, but this whole speedup thing is pretty dumb. And I know the people developing mojo know that too. It kind of seems like some marketing team is pushing them to do all this, but their core customers are people who know what python and numpy are, and this kind of talk just feels weird.

I think that mojo in its current form is not on par with numpy performance (if it were, they would be saying that). Even if the performance were the same I would still give it a try. Their whole marketing, though, is making me reconsider.


They do this because they want to showcase writing code in a single language that can do things other languages can't. Other languages are either low-level or slow, but they are high-level and fast.


Naive question, but how do they distinguish themselves from Julia, which is also in that space?


There is large overlap with julia, yes. Both are addressing the two-language problem.

Mojo is python syntax first, where they want to be a proper superset, which gives them access to the wide python ecosystem and community. If executed well, this alone can absorb a community, similarly to how e.g. typescript absorbed the javascript community. A similar thing happened with objective-c -> swift - also led by Chris, which gives a lot of credibility to the whole initiative.

Julia is a proper new language: you need to learn it and use new tooling around it, and its ecosystem is quite academia-skewed (e.g. writing web services in it is probably not the best idea).

Additionally Julia suffers from "time to first plot" problem, which alienates a lot of newcomers who are not familiar or simply don't want to switch to programming mode where it becomes less of a problem (repl/notebook style where runtime is always active).

Both are very interesting languages, but mojo's starting point and trajectory seem to be at a different level, i.e. adoption may be very sharp.


Julia can cache JIT code between executions nowadays.


Julia is a language that was developed by academics, with not the best aesthetics in the implementation. I see a lot of value in starting with a clean, well engineered layered approach. If anything this is something I would expect Lattner to be able to deliver, whereas Julia while certainly powerful is tremendously messy.


Python was also developed by academics.


Primarily by compatibility with Python, which is the de facto standard in deep learning.


I think there's a Julia forum thread where they are casually benchmarking comparisons between the two. Not sure if it is still active though.


PyTorch recently added support for JIT compiling numpy code[0]. And then there are libraries like Numba[1]. I wonder how Mojo compares with existing OSS Python JIT libraries like these?

[0]: https://pytorch.org/blog/compiling-numpy-code/

[1]: https://numba.pydata.org/


No, it's vs a pure python implementation[1]

[1] https://github.com/modularml/mojo/blob/5ce18c47a27c0c4123de1...


Seems a bit deceptive then; nobody does matrix multiplications in "pure python." They use numpy or some derivative of it.


Here's the same benchmark with numpy instead of native python (on M2 MBP)

    Python             4.216 GFLOPS
    Naive:             6.400 GFLOPS            1.52x faster than Python
    Vectorized:       22.232 GFLOPS            5.27x faster than Python
    Parallelized:     52.591 GFLOPS           12.47x faster than Python
    Tiled:            60.888 GFLOPS           14.44x faster than Python
    Unrolled:         62.514 GFLOPS           14.83x faster than Python
    Accumulated:     506.209 GFLOPS          120.07x faster than Python


no


I've been following for a while - I eagerly await when I can build this from source, or install without an auth token!

This is a blocker for me to ever adopt this as more than a toy language. These days, if I can't use Nix to build + deploy the whole set of requirements (I'm fine building out the packages themselves), it's basically a non-starter!

I know they are claiming that it will eventually be open-sourced. Just a bit sad that there's no timeline on that


I am a bit surprised by the negativity here. Python's ecosystem fragmentation is legendary at this point, and its lack of portability is well known. Let's face it - a lot of the "ML" applications for the next decade are going to be boring, but super important applications. Stuff like self-driving labs, process modules in factories and so on.

And the idea that you will do everything with a conda package at every point is laughable. You need a compilable language, where code can be ported over from Python very rapidly, and where ML/AI capabilities such as differentiability are first-class citizens. That language doesn't really exist today, but multiple billions of actual industrial applications need it.

In fact, what the negativity shows here is how skewed Hacker News is towards the bit folks, and how little they talk to the atoms side of things.


Is Mojo interactive like Python? I mean will it work well with notebook style programming? I've been looking for something that works well in that interactive mode (like Python) but also has types (like, say, Swift). Swift playgrounds have always been buggy as hell for me.


Glad I can finally play with it. Hopefully the pace of development does not slow down, because Rust features with Python syntax and a smarter compiler are a pretty attractive proposition (regardless of all the other goodness being promised).


Will there be Mojo for Intel Mac?


It sounds like it already is. From the article: "NOTE: This guide is for Mojo on Apple Silicon-based Macs only. Mojo on Intel-based Macs can be installed via Docker containers. Select “Set up on Mac (Intel)” in the developer console for instructions."


So, that's a no then, since that would suggest installing it in a Linux VM?


Your Mac has a real CPU in it, so you can just run the Linux version.


Didn't everyone who requires their code to run fast already upgrade to M1/M2 processors? Considering they already run on Intel it'll probably be done by someone, probably after they open up the code.


Well, I didn't. I am not ready yet to throw my 18 CPU cores and my 16GB graphics card away.


Anyone else find this Mojo hype and release cycle strange? Maybe it's because FOSS has driven so much innovation in the open for the last few years, but something feels off about how Mojo is being teased and released.


Yeah, I do. It's what ultimately discouraged me from using the language. They are keeping a tight grip on Mojo; they may promise to open source it, but that didn't stop them from doing things like preventing an online repl for Mojo from being built[0]. Presumably because they want to use Mojo to gather as many sign-ups as possible for their business. Fair I guess, but it leaves a bad taste in my mouth.

0 - https://mojo.picheta.me/


I'm wondering what the business model is exactly.

Or rather what it will become once the VCs who are paying for all this start the squeeze.

I'd be nervous about getting in too deep with tools from a company with what looks to me like an utterly unsustainable business model. The house of cards will have to come down, won't it? You don't want your stuff to go down with it.

IDK, been wrong before, could be here too. It's disconcerting, though.


If it gets open sourced before that happens, it'll get forked if it's good enough. Just happened recently with OpenTF. I'm really interested in it, but I'll wait until it gets put under a permissive license before getting too invested.


Same. I don’t feel strongly about project governance or specific OSI-approved license choice, but I’m not stepping on this rug while VC is still holding on with both hands.


nod.ai was just acquired by AMD for a chunk of money. I expect they hope to find a similar exit once they can show an end-to-end product: Take a model written in Pytorch, upcast it to mojo, and put it through their unified compiler stack to accelerate on <hardware>


Wild speculation:

1) Have a compelling enough product to become an essential tool in data-focused workflows across the massive Python ecosystem.

2) Get acquired by someone interested in a deeply wedged stake across the massive Python ecosystem.

At that point, it doesn’t matter if they’re open source or not.


> Or rather what it will become once the VCs who are paying for all this start the squeeze.

How do we know that squeeze isn’t already happening?

The mojo launch was extremely underwhelming, and I wonder if it was forced by early investors to help drive hype and the next round of funding.


Agreed. Here is a serious contender[0], minus all the hype and the $100M in VC money. You would expect at least a minimum of interest given how Mojo is received by the community, but not really in practice.

[0]: https://chapel-lang.org/


I'm sure Chapel has its merits, but one of the main selling points of Mojo is the aspiration to be part of the Python ecosystem, and so far I haven't seen any other programming language offering a similar promise, other than Python itself coupled with DSLs or other extensions for high performance.


Those interested in the intersection between Python, HPC, and data science may want to take a look at Arkouda, which is a Python package for data science at massive scales (TB of memory) at interactive rates (seconds), powered by Chapel:

* https://github.com/Bears-R-Us/arkouda

* https://twitter.com/ChapelLanguage/status/168858897773200179...


> to be part of the Python ecosystem

I'd rather use Python if I'm in the Python ecosystem. So many attempts were made in the past to make a new language compatible with the Python ecosystem (look up hylang and coconut -- https://github.com/evhub/coconut). But at the end of the day, I'd come back to Python, because if there's one thing I've learnt in recent years it's this:

    minimize dependencies at all costs.


I don't think those fill the same niche. They're nice-to-haves on top of Python. The promise of Mojo is that it's for when Python isn't good enough and you need to go deeper, but you want the Python ecosystem and don't want to write C.


I believe the main Mojo use cases are scenarios in which you'd need dependencies anyway. Code that you can't write in Python due to performance concerns, so you'd need to call C/C++/Rust/CUDA/Triton/etc anyway.


honestly that is the main thing that makes me pretty sure Mojo will fail. Right now, the types of things it doesn't support include keyword arguments and lists of lists. The place where python compatibility really matters is C API compatibility and they are hilariously far away from that for now.


I mean honestly, the closest language to Mojo really is Nim. In the latest Lex Fridman interview with Chris Lattner [0], when he talks about his ideas behind Mojo, it pretty much sounds like he's describing Nim. Ok fair, he wants Mojo to be a full superset of Python, but honestly with nimpy [1] our Python interop is about as seamless as it can really be (without being a superset, which Mojo clearly is not yet). Even the syntax of Mojo looks a damn lot like Nim imo. Anyway, I guess he has the ability to raise enough funds to hire enough people to write his own language within ~2 years so as not to have to follow random people's whims about where to take the language. So I guess I can't blame him. But as someone who's pretty invested in the Nim community it's quite a shame to see such a hyped language receive so much attention from people who should really check out Nim. ¯\_(ツ)_/¯

[0]: https://youtu.be/pdJQ8iVTwj8?si=LfPSNDq8UKKIsJd3

[1]: https://github.com/yglukhov/nimpy


For what it's worth, I think what Mojo does do better is that it's trying to be a better Python. Chapel's syntax immediately discourages me from using it.


How many of Python's warts does it fix?


It is not aimed at fixing Python’s warts. It seeks to provide (near?) 100% source compatibility, so those warts need to work the same.


So, we'll have to run every fucking Python project in a separate Docker image as usual, then. Got it.

https://www.youtube.com/watch?v=PivpCKEiQOQ


I wonder if there's a happy medium between the two. Mojo definitely didn't need $100M in funding... but maybe Chapel could use $100K or so for better marketing.


Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.

Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).

But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.


The idea of a closed-source Nim in 2023 is bananas. Getting VCs to pay to make you famous instead of just contributing to better projects is certainly an option, I suppose.


Lattner wrote LLVM and created Swift. It's pretty easy to see how he's able to secure $100M in funding, probably at pretty good terms, even in a tighter market. Honestly, given the current VC climate, anyone who can land such a funding numberwang would be foolish not to. It's good to have a long, long runway for something like this.


Chris Lattner is already famous


It's because making money with a programming language is very hard. They are trying to build a moat around their product while courting people who abhor moats to use it.


It's ironic, isn't it? On one hand we lament the plight of the OSS developer getting a paltry amount of donations relative to their value added; on the other hand, if someone dares to charge for their work, out come the pitchforks.


Yeah, it is the only profession where a group of people expect to be paid while actively refusing to pay for the work of others that makes their own work possible in the first place.


I'm fine with charging for your work, but I have a very high bar for the value you create if I am putting man-years of effort behind building on your ecosystem.


Yup and that’s the thing, is that programming languages require a lot of work, more than anyone is willing to fund (these days most accurately measured in man-decades). Except corporations and sometimes VCs, and that’s why we most of us use the same handful of mostly identical (in the space of PLs they are all in the same corner) corporate-sponsored programming languages.


The thing is, though, it used to be the norm to pay for software. At some point, I think with the rise of "free" social network platforms that gave you tons of features, people began to expect high-quality software for free (obviously not realizing they are the product).


Well, that's fairly similar to how Swift was done, so not very surprising.


And similarly, Swift has failed to get any traction outside the Apple garden.


That's a separate issue. Kotlin was not immediately open sourced and is doing fine.


It helps having a platform owner saying "thou shalt write Kotlin", while using the IDE from the company that owns Kotlin.

Likewise, most seats on the Kotlin Foundation are held by JetBrains and Google.


Not even close. Swift was created inside the biggest corporation in modern history; this is just a grab at AI VC money, with some strange Silicon Valley (TV show) vibes.


This was re: going closed source until you get to something meaningful and then open sourcing. Not a funding model.


> biggest corporation

Nit: it's worth the most, or has been at times, but Apple is not a big company or organisation compared to others, and I think the number of people involved is perhaps the more useful comparison here than market cap. They're roughly an order of magnitude below the top end.


It is interesting, given how much hype Mojo has, especially when there are many similar projects for Python, like Cython, Numba, Taichi Lang, PyPy, etc.

Mojo must have better marketing.


Yeah, especially given how Swift for TensorFlow ended up.

I keep looking into Julia and Chapel instead.


I’m not saying a lot of the comments aren’t without merit but I was actually surprised to see so much negativity. There are only a handful of funding approaches for languages. I would also say most of the funding they raised is going towards the platform and inference accelerator.

I can also buy the argument that setting up all the dev-rel infra needed to make an odd language project successful is a lot of work and possibly a distraction. Even though the source isn't currently open, they're certainly developing it in the open.

Maybe it’s a combination of a bunch of these things that’s throwing people off?


Initial reactions were much more positive (https://news.ycombinator.com/item?id=35790367). I think their marketing is putting people off.


Ok, even after reading the initial announcement for Mojo on their blog, I still have no clue what this is. A Python-like language, but closed source? A language to build prompts? Something else?

If I were on a PC I would have to ask GPT4 to summarize it for me.

How is it so difficult to write a two sentence elevator pitch that anyone outside of your bubble can understand?

If Feynman could do it, you should be able to do it too.


I had never heard of it until I was listening to this podcast and the creator of Mojo was their guest. I think the episode clarifies it pretty well, worth a listen

https://syntax.fm/show/679/creator-of-swift-tesla-autopilot-...

I would start at the jump link "12:13 is Mojo a programming language"

They also cover its plan for becoming open source (at 37:36)


Is it that hard to google "mojo programming language"? Have we finally gotten so lazy that either answers need to be in the link itself or we reach for chat bots to fill in the gaps?

Second result on google: https://docs.modular.com/mojo/why-mojo.html


Mojo feels like an attack IMO mostly because I don't trust anyone.

Promise greatness but deliver vague closed source breadcrumbs.


In this case, the co-founder behind Mojo is "former Google and Tesla employee[1] and co-founder of LLVM, Clang compiler, MLIR compiler infrastructure[2] and the Swift programming language", so he has some credentials and ability to deliver.

https://en.wikipedia.org/wiki/Chris_Lattner


We know.

But I too have a hard time trusting this.

First, because I feel that Swift has been a boondoggle that everyone was forced to accept, and now it's just normal. I'm still not convinced Steve Jobs would have given Swift the go-ahead (release).

From the start, Swift felt like a language that didn't know what it wanted to say, with a big part of that coming from taking a kitchen-sink approach to syntax and features. Looking at the same code, 4 lines of Python become 12+ lines. I'm not sure who this attracts.

More importantly, he clashed at Tesla. He left a gaping hole in the Swift for TensorFlow effort at Google. He also left Apple and promised that he would still be involved in Swift, until he got pushed out.

It's not that he isn't productive or brilliant, I'm just not sure I trust someone with a track record of being volatile as a great maintainer of a platform.

Overall, this appears to have the same markings of the Cappuccino web framework project.


Steve was still around when Swift started as a toy project, and most likely gave it a go.

Objective-C 2.0 was released in 2006, and Steve Jobs died in 2011.

As per Chris Lattner's interviews, Objective-C 2.0 and later improvements were the groundwork for a good interoperability story between Objective-C and the new language prototype that would eventually be known as Swift in 2014.

Note that Apple already toyed with Java as a possible Objective-C replacement, back when they weren't sure whether Apple folks educated in Object Pascal and C++ would be keen on adopting Objective-C. They later tried Ruby (MacRuby), which didn't go anywhere; the developers left Apple and founded RubyMotion, which is still going.


> Steve was still around when Swift started as a toy project, and most likely gave it a go.

You can't say for a fact if he "gave it a go", and neither can I.

> As per Chris Lattner interviews, Objective-C 2.0 and latter improvements were the ground work to have a good interoperability story between Objective-C and the new language prototype that would be eventually be known as Swift in 2014.

Even if Lattner says this, it still seems as bogus to me as saying it's "objective c without the c." They're two completely different things. Swift is a language. Objective-C is a runtime on top of a language, disguised to look like a language. They only say this stuff because people probably would have had a meltdown if they had been blunt about what Swift actually meant: foundational incompatibility. I'm sure their PR was informed by what happened with the massive amount of software that got lost during the Classic Mac OS to Mac OS X needs-to-be-rewritten transition.

To the point: I'm not sure what the syntax sugar added in the 2.0 release has to do with the fact that Swift still relies on message passing to access Foundation et al. MacRuby and PyObjC used those same mechanisms to access the same stuff.

> Note that Apple already toyed with Java as possible Objective-C replacement, when they weren't sure if Apple folks educated in Object Pascal and C++ would be keen in adopting Objective-C

We know. Java was also just buzzworthy to include in your OS back then.


Their keynote at the LLVM conference looked quite substantial.


You never met a salesman who believes in the product they're hired to sell?

A lot of life's bullshit is people convinced of their own bullshit.

I know I'm likely to be very wrong here, and they may easily remedy that. I hope they do, because I really like the ideas they've presented.


Something about Mojo and the company behind it really rubs me the wrong way.


“Let’s run a matrix multiplication example using matmul.mojo. On My Apple MacBook Pro M2 Max, I get about 90,000x speedup over pure Python”


A very unfortunate choice of comparison on their end. People use numpy, they don't write matrix multiplications in pure python. This feels like an own-goal, because bad benchmarks reduce trust.


Actually, the point here is that with Mojo you no longer need to use C/C++ libraries for things that are performance critical, since it will just compile down to native speed if you need it to. So you can write nice Python code (well, with the Mojo additions) and it can be fast, optimized, and able to use things like GPUs and multiple CPU cores, and it won't be bogged down by things like the global interpreter lock. Kind of nice.

The point of this benchmark is making it clear that things you wouldn't dream of doing with python are fine in mojo.

People use numpy because python is stupidly slow. Mojo isn't. You can still use numpy of course. But you don't have to.


I see your point but respectfully disagree with the conclusion. Comparing to numpy, not native python, would be the right move. Again, I don't write matmuls in pure python. I do compute matmuls in numpy. I don't actually know whether numpy is 90,000x faster than pure python, so this benchmark is not useful for me.

If they want to show all 3: mojo, numpy, and pure python, then that might be the best of all worlds. They could brag about being 90,000x faster than python, while at the same time showing the actual slowdown of using a pure python-like language (mojo) compared to a compiled numpy library. Let's say the mojo code ends up being 0.5x as fast as numpy; that would still be a pretty great tradeoff for being able to do it all in one language. If you're a python programmer and want to do something that isn't possible with the existing compiled libraries, this would be a good sell. To me, that's still the real comparison of interest.
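For what it's worth, the pure-Python-vs-numpy half of that three-way comparison is easy to measure yourself. A rough sketch (the 128x128 size is arbitrary, and the ratio varies wildly by machine; Mojo isn't shown since it isn't part of the measurement here):

```python
import time
import numpy as np

def matmul_py(a, b):
    """Naive pure-Python matrix multiply over lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

n = 128  # small enough that the pure-Python version finishes quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_py = matmul_py(a.tolist(), b.tolist())
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
c_np = a @ b  # dispatches to BLAS under the hood
t_np = time.perf_counter() - t0

assert np.allclose(c_py, c_np)
print(f"pure Python: {t_py:.3f}s, numpy: {t_np:.5f}s, ratio: {t_py / t_np:.0f}x")
```

Even this toy harness makes the point: the interesting number for a numpy user is the Mojo-vs-numpy column, not the Mojo-vs-pure-Python one.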


That's cool, but this runs contrary to the drop-in interop they're touting. It's a pretty confusing initial benchmark to show off. On the Mojo Overview page, they show some numpy+Python using np.max, then they...rewrite np.max using their SIMD parallel map magic?

So is this meant to replace vanilla python matmul (which nobody uses IRL)? No? That was just a benchmark to show off their compiler tricks? Okay, how does it fare against numpy (which is actually used)? Well, it's faster? But I have to write little wrappers around basic functions like np.max to parallelize them myself? Shouldn't that just be in your std lib / invisible to the programmer? I thought it was supposed to be a drop-in speed improvement...

I don't really get it. Am I stupid? Maybe I'm stupid.


> People use numpy because python is stupidly slow.

People use Python because numpy isn't slow! It works quite well for its domain.


Well, kinda... Numpy is rather slow relative to what you can build with a pure C/C++/Fortran/Rust pipeline. Its API usually prevents memory reuse, putting memory allocations on the critical path. You can't fuse operators. And all of its operations aside from calls into BLAS (for example, its ufunc elementwise processing) are single-threaded.

People use numpy because of the Python ecosystem and all the domain-specific libraries it is compatible with. It is very fast relative to Python and provides a stable, easy-to-use array API.
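To illustrate the allocation point above: numpy's ufuncs do accept an `out=` argument that lets you reuse buffers, but you have to opt in explicitly, expression by expression (a minimal sketch; array sizes are arbitrary):

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
out = np.empty_like(a)

# Default API: each sub-expression allocates a fresh temporary array.
c = a * b + a  # allocates a*b, then allocates the sum

# With explicit `out=`, a preallocated buffer is reused,
# keeping allocations off a hot path.
np.multiply(a, b, out=out)
np.add(out, a, out=out)

assert np.allclose(c, out)
```

Even then the two ufunc calls aren't fused into one pass over the data, which is the kind of thing a compiler (Mojo, Numba, etc.) can do automatically.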


My theory is there is a "persona" they are targeting with their marketing and it's not someone who knows about optimizing ML code.


That’s just insulting to the intelligence of their potential users (presumably, smart ML engineers that can see through that gimmick of a benchmark)


That's the reason for my comment, if they were targeting smart ML engineers that can see through the gimmick, you'd expect them to market differently.


How many people fall into the intersection of “wants to do matrix multiplication really fast, and willing to pay for it”, “only knows pure python and can’t touch numpy”? Is that segment of the market worth investing $100M into?


https://archive.ph/tPOdx (original website requires JS and cookies to be enabled - which I don't have)


(Totally off-topic comment.) Couldn't help but notice that the fire character playing with the laptop looked a bit AI-generated. Then, looking closely, it definitely is.


How can you tell?


One of the things you will start to notice after seeing loads of these is the wrong perspective. There are a bunch of oddities too, other than the keyboard or fingers.

It is difficult to explain, but let me try. It looks like the fire is facing the laptop screen, but look closely: is it really facing the laptop screen, or you? It is so confusing, you can never tell.

If this were actually a 3D rendering, all lines would follow the perspective from the camera in a certain direction. Look at where the keyboard is going. It seems to be going upwards somewhere. If you extend the top and bottom edges of the keyboard behind the screen, those lines won't make a correct rectangle.

Look at the yellow part of the flame just blending into the red/orange near the eye on the right side of the image. Try following any shadows or edges of things. They are often wrong.


Also, the pupil-like white dots are not in the same place in the two eyes.

All of these could have easily been fixed with a bit of time in Photoshop/image editing software, which leads me to believe it has not been retouched at all and is a direct output from the AI. Quite impressive.


Just zoom in on the keyboard. It’s pretty characteristic of AI generated images.

Edit: Just saw the other comments saying the same, whoops


After seeing a lot of AI art, you start to notice things. It's like what happened with Photoshop years ago: a lot of people can identify whether an image is photoshopped just by looking at it. Neither is an exact science, so there are still false positives and false negatives. As generative AI progresses it will become harder to identify, while with Photoshop it depends on the user's skill.

In this case, what makes me think it's AI generated, is an inconsistent mix between a 3d style render and digital painted art. The way it's lit. And after I already suspected, I looked at the keyboard because it's usually where AI fumbles the bag.


The keys on the keyboard and the index finger on the (character’s) left hand immediately jumped at me. They’re distorted.


'I can tell from some of the pixels and from seeing quite a few shops in my time.'


The fingers look a bit weird and the keys are totally off. Still an OK illustration though. There are far worse being published.


Immediate giveaway is that the fireball has a lazy eye. Then you can look at the keyboard.


[flagged]


what does this mean?


They are probably alluding to this line:

“Let’s run a matrix multiplication example using matmul.mojo. On My Apple MacBook Pro M2 Max, I get about 90,000x speedup over pure Python”


matrix multiplication? but i didn't get the joke either


[flagged]


The vibe to me is targeting people who pivoted from crypto to AI.


The heavy use of emojis is really nauseating.


Cool, but why would I use consumer electronics for my server needs?



