From Chris Lattner when asked about the date that will happen:
I would tell you if I knew. Our priority is to build the "right thing", not build a demo and get stuck with the wrong thing. My wild guess is that the language will be very usable for a lot of things in 18 months, but don't hold me to that.
It WILL be open-sourced, because Modular on its own isn't big enough to build the whole ecosystem.
Hard needle to thread. I understand the risk of getting community momentum in a direction you don't want to go, but I'm also doubtful that much of the audience they want is interested in a closed source programming language. Hoping they can thread it because I think it's a good idea and Swift is pretty nice.
I think they're just playing a larger distribution game: get people locked into Mojo, then eventually pay for their Modular engine hosting services, much the same as most open-core, VC-backed startups.
I see what you're saying, but it's pretty different from an open core analytics product or something. This is a language/dev environment and the bar is a lot higher because 1) I know it's going to be a big investment to learn the language and ecosystem 2) I'm deeply locked-in technically by writing my stuff in this language 3) the competitors are open source and at this point I'm not worried about python, numpy, typescript etc going away. My feedback is basically that despite this project looking cool, I will not be writing any Mojo until it's open source.
That shouldn't be hard, I think. My confident guess is that there will be multiple backends for the language: an open-source bare-bones one, and one that integrates with their "value add" highly scalable backend, with optimization for distributing work across GPU farms or even custom hardware.
It's hard to fund development without releasing something, and at the same time, releasing it prematurely runs the risk of entrenching undesirable features/behaviors/etc that stifle development of something better.
Releasing it closed source lets them point to something that justifies further fundraising, and allows them to control the direction of development while getting user feedback.
This isn't some esoteric language where, if Mojo vanished, you'd lose your whole project and have to rewrite it in something else. It's Python with some syntax sprinkled on top, so vendor lock-in isn't a huge concern.
They didn't open source Swift out of the gate because they wanted to guide its development according to the features and plans they had for the language. Wanting to have a firm grasp of the language's core features and implementation is reasonable. After all, I believe Lattner was unhappy with some early directions they took with Swift, but it was too late to change anything because libraries and such had already been written. I may be fuzzy on that, but that's what I seem to recall.
Also, I am not an absolutist where free software is concerned. It doesn't make the world worse to release proprietary software. Especially if you have the stated goal of eventually open sourcing it.
Yeah, this is confusing for me: I'm not an expert in numpy* but I had assumed that it would do most of those things (vectorize, unroll, etc.), either when compiled or through any backend it's using. I understand that numpy's routines are fixed and that Mojo might have more flexibility, but for straight-up matrix multiplication I'd be very surprised if it's really leaving that much performance on the table. Although I can appreciate that if it depends on which BLAS backend has been installed, that is a barrier to getting fast performance by default.
* For context, I have done some experimenting with the gcc/Intel compiler options available for linear algebra. Even outside of BLAS, compiling with -O3 -ffast-math -funroll-loops etc. does a lot of that, and for simple loops, as in matrix-vector multiplication, compilers can easily vectorize. I'm very curious if there is something I don't know about that will result in a speedup. See e.g. https://gist.github.com/rbitr/3b86154f78a0f0832e8bd171615236... for some basic playing around
I'm not sure where/how they'd be squeezing out more performance unless it's better compilation/compatibility with Apple Silicon intrinsics.
Edit: ...Is Mojo using more than one core? I'm not sure I understand their syntax and whether those are parallel constructs.
Edit2: Yeah, Mojo seems to be parallelizing, so the comparison really isn't fair. The np.config posted elsewhere shows that OpenBLAS is only compiled with MAX_THREADS=3 support, and it's not clear what their OPENBLAS_NUM_THREADS/OMP_NUM_THREADS was set to at runtime.
I'm not super familiar with Mac but I also notice that numpy here is using openblas64. I had thought the go-to was the Accelerate framework? Or is that part of it somehow? If so it would be interesting to see how that impacts performance. Of course it's all kind of an argument for something like Mojo that gives better performance out of the box. Also an argument for why Mojo would be way more interesting if it was open source.
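For anyone who wants to check which BLAS their numpy links against and pin the thread count before benchmarking, here's a minimal sketch. Note the env vars have to be set before numpy (and its BLAS) is loaded; setting them afterwards is silently ignored by most builds:

```python
import os

# BLAS thread counts must be exported before numpy imports its BLAS;
# "1" pins the library to a single thread for a fair single-core comparison.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np

# Prints which BLAS/LAPACK numpy was built against
# (OpenBLAS, Accelerate, MKL, ...), which answers the openblas64 question.
np.show_config()
```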
Just whatever you get by default with pip install numpy... Changing the benchmark to run a 1024x1024x1024 matmul instead of a 128x128x128 one does speed up numpy significantly, though:
Python 119.189 GFLOPS
Naive: 6.275 GFLOPS 0.05x faster than Python
Vectorized: 22.259 GFLOPS 0.19x faster than Python
Parallelized: 50.258 GFLOPS 0.42x faster than Python
Tiled: 59.692 GFLOPS 0.50x faster than Python
Unrolled: 62.165 GFLOPS 0.52x faster than Python
Accumulated: 565.240 GFLOPS 4.74x faster than Python
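For reference, the numpy side of that measurement is easy to reproduce. A sketch (matmul does 2*n^3 floating-point ops, so GFLOPS = 2n^3 / seconds; absolute numbers will obviously vary by machine and BLAS build):

```python
import time
import numpy as np

def numpy_gflops(n: int, reps: int = 5) -> float:
    """Measure matmul throughput for two n x n float32 matrices."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up call, kept outside the timed region
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    elapsed = (time.perf_counter() - start) / reps
    return 2 * n**3 / elapsed / 1e9  # flops per matmul / seconds / 1e9

for n in (128, 1024):
    print(f"{n}x{n}: {numpy_gflops(n):.1f} GFLOPS")
```

Running this at both sizes shows the size-dependence the parent mentions: BLAS throughput usually climbs substantially from 128 to 1024.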
If you are looking for improved performance, you will always go with NumPy + vectorization. That's what is important. So I don't know what the argument here is; am I missing something?
If you take a look at the optimized Mojo code doing the matrix multiply [1], it takes an expert to understand. It’s not just some simple for-loops in Mojo they’re comparing against.
It'll definitely be faster (less latency for a single matmul) than numpy since you're using all the cores (and from the throughput measurements, fairly efficiently). Better algorithms (like Strassens) don't take over till the sizes involved are much greater than this benchmark is doing.
In a single-threaded comparison to numpy (or just measuring total throughput -- many applications have lots of slightly smaller matmuls they can do, which makes it trivial to parallelize without having to parallelize each matmul, and throughput increases slightly when you do so) though, the details start to matter. Numpy is bad with small dimensions (hasn't optimized for them at all really, and overhead moving data from a Python context to a Numpy context starts to dominate), and performance can vary 10-50x just based on whether you've set up an optimized BLAS library for it to link to or not. Mojo side-steps some of that because it provides the fast primitives you need in the language itself and doesn't present the opportunity to execute more slowly. Single-core Mojo shouldn't be meaningfully faster than a properly installed single-core Numpy on large matmuls, and the given implementation should be meaningfully slower on large enough problems.
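The fixed per-call overhead described above is easy to see by timing tiny matmuls, where crossing from Python into numpy dominates the actual arithmetic. A sketch (exact numbers vary wildly by machine):

```python
import time
import numpy as np

def per_call_seconds(n: int, reps: int = 2000) -> float:
    """Average wall time per n x n matmul call."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    a @ b  # warm-up
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    return (time.perf_counter() - start) / reps

tiny = per_call_seconds(4)            # almost pure dispatch overhead
big = per_call_seconds(256, reps=50)  # mostly actual arithmetic

# A 4x4 matmul is ~260,000x less arithmetic than a 256x256 one,
# but it is nowhere near 260,000x faster per call: the fixed
# Python-to-numpy dispatch cost puts a floor under the tiny case.
print(f"4x4:     {tiny * 1e6:.1f} us/call")
print(f"256x256: {big * 1e6:.1f} us/call")
```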
I don't really care for the benchmark though. It's potentially okay at showing how easy it can be to write fast code, but it comes across as being presented to show how much faster mojo is than Python. That latter is misleading for at least a couple reasons:
- By some magic, my $300 old dev laptop (a Swift 3) is 6x faster than their brand-new M2 Pro Max with a vanilla Python triple for-loop. Is Mojo adding some overhead as it runs that benchmark? Is something wrong with their Python installation?
- Many of the optimizations they applied in Mojo apply just as well in vanilla Python. Tiling, parallelization, ... Some of those have a higher ratio of improvement in Python than in Mojo (depending on some fiddly GC details) because their purpose is to cut back on cache/page/... misses enough to make the problem compute-bound rather than IO-bound, and the Python representation takes enough extra space that the benefits accumulate faster. The serialization overhead for stdlib parallelization is dwarfed by the matmul cost, so it doesn't wind up mattering much that you have slow copies all over the place, and you really do find yourself bound by the interpreter's rate of interpreting.
Like, it's still faster than Vanilla Python by a lot, and it's neat that the code is so easy to write, but 90k isn't the speedup I'd headline with.
To be fair, I think their point is just that you can write fast code easily in Mojo, and matmul is something easy to understand, so it makes a good case study. The optimization primitives are fairly intuitive, so presumably you should be able to apply the same naive approach (code it, slap on optimization primitives) and get decent speedups in less well studied domains.
So it seems to be behind a sign-up. Is this just a temporary measure to have some notion of "control" while it's still being developed, or is this how they're positioning it? Is the project interesting enough that people will jump through hoops to get it?
I didn't sign up, but I'm hazarding a guess that it's binary-only also?
I skimmed your post and I wonder why Mojo is focusing on such small 512x512 matrices? What is your thinking on generalizing your results to larger matrices?
I think for a compiler it makes sense to focus on small matrix multiplies, which are a building block of larger matrix multiplies anyways. Small matrix multiplies emphasize the compiler/code generation quality. Even vanilla python overhead might be insignificant when gluing small-ish matrix multiplies together to do a big multiply.
Except for having to follow their "manual installation" directions, everything is working great on my Apple Silicon Mac.
I have had some frustration with Mojo in that their standard libraries, as far as I have found, lack a lot of general programming support, like file I/O, etc. Are we meant to import the Python package and just use that?
Yes, you are meant to import the Python package and just use that. Mojo is a Python superset. It speeds up math but then relies on CPython for grunt work.
My experience is that most Python programmers would consider "pure Python" to basically be using the standard library.
There's also the issue that when you're doing complicated work with NumPy and you start looping, you revert to slow (pure) Python because you are crossing the NumPy/Python interface. Tools like Cython, Numba, and probably Mojo help to solve this.
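That boundary cost is easy to demonstrate: the same elementwise operation done by looping from Python versus one vectorized call. A sketch (the exact ratio depends on the machine, but the loop is typically orders of magnitude slower):

```python
import time
import numpy as np

x = np.random.rand(200_000)

# Looping: every iteration crosses the Python/numpy boundary per element.
start = time.perf_counter()
looped = np.empty_like(x)
for i in range(x.size):
    looped[i] = x[i] * 2.0 + 1.0
loop_t = time.perf_counter() - start

# Vectorized: one call, the loop runs in compiled C inside numpy.
start = time.perf_counter()
vectorized = x * 2.0 + 1.0
vec_t = time.perf_counter() - start

assert np.allclose(looped, vectorized)
print(f"loop: {loop_t:.4f}s  vectorized: {vec_t:.6f}s")
```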
Pure Python means written wholly in Python to most Python programmers in my experience. For some Python programmers this excludes the 3rd party libraries they care about.
The python/numpy boundary is a problem, which is sort of the value proposition of efforts like mojo or julia - no argument.
But nobody doing any significant numeric work would ever consider a "pure" python matrix multiply meaningful. It's disingenuous to present that as your comparison, at least without also including the way it's actually done in practice.
Honestly, it undermines their presentation with anyone involved in the area.
The difference, and the whole sales pitch, is based on that detail: you write everything in one language; there is no switch between a high-level and a low-level language.
AI developers expect these sorts of comparisons to be taken against accelerated code anyways. Modern Python NN frameworks don't spend significant time in the Python interpreter, so that's not where users would expect the bottleneck to be for their comparison.
It's like claiming that your code is faster than MATLAB's for loops. Why write gemm yourself?
Even if this language is a good idea, I worry that they don't seem to understand the audience.
But that is basically the core value proposition of Mojo. You don't need to drop down into another language to do low-level, high-performance kernels; you can do it all in Mojo. Scalable in terms of target domains (somewhat similar in that regard to Swift).
I skimmed the linked code. Is there enough info there to run the same benchmark with numpy? And then if we find out that numpy is also 90000X faster then it's fair to conclude that numpy and Mojo have the same performance on this task? I'm a benchmarking n00b, go easy on me. Just trying to see if my basic understanding is correct...
I really don't understand why they do this. I was really excited about the compilation and type-checking features, but this whole speedup thing is pretty dumb. And I know the people developing Mojo know that too. It kind of seems like some marketing team is pushing them to do all this, but their core customers are people who know what Python and numpy are, and this kind of talk just feels weird.
I think that Mojo in its current form is not on par with numpy performance (if it were, they would be saying that). Even if the performance were the same, I would still give it a try. Their whole marketing, though, is making me reconsider.
They do this because they want to showcase writing code in a single language that can do things other languages can't. Other languages are either low-level or slow, but they are high-level and fast.
There is a large overlap with Julia, yes. Both are addressing the two-language problem.
Mojo is Python-syntax-first, where they want to be a proper superset, which gives them access to the wide Python ecosystem and community. If executed well, this alone can absorb the community, similarly to how TypeScript absorbed the JavaScript community. A similar thing happened with Objective-C -> Swift, also led by Chris, which gives a lot of credibility to the whole initiative.
Julia is a proper new language: you need to learn it, use new tooling around it, the ecosystem is quite academia-skewed (e.g. writing web services is probably not the best idea), etc.
Additionally, Julia suffers from the "time to first plot" problem, which alienates a lot of newcomers who are not familiar with, or simply don't want to switch to, a programming mode where it becomes less of a problem (repl/notebook style, where the runtime is always active).
Both are very interesting languages, but Mojo's starting point and trajectory seem to be at a different level, i.e. adoption may be very sharp.
Julia is a language that was developed by academics, with not the best aesthetics in the implementation. I see a lot of value in starting with a clean, well engineered layered approach. If anything this is something I would expect Lattner to be able to deliver, whereas Julia while certainly powerful is tremendously messy.
PyTorch recently added support for JIT compiling numpy code [0]. And then there are libraries like Numba [1]. I wonder how Mojo compares with existing OSS Python JIT libraries like these?
I've been following for a while - I eagerly await when I can build this from source, or install without an auth token!
This is a blocker for me to ever adopt this as more than a toy language. These days, if I can't use Nix to build + deploy the whole set of requirements (I'm fine building out the packages themselves), it's basically a non-starter!
I know they are claiming that it will eventually be open-sourced. Just a bit sad that there's no timeline on that
I am a bit surprised by the negativity here. Python's ecosystem fragmentation is legendary at this point, and its lack of portability is well known. Let's face it: a lot of the "ML" applications for the next decade are going to be boring but super important applications. Stuff like self-driving labs, process modules in factories, and so on.
And the idea that you will do everything with a conda package at every point is laughable. You need a compilable language, where code can be ported over from Python very rapidly, and where ML/AI tools such as differentiability are first-class citizens. That language doesn't really exist today, but multiple billions of dollars of actual industrial applications need it.
In fact, what the negativity shows here is how skewed Hacker News is towards the bits folks, and how little they talk to the atoms side of things.
Is Mojo interactive like Python? I mean will it work well with notebook style programming? I've been looking for something that works well in that interactive mode (like Python) but also has types (like, say, Swift). Swift playgrounds have always been buggy as hell for me.
Glad I can finally play with it. Hopefully the pace of development doesn't slow down, because Rust features with Python syntax and a smarter compiler is a pretty attractive proposition (regardless of all the other goodness being promised).
It sounds like it already is. From the article: "NOTE: This guide is for Mojo on Apple Silicon-based Macs only. Mojo on Intel-based Macs can be installed via Docker containers. Select “Set up on Mac (Intel)” in the developer console for instructions."
Didn't everyone who requires their code to run fast already upgrade to M1/M2 processors?
Considering they already run on Intel it'll probably be done by someone, probably after they open up the code.
Anyone else find this Mojo hype and release cycle strange? Maybe it's because FOSS has driven so much innovation in the open for the last few years, but something feels off about how Mojo is being teased and released.
Yeah, I do. It's what ultimately discouraged me from using the language. They are keeping a tight grip on Mojo; they may promise to open source it, but that didn't stop them from doing things like preventing an online REPL for Mojo from being built [0]. Presumably because they want to use Mojo to gather as many sign-ups as possible for their business. Fair, I guess, but it leaves a bad taste in my mouth.
Or rather what it will become once the VCs who are paying for all this start the squeeze.
I'd be nervous about getting in too deep with tools from a company with what looks to me like an utterly unsustainable business model. The house of cards will have to come down, won't it? You don't want your stuff to go down with it.
IDK, been wrong before, could be here too. It's disconcerting, though.
If it gets open sourced before that happens, it'll get forked if it's good enough. Just happened recently with OpenTF. I'm really interested in it, but I'll wait until it gets put under a permissive license before getting too invested.
Same. I don’t feel strongly about project governance or specific OSI-approved license choice, but I’m not stepping on this rug while VC is still holding on with both hands.
nod.ai was just acquired by AMD for a chunk of money. I expect they hope to find a similar exit once they can show an end-to-end product: Take a model written in Pytorch, upcast it to mojo, and put it through their unified compiler stack to accelerate on <hardware>
Agreed. Here is a serious contender[0] minus all the hype and the $100M in VC money. You would expect a minimum of interest given how Mojo is received by the community, but not really in practice.
I'm sure Chapel has its merits, but one of the main selling points of Mojo is the aspiration to be part of the Python ecosystem, and so far I haven't seen any other programming language offering a similar promise, other than Python itself coupled with DSLs or other extensions for high performance.
Those interested in the intersection between Python, HPC, and data science may want to take a look at Arkouda, which is a Python package for data science at massive scales (TB of memory) at interactive rates (seconds), powered by Chapel:
I'd rather use Python if I'm in the Python ecosystem. So many attempts have been made in the past to make a new language compatible with the Python ecosystem (look up hylang and Coconut -- https://github.com/evhub/coconut). But at the end of the day, I'd come back to Python, because if there's one thing I've learnt in recent years it's this:
I don't think those fill the same niche. They're nice-to-haves on top of Python. The promise of Mojo is that it's for when Python isn't good enough and you need to go deeper, but you want the Python ecosystem and don't want to write C.
I believe the main Mojo use cases are scenarios in which you'd need dependencies anyway. Code that you can't write in Python due to performance concerns, so you'd need to call C/C++/Rust/CUDA/Triton/etc anyway.
honestly that is the main thing that makes me pretty sure Mojo will fail. Right now, the types of things it doesn't support include keyword arguments and lists of lists. The place where python compatibility really matters is C API compatibility and they are hilariously far away from that for now.
I mean, honestly, the closest language to Mojo really is Nim. In the latest Lex Fridman interview with Chris Lattner [0], when he talks about his ideas behind Mojo, it pretty much sounds like he's describing Nim. Ok, fair, he wants Mojo to be a full superset of Python, but honestly, with nimpy [1] our Python interop is about as seamless as it can really be (without being a superset, which Mojo clearly is not yet). Even the syntax of Mojo looks a damn lot like Nim imo. Anyway, I guess he has the ability to raise enough funds to hire enough people to write his own language within ~2 years, so as not to have to follow random people's whims about where to take the language. So I guess I can't blame him. But as someone who's pretty invested in the Nim community, it's quite a shame to see such a hyped language receive so much attention from people who should really check out Nim. ¯\_(ツ)_/¯
For what it's worth, I think what Mojo does do better is that it's trying to be a better Python. Chapel's syntax immediately discourages me from using it.
I wonder if there's a happy medium between the two. Mojo definitely didn't need $100M in funding... but maybe Chapel could use $100K or so for better marketing.
Chapel has at least several full-time developers at Cray/HPE and (I think) the US national labs, and has had some for almost two decades. That's much more than $100k.
Chapel is also just one of many other projects broadly interested in developing new programming languages for "high performance" programming. Out of that large field, Chapel is not especially related to the specific ideas or design goals of Mojo. Much more related are things like Codon (https://exaloop.io), and the metaprogramming models in Terra (https://terralang.org), Nim (https://nim-lang.org), and Zig (https://ziglang.org).
But Chapel is great! It has a lot of good ideas, especially for distributed-memory programming, which is its historical focus. It is more related to Legion (https://legion.stanford.edu, https://regent-lang.org), parallel & distributed Fortran, ZPL, etc.
The idea of a closed-source Nim in 2023 is bananas. Getting VCs to pay to make you famous instead of just contributing to better projects is certainly an option, I suppose.
Lattner wrote LLVM and created Swift. It's pretty easy to see how he's able to secure $100M in funding, probably at pretty good terms, even in a tighter market. Honestly, given the current VC climate, anyone who can get such a funding numberwang would be foolish not to; good to have a long, long runway for something like this.
It's because making money with a programming language is very hard. They are trying to build a moat around their product while courting people who abhor moats to use it.
It's ironic, isn't it? On one hand we lament the plight of the OSS developer getting a pitiful amount of donations relative to their value added; on the other hand, if someone dares try to charge for their work, out come the pitchforks.
Yeah, it is the only profession where a group of people expects to be paid while actively refusing to pay for the work of others that actually makes their own work possible in the first place.
I'm fine with charging for your work, but I have a very high bar for the value you create if I am putting man-years of effort behind building on your ecosystem.
Yup, and that's the thing: programming languages require a lot of work, more than anyone is willing to fund (these days most accurately measured in man-decades). Except corporations and sometimes VCs, and that's why most of us use the same handful of mostly identical (in the space of PLs they are all in the same corner) corporate-sponsored programming languages.
The thing is though, it used to be the norm to pay for software. At some point, I think with the rise of "free" social network platforms that gave you tons of features, people began to expect high-quality software for free (obviously not realizing they are the product).
Not even close. Swift was created inside the biggest corporation in modern history; this is just a grab at AI VC money, with some strange Silicon Valley (TV show) vibes.
Nit: it's worth the most, or has been at times, but Apple is not a big company or organisation compared to others, and I think the number of people involved is perhaps the more useful comparison here rather than market cap. They're roughly an order of magnitude below the top end.
I'm not saying a lot of the comments are without merit, but I was actually surprised to see so much negativity. There are only a handful of funding approaches for languages. I would also say most of the funding they raised is going towards the platform and inference accelerator.
I can also buy the argument that setting up all the dev-rel infra to make a new language project successful is a lot of work and possibly a distraction. Even though the source isn't currently open, they're certainly developing it in the open.
Maybe it’s a combination of a bunch of these things that’s throwing people off?
Ok, even after reading the initial announcement for Mojo on their blog, I still have no clue what this is.
A Python-like language but closed source? A language to build prompts? Something else?
If I were on a PC I would have to ask GPT4 to summarize it for me.
How is it so difficult to write a two sentence elevator pitch that anyone outside of your bubble can understand?
If Feynman could do it, you should be able to do it too.
I had never heard of it until I was listening to this podcast and the creator of Mojo was their guest. I think the episode clarifies it pretty well, worth a listen
Is it that hard to google "mojo programming language"? Have we finally gotten so lazy that either answers need to be in the link itself or we reach for chat bots to fill in the gaps?
In this case, the co-founder behind Mojo is "former Google and Tesla employee[1] and co-founder of LLVM, Clang compiler, MLIR compiler infrastructure[2] and the Swift programming language", so he has some credentials and ability to deliver.
First, because I feel that Swift has been a boondoggle that everyone was forced to accept and now it's just normal. I'm still not convinced Steve Jobs would have given Swift a go-ahead (release).
From the start, Swift felt like a language that didn't know what it wanted to say, with a big part of that coming from taking a kitchen-sink approach to syntax and features. Looking at the same code, 4 lines of Python becomes 12+ lines. I'm not sure who this attracts.
More importantly, he clashed at Tesla. He left a gaping hole in the Swift for TensorFlow effort at Google. He also left Apple and promised that he would still be involved in Swift, until he got pushed out.
It's not that he isn't productive or brilliant, I'm just not sure I trust someone with a track record of being volatile as a great maintainer of a platform.
Overall, this appears to have the same markings of the Cappuccino web framework project.
Steve was still around when Swift started as a toy project, and most likely gave it a go.
Objective-C 2.0 was released in 2006, and Steve Jobs died in 2011.
As per Chris Lattner's interviews, Objective-C 2.0 and later improvements were the groundwork for a good interoperability story between Objective-C and the new language prototype that would eventually be known as Swift in 2014.
Note that Apple had already toyed with Java as a possible Objective-C replacement, when they weren't sure if Apple folks educated in Object Pascal and C++ would be keen on adopting Objective-C, and later tried with Ruby (MacRuby), which didn't go anywhere; the developers left Apple and founded RubyMotion, which is still going.
> Steve was still around when Swift started as a toy project, and most likely gave it a go.
You can't say for a fact if he "gave it a go", and neither can I.
> As per Chris Lattner's interviews, Objective-C 2.0 and later improvements were the groundwork for a good interoperability story between Objective-C and the new language prototype that would eventually be known as Swift in 2014.
Even if Lattner says this, it still seems as bogus to me as saying it's "Objective-C without the C." They're two completely different things. Swift is a language. Objective-C is a runtime on top of a language, disguised to look like a language. They only say this stuff because people probably would have had a meltdown if they were blunt about what Swift actually meant: foundational incompatibility. I'm sure their PR was informed by what happened with the massive amount of software that got lost during the needs-to-be-rewritten transition from Classic Mac OS to Mac OS X.
To the point, I'm not sure what the syntax sugar they added in the 2.0 release has to do with the fact that Swift still relies on message passing to access Foundation et al. MacRuby and PyObjC used those same mechanisms to access the same stuff.
> Note that Apple already toyed with Java as possible Objective-C replacement, when they weren't sure if Apple folks educated in Object Pascal and C++ would be keen in adopting Objective-C
We know. Java was also just buzzworthy to include in your OS back then.
A very unfortunate choice of comparison on their end. People use numpy, they don't write matrix multiplications in pure python. This feels like an own-goal, because bad benchmarks reduce trust.
Actually, the point here is that with Mojo you no longer need to use C/C++ libraries for things that are performance-critical, since it will just compile down to native speed if you need it to. So you can write nice Python code (well, with the Mojo additions) and it can be fast and optimized, use things like GPUs and multiple CPU cores, and it won't be bogged down by things like the global interpreter lock. Kind of nice.
The point of this benchmark is making it clear that things you wouldn't dream of doing with python are fine in mojo.
People use numpy because python is stupidly slow. Mojo isn't. You can still use numpy of course. But you don't have to.
I see your point but respectfully disagree with the conclusion. Comparing to numpy, not native python, would be the right move. Again, I don't write matmuls in pure python. I do compute matmuls in numpy. I don't actually know whether numpy is 90,000x faster than pure python, so this benchmark is not useful for me.
If they want to show all 3: mojo, numpy, and pure python, then that might be the best of all worlds. They could brag about being 90,000x faster than python, while at the same time showing the actual slowdown of using a pure python-like language (mojo) compared to a compiled numpy library. Let's say the mojo code ends up being 0.5x as fast as numpy; that would still be a pretty great tradeoff for being able to do it all in one language. If you're a python programmer and want to do something that isn't possible with the existing compiled libraries, this would be a good sell. To me, that's still the real comparison of interest.
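The missing data point is cheap to collect yourself. A sketch comparing a pure-Python triple loop against numpy on the same matmul (`py_matmul` is an illustrative name; sizes are kept small so the Python side finishes, and the ratio, not the absolute numbers, is the interesting part):

```python
import time
import numpy as np

def py_matmul(a, b):
    """Naive pure-Python matmul over lists of lists (ikj loop order)."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]
            row_b = b[p]
            row_c = c[i]
            for j in range(m):
                row_c[j] += aip * row_b[j]
    return c

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)
a @ b  # warm up BLAS before timing

start = time.perf_counter()
c_py = py_matmul(a.tolist(), b.tolist())
py_t = time.perf_counter() - start

start = time.perf_counter()
c_np = a @ b
np_t = time.perf_counter() - start

assert np.allclose(c_py, c_np)
print(f"numpy is {py_t / np_t:.0f}x faster than pure Python at n={n}")
```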
That's cool, but this runs contrary to the drop-in interop they're touting. It's a pretty confusing initial benchmark to show off. On the Mojo Overview page, they show some numpy+Python using np.max, then they...rewrite np.max using their SIMD parallel map magic?
So is this meant to replace vanilla python matmul (which nobody uses IRL)? No? That was just a benchmark to show off their compiler tricks? Okay, how does it fare against numpy (which is actually used)? Well, it's faster? But I have to write little wrappers around basic functions like np.max to parallelize them myself? Shouldn't that just be in your std lib / invisible to the programmer? I thought it was supposed to be a drop-in speed improvement...
I don't really get it. Am I stupid? Maybe I'm stupid.
Well, kinda... Numpy is rather slow relative to what you can build with a pure C/C++/Fortran/Rust pipeline. Its API usually prevents memory reuse, putting memory allocations on the critical path. You can't fuse operators. And all of its operations aside from calls into BLAS (for example, its ufunc elementwise processing) are single-threaded.
People use numpy because of the Python ecosystem and all the domain-specific libraries it is compatible with. It's very fast relative to Python and provides a stable, easy-to-use array API.
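For what it's worth, numpy's ufuncs do offer an escape hatch via `out=`, though chained expressions still allocate temporaries by default, and there is still no operator fusion. A sketch:

```python
import numpy as np

a = np.random.rand(1000)
b = np.random.rand(1000)

# Default style: each intermediate (a * b, then + a) allocates a fresh array.
default = a * b + a

# Reusing a preallocated buffer keeps allocations off the critical path,
# at the cost of noisier code -- and the two ops still run as separate
# passes over memory (no fusion).
buf = np.empty_like(a)
np.multiply(a, b, out=buf)
np.add(buf, a, out=buf)

assert np.allclose(buf, default)
```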
How many people fall into the intersection of "wants to do matrix multiplication really fast, and is willing to pay for it" and "only knows pure Python and can't touch numpy"? Is that segment of the market worth investing $100M into?
(totally off-topic comment)
Couldn't help but notice that the fire playing with the laptop looked a bit AI-generated. Then, looking closely, it definitely is.
One of the things you will start to notice after seeing loads of these is the wrong perspective. There are a bunch of oddities too, other than the keyboard or fingers.
It is difficult to explain, but let me try. It looks like the fire is facing the laptop screen, but look closely: is it really facing the laptop screen, or you? It is so confusing, you can never tell.
If this were actually a 3D rendering, all lines would follow the perspective from the camera in a certain direction. Look at where the keyboard is going. It seems to be going upwards somewhere. If you extend the top and bottom edges of the keyboard behind the screen, those lines won't make a correct rectangle.
Look at the yellow part of the flame just blending into red/orange near the eye on the right side of the image. Try following any shadows or edges of things. They are often wrong.
Also, the pupil-like white dots are not in the same place between the two eyes.
All of these could easily have been fixed with a bit of time in Photoshop/image-editing software, which leads me to believe it has not been retouched at all and is a direct output from the AI. Quite impressive.
After seeing a lot of AI art, you start to notice things; it's the same thing that happened with Photoshop years ago: a lot of people can identify whether an image is photoshopped just by looking at it.
Neither is an exact science, so there are still false positives and false negatives. As generative AI progresses, it will become harder to identify, while with Photoshop it depends on the user's skill.
In this case, what makes me think it's AI-generated is an inconsistent mix between a 3D-style render and digitally painted art. The way it's lit. And after I already suspected, I looked at the keyboard, because that's usually where AI fumbles the bag.