I briefly tried it: I don't see the point. There's no way to connect it to your online Collabora instance, or directly to Nextcloud, or to anything except your local files.
Just use LibreOffice at this point; at least it has native performance and is not an app bundled inside a browser.
> Just use LibreOffice at this point, at least it has native performance
I don't think you've ever used LibreOffice if you think it in any way fits the description "performant". It's a great project but I wouldn't exactly call it snappy.
I regularly use both LibreOffice and Collabora Online, and I can say the former is snappy compared to the latter. It can take longer to open, though, mostly on Windows.
People actually use this kind of software today?
When I read OpenClaw's description:
"The AI that actually does things.
Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.".
It does not appeal to me at all. I wouldn't trust an AI agent near my email, calendars, messages, flights, or anything else it could mess up. It sounds like a security nightmare waiting to happen.
I've seen this issue before; they're making progress, but there's no firm release date.
Plus you then have to do extensive testing to see what works in Web builds and what doesn't. I REALLY enjoy vibe coding in Godot, but it's still behind Unity in a lot of ways.
I'll add that C# has better performance than GDScript. It doesn't make a difference for most of the things you code in a game, but it comes in handy when needed.
For mathy stuff, C# is 100% going to be better. But if you need to round-trip to the engine a lot, getting stuff in and out of the .NET heap can actually hurt performance. You also have to be _really_ careful, because there are a lot of cases where you generate accidental garbage. The biggest one is string literals getting implicitly converted to StringNames every time you call a function; you can avoid this by caching them as static readonly fields, but I've run into a fair few people who never ran dotMemory or the like to see the issues.
Yes, it took me two years to see how much garbage string-to-StringName conversion generates, and what a fool I was calling something like Input.IsActionPressed("move_right") every frame (sadly, that's the example given in the input documentation).
Yup. I remember running dotMemory on a whim and being confused by all the StringNames until I noticed what was in them. The docs really should tell you to just make a cached StringName somewhere. I use a global static class for anything I want in multiple files. But I also tend to just use statics instead of autoloads if I'm doing everything in C#.
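For anyone curious, here's a minimal sketch of the caching pattern discussed above (the class and action names are just placeholders for your own):

```csharp
using Godot;

public partial class Player : CharacterBody2D
{
    // Cached once at class load. Passing the raw string literal to
    // IsActionPressed instead would allocate a new StringName via the
    // implicit string -> StringName conversion on every call, i.e.
    // every frame here.
    private static readonly StringName MoveRight = new("move_right");

    public override void _Process(double delta)
    {
        // No implicit conversion, so no per-frame garbage.
        if (Input.IsActionPressed(MoveRight))
        {
            // ... movement code ...
        }
    }
}
```

Note that StringName is a reference type, so it can't be a C# `const`; `static readonly` (or a shared static class of names, as mentioned above) is the usual workaround.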
Congrats on the launch!
How does it compare to Matomo?
Matomo is a well-known open-source analytics alternative, GDPR compliant and used by millions, and it seems to solve the same problem while being freely self-hostable and offering more features. Its cloud version is more expensive than yours, though.
C# support is great. But yes, if you need to call a library/extension written in GDScript from C# code, you'll need to write some C# bindings to make it practical.
Not sure if this helps, but this is from tinkering with Mistral 7B on both my M1 Pro (10-core, 16 GB RAM) and WSL 2 with CUDA (Acer Predator 17, i7-7700HQ, GTX 1070 Mobile, 16 GB RAM, 8 GB VRAM).
- Got 15-18 tokens/sec on WSL 2, with slightly higher on the M1. Think of that as roughly 10-15 words per second. Both were using the GPU. Haven't tried CPU-only on the M1, but on WSL 2 it was low single digits: way too slow for anything productive.
- Used Mistral 7B via llamafile cross-platform APE executable.
- For local use, I found that increasing the context size increases RAM usage a lot, but it's fast enough. I am considering adding another 16 GB (1x16 or 2x8).
Now I'm tinkering with building a RAG over some of my documents, using vector stores and chaining multiple calls.
I haven't seen how it fares on uncensored use cases, but from what I've seen, Q5_K variants of Mistral 7B are not very far from Mixtral 8x7B (the latter requires 64 GB of RAM, which I don't have).
Tried open-webui yesterday with Ollama for spinning up some of these. It’s pretty good.
Right now the minimum amount of RAM I would recommend is 16 GB. I think it can run with less, but that will require a few changes here and there (and they might reduce performance). I would also strongly recommend using a GPU over the CPU; in my experience it can make the LLM run twice as fast, if not more. Only Nvidia GPUs are supported for now, and CUDA Toolkit 12.2 is required to run Dot.
The big "Get started" button does nothing (or maybe shows a modal and closes it really quickly). This is a good way to scare users away.
The loaders are nice, though.
Yes, we found that other online video editors are not very usable: they charge to remove watermarks, can't be used on mobile, export is very slow, etc. So we made Chillin. If you want to simply add effects, keyframes, or animations, merge several clips, or crop a video, Chillin is a good choice.