That is a ridiculous statement. Nobody would even care to break this thing. Look at its base price, then look at their customers. It makes no sense to break it.
>Look at its base price, then look at their customers. It makes no sense to break it.
You're not thinking the same way the motivated pirates think. Some pirates (especially in Eastern Europe, Asia, etc) rip new releases as fast as possible to illegally re-sell or re-stream for lower prices (or show along with ads for revenue). In this way, the pirates get the revenue instead of the legitimate movie studios.
So pirate groups in combination with illegal streaming websites can be thought of as a black-market financial arbitrage. So far, the video sources they use include Blu-ray rips and webrips of Netflix or Amazon Prime Video streams.
However, the Kaleidescape players could theoretically also be included as rip sources ... if the DRM were broken. The math for profitable arbitrage isn't that ridiculous. E.g.:
- it would take only ~80 of those titles to recoup the cost of the $1995 Kaleidescape player plus the $7.95 rental fees for 80 downloads. Every download after that break-even threshold is extra money for the pirates (quick sketch below). Another bonus is pirating 4K UHD content that's not available on physical Blu-rays.
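A quick back-of-the-envelope in Python. The hardware price and rental fee are from above; the per-title pirate revenue is a number I'm making up purely for illustration:

```python
# Back-of-the-envelope pirate break-even math. Hardware price and rental
# fee are from the comment above; REVENUE_PER_TITLE is a made-up assumption.
PLAYER_COST = 1995.00       # one Kaleidescape player
RENTAL_FEE = 7.95           # per-title download rental
REVENUE_PER_TITLE = 33.00   # hypothetical pirate revenue per ripped title

def break_even_titles(player_cost, rental_fee, revenue_per_title):
    """Smallest number of titles where cumulative profit turns positive."""
    profit_per_title = revenue_per_title - rental_fee
    if profit_per_title <= 0:
        raise ValueError("each rip loses money; no break-even exists")
    # Need n * profit_per_title >= player_cost
    return -(-player_cost // profit_per_title)  # ceiling division

print(break_even_titles(PLAYER_COST, RENTAL_FEE, REVENUE_PER_TITLE))
# -> 80.0, matching the ~80-title figure above
```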
But the Kaleidescope DRM isn't broken. Therefore, the $7.95 rental downloads can't be used as a new vector for pirate releases. Of course, Kaleidescape doesn't want this scenario to happen so they're incentivized to continue paying for the DRM licensing protection.
And to recap, the specific claim I was replying to was this:
>"If you're going to allow playback on devices in "adversarial" hands (streaming, home physical media playback), it's going to be incredibly difficult to restrict copying."
Kaleidescape is one counterexample to that. So far, they have successfully restricted copying.
The issue is that the so-called "DRM" isn't just the encryption of the hard-drive files. The DRM protection also includes the watermarks in the video images that survive HDMI capture. If pirates don't want their $2000 Kaleidescape player blacklisted and bricked, they have to figure out how to remove all the forensic watermarks (the invisible low-level "noise" in the image frames) so the illegal copies can't be traced back to that specific compromised player.
It's not impossible, but it raises the difficulty threshold. E.g. using differential analysis to reverse-engineer the watermarking now requires buying TWO players for $4000 instead of just one for $2000, and paying for 2 download rentals instead of just 1, with hours of analysis work on top of that. DRM doesn't have to make piracy impossible; it just has to make the cost/effort equation unattractive. For now, the Kaleidescape DRM scheme is "good enough" that the cost/effort equation doesn't make sense for pirates.
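To make the two-player point concrete, here's a toy sketch (synthetic data, numpy assumed as a dependency; real forensic marks are far subtler than this) of why differential analysis wants a second unit: with one capture, the watermark is indistinguishable from the movie itself, but subtracting two captures of the same frame cancels the content and leaves only the per-device marks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one 8-bit video frame (same movie, same frame).
content = rng.integers(0, 256, size=(64, 64)).astype(np.int16)

def watermark(frame, device_id, strength=2):
    """Toy per-device forensic mark: +/- 'strength' noise seeded by the
    device ID. Real schemes are far more subtle; the principle is the same."""
    noise = np.random.default_rng(device_id).choice([-strength, strength],
                                                    size=frame.shape)
    return frame + noise

capture_a = watermark(content, device_id=1111)  # HDMI capture, player A
capture_b = watermark(content, device_id=2222)  # HDMI capture, player B

# With one player, the watermark noise is buried in the movie content.
# With two, the content cancels and only the combined marks remain:
diff = capture_a - capture_b
print(np.count_nonzero(diff), "of", diff.size, "pixels differ")
```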
I was talking digital. The output has to hit a device that does something with pixels at some point. At that stage it isn’t encrypted. (Think ribbon cable to LCD, or equivalent). No reason why an FPGA or some custom hardware can’t grab that, just requires engineering effort.
In practice, that is not what happens. I've been doing AI-assisted Rust for some time, and it's very convincing that this is the way. I expect it to be basically fully automated in 6 months to a year.
Rust has tons of code out there, and quality code at that, unlike JS or Python, which have an abundance of code ranging from low quality to pure garbage.
> In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.
I was there, at that moment where pattern matching for vision started to die.
That was not completely lost, though; what we learned back then is still useful in other places today.
I was an undergrad interning in a computer vision lab in the early 2010s. During a group meeting, someone presented a new paper that was using abstract machine-learning-style methods to do vision. The prof was visibly perturbed and dismissive. He could not believe that this approach was even a little bit viable, when it so clearly was.
Best lesson for me - vowed never to be the person opposed to new approaches that work.
> Best lesson for me - vowed never to be the person opposed to new approaches that work.
I think you'll be surprised at how hard that will be to do. The reason many people feel that way is that: (a) they've become an expert (often a recognized one) in the old approach, and (b) they make significant money (or derive some other benefit) from it.
At the end of the day, when a new approach greatly encroaches on your way of life, you'll likely push back. Just think about the technology you feel you derive the most benefit from today. Then imagine that tomorrow someone created something marginally better at its core task, but for which you no longer reap any of the rewards.
Of course it is difficult, for precisely the reasons you indicate. It's one of those lifetime skills that you have to continuously polish, and if you fall behind it is incredibly hard to recover. But such skills are necessary for being a resilient person.
You are acting like it was obvious that machine learning was the future, but this person was just stubborn. I don't think that was necessarily the case in the early 2010s and skepticism was warranted. If you see results and ignore them, sure that is a problem. But it wasn't until ML vision results really started dominating conferences such as CVPR that it became clear. It's all a tradeoff of exploration/exploitation.
> I cannot work with those who denounce calling out misbehavior on social media to thousands of followers, while themselves roasting people both on social media and on mailing lists with thousands of subscribers.
That person is someone called `Sima`, and their posts on Mastodon are pure gaslighting. These are the worst abusers.
Yeah, I would love an actual alternative to Ollama, but RamaLama is not it, unfortunately. As the other commenter said, onboarding is important. I just want a one-step install that works, and the simple fact that RamaLama is written in Python assures it will never be that easy; this is even more true with LLM stuff when using an AMD GPU.
I know there will be people who disagree with this, and that's OK. This is my personal experience with Python in general, and it's 10x worse when I need to figure out all the compatible packages with specific ROCm support for my GPU. This is madness; even C and C++ setup and builds are easier than this Python hell.
RamaLama's use of Python is different: it appears to just be using Python to script its container management. It doesn't need ROCm to work with Python, and it has no difficult dependencies: I just installed it with `uv tool install ramalama` and it worked fine.
I'd agree that Python packaging is generally bad, and that within an LLM context it's a disastrous mess (especially for ROCm), but that doesn't appear to be how RamaLama is using it at all.
@cge you have this right: the main Python script has no dependencies, it just uses python3 stdlib stuff. So if you have a python3 executable on your system you are good to go. All the stuff with dependencies runs in a container. On macOS, using no containers works well also, as we basically just install llama.cpp via brew.
There are really no major Python dependency problems; people have been running this on many Linux distros, macOS, etc.
We deliberately don't use python libraries because of the packaging problems.
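For anyone curious, the shape of the pattern looks roughly like this. This is a minimal sketch, not the actual RamaLama code; the image name and flags are made up for illustration:

```python
#!/usr/bin/env python3
# Sketch of the "stdlib-only launcher" pattern: the host-side script uses
# nothing outside the standard library; everything with heavy dependencies
# (ROCm, CUDA, llama.cpp) lives inside the container image.
# NOTE: the image name and flags below are illustrative, not RamaLama's real ones.
import shutil
import subprocess
import sys

def find_container_engine():
    """Prefer podman, fall back to docker."""
    for engine in ("podman", "docker"):
        if shutil.which(engine):
            return engine
    sys.exit("no container engine found")

def run_model(model):
    engine = find_container_engine()
    cmd = [
        engine, "run", "--rm", "-it",
        "--device", "/dev/dri",           # expose the GPU to the container
        "example.io/inference-runtime",   # hypothetical image with ROCm inside
        "serve", model,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_model(sys.argv[1] if len(sys.argv) > 1 else "deepseek-r1:1.5b")
```

The point is that the host side needs nothing beyond a python3 binary; all the ROCm/CUDA pain is frozen into the container image.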
I gave RamaLama a shot today. I'm very impressed. `uvx ramalama run deepseek-r1:1.5b` just works™ for me. And that's saying A LOT, because I'm running Fedora Kinoite (the KDE spin of Silverblue) with nothing layered on the ostree. That means no ROCm or extra AMDGPU stuff in the base layer. Prior to this, I was running llamafile in a podman/toolbox container with ROCm installed inside. Looks like the container RamaLama uses has that stuff in there, and amdgpu_top tells me the GPU is cooking when I run a query.
Side note: `uv` is a new package manager for python that replaces the pips, the virtualenvs and more. It's quite good. https://github.com/astral-sh/uv
One of the main goals of RamaLama at the start was to be easy to install and run for Silverblue and Kinoite users (and funnily enough that machine had an AMD GPU, so we had almost identical setups). I quickly realized contributing to Ollama wasn't possible without being an Ollama employee:
I just realized that ramalama is actually part of the whole Container Tools ecosystem (Podman, Buildah, etc). This is excellent! Thanks for doing this.
Having done lots of Minecraft modding a decade ago, it's wonderful to see that the community is still active enough for there to be inside jokes like these.
Given the size of the game, it's not an easy feat to build a Minecraft server in any language. Yet there are seven, in just Rust alone??
The protocol minecraft uses to communicate between server and client is relatively straightforward and 'dumb' (read: tolerant of missing or contradictory data), so it's quite easy to make a server that a client will connect to and work OK with. Making something that supports all the game mechanics, especially world generation (an area Mojang/Microsoft are a lot more protective of, besides) and bug-compatibility, is a lot harder.
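For a sense of how simple the wire format is: every packet is just a VarInt length prefix, a VarInt packet ID, and the fields. Here's a sketch of the VarInt codec, the one genuinely fiddly primitive, in Python (illustrative, but it matches the community-documented encoding as far as I know):

```python
# Sketch of Minecraft's VarInt, the core primitive of its wire protocol:
# little-endian base-128, 7 data bits per byte, high bit = "more bytes follow".
# Every packet on the wire is: VarInt(length) + VarInt(packet id) + fields.

def encode_varint(value: int) -> bytes:
    value &= 0xFFFFFFFF  # Minecraft VarInts are 32-bit, two's complement
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> tuple[int, int]:
    """Returns (value, bytes consumed)."""
    result = 0
    for i, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            if result >= 1 << 31:    # re-interpret as signed 32-bit
                result -= 1 << 32
            return result, i + 1
        if i >= 4:
            raise ValueError("VarInt too long")
    raise ValueError("truncated VarInt")

assert encode_varint(255) == b"\xff\x01"
assert decode_varint(b"\xff\x01") == (255, 2)
```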
If somebody could get a high-performance MC server working that supported everything except world generation, that would be immensely useful to a lot of people. Worlds are often pregenerated, and this can be done offline by an official Java instance, then given to the alternative server software that players actually connect to.
I suspect the hard part would be getting total parity with all the undocumented intricacies of mob spawning and AI, and block interactions. But if there are slight differences from Vanilla this isn't necessarily the end of the world for players. Popular server mods like Paper already tamper with some Minecraft "features" in an opinionated way and for the most part players don't notice.
Getting Redstone interactions to be bug compatible is no small task. Redstone has complex interactions with nearby blocks that are completely baffling to new players and still challenging to veterans.
Mob spawning and behavior shouldn't be that difficult, but if you want identical terrain generation you are going to be cursing life.
What would really make a third party server stand out is first class mod support.
Better performance is almost a given. Minecraft's engine has a lot of low hanging fruit that has yet to be picked despite it being theoretically a multi-billion dollar game. Just look at how shockingly CPU hungry hoppers are for example. Mob pathfinding also consumes an inordinate amount of resources and is still kinda lousy.
I get that it has lots of computing to do for something like a server with many players, but even a technically-focused server with a small number of players can easily bring the game to a crawl.
It's funny how the best way to get great performance from Minecraft is to get a CPU with great single-core performance, get lots of memory, and then use Fabric mods to optimize the game/server.
Interesting tool, but very tuned for the modern day, with few content-management options from what I can see. Thanks anyway; this is a nice addition all the same.