I took a look at Alfred Workflows when we first started building this a few months back, but couldn't figure out a way to implement the local crawling and indexing functionality, especially for online services. The alternative would have been indexing and storing data in the cloud, but we wanted to prioritize privacy. We wouldn't use a service that crawled our personal data and stored it in the cloud, so it didn't seem fair to ask anyone else to do so!
I always take a full disk snapshot before big upgrades like this (boot into recovery and it's straightforward with a USB disk, or use Carbon Copy Cloner), but yep, updates in the Apple ecosystem are (knock on wood) very painless.
Back in the day, before things like Homebrew, it was far worse... the days of MacPorts and mod_python to run your Django app are fortunately behind us.
MacPorts has a pretty slick migration feature now. Not sure about Django, but this is the smoothest migration I've experienced with MacPorts in over 15 years.
Times have changed completely since... 2006-2007 when this was relevant. If you are still using mod_python in 2024 please seek a therapist specializing in trauma.
I did something similar for finding which outlet a breaker went to. I connected my MacBook to the outlet, AirPlayed to the TV so it was loud, then had it yell if the power was disconnected:
while ! pmset -g batt | grep -q 'discharging'; do echo waiting; sleep 1; done; say disconnected
A billion years ago I piped say and various noise files into ffmpeg to make audio that sounded like numbers stations. I don't think it would run anymore, but it was a lot of fun :D
Back in my Windows and PowerShell days, this was my favorite "security reminder" prank. Everyone who forgot to lock their machine and walked away got this script executed on their machine.
TL;DR: it schedules a task called "Security Reminder" that plays "I'm watching you" via voice synth every 30 minutes.
This is also a terrible pattern in general that permeates every corner of the tech blogging world. Stop `doing` this with your `posts`, where every other word gets tokenized to stand out from the background text.
If you are going to do this, make sure it is a very subtle treatment of the text.
It looks like a close copy of Erlang APIs, albeit with the usual golang language limitations and corresponding boilerplate and some additional stuff.
Most interesting to me is that it has integration with actual Erlang processes. That could fill a nice gap, since Erlang is lacking in some areas like media processing - so you could use this to handle those kinds of CPU-bound / native tasks.
func (a *actorA) HandleMessage(from gen.PID, message any) error {
    switch message.(type) {
    case doCallLocal:
        local := gen.Atom("b")
        a.Log().Info("making request to local process %s", local)
        if result, err := a.Call(local, MyRequest{MyString: "abc"}); err == nil {
            a.Log().Info("received result from local process %s: %#v", local, result)
        } else {
            a.Log().Error("call local process failed: %s", err)
        }
        a.SendAfter(a.PID(), doCallRemote{}, time.Second)
        return nil
    }
    return nil
}
I don’t know Go well, but this API would surely piss Alan Kay off.
Why a function that takes an Actor instead of each Actor being a type that implements a receive function? There’s so much Feature Envy (Refactoring, Fowler) in this example.
There is no world where having one function handle three kinds of actors makes any sense. This is designed for footguns.
I also doubt very much that the Log() call is helping anything. Surely the API is thin enough to inline that call.
Honestly for Erlang integration just use NIFs or an actual network connection.
That golang code is a mess, and it demonstrates just what a huge conceptual gap there really is between the two. Erlang relies on many tricks to end up greater than the sum of its parts, like how receiving messages is actually pattern matching over a mailbox, and using a tail-recursive pattern to return the next handling state. You could conceivably do that in golang syntax, but it would be horrible and would absolutely not play nicely with the golang runtime.
The ideal situation for this sort of code is to basically treat it as marshalling code, which is often ugly by its nature, and have the "payload" processing be significantly larger than this, so it gets lost as just a bit of "cost of doing business" but is not the bulk of the code base.
Writing safe NIFs has a certain intrinsic amount of complication. Farming off some intensive work to what is actually a Go node (or any other kind of node, this isn't specific to Go) is somewhat safer, and while there is the caveat of getting the data into your non-BEAM process up front, once the data is there you're quite free.
Then again, I think the better answer is just to make some sort of normal server call rather than trying to wrap the service code into the BEAM cluster. There's not actually a lot of compelling reasons to be "in" the cluster like that. If anything it's the wrong direction, you want to reduce your dependency on the BEAM cluster as your message bus.
(Non-BEAM nodes have no need to copy the tail-recursive idiom for processing the next state. That's a detail of BEAM, not an absolute requirement. Pattern matching out of the mailbox is a requirement... a degenerate network service that is pure request/response might be able to coincidentally ignore it, but it would be necessary in general.)
In my 8.5 years of Elixir practice I've found it much easier to just use a Rust NIF or, in extreme cases, publish to an external job queue. I had success with one of Golang's popular ones (River); you schedule stuff for its workers to do their thing and they publish results to Kafka. It was slightly involved, but IMO much easier than trying to coax Golang / Java / C++ / Rust nodes to join a BEAM cluster. Though I am also negatively biased against horizontal scaling (distribution / clusters), so there's also that.
NIFs have the downside of potentially bringing down the VM, don't they? It's definitely true that the glue code can be a pain and may involve warping the foreign code into having a piece that plays nicely with what Erlang expects. I messed around with making Erlang code and Python code communicate using erl_interface, and the message-handling code pretty much devolved into "have a running middleman process that invokes erl_interface utilities in Python via a cffi wrapper, then finally call your actual Python code." Some library may exist or could be written to help with that, but it's a lot when you just want to invoke some function elsewhere. I also have not tried using port drivers; the experience may be a bit different there.
Yeah, NIFs are dynamically linked into the running VM, and generally speaking, if you load a binary library, you can do whatever, including crashing the VM.
BEAM has four ways to closely integrate with native code: NIFs, linked-in ports, OS process ports (fork/exec, then communicate over a pipe), and foreign nodes (C nodes). You can also integrate through normal networking or pipes too. Everything has pluses and minuses.
Yeah, a NIF can bring down the entire OS process, but I've used quite a few Rust NIFs with Elixir and never once had a crash. With Rust you can make sure nothing ever goes down, minus stuff that's completely out of your control, of course (like a driver crash).