The starting assumption when crossing any[0] international border is that you don't have a right to enter the country, until you prove otherwise.
People from wealthy Western countries are generally used to just waving their passports and passing through, but that is not nor has it ever been some kind of automatic right. People are questioned and denied entry all the time, should they fail to satisfy the border official of their eligibility for entry under the exact terms of their visa (or the relevant visa waiver program).
I'm very sympathetic to the idea that border officials should have less discretion to deny people entry without very solid reasons, but if you start talking about 'innocent until proven guilty' at a border today, you're not going to have a good time.
[0] International agreements can of course modify this default assumption, e.g. Schengen.
People here are so freaking annoying and ignorant about how immigration works in any country.
You are right: for immigration, it's your responsibility to prove that you are not coming in to violate the terms of entry. The onus is not on them to prove that you are coming to work on a tourist visa.
LibreOffice is pretty bad. It doesn't tend to hold up past very basic use, and that use case has long since been taken by Google Docs. For serious work - e.g. legal services word processing, or many accounting workflows - LibreOffice doesn't hold a candle to Excel and Word.
The problem with desktop Linux software is that it generally doesn't have the demands put on it that would push it to develop competitively, so it tends to get stuck spinning in circles for decades. LibreOffice Writer isn't really a competitor to MS Word - it's more of a bloated Wordpad. Ditto The Gimp, etc. It's just not there.
Well, at least Google Docs works just fine on Linux; I use it all the time, so I don't think that alone would be a deterrent to switching to Linux.
I don't work in law or accounting, and I can only speak to my own experiences, but I have generally thought that LibreOffice Writer is fine. Even back when it was OpenOffice, I edited and formatted a weekly newsletter in high school with it, and it never really bothered me. I admittedly don't use it a lot now, I'm one of those irritating LaTeX people. I've heard that OnlyOffice is better, but I haven't used it yet.
I definitely agree that LibreOffice Calc is considerably worse than Excel though. It works ok most of the time, but even the free online version of Excel is generally better.
I think that Krita is pretty competitive though. I don't work in art but I've talked to people who do and they've said Krita is pretty ok, and at least one person uses it for paid work. There's also Blender, which is (I think) being used for "real" movies now?
Shopify made the new Ruby JIT compiler. [0] They're on the Rails Foundation, as is 1Password, among others.
Stripe is still in on Ruby too; they're behind the Sorbet gradual type system, and Ruby is a first-class language for working with their API.
I always hear the stereotype of companies starting on Rails and migrating later, and I think it sticks around because it makes some level of intuitive sense, but it doesn't actually appear to happen all that often. I guess successful companies don't see the need to rewrite a successful codebase when they can just put good engineers to work on optimising it instead.
> Why is the ruby/rails community so weird. Half of us just quietly make stuff, but the other half seems to need to sporadically reassure everyone that it's not dead, actually.
Half the net merrily runs on PHP and jQuery. Far more if you index on company profitability.
> Not everything needs to have bloody AI.
Some things are an anti-signal at this point. If a service provider starts talking about AI, what I hear is that I'm going to need to look for a new service provider pretty soon.
In an era where everything and their mother is getting rewritten in Rust, surely we should be able to get a proper, fully featured, batteries included web framework out of it too. But it seems like all Rust web frameworks are either extremely low level (so low level that I don't see their value add at all), or extremely unfinished. Last I checked, even things like "named routes" or TLS were exotic features that required manual (and incompatible!) workarounds.
It's kind of fascinating to me that all the frameworks in 'slow', dynamic languages (Rails, Laravel) are feature packed and ready to roll, and everything in 'fast', static languages is so barren by comparison. The two seem almost exactly inversely proportional, in fact.
A batteries-included web framework in Rust with comparable built-in functionality to Rails, and comparable care for ease-of-use, would be a game changer.
As a rustacean, I completely agree. A big chunk of the Rust ecosystem is obsessed with performance, and sees static typing as the way to achieve that. This approach generates extremely efficient code, but blows up compile times and creates a messy hell of generics (and accompanying type check errors).
I think there is a space for a more dynamic and high-level web framework in Rust, with an appropriate balance between leveraging the powerful type system and ease of use.
Honest question from someone working on a non-negligible Rails codebase: what would be my gains, were I to switch to Elixir?
I've watched Elixir with much interest from afar, I even recently cracked open the book on it, but I feel like my biggest pain points with Ruby are performance and lack of gradual typing (and consequent lack of static analysis, painful refactoring, etc), and it doesn't really seem like Elixir has much to offer on those. What does Elixir solve, that Ruby struggles on?
Performance is a complex story. Elixir is very good at massive amounts of parallel computing, and uses this to handle each request (or socket) in its own contained manner, which nets you some simplicity in designing and scaling systems. However, it's not very good at single-threaded off-the-block performance (but neither is Ruby).
Typing is coming, some is already here, and if you're impatient you can use Dialyzer to get halfway there. But in my experience you need it less than you'd think. Elixir primitives are rather expressive, and due to the functional nature there really isn't any "new" data structure that isn't easy to dive into. And with each function head being easy to pattern match in, you can dictate the shape of data flowing through your application much as you would in a typed language.
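Elixir's function-head matching has no direct Ruby equivalent, but Ruby's own case/in pattern matching (stable since 3.0) gives a rough flavor of what "dispatching on the shape of the data" means. This is only an illustrative sketch in Ruby, reusing the add_comment/published names from the Elixir example discussed further down the thread, not actual Elixir:

```ruby
# Dispatch on data shape, not explicit conditionals. Each `in` clause
# plays the role of a separate Elixir function head.
def add_comment(post, comment)
  case post
  in {published: true, comments: Array => comments}
    # Matched: a published post with a list of comments.
    {**post, comments: comments + [comment]}
  in {published: false}
    # Matched: an unpublished post; the shape itself rejects the call.
    raise ArgumentError, "cannot comment on an unpublished post"
  end
end

post = {published: true, comments: []}
add_comment(post, "Nice article!")
# => {published: true, comments: ["Nice article!"]}
```

In Elixir the same idea is written as multiple clauses of one function, and the match failure happens at the function boundary rather than inside a case.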
The IDE story isn't great, but it's getting better. There are a few LSPs out there, and an official one coming soon. And I'd say all of them beat Solargraph handily.
But most of all I'd say that, since it's a bit more strict, Elixir saves you and your coworkers from yourselves and each other. There are still several ways to do something, as in Ruby, but going out of your way to write very cutesy code that the next programmer will loathe is more difficult. Not impossible, but harder. And with the rock-stable concurrency systems in place, a lot of the impetus to come up with clever solutions isn't there.
Re types, I think what I really want is just comprehensive static analysis. Coming from Rust, where I can immediately pull up every single call site for a given function (or use of a struct, etc.), I find refactoring Ruby/Rails code to be comparatively very painful. It is several orders of magnitude more laborious and time-consuming than it should be, and I just don't find the justification for that cost convincing - I'd trade every last bit of Rails' constant runtime metaprogramming cleverness for more static analysis.
What I like about Rails is its batteries-included nature, but I honestly could do without those batteries materialising via auto-magic includes and intercepted method calls. I appreciate that that's just the culture of Ruby though, so I don't expect Rails to ever change.
The lack of cutesiness in Elixir sounds lovely. I don't know if functional approaches could make up for a lack of typing for me; I think I'd need to try it. I've used and enjoyed Haskell, but of course that's very strongly typed.
Elixir has had some static analysis for a long time; you can use a command to see all call sites of a module and function. It's useful, and most of the LSPs use (or used to use) it. The newer versions are also adding various hooks to the compiler to allow for better tooling.
As for magic, Elixir can have what looks like magic, but upon closer inspection it's nothing more than macros, which are generally pretty easy to decipher and follow. It has minimal automatic code generation; what it has is mostly in the form of generators you'd run once to add things like migrations.
I have the same experience with typing in Elixir. It's hard to explain without experiencing it yourself, but the dynamic typing just doesn't feel like as big of a deal as it might in other languages. Elixir's guardrails (such as pattern matching in function heads, which you mentioned) get you most of the benefits - and you still get the convenience and simplicity of a dynamic language. It's a great balance.
I'm looking forward to the upcoming gradual type system - it can only be an improvement - but I would still encourage people to try Elixir now, and let go of their preconceptions about static typing.
I cracked open an Elixir book last night, and with the benefit of a few chapters, I can see how Elixir's pattern matching can obviate some of the issues I have with purely dynamic, Rails style programming.
I also note that there appears to already be more static typing going on than I realised. In your add_comment/2 code, for instance, you focus on the {published: true} pattern matching. That is very neat, but what stands out more to me is that all clauses of that function require BlogPost, a struct type.
Am I right in thinking that every instance of BlogPost type must be constructed explicitly in your code? I.e. that every possible instance of BlogPost in the code base is knowable at compile time, along with its entire life cycle?
Or does Elixir partake of the horror that is duck typing, where any conforming untyped map of indeterminate provenance will pass the guard check for a BlogPost?
> Would you like another undefined method exception on that NilClass?
Don't take my word for it, but IIRC structs are implemented as maps with a __struct__ key holding the struct name, and that's used to implement checks and balances at different levels of compilation, linting, and so on.
In practice I find that I hardly ever need to think about things like this. A few times I've done macro expansion to peek under the hood and figure something out but that's partially Lisp damage, I could probably just as well have read some documentation.
> Am I right in thinking that every instance of BlogPost type must be constructed explicitly in your code? I.e. that every possible instance of BlogPost in the code base is knowable at compile time, along with its entire life cycle?
Almost. You can create a struct dynamically at runtime like this:
struct(BlogPost, %{title: "The Title"})
# => %BlogPost{title: "The Title"}
… but you rarely need to. In fact I'm not sure I've ever used `struct/2` in real life. 99.9% of all the structs you ever create will have their types known at compile-time.
> any conforming untyped map of indeterminate provenance will pass the guard check for a BlogPost?
Nope. In the `add_comment/2` example, if I pass anything except a %BlogPost{} to that function, I'll get an error.
While I remain haunted by thoughts of someone e.g. deserialising a YAML file into a map, which then sneaks in some __struct__ key and squeaks past the guard clause, I also appreciate this seems fairly unlikely in practice. I think I'm just traumatised by Rails. It sounds like the culture around Elixir eschews excessive cutesiness, though. Promising!
> Performance of what, exactly? Hard to beat the concurrency model and performance under load of elixir.
The performance of my crummy web apps. My understanding is that even something like ASP.NET or Spring is significantly more performant than either Rails or Phoenix, but I'd be very happy to be corrected if this isn't the case.
I appreciate the BEAM and its actor model are well adapted to be resilient under load, which is awesome. But if that load is substantially greater than it would be with an alternative stack, that seems like it mitigates the concurrency advantage. I genuinely don't know, though, which is why I'm asking.
Some of the big performance wins don’t come from the raw compute speed of Erlang/Elixir.
Phoenix has significantly faster templates than Rails because it compiles templates and leverages Erlang's IO lists. So you will basically never think about caching a template in Phoenix.
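For readers unfamiliar with IO lists: the rough idea is that rendered output is kept as a nested tree of fragments, so static template chunks can be shared rather than copied on every render, and the whole thing is only written out once at the end. A loose Ruby sketch of the concept (not Phoenix's actual mechanism, and Ruby strings don't get the same sharing benefits the BEAM provides):

```ruby
# Build a nested tree of fragments instead of concatenating strings
# at render time. The static fragments ("<h1>", "<ul>", etc.) could be
# frozen constants shared across every render.
def render_list(title, items)
  ["<h1>", title, "</h1>",
   "<ul>", items.map { |i| ["<li>", i, "</li>"] }, "</ul>"]
end

tree = render_list("Todo", %w[milk eggs])
# Only flatten/join once, at write time.
html = tree.flatten.join
# html == "<h1>Todo</h1><ul><li>milk</li><li>eggs</li></ul>"
```

On the BEAM, the write step can hand the nested list straight to the socket, so even the final join never has to happen in user code.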
Most of the Phoenix “magic” is just code/configuration in your app and gets resolved at compile time, unlike Rails with layers and layers of objects to resolve at every call.
Generally Phoenix requires way less RAM than Rails and can serve orders of magnitude more users on the same hardware.
The core Elixir and Phoenix libraries are polished and quite good, but the ecosystem overall is pretty far behind Rails in terms of maturity. It's manageable, but you'll end up doing more things yourself. For things like API wrappers that can actually be an advantage, but for others it's just annoying.
ASP.NET and Spring Boot seem to only have theoretical performance; I'm not sure I've ever seen it in practice. Rust and Go are better contenders IMO.
My general experience is Phoenix is way faster than Rails and most similar backends and has good to great developer experience. (But not quite excellent yet)
Go might be another option worth considering if you're open to Java and C#.
Thank you, I really, really appreciate the thoughtful answer.
I've written APIs in Rust, they were performant but the dev experience is needlessly painful, even after years of experience using the language. I'm now using Rails for a major user-facing project, and while the dev experience is all sunshine and flowers, I can't shake the feeling that every line I write is instant tech debt. Refactoring the simplest Rails-favoured Ruby code is a thousand times more painful than refactoring even the most sophisticated system in Rust. I yearn for some kind of sensible mid-point.
Elixir seems extremely neat, but I've been blocked from seriously exploring it by (a) a sense that it may not be any more performant than Ruby, so why give up the convenience of the latter, and (b) not having seen any obvious improvement on Ruby's hostility to gradual typing / overuse of runtime metaprogramming, which is by far my biggest pain point. I'm chuffed to hear that the performance is indeed better, that the magic in Phoenix happens at compile time, and that gradual types are being taken seriously by the language leadership.
There are three reasons to choose Elixir, or perhaps any technology: (1) the community and its values, (2) because you enjoy it, and (3) because the technology fits your use case. Most web apps fit. 1 and 2 are personal, and I'd take a 25% pay cut to not spend my days in ASP or Spring, no offense to those who enjoy it.
> You have the Python type system, and while it's inferior to TypeScript's in many ways, it's far more ubiquitous than Ruby's Sorbet.
I'm a big fan of Ruby, but God I wish it had good, in-line type hinting. Sorbet annotations are too noisy and the whole thing feels very awkwardly bolted on, while RBS' use of external files make it a non-starter.
Do you mean Ruby lacks syntactic support for adding type annotations inline in your programs?
I am one of the authors of RDL (https://github.com/tupl-tufts/rdl), a research project that looked at type systems for Ruby before they became mainstream. We went for strings that looked nice, but were parsed into a type signature. Sorbet, on the other hand, uses Ruby values in a DSL to define types. We were of the impression that many of our core ideas were absorbed by other projects, and Sorbet and RBS have pretty much gone mainstream. What is missing to get usable gradual types in Ruby?
My point isn't technical per se, my point is more about the UX of actually trying to use gradual typing in a flesh and blood Ruby project.
Sorbet type annotations are noisy, verbose, and are much less easy to parse at a glance than an equivalent typesig in other languages. Sorbet itself feels... hefty. Incorporating Sorbet in an existing project seems like a substantial investment. RBS files are nuts from a DRY perspective, and generating them from e.g. RDoc is a second rate experience.
More broadly, the extensive use of runtime metaprogramming in Ruby gems severely limits static analysis in practice, and there seems to be a strong cultural resistance to gradual typing even where it would be possible and make sense, which I would - at least in part - attribute to the cumbersome UX of RBS/Sorbet, cf. something like Python's gradual typing.
Gradual typing isn't technically impossible in Ruby, it just feels... unwelcome.
None of my customers ever asked for type definitions in Ruby (nor in Python). I'm pretty happy with the choice of hiding types under the carpet of a separate file. I think they made it deliberately: Ruby's core team didn't like type definitions but had to cave to the recent fashion. It will swing back, but I think this is a slow pendulum. Speaking for myself, I picked Ruby 20 years ago exactly because I didn't have to type types, so I'm not a fan of the projects you are working on, but I don't oppose them either. I just hope I'm never forced to define types.
I for one really like RBS being external files, it keeps the Ruby side of things uncluttered.
When I do need types inline, I believe it is the editor's job to show them dynamically, e.g. via facilities like tooltips, autocompletion, or Vim inlay hints and virtual text, which can apply to much more than just signatures near method definitions. Types are much more useful where the code is used than where it is defined.
I follow a 1:1 lib/.rb - sig/.rbs convention and have projection+ files to jump from one to the other instantly.
And since the syntax of RBS is so close to Ruby I found myself accidentally writing things type-first then using that as a template to write the actual code.
Of note, if you crawl soutaro's repo (author of steep) you'll find a prototype of inline RBS.
+ used by vim projectionist and vscode projection extension
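For anyone who hasn't seen the split in practice, here is a minimal hypothetical example of the 1:1 lib/.rb - sig/.rbs convention described above (file names invented for illustration): the implementation stays plain, uncluttered Ruby, and the signature lives in a parallel RBS file.

```ruby
# lib/greeter.rb -- the implementation, with no type annotations inline.
class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

# The matching sig/greeter.rbs would live in a separate file and,
# per the standard RBS syntax, look like this:
#
#   class Greeter
#     def greet: (String name) -> String
#   end
```

A checker like Steep then reads both files together, so the Ruby side never has to mention a type.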
Obliged to point out that spelling is always an entirely cultural artefact, and that before colour was spelt color, it was spelt colos. There's nothing more correct about older forms, or newer forms, or any other forms. What matters is what is going to be clearest to your speech community and audience.
It's fair to assume that if a Brit writes something online, it's highly likely to be a piss-take joke. We just don't waste our time writing /s at the end of every sentence.
It must be said that the Americans are rather well known for an inability to spot the satire and sarcasm that pervades our conversation here in Blighty!
S/he's wrong, simple as that. "The particular spatio-temporal version of speech that I grew up with is correct, and all others are bastardised" is not a defensible or - frankly - interesting position. Chaucer would find virtually all modern English to be debased; Bede would wince at Chaucer's English; and so on, forever.
Nothing fruitful comes from cultivating arrogance towards the language of others. It is just as much a cherished part of their cultural inheritance as yours is to you.
I find it ironic that you're making an argument about how language evolves in the same sentence that you insist on an awkward "s/he" instead of just using a singular "they" (or, if you're Richard Stallman, whatever neopronoun he fancies; I forget what it is).
And I find it totes ironical that you'd respond to a post advocating against language prescriptivism, and intolerance of other modes of speaking and writing, by trying to pick on variants you dislike. Point goes whoosh.
Also, I "insist" on nothing, you're the only one with a chip on your shoulder about pronouns here.
I agree, getting my comment flagged for nitpicking spelling in a post about nitpicking spelling is very exhausting. Even more so when the person I nitpicked replied to my comment and showed no issue with my nitpick.
Also the downvotes don't really matter, here's another comment if you wish to downvote me again in this post. Ironically my flagged comment actually had an upvote before getting flagged.
Strangely, it's the insular dialects across Britain that have become more bastardised over time. The North American English dialects are far more conservative when it comes to evolution. As this BBC article[1] says: "[...] although there are plenty of variations, modern American pronunciation is generally more akin to at least the 18th-Century British kind than modern British pronunciation. Shakespearean English, this isn’t. But the English of Samuel Johnson and Daniel Defoe? We’re getting a bit warmer."
I've heard more specifically the southern US dialect is probably closest. Not sure how deep or dirty in the south. I have about as hard a time understanding people with the US South accent as UK accents.
The colonies were acknowledged as the offspring of Britain... the United States of America is more of a chosen fraternity of the emancipated offspring after they fled the control of their former legal guardians.
The transition in spelling from "colos" to "color" did not have anything to do with culture; it correctly reflected a change in the pronunciation of the word.
English is one of few languages where the relationship between the writing and the pronunciation of the vowels is mostly unpredictable, so knowing whether a word is spelled with "o" or with "ou" does not help you to know how to pronounce it.
So for the case of English, you are right that spelling is a cultural artefact, but not for the case of most languages, including Latin.
The oscillations in the spelling of Old French were caused by the fact that French had acquired some vowels that did not exist in Latin, e.g. front rounded vowels, so the French speakers did not know what Latin letters should be used to write them, and there existed no standardizing institution to choose some official spelling.
> English is one of few languages where the relationship between the writing and the pronunciation of the vowels is mostly unpredictable, so knowing whether a word is spelled with "o" or with "ou" does not help you to know how to pronounce it.
That is true, but it's a trade-off made for other benefits. Why is there a silent "g" in "sign"? Because it provides semantic meaning - it preserves its connection to words like signatory, signature, significant, signal, etc. While English spelling doesn't always help you pronounce the word, it does help you identify its meaning. If it was spelled "sine" or "sin" (if you choose to also do away with silent "e"s in your spelling reform) that connection would be weakened or lost.
Also, this has a lot to do with the pronunciation of words changing over time and drifting out of sync with the spelling, but the spelling not changing to match the new pronunciation in order to preserve the aforementioned connection with similar words.
I don't know if the "g" in "sign" was pronounced at some point, but other silent letters today exist because the Norman scribes used them to indicate sounds that were in fact pronounced by the Anglo-Saxons at the time, such as the oft-maligned "ough" - a sound which has pretty much entirely disappeared from modern English (but as I understand can still be found in an extant distant relative: Dutch).
Does the Dutch phrase "Acht en tachtig kleine kacheltje" contain a few "ough" sounds? I am trying to figure out whether your "ough" is like "ach", "och", or like cough, rough, plough.
Spelling is in every case a cultural artefact, even for languages more phonetic than English. Such an orthography still needs to make choices about what distinctions to reflect in writing (e.g. should the orthography reflect regular and predictable allophony? voicing assimilation? final obstruent devoicing?) and there's no correct answer to this, there are only various trade-offs.
Colos to color was indeed part of a common sound change in early Latin (e.g. floses > flores), and led to new spelling, but many later substantive changes in Latin did not lead to any changes in spelling (e.g. the palatalisation of /k/ before front vowels). Similarly, English spelling used to change regularly to reflect changes in pronunciation, until Middle English, when it suddenly stopped and became largely fixed. And yet other languages continued to evolve orthographically after that point (e.g. major Czech spelling reforms in the 19th century). Why?
All of this is entirely cultural. In certain societies and at certain times, language users will prefer phonetic spellings, and in other societies and at other times they will prefer etymological ones. Sometimes spellings can evolve dramatically in a short span of time, at other times they seem eternal and utterly immovable. Language is deeply cultural.
The spelling of a word is one thing, the writing system of a language is another thing.
Of course, the writing system of each language is a cultural artefact, which differs from the writing systems of other languages for various historical reasons.
On the other hand, for most languages the spelling of a word is determined by uniform rules, which are the same for most words, with the possible exception of a small number of words, typically recent loanwords from other languages.
In such languages for most words the spelling is not a "cultural artefact", but it is determined by the rules of the writing system, which have nothing to do with that individual word.
Few if any languages have, like English, words that come from a great multitude of sources, where each source had different spelling rules, so that now, when seeing a written word, one cannot guess which spelling rules have been applied to it, unless one knows the history of that individual word.
How are spelling rules anything other than a cultural artefact? They are invented by a person or group of people and then agreed upon and implemented by a larger group. Then the rules are obeyed or not on a case by case basis by an even larger group of people. Arguably language itself is not a cultural artefact. But distinctions between languages and all attempts to describe or prescribe their use are.
This is the functional equivalent of a self-pitying LiveJournal post by a moody teen who's been called out by his friends for being a bit of a dick.
Marcan wants to use social brigading to get his way, Marcan wants the entire Linux kernel dev flow to bend for him, and, when none of his poorly presented demands get him what he wants, he is - of course - the victim here.
Asahi is neat, but it clearly isn't a working daily driver yet, and it's not abusive to make feature requests and bug reports. Discussions around Rust in the kernel are not, and can never be, an 'injustice'. In Marcan's world, everything other than vehement agreement and immediate compliance is abusive, hostile, toxic, etc. But of course, the only toxic person here is the one threatening to wield the mob to get his way.
Honestly, I'd query whether the benefit is worth the cost. I'll take average code from well-adjusted anons over clever code from bullying, hyper-online micro-influencers any day of the week.