If we know something really well, there are enough developers to support an ecosystem, and the talent pool in your region (however you define it) is big enough, just use whatever you want.
PHP in 2018? SURE.
C++ in 2018? SURE (You masochist)
Rails in 2018? You're damn right I would.
GO in 2018? OK. Fine. Whatever.
This is highly opinionated, but our jobs are to make stuff that works in a predictable, less risky way. Do that.
This is very true, but it doesn't mean "Use the thing that's worked in the past because that's less risky." Languages get better and tooling improves, and if you're still using one that isn't keeping up then you're increasing risk. These days if your chosen language doesn't have things like strong typing, interfaces, good debugging tools, a good compiler, etc, then you should definitely be asking yourself if it's the right tool for the job.
You shouldn't jump from one language to another every few months, but you should keep a close eye on what other languages are doing and ask yourself if switching would improve the code you write.
Quite the opposite. With strongly typed languages, it becomes safer to make changes, because the compiler will catch mistakes. On top of that, with mature strongly typed languages, there is tooling which takes advantage of the types to help you avoid mistakes in the first place.
What Python is doing with typing I think is going to be amazing though. Just not sure if I'll have time to embrace it fully within my own projects.
The benefit they offer is fewer checks so you can deploy things faster.
It's the same benefit you can get if you take your car to a mechanic who doesn't bother to check the wheels are attached properly after they've changed the brake discs.
Most CRUD (such as tracking applications) is just marshaling attributes around, not really doing much math on them. Forced type checks can be repetitive busywork and hog screen real estate.
Shouldn't that last sentence be "... if switching would improve the end product or service you are delivering"? The focus should be on what people are paying you for. Better code doesn't always mean a better product, and it isn't always a smart business decision.
Speed is pretty much the only concern here, and it isn't really that big a concern for the vast majority of web applications; even then it's less about choosing the fastest language and more about limiting yourself to a still pretty broad set of "faster" languages.
I agree though that it's not relevant for nearly all web applications (and even for those where it is, it doesn't require that everything is done in a performant language)
Yes and no. The most elegant technology solution in the world doesn't mean squat if it doesn't solve the problem the client / user needs solved.
Technology and solution are not the same thing.
I know this is tongue in cheek, but if one is doing anything related to real time graphics programming I'm not aware of any good alternatives.
Then on the browser you are stuck with GLSL 3.0 shaders, or any JS library that generates them on the fly.
But yeah, for the layer that actually talks to the metal, it is still going to be C++ for many years, especially thanks to the ongoing language improvements since the C++11 revision.
Metal and DirectX shaders are C++ dialects, and Google is driving the effort of giving HLSL first-class support on Vulkan as well.
Oh, and NVidia has designed their latest GPGPUs specifically to run C++ code optimally.
Seriously, for these kinds of tasks there's nothing better than C/C++ - and I write this as someone who is actively AFRAID of the power of C++ and uses less powerful tools whenever and wherever possible (because I don't want all that power to shoot me in the leg).
Likewise no one is using C++ with MI and 9 level deep hierarchies for game engines anymore.
C++ is one of my toolbox languages, which I know since C++ARM days, yet I am also open minded that it shouldn't be the only way.
I am old enough to remember when C, Modula-2, BASIC, Turbo Pascal, and AMOS were the "Unity" of MS-DOS/Amiga, back when
"for this kind of task there's nothing better than Assembly" and everyone had a copy of Zen of Assembly Language.
Then the refrain was updated to "there's nothing better than C" when compilers improved, and it took the PS2 with its C++ SDK to eventually force game devs to grudgingly accept C++.
Which now reigns on all things GPGPU, but it doesn't mean we should be happy with just one language.
And even though it is one of my favorite ones, I dislike the lack of modules (maybe C++23 with luck) and the copy-paste compatibility with C that endangers the software I use written with it, because on many teams static analyzers and guidelines are foreign words.
It's really easy to accidentally create a simple Linq query that underneath turns into O(N^2) and suddenly the production application crawls to a halt. Actually happened.
It's the latter 95% of the time.
Still, that's no excuse to write code that, for example, runs in O(N^2) when there's an equally readable O(N) alternative.
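As a hypothetical Ruby illustration of the point: two equally readable ways to collect duplicates, one accidentally O(N^2), one O(N):

```ruby
require "set"

# O(N^2): Array#count scans the whole array for every element.
def duplicates_quadratic(items)
  items.select { |x| items.count(x) > 1 }.uniq
end

# O(N): one pass, with a Set for constant-time membership checks.
def duplicates_linear(items)
  seen = Set.new
  dupes = Set.new
  items.each { |x| seen.add?(x) || dupes.add(x) }
  dupes.to_a
end
```

Both are a couple of lines and read naturally; only profiling (or habit) tells you the first one will fall over on large inputs.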
The effort is led by two well-known figures in the game scene.
Especially relevant because their opinions regarding C, C++ and high performance game code are also well known.
(Relies on libboost https://www.boost.org/)
Seriously, it makes putting together a simple API a breeze. I haven't had the chance to try anything more complex yet.
Is the situation in C++ frameworks very far off?
How do you handle redundancy, e.g. if your Intel Compute Stick goes offline for any reason?
How do you handle crashes of your applications?
Vulkan/C? Metal/ObjC†/C? From C libs you can probably get to bindings to other languages like D, Haskell, Rust, Go†.
† depending on your definition of "real time" (soft vs hard), both kernel-wise and GC-wise.
Vulkan and C are a sane option. Although, with C and linear algebra, one starts to miss C++ operator overloading and existing libraries like Eigen or GLM, with their graphics and computational geometry targeting APIs, really soon.
If one deploys only to OS X/iOS, Metal and ObjC, sure.
"From C libs you can probably get to bindings to other languages like D, Haskell, Rust, Go†."
With the added complexity of the binding layer, and a sub-par collection of existing examples and resources. With C++, both OpenGL and Vulkan have excellent didactic material.
I don't know about Vulkano, but Glium is much more pleasant to use than the raw OpenGL API.
Which links to:
Which pretty much states that Glium is not a really long term viable solution at the moment.
I've toyed with cool open source libraries for 20 years. I'm too old to get caught up with something new and shiny only to find out it wasn't structurally sound.
That is why C++ is a good choice. It has huge traction. The ecosystem is basically bulletproof. It has a lot of practical libraries that are so well used that, while not bulletproof themselves, it is likely they will stay alive as long as you need to compile your code.
> Which pretty much states that Glium is not a really long term viable solution at the moment.
Except what that post-mortem is _really_ saying is that OpenGL is not a long-term viable solution. The whole post-mortem is about the fact that OpenGL drivers are so buggy that you can't write portable code to target it.
Sure, that's why a generic non-platform-specific OpenGL wrapper is a bad idea, unless you have an army of engineers to apply kludges and workarounds. I'm sorry the author had to find this out the hard way. The quality of OpenGL drivers has always been like this.
One way of writing portable OpenGL is to look at what some popular game with open source is doing, and copy their approach, because the likelihood of a) the bugs that title triggered are mostly fixed and/or b) they managed to avoid the biggest potholes.
E.g. look at what Quake or Doom III does, etc.
Note that I specified real time graphics development as the focus area here for C++. While I don't want to get into platform wars, on that front Windows is a really strong contender, for various ecological reasons.
Nim does not seem to be well supported on Windows. The typical ciphers for poor Windows support in open source projects, which seem to hold true for Nim, are that:
a) No mention of MSVC support as a compiler option
b) The installer does not install the complete required language runtime, and the end user must hunt down the missing DLLs themselves.
c) Need to manually configure various paths
While these are not technical blockers, they strongly imply that Windows support is not one of the key priorities for the project. "Here you go, have a nice language - and oops, good luck debugging our runtime". Whereas in industrial-quality setups that have a lot of users, you can pretty much expect it to work out of the box.
Hey, Nim dev here. I can assure you that Windows is incredibly well supported by Nim.
> a) No mention of MSVC support as a compiler option
> b) The installer does not install the complete required language runtime, and the end user must hunt down the missing DLLs themselves.
The DLLs are all included in the .zip file. Even if they weren't, you do not need to "hunt" them, there is a link for them at the bottom of the download page.
> Need to manually configure various paths
You don't. It's easy to think that when you only read the headings though :(
Here is an excerpt from under the heading:
> The zip file includes a simple application called finish.exe that can attempt to add the first directory into your PATH. This tool also checks for the presence of a C compiler and can install MingW, the GNU C compiler collection for Windows.
The installation isn't perfect, we did have an installer in the past but it proved more trouble than it was worth (NSIS is very archaic and INNO is a PITA). Despite this, you won't find many languages with Windows support that is as good as Nim's.
My initial comment may have been a bit uncharitable, and I'm sorry - I chose clear communication instead of overt politeness. I might not resemble the core audience of the language in any way :)
I based my response on the first page mentioning the Windows install:
This is the first page I end up on from both https://nim-lang.org and Google.
My instinctive response was based on pattern matching from trying to run various open source projects on Windows over the past 20 years, and how the projects choose to communicate their install narrative, i.e. the "ciphers" I mentioned in my posting above.
Rust needs an additional compiler as well; this is how they communicate it:
"The installation isn't perfect, we did have an installer in the past but it proved more trouble than it was worth (NSIS is very archaic and INNO is a PITA)."
The WiX toolset may be an option: http://wixtoolset.org. Not having a Windows installer generally appears amateurish nowadays (I know a plain zip is just as good, but an installer is what people do and generally expect these days).
Even LaTeX, which was an incredible mish-mash of various components you needed to... acquire, is now available through just clicking the installer:
"...Despite this, you won't find many languages with Windows support that is as good as Nim's."
You could mention MSVC on the initial install page, since you need it. There was a specific time when projects had good economic and technical reasons not to depend on Visual Studio on Windows. Not so much anymore, since it's free for individuals.
Installer landing pages for languages which don't signal my "fragile-not-well-supported" sense:
The OCaml install page is on the edge, but that is compensated by a pedigree of being well known and having credible industrial users (e.g. Jane Street):
I know you are a small team and have other priorities than your web page... but an installer and a mention of MSVC on the install page would go a long way toward increasing the likelihood that random people running Windows at least try the project.
The only problem that arises with C++ is when people want to use EVERYTHING in the language, because if it exists, it has to be in my project, right?
Things are in the language for a reason - if you've got the issue then you use the feature.
I hardly ever recommend C++ these days as it's not worth the slower dev time. There are a few cases where run-time really does trump everything else and it's still appropriate.
In that case the code will be templated to within an inch of its life, because that's how to write real C++.
Although STL implementations are getting closer today (regarding underlying speed), unless you plan to lock to a particular implementation, even a particular version, you'd better use a subset ;)
Things like testing, agile, tickets, code indentation, patterns - you'll find none of that there.
If you aim for serious environments like Wolfram Research or MathWorks, that could be different. If you aim for quantitative firms, you'll usually find scientists there writing prototypes in Mathematica/MATLAB/R/Python (messy again) and devs who will re-write them in C/C++/Java/asm.
When one looks outside the traditional technology industries, there are a huge number of technology jobs as the tech person in a non-tech industry, being forced into the tech world. These tend to be C-Suite level jobs, so picking up an e-MBA is also a pretty good idea - to know how to talk and operate in their terms, and how and why what you're doing for them is what you're paid to do. They have no idea, and very likely a history of being burned by low level tech consultants.
The way I describe my career goals now is: I want to work in a domain where quality and performance is the product.
I hadn't considered getting an MBA and using it in that way, that's something that I will take a hard look at. Thanks for the advice!
It really is best if the language doesn't include the kitchen sink in the first place, if you want the code to be clean in the most general circumstances.
The original joke, though, was that C++ was for masochists. I'm not sure developing a web application is any less painful...
However the demos one gets to see from production work at Siggraph are quite impressive when running on proper desktop cards.
How exactly do you think AAA games are made? It's all C++ because there's literally nothing else out there which would give you the same performance(and also nothing that is supported as widely by Sony/MS/Nvidia/ATI).
Sure, we might hit growth problems in years but we can cross that bridge down the road.
A heavy read site is trivial to scale these days. Serve static assets from CDNs, fragment cache the views, horizontally scale when needed. Rinse, repeat.
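The fragment-caching idea above can be sketched in plain Ruby (a toy class with hypothetical names; Rails' real cache helper writes to a store like Memcached or Redis and derives keys from the record's cache key):

```ruby
# A toy fragment cache: render a view fragment once per cache key,
# then serve the stored copy until the key changes.
class FragmentCache
  def initialize
    @store = {}
    @renders = 0
  end

  attr_reader :renders

  # The key includes the record's updated_at, so editing the record
  # naturally invalidates the cached fragment.
  def fetch(record)
    key = [record[:id], record[:updated_at]]
    @store[key] ||= begin
      @renders += 1
      yield record
    end
  end
end
```

For a read-heavy site, the expensive render happens once per record version; every other request is a hash lookup.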
Most scaling issues I've seen on well-designed Rails sites have been to do with the DB, not the framework!
Web dev is in a different place now. Developers don't generally install and maintain application servers from the ground up. Scriptaculous was superseded by jQuery, which in turn was supplanted by vanilla js actually becoming useful. Relational databases are not quite as dead as some people would lead you to believe, but other types of DB are much more common now (key-value stores, search indexes, etc).
Rails was probably one of the last truly open source projects (Most OS these days is to some degree 'owned/developed' by Big Tech) to get big traction and for that it deserves credit. Just the idea that a Danish web consultant could hack together something that became popular and which then allowed him to literally retire and become a racing driver before he hit 30 should inspire us all.
If you want to start a startup, and you know Rails, use Rails with Postgres.
Do the complicated shit (data science, video uploads, etc) in your other language if you have complicated shit. Use a different subdomain or get nginx to fork the endpoints for the complex shit if it needs to be public.
I've known plenty of people, myself included, who tried something else because of "reasons", when the real reason was wanting to try some new language or framework. It gets old fast.
> I'm gunna use Riak!
I said. Then I needed to do some random task that would have been trivial in SQL.
> I'm gunna use Go!
I said. Then I spent hours fiddling with types and boilerplate code and catching errors.
These are not solving problems startups have. These technologies solve problems that huge companies have. My only gripe with Rails is that I think it should be JSON-first, not HTML-first, because almost everyone needs an API these days and many people use front end frameworks for HTML, but whatever. It's not that hard to override that stuff.
> I think it should be JSON-first, not HTML-first
Isn't that the case in Rails 5? https://guides.rubyonrails.org/5_0_release_notes.html#api-ap...
While being successful in all those aspects, DHH is still very active in the development of the framework. He is currently also the CTO at Basecamp. Not really what I would consider being retired.
(And DHH didn't literally retire--he runs his company, Basecamp, which is built on Rails, which he actively maintains.)
Yup, but why? This is still true:
Mobile and mobile-first thinking, perhaps. "webdev" is far more js-driven than it was 10-15 years ago. I'm not sure whether that's good or bad, but imo that accounts for much of the difference. 15 years ago I could build entire applications with a minimum of JS, and they were acceptable to clients. That's not really the case any more - there's more demand/need/expectation of js in web projects today.
The need to talk to relational database is, ime, as strong as ever, while the need for supplemental data stores is growing, but is rarely the primary focus for any system already in existence.
On the backend too?
jQuery is written in vanilla js so vanilla js can do anything jQuery can. I'm always irritated when people claim that vanilla js can't do something X can do.
Handling browser compatibility alone made jQuery extremely useful back when it first emerged.
See also: React, Pandas, Numpy, Boost, etc.
In fact the term "vanilla JS" mostly started in response to jQuery itself, so if anything I would argue that the term "vanilla JS" is to blame for what you're talking about.
Sure, and we could all just design our own chips, hand code our own binary to create our custom programming language we use for everything... But humans have a neat ability to benefit from other humans' work.
But don't worry, I see this on Reddit and other amateur forums all the time.
It is an excellent framework with gems to act as plugins (rack middleware) to serve websockets, forums, admin interfaces all with a few lines of configuration. Google, Microsoft, Facebook all maintain Ruby libraries in GitHub. So as long as you are doing Joe's textile website, it is the right tool.
Cloud-scaling Rails can be expensive, as apparent from horror stories, but a few manually managed instances with a load balancer and good web servers can go a long way.
It lags behind Python in development effort for domains such as data processing and machine learning. With extension libraries such as Helix, which utilizes Rust to create native extensions, and enough development time, it has the potential to dominate those fields in the future.
One honorable mention is JRuby; while it isn't Kotlin in terms of efficiency, it still lets you use Java libraries when you need domain-specific libraries.
In conclusion, I'd say it is still pretty relevant today.
I think they are doing something about it: Ruby 3x3 and Rails 6.0. One of the problems is resources; there isn't a massive company like Google / Facebook / Amazon backing Ruby and Rails with manpower and funding. It is all open source.
I am wondering if anyone is using Helix. There was previously another idea, Rubex, from SciRuby, making Ruby / C code collaboration easier.
I don't think Rails is the best tool for that. A static site or Wordpress are probably a better fit.
> Cloud scaling rails can be expensive, as apparent from horror stories  but a few manually managed instances with a load balancer and good web servers can go a long way.
It definitely can be, but I think most apps people are working on don't have a scaling issue (at least of the technology sort). When there is a scaling issue, it's often something related to the database (bad indexes, poor instance type, or N+1 queries)
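The N+1 pattern being described can be sketched in plain Ruby with a fake query counter standing in for the database (in real Rails the fix is eager loading, e.g. something like `Post.includes(:comments)`):

```ruby
# Simulate a database that counts the queries issued against it.
DB = { queries: 0 }

POSTS = [{ id: 1 }, { id: 2 }, { id: 3 }]
COMMENTS = { 1 => ["a"], 2 => ["b"], 3 => ["c"] }

def query_comments_for(post_id)
  DB[:queries] += 1
  COMMENTS[post_id]
end

def query_comments_for_all(post_ids)
  DB[:queries] += 1
  COMMENTS.slice(*post_ids)
end

# N+1: one query per post, on top of the query that loaded the posts.
def comments_n_plus_one(posts)
  posts.map { |p| query_comments_for(p[:id]) }
end

# Eager loading: a single batched query for all posts at once.
def comments_eager(posts)
  by_post = query_comments_for_all(posts.map { |p| p[:id] })
  posts.map { |p| by_post[p[:id]] }
end
```

With 3 posts the difference is 3 queries vs 1; with 10,000 posts it's the difference between a page load and a timeout.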
I'm not some crazy fanboy but until someone can actually name a seriously competitive, batteries-included, all-in-one framework* which delivers everything, or even most of, what Rails does - it is very relevant and you ignore it to your serious disadvantage.
* your list of 30 random npm packages does not satisfy the requirement
I like Rails, but to be fair, doesn't Rails also require a list of 30 random gems? Unless things have changed, authorization and authentication aren't even built-in, so people have to navigate random third party gems, and remember, you don't want CanCan, but you want the fork, CanCanCan. Again, I like Rails, and maybe it's still the nearest to an all-in-one framework, but I think somebody who took the next step and created a truly all-in-one framework would win big.
My current project:
$ grep -c ^gem Gemfile
Perhaps the best example I can give for this difference is Devise, an authentication plugin for Rails. Have a look at the instructions for setting up Devise on Rails: https://github.com/plataformatec/devise#getting-started You just add the dependency and boom, you instantly get a database migration for storing authentication data in your database, automatic integration with Rails's email notification system, a single command you can run to generate all boilerplate files, and a single line of code you can add to your controllers to automatically enforce authentication for any particular action. With a similar authentication package in a Node application you'd most certainly have to set all that up manually.
my-MacBook-Pro:production-web-site me$ grep ^gem Gemfile | wc -l
On the other hand, overriding the "default" Rails behavior for e.g. logging or SAML is pretty easy as long as you can find a gem compatible with the versions of all your other gems. This can easily spiral into out of control technical debt where you're stuck using gems from 6 years ago out of fear you'll tug on one of them and bring the entire house of cards tumbling down, but I've consistently been surprised at how little effort has been required to do gem upgrades as long as they're planned carefully.
And while gems like CanCanCan are great, not every application needs them, and it's relatively straightforward to set up authorization logic directly for most simple cases (i.e. something like "users control their own things, admins control everything").
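For the simple case, that rule fits in a tiny plain-Ruby policy object (names hypothetical; in Rails this might be checked from a controller before_action):

```ruby
# A minimal policy object: no authorization gem required for the
# "users control their own things, admins control everything" case.
class ThingPolicy
  def initialize(user, thing)
    @user = user
    @thing = thing
  end

  # Admins can manage anything; everyone else only their own records.
  def manage?
    @user[:admin] || @thing[:owner_id] == @user[:id]
  end
end
```

Once the rules grow past a handful of roles and resources, reaching for a gem starts to pay off; until then this is easier to read and debug.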
You're right in the sense that I've very rarely seen sizeable apps that are mostly "stock" Rails outside of things literally developed by Basecamp.
Other reasons I enjoy Rails:
- Scaffolds allow me to have CRUD support for any model I want with one command (combined with auth, I have Django's admin, more or less)
- JSON views without a third party library (surely this is pretty standard for a web framework in 2018 in most cases?)
Other reasons I don't enjoy Django so much:
- I can't think of an eloquent way to phrase this but Rails values my time. With Django I'm doing menial minutiae like creating a directory for my app's templates before it'll serve a page. Why? I know 'magic' is anathema to the Python community but context switches like this shouldn't be there when I'm trying to get an app up and running at speed
- I feel like its stewardship is bordering on begrudging: why are websockets (and JSON for that matter, but I've touched on that) not present in a 'kitchen-sink' web framework? I'm not sure why we're worrying about master/slave DB terminology when there are axiomatically more pressing technical and adoption-related issues at hand.
Django is a fine web framework, and perhaps the best-in-breed for Python, but it feels miles behind where it should be at this point and I'm not really sure why.
 - https://github.com/plataformatec/devise
 - https://docs.djangoproject.com/en/2.1/releases/2.0/#mobile-f...
 - https://github.com/django/django/pull/2692
Django does have built in JSON support. There's also Django Rest Framework, which while not built into the core, is well supported and hardly a burden to add.
And finally, Django's "scaffolding" functionality is built around its models. You define your model as a plain Python class inheriting from Django's model class, and then the various generic/"scaffolding" views work by being told what model they're for, with the ability to override and extend whatever you want to.
In order to access a JSON endpoint in Django, I need to install DRF, integrate it with Django, configure it, edit my urls.py and perhaps set up a serialiser. I won't detail the procedure in your first link because I think you very well know that is simply a response object with a crude serialiser and a content header set. The inclusion of both misses the point, which is that in Rails, I would simply navigate to /model-name.json (so in fact, yes, it is absolutely a burden to go through that when you compare the two). That's it. No parsing. No route configuration. No concerning myself with edge-cases unless I'm doing something odd. Requirement satisfied, and I can sleep soundly knowing I'm not bequeathing some custom car-crash on whoever inherits my codebase. Debate aside, I wholeheartedly encourage you to try out this framework sometime - I've found going from Django to Rails to be an absolute joy to develop with.
Generic Views. Ah. Well that has its own set of problems. Why am I wading through the docs (or more accurately ccbv.co.uk because the actual docs are a verbose, time-sinking miasma; recall that in my previous comment, I am championing the development-velocity of Rails in contrast to Django) to discover which method from which class I need to override to implement the functionality I need? I can intuit a lot more now, after doing this for years, but why do I need this knowledge at all? It is the job of the framework to abstract and ameliorate bullshit. Corollary: why is it that every Django codebase I inherit has at some point converged on its natural state of 'car-crash' because of the combination of a bunch of rigid inheritance chains and my predecessors not putting in an escape hatch? At risk of being a bit of a zealot, MI is almost always an anti-pattern and I would prefer the flexibility afforded to me by composing things in the traditional fashion.
In Rails, I would build a model like this in my shell: `rails g scaffold Blog title:string content:text`. That's, again, literally it. From that, I now have a Blog model with title and content, created and updated fields, controllers (or 'Views' in Django parlance for some reason) and both HTML and JSON views ('Templates', in Django-land), unit tests, assets for my asset pipeline (we don't have to run collectstatic btw - it seamlessly transitions into production because it was designed on the basis that people might actually be deploying their projects), my equivalent of urls.py has been updated to accommodate my new model, and I have full CRUD functionality at this point. Genuinely, I implore you to try this framework. Django has come a long way since I first used it but it's nowhere near the level that modern frameworks like Phoenix (you should give this a go too - the third party libs aren't quite there yet but it's a really well thought-out framework) are at, and I'm genuinely curious as to why.
 - https://phoenixframework.org/
I've seen my fair share of terrible Django and terrible Rails applications. Bad code can be written in any language and framework.
Don't get me wrong, Django definitely has its pain points, but I think it offers a very competitive alternative to Rails.
I definitely agree with you on Phoenix though! Elixir/Phoenix is my current stack of choice. I actually think they do a really great job of capturing the high points of both Rails and Django while also improving on the weak points of both as well. (For example, Phoenix offers a lot of the "magic" of Rails but it's all backed by explicit code that Phoenix generates while creating your project.)
You want a smooth progression of tooling that you can approach and slowly clarify ever-finer approaches to over time. The object model is the base of the pyramid, it's the thing you rely on when nothing else solves the problem, you know you can always invent a new kind of object hierarchy by refactoring to BasicObject, obliterating stdlib semantics with a veritable flamethrower. I've never put such a thing into production, because I know there's always less drastic ways to define, mold, shape, and extrude abstractions.
With less semantically-fluid languages, you wind up relying on the same few tricks over and over and over again, because they solve problems, albeit poorly and opaquely. This flows, again, from the underlying approach to typing. Python is dynamically typed, which keeps it from being totally unworkable any time you need to do something special or cross-cutting, which is absolutely a need in the web world, but it's structured enough to where you need to devote library-level amounts of thought into coming up with clean, semantic abstractions that are easily traversed, something Ruby just comes with out of the box.
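As a small illustration of the kind of out-of-the-box semantic molding being described, here's a hypothetical Ruby sketch that builds audited attribute readers at class-definition time with `define_method`:

```ruby
# Build attribute accessors with per-attribute access logging at
# class definition time: a library-ish abstraction in a few lines.
class Record
  def self.logged_attr(*names)
    names.each do |name|
      define_method(name) do
        (@access_log ||= []) << name       # record every read
        instance_variable_get("@#{name}")
      end
      define_method("#{name}=") { |v| instance_variable_set("@#{name}", v) }
    end
  end

  attr_reader :access_log

  logged_attr :title, :body
end
```

Nothing here is exotic Ruby; the point is that defining new class-level vocabulary like `logged_attr` is an ordinary move in the language, not a framework-level trick.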
Django Channels: https://www.djangoproject.com/weblog/2016/sep/09/channels-ad...
Stuff included in Rails I don't need:
* Heavyweight controllers
* Server-side HTML templating
Stuff not included in Rails that I do need:
* Basic DB-level sanity checks, such as uniqueness constraints that actually work
* Any kind of real frontend framework
Stuff that I desperately want that Rails makes fundamentally impossible:
* Sane concurrency
* Not having to pull my hair out over random values in my codebase magically turning into "nil"
The best pattern for developing basic web applications that I've seen lately is basically:
* Use React for the UI. (If you prefer another front-end framework, I don't really want to fight about it, but React is pretty damn good.) Host your statics, well, statically.
* For your backend, use the simplest possible mechanism that receives HTTP requests and emits JSON. If you're married to Ruby, Sinatra does this. Flask does this. Node and Go and Java all have relatively simple mechanisms for this. The mechanism doesn't matter. Route HTTP requests to function; return JSON (or some HTTP response object that encapsulates your JSON) from function.
* For interacting with a relational database, use sprocs. SQL isn't hard; ActiveRecord is probably the best example of the Rails antipattern of, "make easy things easier, and hard things harder".
The learning curve isn't too bad.
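The "route HTTP request to function; return JSON" contract above can be sketched in a few lines of plain Ruby with the stdlib JSON module (names hypothetical; a real app would put Sinatra or similar in front of this dispatch table):

```ruby
require "json"

# Map "METHOD path" to a handler; each handler returns a plain hash.
ROUTES = {
  "GET /health" => ->(_params) { { status: "ok" } },
  "GET /users"  => ->(params)  { { users: [], page: params.fetch("page", 1) } }
}

# The entire backend contract: look up the handler, run it, emit JSON.
def handle(method, path, params = {})
  handler = ROUTES["#{method} #{path}"]
  return [404, JSON.generate({ error: "not found" })] unless handler
  [200, JSON.generate(handler.call(params))]
end
```

Everything else (auth, persistence, validation) composes on top of that function-in, JSON-out shape, which is why the choice of framework matters so little here.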
I've been a Rails developer for quite some time and am picking up Elixir / Phoenix in my spare time while building a single side project. This is my first real attempt at picking up a functional language too.
The project I'm building is an ambitious project (users, accepting stripe / paypal payments, lots of data modeling, real time aspects, decent amount of logic I haven't implemented in other more familiar frameworks, etc.) but thanks to the community and open source projects to look at I'm making reasonable progress.
Of course I'm not as productive as I am in Rails but that's because I'm only putting in a few hours a week into this project (while learning Elixir / Phoenix as I go).
First I think a very large number of developers don't care for functional programming.
Then, Phoenix is mainly run by only two people, Chris McCord and José Valim (who also maintains Elixir and Ecto). If I compare the pace of development and what you get out of the box with PHP/Symfony (I don't know RoR), it's quite clear that Phoenix is not there yet (without even speaking of the difference in ecosystem size).
Having said that I'm excited for Phoenix 1.4 https://github.com/phoenixframework/phoenix/milestone/17
I don't think that this is something I want in my project; I think I'll stick with .NET and Go, even if I don't get the "competitive batteries-included all-in-one framework" packaged in.
(Well, and there is always the issue that most larger RoR projects like to pull a lot of resources. Seriously, the three biggest VMs on my host are an RoR application, the SQL server that runs both MySQL and PG, and the only Node.js app I run.)
When you know the framework well, however, I find one of Rails' strengths is that it massively decreases the time it takes to get familiar with a codebase. There's never a question of how a developer has organized their code, or configured logging (to take your example), because it's not a choice that Rails gives you - every project is going to work the same.
Usually you'd just create a custom logger and tell Rails to use it
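As a sketch of what that usually looks like (the log path and timestamp format here are just examples, not anything Rails prescribes):

```ruby
# config/application.rb -- point Rails at a custom logger.
# ActiveSupport::Logger is a thin subclass of Ruby's stdlib Logger.
config.logger = ActiveSupport::Logger.new(Rails.root.join("log", "custom.log"))

# Optional: a custom line format via the stdlib Logger formatter hook.
config.logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} [#{severity}] #{msg}\n"
end
```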
Grepping can be a successful strategy, especially when you know what you are looking for, e.g. searching for a database column name in the code.
Go with Ruby/Rails if you don't need JIT/AOT compilers and high-performance GC algorithms.
But it's missing scaffolding and stuff like that.
Another would be that sometimes a train is not the most practical way to travel a few blocks.
It also makes good use of what Typescript has to offer, which IMO is very much welcomed.
(Not that I'd consider an untyped ecosystem in the first place, especially one as fond of "magic" (e.g. monkeypatching) as rails. I have sympathy for just getting something working as quickly as possible, but there comes a point where you're storing up too much trouble even for the short term)
If the project's successful, that is. The chance that most of what we're building with Rails hits the escape velocity where it requires something other than Rails is a "Nice Problem".
For the vast majority of things people are building for the web, "CRUD+Auth+Billing" is all that's needed, maybe with a few API hooks into something more novel running on different infra.
If you then need it, then you can just build out a few more services with a few more API hooks, and then transition to a non-Rails stack as required.
It's great if you have unlimited resources to spend setting up the perfect tech base for a project, in case it does eventually need to scale beyond what, say, Basecamp can handle - but that seems like wasted resources for a great many orgs that don't have that level of backing.
Of course, if you truly need something novel, then it's a moot point: if you can't achieve what you need with a stack, just don't use it.
> For the vast majority of things people are building for the web, "CRUD+Auth+Billing" is all that's needed, maybe with a few API hooks into something more novel running on different infra.
> If you then need it, then you can just build out a few more services with a few more API hooks, and then transition to a non-Rails stack as required.
True as far as it goes. But if we're just talking about a prototype for validating product/market fit, with the intent to transition to a more maintainable stack as and when that becomes necessary, what's the problem with "list of 30 random npm packages" as a starting point?
We can argue it from both ends, but I see Rails as the middle ground, one that lets you go from MVP to $XXm ARR on the same stack. A hacky MVP with 30 npm packages doesn't have a clear path to grow without serious changes, and a custom architecture is useless if you don't get any users.
I see it as the opposite: it's clear which functions are the responsibility of which library, and it's easy to replace the ones that are becoming blockers and leave the ones that aren't. Whereas with Rails if you keep it monolithic then upgrades always become a huge pain point, but if you try to cut out pieces of it you're going against the grain.
(I used to love TurboGears nearly a decade ago, because you could use it like a monolithic framework to start with, but it was explicitly a collection of standalone libraries underneath, so once you started needing to do different things to individual pieces it was easy to do so. The modern web ecosystem could do with something like that, IMO (though personally I'm sticking to ScalaJS for the type safety).)
The dude starting with Rails will probably be done with the MVP before you decide _which_ 30 random npm packages to use.
I agree that that's what's going to happen in the short term, but var doesn't mean becoming dynamic; type inference doesn't compromise soundness at all. I think it's going to be much easier for sound type systems to get good-enough inference (indeed Hindley–Milner already offers "perfect" inference: you don't need to write any explicit types if you don't want to) than for dynamic languages to retrofit soundness. And while one can live with a few unsound constructs as long as it's locally clear when unsoundness is happening, as soon as you have pervasive soundness issues with action-at-a-distance your type system becomes almost useless.
Yes, I do.
Because it has allowed the community to develop a whole lot of well-integrated, batteries-included libraries that span from the model layer to the frontend.
From a language perspective it's stable, fast, easy to deploy. Why wouldn't you choose to use it? For most it's not a difficult language to learn and start using.
The defaults of functions/methods in the standard library are often insecure.
If you want to write good PHP, I can recommend the following resources:
https://paragonie.com/software - I would use their stuff if I wrote PHP.
https://phpdelusions.net - even though the name sounds abrasive, it is a good resource on DB access if you want to know specifics. It is from a guy with several SO gold badges.
Really cheap to run, low-low-low maintenance. I don't even need to touch it except when some security bugs need to be patched, and Heroku handles backups automatically. In the event Heroku has problems, I run my own cron job that backs up the psql database, and the application itself is under version control.
I'm not sure there is another framework with which I could have pieced together such a smooth application in the amount of time and with the resources I had available, and without having to worry about it ever breaking because of some weird bug. Stability is very important for many businesses.
Once in a blue moon a new feature is requested and doesn't take long to make some changes. Now, would I use rails to fire up a basic website like I did in the past? No. I'd grab a static-gen like Jekyll and maybe some PHP to handle mail if needed.
You wouldn't build something new on COBOL if you could avoid it, of course, but Rails is not COBOL. Ruby and Rails are still good at getting things out the door.
I wouldn't personally touch it, but that's because I've never heard of anyone using it in my country. If you live in a place where a lot of shops use Ruby and Rails, then why not?
Changing tech isn't easy, so you should only do it if you're motivated. We've been doing more and more JS/Node where I work because it's really good at getting things done quickly, sort of similar to why you'd choose Ruby and Rails. However, I've been a C# developer for years, and I'm nowhere near as good in the JS environment as I am in .NET. I'll get there eventually of course, but I probably wouldn't without the internal motivation of actually enjoying the change, partly because JS is so great at getting things done.
The key questions to me when choosing a language would be: do I really want to learn it? Why do I want to learn it? And is anyone in my area hiring for it?
I assume people make these comparisons because both ecosystems have a lot of third-party libraries, which means if you want to add some functionality there's probably a library out there for it. I don't feel comparisons like this are that accurate, though (and they're often made by people who don't have experience with Rails).
If you are using something like Node or Go, you need to know which libraries to use (bearing in mind that often changes every year or so, compared to the Ruby ecosystem, which is a lot more stable) and write all the glue logic to make them work together yourself. With Rails you can literally build a blog by running a handful of commands and editing a couple of views, and have something up and running in less than 15 minutes.
The beauty of Rails is that it's opinionated enough that you can build a pretty decent application just by using what it gives you out of the box. Yes, you may want to add Sidekiq for better background-job performance, or Devise to avoid all the repetitive code around user accounts, or RSpec because TestUnit is the devil, but none of those are required to get something done.
You’re absolutely right that this requires knowledge of packages for node/js. We haven’t been using a whole lot of modules to achieve this, but we certainly haven’t used just Node/JS either.
At Latacora that has changed dramatically; we see Django a lot, rarely Rails, but most commonly we see Go, Python, Node, and Java API servers with React frontends and minimal frameworks.
From my vantage point: Rails is still relevant and still a viable option for building new stuff, but it is less relevant than it used to be.
because it covers a lot of things regarding classic everyday web development, which is extremely useful to beginners. So just for the existence of that book I'd say yeah, Rails is still relevant.
There is also this one for Flask, which builds a web app from start to finish, for people who like Python.
A lot of Ruby books feature the design and development of a complete medium-sized application from start to finish, in various domains.
That's the kind of thing beginners should learn instead of 'Docker' or 'React', which they reach for just because they've heard everybody constantly talking about them.
> I wouldn’t spend too much time learning programming languages outside that list as you need more tools in your toolbox but too many tools may get your toolbox cluttered
You suggest not cluttering your toolbox, but your recommendation is a list of completely interchangeable dynamic languages?
I would recommend learning either language outside the context of a framework like Rails or Phoenix, though. There's immense value in knowing the underlying libraries (Rack and Plug, respectively) and the HTTP servers underlying those (Cowboy and Unicorn/Puma/whatever-is-popular-nowadays, inverse-respectively). Once you know those, you'll either gain an appreciation for the existing frameworks or have the know-how to improve and/or replace them.
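The Rack contract mentioned above is tiny: any object that responds to `call(env)` and returns a `[status, headers, body]` triple is a valid app. A minimal sketch (running it under a real server needs the rack gem and `rackup`; the lambda itself is plain Ruby):

```ruby
# The whole Rack interface: env hash in, [status, headers, body] out.
# Rails, Sinatra, and every Rack middleware reduce to this convention.
app = lambda do |env|
  path = env['PATH_INFO'] || '/'
  [200, { 'Content-Type' => 'text/plain' }, ["Hello from #{path}"]]
end
```

Once you see that the frameworks are layers of `call`-able objects wrapping each other, middleware stacks and custom HTTP servers stop being magic.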
Elixir is compiled, the other two are interpreted.
JS is supported natively by all browsers and is at its core an event-driven functional language, the other two are not.
Ruby is probably the comfiest OOP language out there, and certainly the one with the best standard library.
These are not the same languages.
What does Ruby's standard library have that makes it superior to Python's?
How can a language be intrinsically compiled or interpreted?
This is a great example of an academic nondistinction that is completely worthless in practical matters.
It's precisely in academia where it is a worthless nondistinction; there, a language doesn't even need to be implemented to be studied. For instance, the lambda calculus predates the programmable computer.
(Yes is my answer to the question).
I’ve been doing this a long time, and it’s demotivating to see every new generation of noob developer turn their nose up at the work of the generations before.
People unironically ask if Rails is “still relevant”, and then switch back to their (hideously slow, JS-based) editor to grind out spaghetti code in node, not even stopping to wonder why their NPM dependency tree has sixteen different versions of the same json module.
Such a waste.
Well, I’m starting a company and our current prototype is Rails. I don’t regret my decision at all, although after spending the last four years writing exclusively Python it’s been a little jarring. The muscle memory is still there though and after two weeks I’m almost back to 100%.
I really missed Active Record and all of the conveniences there around migrations. When you’re prototyping and constantly evolving your data model - but still intend to throw real traffic at it - flexibility here with the hardened foundation of PostgreSQL is vital.
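As a sketch of why that flexibility matters: evolving the schema is a small, versioned Ruby file rather than hand-written DDL. (The table and column names here are hypothetical; this won't run outside a Rails app.)

```ruby
# db/migrate/20180101000000_add_stripe_customer_to_users.rb
class AddStripeCustomerToUsers < ActiveRecord::Migration[5.2]
  def change
    # `change` is reversible: `rails db:rollback` can undo both steps.
    add_column :users, :stripe_customer_id, :string
    add_index  :users, :stripe_customer_id, unique: true
  end
end
```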
The framework and language aren’t red hot anymore but they are far from dead or even stagnant. There’s a tremendous amount of active development happening in Rails.
Meanwhile I’ve been learning Elixir via a course called “Elixir for Programmers” and love it a lot. I didn’t want to risk building things on Phoenix at this stage. It’s full featured and likely more than capable but the smaller ecosystem and it being a first production app for me wasn’t worth the risk. I’m dying to put the actor model and OTP to work in an environment where it would really shine.
Anyway... still loving Rails and Ruby! A lot! With SASS, Haml, rake tasks, lots of data fixture tools etc... and no BS with React and webpack, slow pipelines and maintaining two codebases for an API and a Frontend... you’re left with lots of functionality being built daily. It’s great.
Don’t get me wrong, React is awesome and it’s certainly the new foundation of UI development. But... the ecosystem is still learning to walk imho. Redux isn’t the answer. GraphQL is great on paper but the way people use it is not ideal (coupling data to ui). Long way to go before I’d go 100% JS.
This reminds me to book a ticket to rubyconf.
I’ve seen companies that made poor technical decisions up front spend years paying for those mistakes. One could argue that those mistakes are what even put them in the position to where they could pay em down... but there is definitely a way to measure twice/cut once and save yourself from a world of hurt while still moving quickly and making sales.
Step 1-1000 of building a startup is selling.
But even with that being said, prototypes and v1s have a nasty habit of continuing along far past their originally intended expiration date, and if your only criteria is what gets sales today with no regard for future maintainability, you're taking on a very significant debt that is going to be hard to repay when you do need your tech to be easy to work with and performant.
* productive as in easy to get started, easy to iterate, easy to get things done.
Though Phoenix and Elixir are trying their best to help developers be productive, when you are a technology that promises such huge scale, you need to introduce practices that keep things decoupled => a bit slower for developers.
A good example of this is how a "model" writes to the database. The module (with its schema) needs to build a changeset, the changeset goes to the repository, and the repository writes to the database. That's three manual steps where Rails gives you one.
Phoenix & Ecto work that way because 1) Elixir is functional, and 2) ActiveRecord mixes a lot of concerns that suck in the longterm.
IMO the real thing holding back Phoenix/Elixir is deployment & operational issues. It's being worked on a lot with Distillery, but from what I understand there's still a lot of kinks to be worked out.
That's such a funny thing to say. You'd be hard pressed to have a program without state. Every program is made up of a combination of three components: logic, data (state), and time.
The thing that differentiates different programming languages and tools is what abstractions they build around those three things and how they are exposed to the programmer.
> Yes. But I have a crush on Elixir.
The article doesn't say much else.
Django, Laravel and other established frameworks can be a bit better or a bit worse, depending on your criteria.
Buffalo (Go) and Phoenix (Elixir) are the only competitors I know of that could match Rails at its own game, and they also have compelling advantages over it. Both have smaller ecosystems right now.
(Source: I've been working at a Rails shop for 6 years).
And with this comes potential dependency hell; an app like Discourse pulls in over 140 dependencies. This means either you are a Ruby developer who can quickly debug compilation or version-conflict issues, or you have a lot of time to go down various rabbit holes.
Many casually suggest Docker, but this simply doubles the complexity, as now the end user needs to know not only Ruby but also Docker. This means understanding networking, port forwarding, volumes, state separation, and single-process environments just for starters, before we even get to the app.
For in house apps none of these are deal breakers but one wishes language developers thought more seriously about the deployment story.