I would have much preferred to use Perl, R, dotnet, or just about anything else, but there was a ready-made library in Python that was good enough for 90% of my needs (and 100% with some elbow grease), so my laziness made me learn Python instead!
But wait - it gets dirtier than this! The code is written in Vi, with not a care in the world about version control (rsync does most of what I need: backups), and deployed to servers by manual SCP, with no automated tests and no concern for CD or automation - yet. As I need more and more servers, I realize I'll have to spend time learning, say, Ansible to provision them faster than copy-pasting bash (I have low standards, but I won't do curl|bash, just... no), but I'm delaying that as much as possible to concentrate on implementation.
The servers are on Debian VMs on Azure, just because they gave me free credits and do not perform as badly as AWS. When the free credits run out, I will move to DO. Then I will see what else I can use, and that will be the right time to care about Ansible or whatever.
It is as ugly as I can get away with to get my MVP out, using tech from 1996 lol.
> People saying they wouldn't use git don't know git.
A few things:
* to the grandparent, rsync is awesome, but comparing it to version control is comparing apples to orangutans.
* to the parent, version control != git, and there are easy arguments to be made against git
* distributed version control is one of the greatest things to emerge in the last 20 years. If git isn’t your thing, check out (haha) fossil or mercurial
Edit: some reference reading. If it’s not already integral to your workflow, get familiar with distributed source control - you’re unlikely to regret it: https://en.wikipedia.org/wiki/Distributed_version_control
Perfect example that happened to me today:
$ git push
fatal: The current branch insert_branchname_here has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin insert_branchname_here
Git is just plain terrible in almost every way. It just happens to be less terrible for certain workflows than CVS/SVN/mercurial/whatever.
Yes, git’s UX is absolutely terrible. It is an abomination. It is the worst command line tool I’ve extensively used in that regard. For a long time I stubbornly used mercurial, which is vastly superior.
But, it has become a de facto standard. This is a tragedy. But it is a fact.
So it is best just to spend time learning it, find a subset of commands and make a cheat sheet you can refer to, and accept that the world is imperfect. Hopefully at some point in the future git will die and we will transition to the next big thing, and they will get it right that time.
There are many such things in the world of tech, and in most cases it is best to say “yes, it’s awful, but no, I can’t change it, and rather than expending energy on hating it (which is totally justifiable) I will expend the energy on minimising its damaging impact to my productivity and focus on accelerating my work elsewhere.”
I say this because I have spent so many hours ranting about git, and every time it got in my way (git submodules, anyone?) I used to get so stressed by its design that it would hurt my productivity.
Be the Zen Dev: acknowledge it, isolate it, keep it at arm's length.
Tips: make a cheat sheet, use a minimal feature set, don't try to do anything clever with it, and always have a way out (e.g. backups! Take copies of your directory before dealing with lesser-known commands! Try things out on dummy repos before breaking your own, etc.).
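One concrete shape for the "always have a way out" tip - copy the whole directory, or experiment in a throwaway clone. All names and paths here are made up for illustration:

```shell
# Illustrative only: set up a tiny repo, then show the two safety nets.
mkdir myproject && cd myproject
git init -q
echo "hello" > notes.txt
git add notes.txt
git -c user.name=me -c user.email=me@example.com commit -q -m "first commit"
cd ..

# Safety net 1: copy the whole directory, .git included, before anything risky.
cp -a myproject myproject.bak

# Safety net 2: experiment in a throwaway clone so mistakes stay contained.
git clone -q myproject myproject-scratch
# ...try the scary command inside myproject-scratch, delete it when done.
```

If the experiment goes wrong, you delete the scratch clone or restore the `.bak` copy, and your real repo never noticed.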
I feel this sentiment so often in this field. It’s really distressing.
I don't think it's a tragedy. UX can be fixed (even if difficult because of backwards compatibility and social concerns). Changing the underlying technology is much harder.
git has become a de facto standard because it gets everything else right.
Your example boils down to not knowing the word "upstream". I would argue that is table-stakes domain knowledge for decentralized version control, and it is trivially easy to search for. That does not make it bad UX.
Good UX does not eliminate the need for domain knowledge, it eliminates incidental complexity from a user's task, and one could argue makes a tool pleasant to use.
I don't disagree that there are rough edges to git (the debate over merge vs rebase is a glaring example), but "just plain terrible in every way" is gross hyperbole.
Again, the whole point of git is that it is decentralized; there is no requirement that you have one single central server that you are pushing to; there could easily be multiple places that you might want to push something to.
Using something as default that makes sense for a centralized VCS when git is not centralized is, imho, worse than bad UX, as it teaches incorrect assumptions.
Again, this is table-stakes knowledge for using decentralized version control.
I'm arguing against the idea that showing an error message with the exact command to use is good UX - see the original comment. It is not.
There are times when you need to troubleshoot a few things, but most of the time you can just delete your local repo and reclone it - easy peasy. One of the other great things about git is that even if you do somehow fuck something up upstream, it is really easy to get it back. Heck, I think git is pretty awesome.
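On the "easy to get it back" point, the reflog is usually the escape hatch. A sketch in a throwaway repo (all names made up):

```shell
# Toy repo to demonstrate recovery (names are made up).
mkdir demo && cd demo
git init -q
echo "v1" > file.txt
git add file.txt && git -c user.name=me -c user.email=me@example.com commit -q -m "v1"
echo "v2" > file.txt
git add file.txt && git -c user.name=me -c user.email=me@example.com commit -q -m "v2"

# Oops: throw away the last commit.
git reset -q --hard HEAD~1

# The reflog still remembers where HEAD was...
git reflog | head -3
# ...so the "lost" commit can be restored.
git reset -q --hard 'HEAD@{1}'
cat file.txt   # back to v2
```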
When you use git you still need to use it right. How many times have you cloned a repo just to find out it won't build/run/compile without help? Personally, with small projects - more often than not. And I have a habit of asking if they do know for a fact it's going to build without assistance. It doesn't help.
I like git as a glorified change-log, slack commit messages especially. I try to keep everything tidy and build-able as much as anybody. But I'll still give people an archive (with .git included). And ironically this keeps git tidier as you don't obsess over whether or not to commit certain things.
We were eventually forced to use git/hg/svn for some courses over that time, so I became comfortable out of necessity and now use version control for anything serious; I unfortunately even had to use TFS for a while. (Guilty secret: for many toy projects, I absolutely just have a folder on my fileserver and rely on its backup/replication, zipping the folder if I want to "snapshot".)
I would, however, frankly say that the inability to recognize how intimidating git can be to the uninitiated was the _reason_ it took so long for me and many of my classmates (this was over a decade ago, and I've been using version control professionally the entire interim) - certainly far more damaging than anything you accuse me of propagandizing. It was extremely frustrating and dismissive to constantly be told by more experienced programmers that I should feel far more confident than I was about a tool that, in hindsight, I don't fault myself for finding somewhat unintuitive - and I'm a constant user now.
And, while I'm on this tirade, if you're addressing "working this way" as a negative to put unimportant toy projects on a heavily replicated and backed up server, I think you and I have different priorities in how we use our time.
diff -ruN dir1 dir2
(Of course, proper VCS wins any day.)
Learn some Git and stick to it; it's a one-day job. And most of the time you'll use these commands
The advantage over SVN is in working with local or personal repositories. And forks of other projects in your local / personal repos, which happen all the time; heck, you will want to work with forks of your own repos for experiments; post-Git of course, b/c doing that with SVN is a pain in the ass.
I'd still use git.
Also, there's a decent half-step between SSH and Ansible: mssh. You can run ssh across a set of hosts quite trivially. It works on the order of a few dozen machines just fine.
I get the simplicity...but using git/hg/svn is just so easy. And you can easily rewind, diff, view changelogs for a file, etc.
The other stuff is there if you need it though.
When you're setting up for the first time:
git init
git add foo.txt
git commit -m "Add foo.txt to project to enhance synergy"
git checkout foo.txt
The fact that there are so many people trying to explain it in their own way is a good sign.
That being said, for GP's use, they can work solely from master, requiring only a few commands with little to no chance of difficulty.
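For a master-only workflow like that, the whole loop really is a handful of commands. A sketch in a scratch directory (the push at the end assumes a remote exists, so it's left commented out):

```shell
# One-time setup in the project directory (identity flags just for the demo).
mkdir solo && cd solo
git init -q
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "initial commit"

# The entire day-to-day loop on master:
echo "work" > app.txt
git add -A                          # stage everything that changed
git -c user.name=me -c user.email=me@example.com commit -q -m "describe the change"
git log --oneline | head -5         # see history
git diff HEAD~1 --stat              # what changed in the last commit
# git push                          # only once a remote is configured
```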
rsync is run by cron.
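A cron-driven rsync backup is typically a single crontab line along these lines (host, schedule, and paths are hypothetical):

```
# m h dom mon dow  command
# Nightly at 03:30: mirror the project to a backup host.
# -a preserves permissions/times, -z compresses, --delete mirrors removals.
30 3 * * *  rsync -az --delete /home/me/project/ backup@backuphost:/backups/project/
```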
Yet markdown is not essentially different from running sed to render something into HTML, which I would have totally done if I hadn't also needed some other features, like basic math, that would have been too tedious in sed.
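The sed claim is less of a joke than it sounds - two substitutions already give you headings and emphasis. A deliberately tiny sketch (nowhere near full markdown, which is exactly where it gets tedious):

```shell
# Minimal "markdown" to HTML with sed: headings and emphasis only.
printf '# Title\nSome *important* text\n' > page.md
sed -E \
    -e 's|^# (.*)|<h1>\1</h1>|' \
    -e 's|\*([^*]+)\*|<em>\1</em>|g' \
    page.md > page.html
cat page.html
```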
Speaking of HTML, I do my HTML sites all static, with no external scripts or resources! It is so 1994.. or so 2018??
I use vi. I do use git locally, but I don't trust it, so I also use rsnapshot, probably similar to your rsync. You might want to try rsnapshot some time if you have not. It uses less disk space by hard linking dupes using a perl script.
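The disk-space trick rsnapshot uses is plain hard links: a file that didn't change between snapshots is linked, not copied, so it exists only once on disk. The mechanism in isolation:

```shell
# Demonstrate the hard-link dedup that rsnapshot relies on.
mkdir -p snap.0
echo "big unchanged file" > snap.0/data.txt

# "Next snapshot": hard-link the unchanged file instead of copying it.
mkdir -p snap.1
ln snap.0/data.txt snap.1/data.txt

# Both paths refer to the same inode -- one copy on disk.
[ snap.0/data.txt -ef snap.1/data.txt ] && echo "same inode, no extra space"
```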
Despite having worked with Flask and Rails for the last 6-7 years, I'm going for Elixir this time around because there are aspects of the app that lend themselves well to what Phoenix has to offer, and I'm using this as an opportunity to grow as a developer. Every time I learn a new language / framework I find myself leveling up in other technologies I know.
So far I'm super excited with the way things are turning out. I'm still pretty early on in the project but I've gotten a decent amount of functionality implemented already.
I'm really surprised Elixir isn't more popular. The language author and the supporting community are awesome, and I've been the happiest I've ever been working with this tech stack. Of course I'm still drinking the "omg this is new and awesome" kool-aid, but even without really knowing everything too well, I'm able to accomplish real world tasks and that's all I care about in the end.
Sprinkling in either Elm or Vue for the one or two complicated widgets I'm going to need, but I haven't committed to either one yet.
On the front-end I might still experiment, but I think I'll settle for at least a decent chunk of time on this:
1. React + Styled Components (so html/css/js is finally no longer decoupled in a way that just makes no sense)
2. Page.js or something similar for routing, because I've been bitten too often by react-router and the like, and the idea that routing is its own thing makes sense to me.
3. Baobab.js or something similar for state management: basically, A. a single state object with B. a kind of centralized action flow to change this state, and C. listeners of sorts within various components that trigger a re-render (cursors in the case of Baobab). I could go full-on Redux, but it doesn't seem necessary, and I kind of relish running into the situation where, as my apps grow, it turns out to be immediately obvious why Redux does what it does.
I'd be very curious to get some feedback on my choices, and/or alternative suggestions. In particular regarding 3.
EDIT: to be clear, this is for full-on SPAs with no kind of crawling/SEO needs.
My most complicated build required somewhat of an SOA approach, but rather than go heavy-duty microservices and use gRPC, where you need a bunch of load balancing and service discovery involved, I went with something simple called NATS, which ended up costing me 5 dollars a month, unfortunately.
Is the Go service directly accessible from Internet, or is it hosted behind a reverse proxy?
When you deploy a new binary, is there a small downtime between stopping the old binary and starting the new one?
How do you supervise the Go process? You use something like systemd?
When SQLite backup is ongoing, does it block writes to the database?
When you backup to S3, if an attacker gets control of your EC2 instance, is he able to erase your S3 backups, or is it configured in some kind of append-only mode?
Where do you store your logs and do you "read" them?
- The Go service is hosted behind a reverse proxy (nginx or haproxy) to enable zero downtime deployments, by 1) starting the new process, 2) directing new requests to the new process, and 3) gracefully stopping the old process.
- Since we've started to use Docker, we let the Docker daemon supervise and restart our services. Before Docker, we used systemd. Before systemd was available on our system, we used supervisord.
- We thought about using SQLite for some apps. But SQLite can only have a single writer at a time, which goes against the zero downtime deployment described above (two processes can be processing requests at the same time). Thus we use PostgreSQL (and MySQL for legacy reasons), which provides online backups. It must be noted that online backups are possible with SQLite, provided the application implements them using the SQLite Online Backup API. Another solution, which doesn't require application cooperation, is to snapshot your disk, if your system supports this.
- We back up to rsync.net, which provides an append-only mode through their snapshot feature. An attacker cannot overwrite or erase the snapshots of your previous backups. I think it's possible to do something similar with S3, albeit in a somewhat more cumbersome way, using S3 versioning and MFA deletion.
- About logs, we're still not satisfied by what we use currently.
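The reverse-proxy dance described in the first bullet, sketched as an nginx config (ports and names are illustrative, not the actual setup):

```nginx
# nginx upstream holding the old and the new app process.
# Deploy: start the new binary on 8081, add it here, run `nginx -s reload`
# (reload is graceful), then drain and stop the old process on 8080.
upstream app {
    server 127.0.0.1:8080;   # old process, remove after draining
    server 127.0.0.1:8081;   # new process
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```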
I'd be curious to read about what others do :-)
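On the SQLite Online Backup API mentioned above: the sqlite3 CLI exposes it as the `.backup` dot-command. A small sketch, guarded so it degrades gracefully where sqlite3 isn't installed:

```shell
# Online backup with the sqlite3 CLI (wraps the SQLite Online Backup API).
if command -v sqlite3 >/dev/null; then
    sqlite3 live.db 'CREATE TABLE IF NOT EXISTS t(x); INSERT INTO t VALUES (1);'
    # .backup copies the database page by page via the backup API,
    # without requiring the application to stop writing.
    sqlite3 live.db ".backup 'backup.db'"
    sqlite3 backup.db 'SELECT count(*) FROM t;'
else
    echo "sqlite3 not installed; skipping demo"
fi
```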
AND Zero mentions of Meteor, lol!
I think I'm the only indie developer using Meteor and Blaze, but I have to say they make my life WAY better. Granted, it took some time to figure out exactly how to optimize the stack to make it scale correctly, but NOW Meteor is the only solution I've heard of that does this:
> Meteor now automatically builds two sets of client-side assets, one tailored to the capabilities of modern browsers, and the other designed to work equally well in all supported browsers, so that legacy browsers can continue working exactly as they did before. This “legacy” bundle is equivalent to what Meteor 1.5 and 1.6 delivered to every browser, so the crucial difference in Meteor 1.7 is simply that modern browsers will begin receiving code that is much closer to what you originally wrote.
The apps I build now run like native apps in a modern browser. Laser fast. It's beautiful, like the first time using broadband.
After everyone and their mother proclaimed Meteor to be dead, they rose from the grave and locked in like Godzilla. I don't begrudge anyone their choices, but if you haven't taken a look at Meteor lately, having used Wordpress, Rails, Ember, React, and vanilla JS with node in the past, I'm very grateful Meteor's development team is STILL knocking it out of the park.
I’ve never experienced any website that responds faster than one built with meteor post 1.7 in an updated browser. Lightning fast
Apollo can also be used with GraphQL, but I’m happy with the results using Mongo and DDP out of the box with one simple command (meteor create). Deploying to Now is also a one-liner thanks to meteor-now.
However every single day I do wish there was a solution for my webapp to get access to contacts on the phone and of course notifications. It would have been a game changer given the nature of the product. Currently it is a world of compromises for hobby developers (shameless plug: http://ping3.com)
As a language PHP can be a mess but the ecosystem is fantastic; the release of PSRs, composer and 7.x have been great.
For my VR stuff... Unity is really the only choice that's fast to get off the ground, so Unity it is.
For hacking together backend heavy stuff, Python. Flask if I need to expose an API endpoint. Postgres if I need a real database, sqlite if I don't. Deployed to my evergreen playpen virtual instance via Docker containers because it's, once again, a well trodden, simple, and undramatic path.
Other than that, try your best to decouple game state from the actual objects that are flying around. You will never do it 100%, but the more egregious Unity code I've seen all shares the trait that they just jammed variables into scripts for objects without really thinking about how they'd tie together a distributed network of objects that have no easy way of addressing each other. The answer is that you don't: you still need a central game state.
I started on Heroku, but I had a lot of free AWS credits from Stripe Atlas that I wanted to use. So I moved to Convox on AWS, and it has been absolutely awesome. I have a rock-solid cluster across 3 different availability zones, and I'm also running Postgres with high availability (RDS).
I haven't had a single second of downtime since I migrated (I set up the status dashboard a few months ago, but moved to AWS earlier than that). Convox supports auto-scaling and rolling deploys. It waits for the web process to be healthy, and if something crashes it rolls back the deploy without any downtime. I can also terminate any instance at will, and another one will pop up to replace it with zero downtime. After using it for the last ~6 months, I feel confident enough to start offering a 99.999% SLA.
Chose both because I wanted consistency between the abstraction of my business logic, and the spec of my implementation.
I get some chippy guff about using graphs, but by taking the time to define my business logic as a grammar, expressing that grammar as a graph, then implementing it directly into the model, I get a lot of "aha," moments from potential customers.
Graphs can be analogous to functional languages in that if you are using them, there is a higher likelihood you've reasoned something through before implementing it.
Partly because I'm apparently a masochist, but also because... I mean c'mon, modern architecture is CRUD and job queues. Rust can do that fine if you don't get distracted by the bells and whistles.
I ripped out and open sourced my user-module stuff for authentication handling, if anyone's interested: https://github.com/ryanmcgrath/jelly
Why? For me it's best of breed vs simplicity. OpenID Connect is the most mature auth, rabbitmq very good messaging, elixir a lovely language, graphql the most programmer-friendly connection between frontend and backend, and react native allows 90% code sharing between web SPA and mobile apps.
And if you combine all of these (I mean if you finally set it all up), it's a low number of lines of code environment.
All of that said, I'm a big believer in "use the right tool for the job" so if something comes up that requires C++, I'll use C++. If something needs Python, then Python it will be. If Prolog is right for something, I'll use Prolog. Or COBOL, or Erlang, or Ruby, or CL, or Perl, or SNOBOL, etc., etc, yadda, yadda, ad infinitum...
Front-end: React SPA (via Create React App), Redux, React-Bootstrap
Why: I had no prior framework experience; single-command setup, and plenty of questions on Stack Overflow and Medium articles to learn with.
React's one-way data flow and component architecture made things incredibly easy to mentally digest.
I found React's documentation VERY well organized and explained, and Vue's documentation was intimidating when compared to React.
Version Control: BitBucket Git
Why: Free private tier for solo use and enjoyed previous experience with Atlassian products
CI/CD: BitBucket Pipelines
Why: Already using Bitbucket, this just worked seamlessly with it. Only $10 for 1000 more build minutes.
Firebase: Hosting/Cloud Functions/Firestore/Auth
Why: Everything is automagical
HTTPS static website hosting for the create-react-app (This is what first got me started)
Then came Storage, Firestore, Cloud Functions, and Auth.
Wish it had an automagical SQL product.
Low Entry Cost
Cloud Provider: Google Cloud
Why: Had no experience with any cloud provider.
I found the pricing/product offerings really easy to digest and interpret when compared to AWS.
Email Sender: Sendgrid
Why: Good starting free tier. Decent Docs. Easy to setup.
Backend Compute: Google Compute Engine + Docker + Python-based API app
Why: I can do local dev easily with Docker, push the image to Google Container Registry, then pull the image to the VM.
It's easier than learning to set up a CI/CD-piped GKE cluster.
Eventually I want to pay the "tax" and go to a GKE CI/CD Kelsey Hightower setup, so I can git push/PR/merge a change and have it in production.
I don't do enough changes to the backend right now to justify a CI/CD piped GKE setup.
Email Client: Gsuite
Why: Custom Domain Name
I'm already familiar with Gmail
Project Management: Trello
Why: Integrates with Bitbucket
Multi-Device Support (Phone, Tablet, Desktop)
Easy to mind map and organize features
What I value:
Free tier for solo devs/low use, and progressive options for paid plans (Gives me breathing room on cash while I figure things out)
Lots of questions/answers on Stack Overflow or in-depth Medium articles.
Python for the backend, under the argument of "build using what you know". ReactJS for frontend development, because I find doing frontend work in Python very antiquated.
Architecture-wise I'm hosted on AWS. Data is stored in MySQL and Redis, all self-hosted because I'm too cheap to pay per-request pricing, and the overhead of rolling my own is very minor when done right.
* Storing time counters on user actions (timestamp of last time a user posted/edited X, meant to be a throttling mechanism against abuse)
* Site content caching, around 20-50kb per page, each page being user generated content
Sizes will obviously vary on traction so not sure about the final numbers.
My last 9-to-5 employer was very well known, and my largest caching tier in Redis there was 512GB in a cluster configuration. I'm using the same server configuration and sharding logic for the indie thing, just on a smaller scale.
The Baeldung Spring Security course helped a lot.
If in a hurry though you can always use an external service like Auth0. I had that setup in an afternoon whereas understanding OAuth 2 and Spring Security took a week.
I am not making fast progress, I think the problem is my keyboard, I should try buying that Ducky one I have been eyeing for quite a while.
Play looks good, but maybe I should go with Scalatra instead? Should I be using Slick? It's quite different from any ORM I've used so far; maybe I should bake my own Scala ORM. OK then, here we go. One more decision: REST or GraphQL? GraphQL seems to be the future. Let's go with that. Man, types are nice and Scala's type system is powerful, but sometimes I wish I had the flexibility of a dynamic language. It would make prototyping much faster.
Fuck it, "git checkout -b node_v2"
You can use both scala and clojure seamlessly.
I'm writing my first gui app in clojure for prototyping swing, but if I choose to rewrite parts of it in scala the rest of the clojure code would continue to work.
There are several ways.
using clojure in a scala project: https://github.com/Geal/sbt-clojure
using scala in a clojure project: https://github.com/technomancy/lein-scalac
jvm polyglot with maven: https://stackoverflow.com/a/9676895
See now this is where you went wrong. Should have switched to Nim at that point. Can't beat its familiarity, speed, and metaprogramming.
Isn't the actual wrong turn focusing on a metaproblem rather than the problem you set out to solve?
How often has a commercial project been rewritten to gain metaprogramming mid-dev? I added commercial because personal projects have different objectives.
I use PHP for prototypes and Go for serious stuff.
How do I separate what might be a side project to learn a new stack from a side project that might eventually make money?
Before you write a line of code, make a clear decision as to whether it's a learning experience or a business venture. Plan your project based on the goal you've chosen. If it's a learning experience, consciously avoid doing anything that doesn't push you to learn something new. If it's a business venture, consciously avoid doing anything unless you believe it has the highest RoI.
Some purely recreational projects do turn into viable businesses, but far more projects have failed due to indecision and yak shaving. If you're building a business, you don't gain anything by experimenting with a new stack. If you're learning React, you don't gain anything by writing a bunch of CSS and marketing copy. Don't be afraid to step away from the keyboard and ask yourself "Is this a good use of my time?".
I feel like this is just missing so much context.
If you are working full time and can hack in your after hours to learn a new tech stack first and then start your project there might be an aggregate benefit over just starting the project right away using what you know.
Context matters to a huge degree in these kinds of discussions. I feel like people just like to give blanket statement advice.
Also build using what you know isn't always the best advice. Figure out what you want to optimize for and optimize for it. That may include doing things in a different stack.
Apps: Flutter, switching to native if I need it. But Flutter is really good for my simple needs!
So far I am super happy with the stack, runs cheaply and requires pretty much no maintenance. Just tag a release on GitHub, Google Cloud Builder builds the image, Keel updates deployments and in a few seconds I have updated production with zero downtime.
While some people are happy with their copy/paste of binaries/code to remote servers, I think a good pipeline to release and run your workloads makes working on side projects a lot more pleasant: you can focus on code and features. It also provides you with valuable experience that can make you more money than the side project itself :)
How do you deploy your backend containers without downtime?
Do you host your Node app and PostgreSQL on the same machine?
I understand your frontend is hosted on Netlify and your backend on DigitalOcean: are they accessible under the same hostname (using Netlify proxy for example), or do you use CORS requests?
It was all designed to run on kubernetes which would allow rolling updates with zero downtime, but I can't justify the cost of a cluster at the moment.
The node app and postgres are on different instances. They could easily be on the same, though. Postgres is open to the world since GIS analysts need to be able to connect to it with QGIS. The API is also open to the world using CORS for the same reason (a custom GIS collection tool accesses the REST API).
I think a 1 second downtime when deploying a new version of the backend is tolerable if the app is a single-page application, because the app can implement retry logic client-side. This is possible only if the backend can be stopped and restarted really quickly. Otherwise, a rolling update or blue-green deployment is necessary.
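That client-side retry logic is just a retry-with-backoff loop. Sketched here in shell with a stubbed request (in the SPA it would be the same loop around the actual HTTP call):

```shell
# Generic retry-with-backoff loop; fake_request stands in for an HTTP call
# that fails while the backend restarts (here: fails twice, then succeeds).
tries=0
fake_request() {
    tries=$((tries + 1))
    [ "$tries" -ge 3 ]        # succeed on the third attempt
}

attempt=1
max_attempts=5
until fake_request; do
    if [ "$attempt" -ge "$max_attempts" ]; then
        echo "giving up after $max_attempts attempts" >&2
        exit 1
    fi
    sleep 0.2                 # short backoff; covers a ~1 second deploy gap
    attempt=$((attempt + 1))
done
echo "request succeeded on attempt $attempt"
```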
Another question: do you expose your Node service directly to the Internet, or is it behind a reverse proxy?
It was great--thanks.
Vue+vuex on the front end.
Gitlab CI/CD for deployment
Extremely fast dev/deployment turnaround time with Django and Beanstalk.
Vue/vuex is simple enough for me to understand and build SPA
Though I have been thinking about whether I should replace Elastic Beanstalk with Kubernetes, since it seems to be becoming the non-proprietary standard for deploying containers?
SNS is "good enough" for my use cases and I don't have to run a beanstalk worker (like for sqs).
I chose beanstalk because of how dead simple it is. It is non-standard in a way, but was the fastest way to get the app out.
Again, I know the app may outgrow this setup at some point. At that point I'll invest the time to move it to the flavor of the day :).
I'm not in web apps.
> I've heard React described as "10 times as much work for a 20% better user experience", and I think that's about right (maybe not 10x, but probably 2x)
I thought this was a gem.
- Tachyons CSS
- Node.js (with express and postgraphile)
But the video producer/converter is also in TS (in that same repo). It produces your making-of videos like these:
Standing up a CRUD app in Phoenix is dead simple as well.
Python with Cython because this allows extremely nice flexibility between concise, expressive code and low-level targeted performance optimization.
Flask with gunicorn has scaled extremely well for us, but there are many good alternatives. Postgres because flexibility with customizations and data types in the database has been the most important thing for us.
Past that, I'm fond of Node, but that may be an unpopular thing to say on HN. :)
Something that will let you focus on creating the part that's unique to your project rather than having to build everything (or a lot) from scratch? (Rails/Django/etc are not enough).
I'm on mobile so going into detail is difficult. If it would be useful I'll write a guide for you.
I’m running my newest solo project on Prisma with GraphQL Yoga as the backend and React + Apollo as the front end. It’s really quite something and allows for extremely quick development.
Tempted to learn a 2nd language well, thinking Go but curious about the experience of Kotlin + IntelliJ
I managed to run the deploy using terraform on AWS free tier and Azure msdn account