jherdman's comments | Hacker News

Absolutely! We're using Rails at Precision Nutrition. We use it to serve up a JSON API, and in a more traditional sense with ERb templates.

We're finding it to be incredibly productive, and more than performant for our needs, especially when coupled with PostgreSQL.

-----


I'd jump right in. The upgrade path has been very gentle for quite some time now.

-----


Transpiling may actually be the safest way to use JavaScript, especially given its rate of change. Take, for instance, a number of features trickling down the ECMAScript 2015 pipe: modules, classes, "let", etc. Using these features without a transpiler, such as Babel, is simply not possible. A transpiler allows you to use these features now without any worry.

Not all transpilers may be to your taste, but they have their place.
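To make the trade-off concrete, here is a rough sketch of the kind of translation a transpiler performs for ES2015 classes. This is simplified, not literal Babel output (the real thing adds runtime helpers like a `_classCallCheck` guard):

```javascript
// ES6 source (what you would write):
class Greeter {
  constructor(name) { this.name = name; }
  greet() { return 'Hello, ' + this.name; }
}

// Roughly the ES5 a transpiler emits (simplified sketch):
function GreeterES5(name) { this.name = name; }
GreeterES5.prototype.greet = function () { return 'Hello, ' + this.name; };

console.log(new Greeter('HN').greet());    // Hello, HN
console.log(new GreeterES5('HN').greet()); // Hello, HN
```

Both behave identically, which is exactly why the transpiled output can run in ES5-only browsers today.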

-----


> Transpiling may actually be the safest way to use JavaScript, especially given its rate of change. [..] A transpiler allows you to use these features now without any worry.

I disagree, and my original statement applies as much to TypeScript as it does to an ECMAScript 6 transpiler. If you want to use the new features and you're transpiling now, you need a good understanding of the differences between ECMAScript 5 and 6, and you open yourself up to possible bugs in the transpiler.

Instead you could just write ECMAScript 5 until 6 is more widely available. Sure, no one likes to wait, but it requires the least amount of expertise and a lower chance of bugs from a third party interfering with your application.

-----


> Sure, no one likes to wait, but it requires the least amount of expertise and...

I would argue that knowing the native-JavaScript implementations of features such as module/export/class/extends requires a lot more expertise to write coherent OOP code than just opening up TypeScript and using an OOP syntax most people are already familiar with.
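For illustration, here is a sketch of the ES5 inheritance boilerplate that a `class Dog extends Animal` declaration in TypeScript or ES6 roughly compiles down to (simplified; the actual emitted helpers differ):

```javascript
// The ES5 pattern hidden behind `class Dog extends Animal`:
function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + ' makes a sound'; };

function Dog(name) {
  Animal.call(this, name);                       // what `super(name)` does
}
Dog.prototype = Object.create(Animal.prototype); // what `extends` does
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () { return this.name + ' barks'; };

console.log(new Dog('Rex').speak());             // Rex barks
console.log(new Dog('Rex') instanceof Animal);   // true
```

Getting every one of those lines right by hand (especially the `Object.create` and `constructor` repair) is the expertise the class syntax spares you.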

-----


I feel more comfortable writing TypeScript than using an ES6 transpiler, really. The compiler is built really well: you can observe the team's development process on GitHub, and I have to say the quality is quite impressive (extensive code reviews and great discussions on the issue tracker).

-----


I'm sure the compiler is well built, but stability is my main concern. JavaScript, even with its many flaws, is used by millions of people every day and is one of the most used programming languages in existence. That's a ton of testing through real-world use. Since transpilers just compile to JavaScript, they can borrow some of that same stability for their output, but the act of transpiling itself is exercised by maybe a few thousand users. Sure, they'll find bugs and fix them, but that's nowhere near the same level of scrutiny as simply writing native JavaScript (native JavaScript, is that an oxymoron? lol).

Besides, everything ever created has bugs; why risk introducing bugs from both a JavaScript engine and a transpiler when you could risk only one?

As an aside, the first and last time I used CoffeeScript I ran into an issue that, since this was about two years ago, has since been fixed, but at the time it cost me hours of debugging. It really soured my experience with transpilers in general.

-----


You have a very valid point regarding bugs. Transpiler bugs definitely happen. When I first tried using Babel - still named 6to5 then - instead of Traceur, it immediately broke my code because of an incorrect TCO implementation in Babel. Babel's maintainer (sebmck) is insanely responsive and the bug was fixed almost immediately, but still.

If I used Babel or any other transpiler for critical production code, I would only enable a whitelist of features that I thoroughly understand: for example, how a fat arrow is translated into a regular function. This vetting would be time-consuming, but that's a price I'd be willing to pay because I enjoy writing ES6 much more than ES5. That's highly subjective though, so I completely understand how some other people would rather just stick to ES5 and get stuff done with the lowest possible risk of weird things happening.
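As an illustration of that kind of vetting, here is roughly how a transpiler rewrites a fat arrow so that `this` stays lexical. This is a simplified sketch of the idea, not literal Babel output:

```javascript
// ES6: getN() could contain `var arrow = () => this.n;`
// The essence of the ES5 translation:
var obj = {
  n: 1,
  getN: function () {
    var _this = this;            // capture the enclosing `this`
    var arrow = function () {    // the arrow becomes a plain function...
      return _this.n;            // ...that closes over _this
    };
    return arrow();
  }
};
console.log(obj.getN()); // 1
```

Once you've internalized that a fat arrow is "a plain function closing over a captured `this`", reviewing the transpiler's output for it takes seconds.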

-----


Babel is a completely different transpiler from all the others. Its goal is readable, understandable transpiled code, and Sebastian (sebmck) is a genius robot wunderkind. I check the repo every day and see new updates, bug fixes, etc. It is heartening to see someone give a project so much dedication.

I have an affinity for Babel unlike any other JS project (except React). The team works really hard to keep it interesting.

-----


Yeah, Babel and the Babel team are great. I got very suspicious of the project after my first experience with it broke perfectly valid code, but after reporting the issue, and then another, and having them fix both almost immediately, they won my confidence back. :)

For production code I'd still stick to a whitelist of features that have been released for at least a month, though. Babel is aggressively pursuing ES6 coverage, and new features might not have all edge cases covered when they're released. Give them a month, though, and with their responsiveness it's 90% likely to be fixed.

-----


The same can be said for Babel, and it is actually very simple to predict or reverse-engineer the transpilation from ES6 to ES5.

-----


I'm presently in the middle of an experiment: using Firefox (and its Developer Edition) as my primary browser for my work as a web developer. So far it's been kind of rocky. Daily usage of the browser isn't that bad, but the developer tools are lacking compared to Chrome's.

A good example is the Network tab. In Chrome, I need only have the developer tools open to capture information about requests. In Firefox, I must not only have the tools open, I must also be viewing the Network tab itself.

Another minor annoyance is the state of add-ons. The add-ons I use regularly seem to be several versions behind their Chrome counterparts. And have you tried to develop an add-on? Wow. The docs are kind of harsh; they could really use a guided example.

-----


You should install Firebug. It's better than the integrated dev tools in almost all areas.

Regarding writing extensions, this came up on HN in September 2014: https://news.ycombinator.com/item?id=8285744

-----


I'll definitely give Firebug a try; I hadn't even considered it, to be honest. I thought I remembered reading a while back that it was no longer important after Mozilla decided to brew their own tools.

-----


I feel like Firebug adds unneeded weight to FF now that FF's dev tools have matured so much. There's still a lot of work to catch up to Chrome, as you stated, but for the most part they're very capable, and Firebug just feels like bloat, adding memory consumption that isn't necessary.

-----


I sometimes use the dev tools, but I prefer Firebug. Maybe it's only better looking, but there are some real advantages. I'm on a phone now and can't make a side-by-side comparison, and relying on my memory won't work :-) The dev tools are faster, that's for sure, but Firebug isn't slow enough to make me switch. I'd like the two teams to merge.

-----


The only thing that irks me is that there is no native dark theme for it.

-----


> You should install Firebug. It's better than the integrated dev tools in almost all areas.

Really? I uninstalled Firebug not so long ago, having concluded that it offered little added value any more, while measurably, repeatably, and dramatically reducing Firefox's performance. Has either or both of these things changed in the last few months?

-----


Yes. Last spring Firebug rewrote its script debugging tools on top of the new JS debugging APIs in the then-upcoming FF nightlies[1][2]. Firebug was painfully slow for me on JS-heavy sites about a year ago. This year it's perfectly responsive, on the same computer.

[1] https://blog.getfirebug.com/2014/03/26/firebug-2-0-alpha-1/ [2] https://blog.getfirebug.com/2014/06/10/firebug-2-0/

-----


FF's built-in developer tools are still pretty sad, but nothing quite compares to full-blown Firebug on Firefox. The only place anyone else's tools outdo it, IMO, is Chrome's profiling (Firebug's is nearly useless).

-----


One minor grievance I have with the Firefox console is that the input line is all the way at the bottom. I quite like Chrome's approach of starting at the top.

Firefox has a few of these irritating UX quirks that I just can't get over. Having to restart after installing an extension is another. Every time I see the request, I can't help but lose a bit of respect for it.

-----


"Having to restart after installing an extension is another. Every time I see the request, I can't help but lose a bit of respect for it."

Restartless add-ons have been around for many years, so if you see one that wants you to restart Fx, you can blame it on the add-on's creator (I know there may be some edge cases where a restart is a must, but those are pretty rare).

-----


> One minor grievance I have with the Firefox console is the input line is all the way at the bottom. I quite like Chrome's approach of starting at the top.

This is such a small thing but it drives me insane (and keeps me on Chrome, among other things).

> Having to restart after installing an extension is another. Every time I see the request, I can't help but lose a bit of respect for it.

Yeah, I've enjoyed the last few years of not having to worry about restarting; I don't know if I could go back...

-----


>Having to restart after installing an extension is another

Because it's part of your everyday workflow to install extensions one after another, right?

-----


> Having to restart after installing an extension is another

It depends on the extension. Only some (particularly, older) extensions require restarting.

-----


> Firefox has a few of these irritating UX quirks that I just can't get over.

Another one is not being able to use Shift-Home or Shift-End to select to the beginning or end of a CSS property.

-----


Wow, this drives me insane. I've always meant to file a bug report about this, but I'm not sure if it's supposed to be a feature instead. It appears to let you cycle through valid properties, but I really wish it were assigned to another key.

-----


File a bug if you want to request devtools improvements.

-----


Also, the detail provided is not as informative. I was recently migrating a website, and while doing so I needed to modify my local DNS to verify the site still worked on the new infrastructure before pointing all traffic at it. I was able to use the Network tab in Chrome to confirm that I was loading the site from the new servers and not the old ones, because Chrome's Network tab shows you the IP address of the requested site. That's very useful when loading from multiple servers as well. I couldn't find the same information in Firefox.

-----


I assume you've already installed the Web Developer toolbar and Firebug? FWIW, I do all my primary web development in FF and don't have that network problem.

-----


How about filing some bugs for your concerns?

-----


That's my intention. The first step is always to ensure that you're actually dealing with bugs, though; it may just be something I'm not used to.

-----


As I understand it, add-ons are updated more slowly than their Chrome counterparts because Mozilla actually reviews the code of each add-on offered in their store, to validate that it's not doing anything naughty. Google and Apple make the same kinds of promises, but really they approve things after a cursory glance and then respond to any user complaints that come in.

-----


Which is really what Firefox needs to do as well. This is one of my main pain points as a Firefox extension developer, and a turn-off for most developers I know. Rather than making people wait 15 or so days for their add-ons to be approved, they should improve their automatic code-checking tools as much as they can and stop at that (and have a better extension-flagging system).

The problem with the current system is worse if your add-on is tied to a specific website, like mine is. When that website changes, it breaks my add-on, and I upload the fix immediately. But Firefox takes days to approve it, making my add-on look bad in public.

-----


On the other hand, I've read that the review process is quite slow (as in painfully slow).

-----


Yeah, this is the tradeoff; it takes a long time to audit code. Supposedly Mozilla does this every time an add-on update is sent for review. Google and Apple just pretend they do, and I guess Mozilla didn't get the memo that you're supposed to pretend.

-----


Yes, FF/Mozilla's striving for perfection delays lots of things. They should ship fast and fix later (and not be perfectionists).

-----


This pull request is a thing of beauty. Each commit is a little slice of (mostly) independent awesome. The project nerd in me just did a little dance.

-----


I've found this to be true for most of FreeBSD and even other BSD-licensed code.

-----


How the fuck do people do this with git? I loved making pretty commits with Mercurial's patch queue. Git is awesome, but it seems like the only option for pretty commits is rebase (ew). All the git patch-queue implementations are basically dead or unmaintained. :(

-----


These are actually the raw commits as I wrote them; I try to structure my work to build on itself in this way.

-----


Yes, one trick I use a lot is interactive rebase. I make a lot of commits marked "WORK IN PROGRESS", then run git rebase -i HEAD~X to improve the history before pushing.
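The WIP-then-clean-up flow can be sketched end to end in a hypothetical throwaway repo; here `reset --soft` plus `--amend` is used as the non-interactive equivalent of marking the commit "fixup" in `git rebase -i`:

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name Dev

echo one > file && git add file && git commit -qm "feature: part 1"
echo two >> file && git add file && git commit -qm "WORK IN PROGRESS"

# Fold the WIP commit into its predecessor before pushing:
git reset --soft HEAD~1
git commit -q --amend --no-edit

git log --oneline   # a single clean commit remains
```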

IMHO Mercurial is superior to git in a lot of respects, but there is a main philosophical difference in usage: Mercurial is more about indelible history, while git people love to edit the history of the commits.

-----


> git people love to edit the history of the commits

It varies. I avoid editing history and make my opinion known whenever the topic comes up. Editing history requires making assumptions about what future developers might want or need; making assumptions like this is a form of You Ain't Gonna Need It (YAGNI). It's better to give future developers everything we have, no matter how "ugly", and let them get what they actually need out of it.

In a previous life, our development workflow required rebasing as one of the steps, for permissions reasons (only senior devs could merge branches into master, so having everyone rebase first ensured conflicts were resolved by the branch's author).

-----


I haven't used it much so please forgive my ignorance, but what is wrong with git rebase?

-----


Just the user interface. It's a lot easier to fuck up during a rebase. MQ is a lot like git stash on steroids: super easy to flit between the multiple patches I'm working on and put a change in the appropriate one. It wasn't uncommon for me to touch 3-4 patches in a random order over the course of a few minutes.

Let's say I'm working on patches 1, 2, and 3.

--

Situation: Currently on 3. I write some code and decide it should be in patch 1.

Git: Can't interactively rebase with uncommitted changes. Some sort of stash juggling.

MQ: Go down two patches. Commit the changes to the patch.

--

Situation: I'd like to introduce a new intermediate patch 1.5.

Git: Make a new commit on top of patch 1, then rebase the other patches on top of it. If there are potentially conflicting merge errors, they need to be addressed immediately, or I'm stuck leaving an alternate history of commit 1 around until I'm ready to deal with the merge issues.

MQ: Go down two patches. Make a new patch. Deal with the merge errors whenever I go up/apply patches 2 and 3.
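For comparison, one way git handles the first situation ("this change belongs in patch 1") is a fixup commit plus `--autosquash`, sketched here in a hypothetical throwaway repo; setting the sequence editor to `true` accepts the generated todo list unedited:

```shell
cd "$(mktemp -d)"
git init -q mq-demo && cd mq-demo
git config user.email dev@example.com
git config user.name Dev

for n in 1 2 3; do
  echo "change $n" > "f$n" && git add "f$n" && git commit -qm "patch $n"
done

# While on patch 3, a change turns out to belong in patch 1:
echo "late fix" >> f1 && git add f1
git commit -qm "fixup! patch 1"   # subject targets the "patch 1" commit

# --autosquash reorders the fixup next to "patch 1" and squashes it:
GIT_SEQUENCE_EDITOR=true git rebase -qi --autosquash --root

git log --oneline   # back to three commits, with f1 amended
```

It works, but it is still an interactive rebase under the hood, which is exactly the interface complaint above.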

-----


It rewrites history. Also, Mercurial allows you to version-control your patch queue (WIP commits).

-----


How is this different from the information stored in `git reflog`, which lets you roll back to commits as they were before the rebase? You can just as easily revert to the old version, since it never goes anywhere.

Besides, the need to perfectly preserve history is, in most cases, totally overblown AFAICS, especially for local history nobody else sees. I don't care if a person who submitted patches to me made 20 separate minor commits fixing minor things during, say, a code review (e.g. "fix spelling", "fix 80-column violations", "rename this thing", "clean up code a bit and make it shorter re: code review"); those are superfluous, add no meaning to the actual work, and can be rebased/squashed away in almost all cases. If they submit 20 minor commits that are each independent of one another and isolated, that's another story.

The alternative seems to be "have an ugly history littered with these commits", if "rewriting history" is as incredibly dangerous/terrible as is always implied (it is not, because you can always recover from it with the reflog until you push). But I'd rather keep my project's history clean and clear; a tidy history is just as important as tidy code, IMO. FWIW, I think the OP's set of patches is clear and does not constitute an ugly history.

The actual way to stop history rewriting is to disable --force pushes; a force push is what unilaterally rewrites history for all downstream consumers, and that is just as true for Mercurial. Rebase alone does not do this, or anything close to it.

As someone who reads and writes a lot of patches, this is an exceedingly common workflow. How is Mercurial any better in this situation, where I don't want all that useless information?
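The reflog safety net mentioned above can be demonstrated directly (hypothetical throwaway repo):

```shell
cd "$(mktemp -d)"
git init -q reflog-demo && cd reflog-demo
git config user.email dev@example.com
git config user.name Dev

echo a > f && git add f && git commit -qm "keep me"
echo b >> f && git add f && git commit -qm "oops, deleted"

git reset --hard HEAD~1      # "destroy" the second commit...
git reflog                   # ...but its hash is still recorded here
git reset --hard 'HEAD@{1}'  # jump back to the pre-reset state

git log --oneline            # "oops, deleted" is restored
```

Until the reflog entries expire or you push, a "destructive" operation is really just a pointer move you can undo.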

-----


> How is this different from the information stored in `git reflog`, which allows you to rollback to commits before their rebase? You can just as easily revert back to the old version, since it never goes anywhere.

Patch queues make the distinction between mutable WIP patches and finished commits explicit. Also, versioned patch queues make it safe to share WIP patches.

-----


That guy seriously rules.

-----


My apologies if this is a really naive question; I don't really do this kind of low-level systems programming. Do most projects of this sort not have tests, or are they just hard enough to test that tests don't really pay off?

-----


As for the xlib stuff, I can't really write tests for that. But I will write a more extensive test suite for the core components, especially the ones in src/core.rs.

The methods in window_manager.rs often depend on the window system, but that shouldn't be a problem, as I could simply insert a dummy interface.

Tests will follow. The last few weeks were just filled with my thesis and preparations for my last exam, so I used the coding for relaxation and kinda omitted the tests. I know, I know... behaviour.

-----


> As for the xlib stuff, I can't really write tests for that.

I do that by wrapping the C APIs in traits. I have some macros that generate both the proper implementations with FFI calls and mock implementations. Then I can write extensive tests for the high-level binding's low-level behavior using properly set-up mocks.

-----


I've been using the Headspace app (https://www.headspace.com/) for a few months now and LOVE it. I highly recommend giving it a try. They have a free introductory course you can try too.

-----


What does it do, and how does that help you?

-----


For one, it gives you ten 10-minute meditations for free, so you get a sense of what the whole enterprise is about without laying down any cash. For some (many?), that's all you need to get going.

Second, having listened to quite a few guided meditations, I find the Headspace guy (I think his name is Andy) to be one of the best. The meditations strike a near-perfect balance, directing my energy and attention in a way that doesn't draw attention to himself or his voice/delivery; i.e., he gives excellent guidance and gets out of the way.

-----


> ... but the title is very confusing to the layman

Given the density of knowledge in your comment, I'm not sure the title could be much different and still aid the layman's understanding.

-----


"New Emergent Particle Acts as its Own Anti-Particle".

A layman will not know what "emergent particle" means. (I did not.) But they will at least know that the presence of an adjective implies it's not quite "a particle", and the adjective itself hints at the meaning. If the layman's interest is then piqued, they will find clarification in the article itself.

-----


You wrote the style guide for Wikipedia's mathematics articles, didn't you? ;)

(By which I mean, a balance between immediate understanding vs. links for detail >> primarily links for detail without immediate understanding)

-----


Would "Virtual Particle" possibly be a more correct and explanatory term?

-----


Sadly, no, as "virtual particle" has a different meaning: http://en.wikipedia.org/wiki/Virtual_particle. The technical term for these, I believe, is "quasiparticle": http://en.wikipedia.org/wiki/Quasiparticle

-----


So, for example, a "hole" in a semiconductor lattice is emergent, but a phonon is quasi?

I suppose we would call an L4 Lagrangian point an emergent phenomenon and not a virtual mass (just because it can be orbited). For one thing, if we computed a surface integral of the gravitational field around the L4 point, we would find that there is no mass there; the net flux is zero.

-----


I do not know, as I am not a particle physicist; I got "emergent" from the popular-press article. The terms "virtual" and "quasi" already have technical meanings in the field, so my point was that using them in their non-technical senses would only further confuse a layman.

-----


We're using CSS Wizardry Grids in a new project at Precision Nutrition. It's been pretty great so far, though the use of comments with respect to grid items is quite something to wrap your head around.

-----


Those demanding Eich step down aren't "silencing [his] freedom of speech"; they're noting that Mozilla's culture of inclusivity is not congruent with his viewpoint and that, as such, he isn't fit to lead the organization in the capacity of CEO.

-----
