Not all transpilers may be to your taste, but they have their place.
I disagree; my original statement applies as much to TypeScript as it does to an ECMAScript 6 transpiler. If you want to use the new features and you're transpiling now, you need a good understanding of the differences between ECMAScript 5 and 6, and you open yourself up to possible bugs in the transpiler.
Instead, you could just write ECMAScript 5 until 6 is more widely available. Sure, no one likes to wait, but it requires the least amount of expertise and carries a lower chance of bugs from a third party interfering with your application.
> Sure, no one likes to wait, but it requires the least amount of expertise and...
I feel more comfortable writing TypeScript than using an ES6 transpiler, really. The compiler is built really well - you can observe the team's development process on GitHub, and I have to say the quality is quite impressive (extensive code reviews and great discussions on the issue tracker).
As an aside, the first and last time I used CoffeeScript, I ran into an issue that has since been fixed (this was about two years ago), but at the time it cost me hours of debugging. It really soured my experience with transpilers in general.
You have a very valid point regarding bugs. Transpiler bugs definitely happen. When I first tried using Babel - still named 6to5 then - instead of Traceur, it immediately broke my code because of an incorrect TCO implementation in Babel. Babel's maintainer (sebmck) is insanely responsive and the bug was fixed almost immediately, but still.
If I used Babel or any other transpiler for critical production code, I would only enable a whitelist of features that I thoroughly understand: for example, how a fat arrow is translated into a regular function. This vetting would be time-consuming, but that's a price I'd be willing to pay because I enjoy writing ES6 much more than ES5. That's highly subjective though, so I completely understand how some other people would rather just stick to ES5 and get stuff done with the lowest possible risk of weird things happening.
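To make that concrete, here's a sketch of the kind of translation I'd want to vet (this is a typical pattern, not any specific transpiler's exact output): a fat arrow captures the enclosing `this` lexically, so in ES5 it has to become a regular function plus an alias for `this`, because regular functions get their own `this` when called.

```javascript
// ES6 source: the arrow captures the enclosing `this` lexically.
function Counter() {
  this.count = 0;
  this.increment = () => { this.count += 1; };
}

// Roughly what a transpiler emits in ES5: alias `this` so the
// regular function can still reach the enclosing context.
function CounterES5() {
  var _this = this;
  this.count = 0;
  this.increment = function () { _this.count += 1; };
}

var a = new Counter();
var b = new CounterES5();
a.increment();
b.increment();
console.log(a.count === b.count); // → true, both behave the same way
```

Even a detached call like `var f = a.increment; f();` still updates the right object in both versions, which is exactly the behavior the alias exists to preserve.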
Babel is a completely different transpiler from all the others. Its goal is readable, understandable transpiled code, and Sebastian (sebmck) is a genius robot wunderkind. I check the repo every day and see new updates, bug fixes, etc. It is heartening to see someone give a project so much dedication.
I have an affinity for Babel unlike any other JS project (except React). The team works really hard to keep it interesting.
Yeah, Babel and the Babel team are great. I got very suspicious of the project after my first experience with it broke perfectly valid code, but after I reported that issue and then another, and they fixed both almost immediately, they won my confidence back. :)
For production code I'd still stick to a whitelist of features that have been released for at least a month, though. Babel is aggressively pursuing ES6 coverage, and new features might not have all edge cases covered when they're released. Give them a month, though, and with their responsiveness it's 90% likely to be fixed.
I'm presently in the middle of an experiment to use Firefox (and its Developer Edition) as my primary browser when performing my duties as a web developer. So far... It's been kind of rocky. Daily usage of the browser isn't that bad, but the developer tools are lacking compared to Chrome.
A good example is the Network tab. In Chrome, I need only have the developer tools open to capture information about requests. In FF, I must not only have the tools open, but I must also be viewing the tab.
Another minor annoyance is the state of add-ons. It seems that new versions of the add-ons I use regularly are several versions behind their Chrome counterparts. And have you tried to develop an add-on? Wow. The docs are kind of harsh. They could really use a guided example.
I feel like Firebug adds unneeded weight to FF now that FF's dev tools have matured so much. There's still a lot of work to catch up to Chrome, as you stated, but for the most part they're very capable, and Firebug just feels like bloat, adding memory consumption that isn't necessary.
I sometimes use the dev tools, but I prefer Firebug. Maybe it's only better looking, but there are some real advantages. I'm on a phone now and can't make a side-by-side comparison; relying on my memory won't work :-)
The dev tools are faster, that's for sure, but Firebug isn't slow enough to make me switch. I'd like the two teams to merge.
You should install Firebug. It's better than the integrated dev tools in almost all areas.
Really? I uninstalled Firebug not so long ago, having concluded that it offered little added value any more, while measurably, repeatably, and dramatically reducing Firefox's performance. Has either or both of these things changed in the last few months?
Yes. Last spring Firebug rewrote its script debugging tools on top of the new JS debugging APIs in the then-upcoming FF nightlies. Firebug was painfully slow for me on JS-heavy sites about a year ago. This year it's perfectly responsive, on the same computer.
FF's built-in developer tools are still pretty sad, but nothing quite compares to full-blown Firebug on Firefox. The only place anyone else's tools outdo it, IMO, is Chrome's profiling (Firebug's is nearly useless.)
As I understand it, add-ons are updated more slowly than their Chrome counterparts because Mozilla actually reviews the code of each add-on offered from their store to attempt to validate it's not doing anything naughty. Google and Apple make these types of promises, but really they approve things after a cursory glance and then respond to any user complaints that may come in.
Which is really what Firefox needs to do as well. This is one of my main pain points as a Firefox extension developer, and a turn-off for most developers I know. Rather than making people wait 15 or so days for their add-ons to be approved, Mozilla should improve its automatic code-checking tools as much as it can and stop at that (and have a better extension-flagging system).
The problem with the current system is worse when an add-on targets a specific website, like mine does. When that website changes, it breaks my add-on; I upload a fix immediately, but Firefox takes days to approve it, making my add-on look bad in public.
Yeah, this is the tradeoff. It takes a long time to audit code. Supposedly Mozilla is doing this every time an add-on update is sent for review. Google and Apple just pretend like they do, and I guess Mozilla didn't get the memo that you're supposed to pretend.
One minor grievance I have with the Firefox console is the input line is all the way at the bottom. I quite like Chrome's approach of starting at the top.
Firefox has a few of these irritating UX quirks that I just can't get over. Having to restart after installing an extension is another. Every time I see the request, I can't help losing a bit of respect for it.
> Having to restart after installing an extension is another. Every time I see the request, I can't help losing a bit of respect for it.
Restartless add-ons have been around for years now, so if you see one that wants you to restart Fx, you can blame it on the add-on's creator (I know there may be some edge cases where a restart is a must, but those are pretty rare).
Wow, this drives me insane. I've always meant to file a bug report about this, but I'm not sure if it's supposed to be a feature instead. It appears to let you cycle through valid properties, but I really wish it were assigned to another key.
Also, the detail provided is not as informative... I was recently migrating a website, and while doing this I needed to modify my local DNS to verify the site still worked on the new infrastructure before moving all traffic over. I was able to use the Network tab in Chrome to confirm that I was loading the site from the new servers and not the old ones: Chrome's Network tab shows you the IP address of the requested site, which is very useful when loading from multiple servers. I couldn't find the same information in Firefox.
How the fuck do people do this with git? I loved doing pretty commits with mercurial's patch queue. Git is awesome but it seems like the only option for pretty commits is rebase (ew). All the git patch queue implementations are basically dead or unmaintained. :(
Yes, one trick I use a lot is interactive rebase. I make a lot of commits labeled "WORK IN PROGRESS", then run git rebase -i HEAD~X to improve the history before pushing.
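For instance, `git rebase -i HEAD~3` opens a todo list like this (the hashes here are made up), and changing `pick` to `fixup` on the WIP commits folds them into the real one:

```
pick  1a2b3c4 Implement feature X
fixup 5d6e7f8 WORK IN PROGRESS
fixup 9a0b1c2 WORK IN PROGRESS
```

(`squash` does the same thing but also lets you edit the combined commit message.)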
IMHO Mercurial is superior to Git in a lot of aspects, but there is a major philosophical difference in usage: Mercurial is more about indelible history while git people love to edit the history of the commits.
> git people love to edit the history of the commits
It varies. I avoid editing the history and make my opinion known whenever the topic comes up. Editing the history now requires making assumptions about what future developers might want or need; making assumptions like this is similar to a form of You Ain't Gonna Need It (YAGNI). It's better to give the future developers everything that we have, no matter how "ugly", and allow them to get what they actually need out of it.
In a previous life, our development workflow required rebasing as one of its steps for permission reasons (only senior devs could merge branches into master, so having everyone rebase first ensured conflicts were resolved by the branch's author).
Just the user interface. It's a lot easier to fuck up during a rebase. MQ is a lot like git stash on steroids. Super easy to flit between multiple patches that I'm working on and put a change in the appropriate one. Wasn't uncommon for me to touch 3-4 patches in a random order over the course of a few minutes.
Let's say I'm working on patches 1, 2, and 3.
Situation: Currently on 3. I write some code and decide it should be in patch 1.
Git: Can't interactively rebase with uncommitted changes. Some sort of stash juggling.
MQ: Go down two patches. Commit the changes to the patch.
Situation: I'd like to introduce a new intermediate patch 1.5.
Git: Make a new commit on top of patch 1, rebase the other patches on top of it. If there's potentially conflicting merge errors they need to be addressed immediately—or I'm stuck leaving an alternate history of commit 1 around until I'm ready to deal with merge issues.
MQ: Go down two patches. Make a new patch. Deal with the merge errors whenever I go up/apply patches 2 and 3.
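The second situation really is just a few commands in MQ (a sketch; the patch name is made up):

```
hg qpop            # pop patch 3
hg qpop            # pop patch 2; now on patch 1
hg qnew patch-1.5  # start the new intermediate patch
hg qpush -a        # reapply patches 2 and 3; conflicts surface here, when you're ready
```

The key difference from the git version is the last line: conflict resolution is deferred until you reapply, instead of being forced on you mid-rebase.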
How is this different from the information stored in `git reflog`, which allows you to rollback to commits before their rebase? You can just as easily revert back to the old version, since it never goes anywhere.
Besides, the need to perfectly preserve history in most cases is totally overblown AFAICS, especially local history nobody else sees. I don't care if a person who submitted patches to me made 20 separate minor commits to fix minor things in some case like a code review (e.g. "fix spelling", "fix 80 column violations", "rename this thing", "clean up code a bit and make it shorter re: code review"); those are superfluous and add no meaning to the actual work itself and can be rebased/squashed away in almost all cases. If they submit 20 minor commits that are each independent of one another and isolated, that's another story.
The alternative seems to be "have an ugly history littered with these commits" if "rewriting history" is as incredibly dangerous/terrible as is always implied (which it is not, because you can always recover with the reflog until you push). But I'd rather keep my project history clean and clear; a tidy history is just as important as tidy code, IMO. FWIW, I think the OP's set of patches is clear and does not constitute an ugly history.
The actual way to "stop rewriting history" is to disable --force pushes; force-pushing is what unilaterally rewrites history for all downstream consumers. This is also true for Mercurial. Rebase does not do this, or anything close to it.
As someone who reads and writes a lot of patches, this is an exceedingly common workflow. How is Mercurial any better in this situation where I don't want all that useless information?
> How is this different from the information stored in `git reflog`, which allows you to rollback to commits before their rebase? You can just as easily revert back to the old version, since it never goes anywhere.
Patch queues make the distinction between mutable WIP patches and finished commits explicit. Also, versioned patch queues make it safe to share WIP patches.
My apologies if this is a really naive question, I don't really do this kind of low level systems programming. Do most projects of this sort not have tests, or are they just hard enough that they don't really pay off?
> As for the xlib stuff, I can't really write tests for that.
I do that by wrapping the C APIs into traits. I have some macros that generate both the proper implementations with ffi calls and mock implementations. Then I can write extensive tests for the high level binding's low level behavior using properly set up mocks.
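A minimal sketch of that pattern (all names here are hypothetical, not real xlib bindings): the C API hides behind a trait, the real implementation would make the unsafe ffi calls, and tests get a mock that records what was called.

```rust
// The low-level C API, abstracted behind a trait.
trait Display {
    fn screen_count(&self) -> u32;
    fn move_window(&mut self, window: u64, x: i32, y: i32);
}

// The real implementation would wrap a raw pointer and make unsafe
// ffi calls, e.g. `struct XlibDisplay { raw: *mut xlib::Display }`.

// A mock for tests: records every call instead of talking to X.
#[derive(Default)]
struct MockDisplay {
    moves: Vec<(u64, i32, i32)>,
}

impl Display for MockDisplay {
    fn screen_count(&self) -> u32 { 1 }
    fn move_window(&mut self, window: u64, x: i32, y: i32) {
        self.moves.push((window, x, y));
    }
}

// High-level binding code is generic over the trait, so its
// low-level behavior can be tested without a running X server.
fn center_window<D: Display>(display: &mut D, window: u64) {
    if display.screen_count() > 0 {
        display.move_window(window, 640, 360);
    }
}

fn main() {
    let mut mock = MockDisplay::default();
    center_window(&mut mock, 42);
    assert_eq!(mock.moves, vec![(42, 640, 360)]);
}
```

In the real setup the macros mentioned above would generate both the ffi and mock impls from one definition; the sketch just shows the shape of the trait boundary.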
For one, it gives you ten 10-minute meditations for free, so you get a sense of what the whole enterprise is about without laying down any cash. For some (many?), that's all you need to get going.
Second, having listened to quite a few guided meditations, I find the Headspace guy (I think his name is Andy) to be one of the best. The meditations strike a near-perfect balance of helping direct my energy and attention in a way that doesn't draw attention to himself or his voice/delivery. i.e. he gives excellent guidance and gets out of the way.
"New Emergent Particle Acts as its Own Anti-Particle".
A layman will not know what "emergent particle" means. (I did not.) But they will at least know that the presence of an adjective implies it's not quite "a particle", and the adjective itself hints at the meaning. If the layman's curiosity is then piqued, they will find clarification in the article itself.
So, for example, a "hole" in a semiconductor lattice is emergent, but a phonon is quasi?
I suppose we would call an L4 Lagrangian point an emergent phenomenon and not a virtual mass (just because it can be orbited). For one thing, if we did a surface integral around the L4 point, of the gravitational field, we would find that there is no mass there; the net flux is zero.
I do not know, as I am not a particle physicist. I got "emergent" from the popular press article. The terms "virtual" and "quasi" already have meanings in the field, so my point was it would be further confusing to use their non-technical meanings when trying to explain it to a layman.
We're using CSS Wizardry Grids in a new project at Precision Nutrition. It's been pretty great so far, though the use of comments is quite something to wrap your head around with respect to grid items.
Those demanding Eich step down aren't "silencing [his] freedom of speech"; they're noting that his viewpoint is not congruent with Mozilla's culture of inclusivity and that, as such, he isn't fit to lead the organization in the capacity of CEO.