Federal Reserve Bank of NY converts major economic model to Julia (newyorkfed.org)
173 points by idunning on Dec 3, 2015 | 88 comments

Cool. Even though Julia is replacing Matlab here, it looks like the biggest loser from Julia's rise might be Octave.

Matlab will always have its proponents who will use it no matter what, but if Julia keeps encroaching on this territory, I'm not sure where that leaves Octave.

For more information on the model discussed in this post, check out this series of blog posts:


Essentially, the model is a work in progress that is updated each quarter. It attempts to model, at a macroeconomic level, the interactions between banks, consumers, companies, governments, and households.

Once the model has solved for the general equilibrium of how those five agents interact, it can then be used to "shock" a particular factor, such as interest rates, to determine how this might affect the interactions between these agents.

It's a fairly well-respected model, and the fact that the US is one of the few countries to release such a large amount of financial data is one reason why the US is still a leader in economic theory.

> it looks like the biggest loser to Julia's rise might be Octave.

It's not a competition. Julia is our friend, not a rival to Octave. We're on good terms with their developers. Occasionally we share ideas and patches back and forth when it makes sense. They've asked me for help with their Octave benchmark and I've gladly provided it.

Octave being unnecessary is Octave's ultimate goal. When nobody cares about Matlab, probably nobody will care about Octave either. But as long as people care about Matlab, Octave will be relevant. Or perhaps, in a perfect world, enough people will stop caring about Matlab that Octave can start leading in fixing the biggest stupidities in the programming language (we already do this to some degree, but keep relenting and replicating Matlab's bugs because code depends on those bugs).

If Julia is gaining the minds of people in scientific computing, great. It's going to take them away from Matlab and Octave to equal degree. If we can all do our computing unfettered by proprietary licenses, I don't care if it means we end up using Octave or Julia as long as we stop using Matlab.

And many of Matlab's alleged proponents keep turning to Octave. Octave still fills a niche that Julia does not: being able to use Matlab code freely.

It's good to see people whose goal goes beyond wanting their personal project to succeed, who instead keep the original reason they started the project at the forefront.

Too often large enterprises and projects forget the reason they were started and simply try to keep themselves relevant.

Well, Octave wasn't started with the goal to take down Matlab. That became a goal later in the project's lifetime as it became clear that users were demanding it.

Just thought you might like to hear this: I love Octave and use it regularly in my academic work. It is the perfect language for me to rapidly prototype a calculation.

That said, I think the future lies in porting Octave's libraries and plotting capabilities over to Julia.

Respectfully, maybe it should be a (friendly) competition. Competition motivates people; it's commonly accepted as one of the central drivers of capitalist markets. Perhaps if competition were injected into open source projects in some form, it'd help the open source community compete with the proprietary one.

We already have fearsome competition: Matlab. We don't need in-fighting within team Free Software.

Respectfully disagree with the phrase "infighting". I believe healthy competition would be good for OSS.

You're applying a broad principle (competition is good) without considering lots of specifics.

First of all, within free software, collaboration is a lot easier than competition. Sure, we should all try to be the best we can be, but if our "competition" is also free, we can just grab their stuff and re-use it. We cannot, however, grab whatever we want from Matlab and re-use it, so we do have to compete against them.

Secondly, even if I agree with you that competition is great even in this case, how do you want me to modify my behaviour? To stop collaborating with Julia? To view them with more suspicion? To refuse to help them when they ask for help?

I already try to match features whenever possible. The feature I'm most envious of is their JIT compiling, but it's very difficult to implement this. Vice versa, I don't think they have a goal to match our features, such as a native Qt GUI or an exact implementation of the Matlab language.

Julia and Octave have lots of ideological differences too. For one, Octave is a GNU package, so it tries to adhere to the GNU coding standards and ideals. In particular, we prefer to call ourselves free software, not open source, even if both terms mean the same thing. I think the most important thing is to enable unfettered scientific computing. Julia's main goal is to provide a better language for scientific computing, while ensuring user freedom is a secondary goal. So far they have not compromised user freedom, and I hope they never do. So, there you have it, we're already competing on something?

Hmm. Sigh.

It is frustrating and saddening how often I encounter the same broken assumptions and nonconstructive disagreement on HN. I'm beginning to disengage.

I have considered many of the specifics, but yes, I was applying a broad principle in a broad manner, because it has broadly been shown to be beneficial. Even in child play, competition is good. When two children play one will almost always have an advantage relevant to the game at hand, and so the "winning" child must learn to temper their play while the "losing" child must learn to lose beneficially. When the Petronas Towers were being built the teams were placed in competition, with rewards at stake, to complete their tower first. This helped keep the project moving forward at a fast clip. Militaries of different sovereign states participate in "war games" to foster their relationships all the time. Both Russia and China have been invited to participate in the RIMPAC war games, for instance. So, if the idea of friendly competition holds for childhood development, construction, and war, I'd say it holds fairly broadly.

I'm not going to bother getting into the remaining details of your response, because you honestly don't seem open to the discussion. I've got about a thousand things requiring my attention and energy, and so I try to prioritize those that don't require beating my head against a wall in vain. But, as a purely "release of stress" exercise, I'd like to point out that OSS is getting its butt kicked in the consumer market. It's getting its butt kicked by proprietary software and software as a service, which is hardly unequivocally empowering to the user, and user empowerment is one of the most important components of OSS. So, you hate me, you disagree with me, whatever; if you give one flying fuck about OSS, put your ego aside for one short second and realize that OSS is losing to proprietary software and enabling software as a service unabated, while always getting the scraps from the table. Stop with the cognitive dissonance already and do something about it.

As someone forced to do Matlab/Octave in college, this is incredibly refreshing to hear. My respect for the Octave team just went up considerably.

This is an anecdote about a big Matlab proponent who switched sides.

I learned Matlab from my Econ 101 professor 8 years ago. He was a big Matlab proponent, and he knew all the econ tools inside out.

We kept in touch, since he has been one of the best professors I've ever had. So, fast forward to 2 years ago, I sent him an email asking about this [1] Quantitative Economics course by Sargent and Stachurski based on python. He told me that it looked interesting and that he would give it a try and tell me what he thought. This was his first approach to Python.

Two months later he sent me an email telling me that he'd ditched Matlab for good, had taken a couple of Python MOOCs, and was having a lot of fun with Python. He also praised the community and all the open source tools. He also told me to get away from the Sargent and Stachurski course, that they add too much complexity to the explanations just for the sake of looking more Nobel-prize-winner like.

[1] http://quant-econ.net/

>>He also told me to get away from the Sargent and Stachurski course, that they add too much complexity to the explanations just for the sake of looking more Nobel-prize-winner like.

That's really good feedback for anybody thinking of attempting that course. Does your professor recommend an accompanying course to Sargent and Stachurski's quantitative economics course? I haven't seen many other courses that provide code fragments to run, so S&S's course still has a lot going for it in that regard.

Nope, I'm sorry, he just told me that I should get into Python. He did recommend this edX course, but it's an introduction to Python, nothing to do with economics.

[1] https://www.edx.org/course/mitx/mitx-6-00-1x-introduction-co...

IMF course looks interesting https://www.edx.org/course/macroeconomic-forecasting-imfx-mf...

The Sargent course looks pretty intense. The IMF course looks more approachable; it uses EViews, which is a time-series-oriented statistical modeling package.

> Matlab will always have its proponents

I honestly can't understand why. I'm currently in a computer vision class that uses Matlab, and it's literally the worst language I've ever used in my life. The design is nonsensical, bordering on malicious.

I've always thought that the biggest problem with Python was that as programs get bigger, you're likely to run into confusion due to the lack of static typing and the substantial use of implicit conversion (e.g. "truthy" and "falsy" values), but Python has nothing at all on Matlab.
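A minimal Python sketch of that truthiness pitfall (the `first_nonempty` helper is hypothetical, invented purely for illustration):

```python
# Illustrative only: implicit truthiness can silently skip legitimate values.
def first_nonempty(items):
    """Return the first 'truthy' item in the list, or None."""
    for item in items:
        if item:  # 0, 0.0, "", [], and None are all falsy
            return item
    return None

# A legitimate value of 0 is skipped because 0 is falsy:
result = first_nonempty([0, "", 7])
print(result)  # 7, even though 0 came first and was a real value
```

The fix would be an explicit check like `if item is not None`, which is exactly the kind of intent a statically typed language tends to force you to state up front.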

Just last night, a matrix I constructed using the expression "[x, y, (x+scale), (y+scale)]" inexplicably had dimensions 1x8 instead of the visually obvious 1x4, but only after a few hours of Matlab slowly evaluating this expression thousands of times without any trouble. I suspect this has something to do with Matlab's horribly ambiguous matrix concatenation syntax.

> I honestly can't understand why

The answer is fairly obvious though, isn't it? For scientists or novice programmers wanting to do some numerically intensive work, Matlab has typically offered one of the quickest (in time-to-solution) prototyping platforms. One of the keys is that you can get started very quickly, with minimal setup (besides the expensive license and installation, something most academics need not worry about). As a former academic programmer and now a consultant who helps companies commercialize such prototypes, I understand your pain, but at the same time I completely understand why a physicist, imaging scientist, or finance major ends up using Matlab over something like C++ or Python. The same goes for LabView, and if you despise Matlab, I challenge you to give LabView a whirl. The gap between the Matlabs and LabViews of the world and other programming languages is closing in terms of ease of use and setup time, but we're not there yet, and momentum shifts on this kind of thing take a loooong time. If you come into a lab and are working on a research project that was started in Matlab, odds are, in the interest of finishing your thesis on time, you're unlikely to port it to another language, even if you have the skill to do so.

If you single-handedly list Python and C++ together, Matlab should have some ridiculously low entry barrier. I can remember a few other horrible tools that saw widespread use exactly because of that, e.g. Basic and PHP3/4.

Corollary: if you are offered a tool highly praised for being exceptionally easy for beginners, think about the price the tool had to pay in other departments, and how it will affect you after 6-12 months of use.

> If you single-handedly list Python and C++ together, Matlab should have some ridiculously low entry barrier.

The ridiculously low entry barrier exists wrt importing data and plotting/displaying results.

These are the same use cases as Excel and Access, and those also end up bloating in size and becoming unmanageable. The problem is that the selection criteria that lead to using Matlab in the first place also lead to never paying down technical debt.

Libraries. Inertia. The IDE is actually far more polished than Spyder. It makes things easier for non-programmers with many wizards, and excellent help files. The toolboxes are uniformly fairly high quality. I should mention again the help files. Matlab documentation is comprehensive and far better than any of its competitors.

There are many research institutions and workplaces where the cost of the tool isn't really thought about at all. In fact, the cost of Matlab and toolbox fees is a small fraction of a senior scientist/engineer's total compensation.

The main competitor is Python. However, using a general purpose programming language is often too hard for non-programming oriented scientists. They are not able to compile a Python package on Windows, never mind actually package and distribute their work to colleagues. On Windows, if the package isn't in one of the distributions like Anaconda, it may as well not exist.

> The main competitor is Python

I would say that it really is R. Python, while a good choice, is a fraction of the size of R in this domain, and R, as a domain-specific language, excels in its own realm.

Using R with RStudio and sticking to the Hadley Wickham universe with functional programming makes R shine. There is a reason why so many companies invest in R.

I think your perspective is too small. R is a DSL for data and statistics. Matlab is used in academia and industry across all engineering disciplines, where data and statistics is but a small subset of its total capabilities.

You will never be able to use R to model a Kalman filter, calculate stresses in a beam, or simulate engine control logic.

I refer you to Matlab's toolbox list. http://www.mathworks.com/products/

You can do all of those things in R except maybe Simulink with a GUI (at least not yet, maybe soon?), and you have the choice of working with multiple implementations. https://cran.r-project.org/web/packages/FKF/FKF.pdf https://stat.ethz.ch/R-manual/R-devel/library/stats/html/Kal... http://stackoverflow.com/questions/1738087/what-can-matlab-d... Unless you are doing something very specific and require a specific toolbox only for Matlab, you're better off just using R (or Python).

I can't disagree more with a blanket statement like that. I would never recommend Python / R to the physicists and scientists that I work with. They don't know how to program well. The code they put out is usually garbage, hard to read, and fragile. Software development is not something they do. Backing up for them is emailing themselves a .zip file of their work and revision control is saving their DocXs and PPTs with different dates in the filename.

My point is, coding is not something they enjoy, or care about, it's just a means to an end. The environment and tooling around R/Python still cannot compare to Matlab in terms of ease of use, for the things that Mathworks (creators of Matlab) care enough about to write a toolbox or gui wizard for. The value is not in the language, or syntax, but in the libraries and tooling. I don't know a single Matlab user who doesn't make heavy use of the toolboxes Matlab sells.

And this is why I believe Matlab is not going to go away. If being free and flexible was all that mattered, Linux would have arrived on the desktop already.

After working with enough academics, the code I had to deal with was so agonizingly crufty that it heavily influenced my decision to move away from academia (and contributed to my subsequent drop in grades). I figured that if I had this much trouble just dealing with the code, I couldn't imagine trudging through everything else that led up to it. It shattered my preconceived image of academics as people who are curious about everything and take great care and pride in their tooling, people to look to as a reference rather than a bunch of dropouts chasing a quick buck. That image simply wasn't true whatsoever.

Matlab is going to be the COBOL of academia without some major groundbreaking research that invalidates or supersedes a great deal of research code, and given the community-consensus nature of so much research, I can't imagine something quite so disruptive happening.

If journals ever demand code that reviewers can read (as they should IMO; who knows whether hard to read garbage is actually doing what they claim it is in the methods section, and the call for reproducibility in science is getting stronger), then academics might be forced to learn to code. And then they might want to use a programming language that's more conducive to readable code.

I don't think MATLAB toolboxes are a huge obstacle. Maybe something like the Aerospace Toolbox is useful for people in aerospace. But for people in my field (neuroscience), people end up writing their own code for their various applications anyway, because the MATLAB toolboxes are too slow or too inflexible for their use cases. The source of inertia is that everyone uses MATLAB, so they end up writing the code in MATLAB. While it takes time to overcome an established userbase, I think it will happen eventually if the alternatives have substantial technical advantages, as I believe Julia does. It only takes a handful of technically skilled people to reproduce the majority of the code in common use, and if there are good reasons for those people to write that code in Julia and for others to use that Julia code, then Julia will eventually take over.

Open source always wins in the end. Matlab is not a good solution due to its closed, proprietary nature and the need for science to be shared. We see the cracks in the walls with journals and research behind paywalls. The code and tools of science also need to be free from closed systems.

Sorry, you are wrong, especially about your first example. For the other two, yes, there aren't built-in toolboxes to handle those, but it can be done. R has a lot of packages, plus it's a language, so anyone can effectively write anything: https://cran.r-project.org/web/views/

I can write anything to do any computation in any programming language. The whole point is that there are pre-existing, high quality libraries and toolboxes to do these operations. In fact, that is exactly why R is popular - it has excellent libraries for handling data and statistics.

You're right. R does in fact have a library for modeling Kalman filters. My mistake! Let me pick any other example from the Matlab toolbox product list that R doesn't have! Can you model a radar system in R? Can you tune PID systems in R interactively? Don't be a pedant. Since you are familiar with R, you should know R is 95% used by data scientists and others involved in statistical work.

"There are many research institutions and workplaces where the cost of the tool isn't really thought about at all. In fact, the cost of Matlab and toolbox fees is a small fraction of a senior scientist/engineer's total compensation."

Yeah that's my experience with my current job. Matlab licenses aren't an issue. Also many people have Bloomberg terminals and other expensive tools of that trade that make Matlab look fairly cheap in comparison.

For a language to beat out Matlab, it will need to compete on factors other than cost: most likely performance, documentation, training, tooling, and popularity within the pool of job candidates. Hopefully Julia can get there.

> For a language to beat out Matlab it will need to be based on other factors than cost.

There are plenty of cases in which Matlab's cost is a big factor. Maybe not at your company, but some of Octave's most ardent paying customers have been those disgruntled with Matlab's price tag.

If scale is a vector, say [1, 2, 3], then x+scale = [1+x, 2+x, 3+x], and in Matlab [[1,2,3], 4] = [1, 2, 3, 4].

I agree with everything you said, mind you, and could rant even more about Matlab; I'm just offering a possible explanation for how Matlab parses the expression.
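For readers more at home in Python, here is a hedged NumPy analogue of that flattening behavior (assuming, as the parent suggests, that `scale` unexpectedly became a 3-element vector; `np.hstack` mirrors Matlab's `[...]` row concatenation here):

```python
import numpy as np

x, y = 1.0, 2.0
scale = np.array([10.0, 20.0, 30.0])  # accidentally a 3-vector, not a scalar

# Mirrors Matlab's [x, y, (x+scale), (y+scale)]: concatenation silently
# flattens its pieces, so two scalars plus two 3-vectors yield 8 elements.
row = np.hstack([x, y, x + scale, y + scale])
print(row.shape)  # (8,) rather than the visually obvious (4,)
```

Neither Matlab nor NumPy raises an error here, which is why the bug can run for hours before the wrong shape finally trips something downstream.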

Why? Because professors keep assigning students problems with it, immune to its high cost and suspect coding idioms because they aren't professional programmers.

As (potentially) one of those professors, I'm going to suggest that if you're posting on HN, you have no idea how little computer experience some of the Econ, engineering, etc. students have. Matlab and other programs like it try hard to smooth over some of the rough edges that can trip up complete novices. As long as the main focus of the class is teaching "something other than programming" that's going to be pretty attractive.

I was one of those clueless engineering undergrads back in ancient times... I know the mentality of thinking that since you know BASIC and can code up a brutally ugly solution in MATLAB, you know programming, as long as the right number is spit out on the command line. I also know, in my old age, that if the purpose of an engineering education is about building higher-order structures that can communicate intent and be extensible artifacts, then this is not acceptable. We need to put in place tools and methodologies that enforce good practice with code in the same way that we enforce good spelling, grammar, sentence, and paragraph structure in language, from the beginning.

I agree with you, but that's not a decision that can be made unilaterally by professors at the course level without sacrificing _a lot_ of the other content.

You're going to cry if you see the current standards of undergraduate spelling, grammar, and sentence structure, btw. :)

Touche, Prof. Carry on. ;)

It works well enough and there's an incredible amount of infrastructure behind it. It's an academic's job to produce research, not code.

That was the advice I got from my advisor as a grad student. 15 years later, I disagree strongly. I wasted a lot of time writing code that I couldn't read later, that took forever to get right, and that couldn't be reused. I've made a significant investment in programming tools and it has been worth it. Git alone has probably made me twice as productive.

The logic you state (which is dominant) is wrong. Optimizing the speed of writing the first version of a particular piece of code is a bad idea.

* Don't unreliable tools impede research?

* Isn't it a good idea to publish code used to produce the results so as to help fellow scientists verify the results and methods?

1. Who said matlab was unreliable? (There's a difference between "reliable" and "good")

2. Matlab code gets published all the time.

3. Just because it's not matlab doesn't mean that it's, for example, in a state others can use. See for example: http://reproducibility.cs.arizona.edu/v2/index.html (which isn't perfect, but it's something)

I've always found Julia's performance claims [1] to be misleading in comparison to LuaJIT [2].

Julia claims to be much faster than LuaJIT, yet people continually find that LuaJIT (not Julia) is much faster in real-world tests [3].

Does anyone else have experience in Julia vs LuaJIT?

[1] http://julialang.org/#high-performance-jit-compiler

[2] http://luajit.org/performance_x86.html

[3] http://bayesanalytic.com/lua_jit_faster_than_julia_stock_pre...

Improvements to our LuaJIT benchmarking are currently being discussed in https://github.com/JuliaLang/julia/issues/14222. LuaJIT 2.1 is significantly faster than LuaJIT 2.0, but until now I haven't had a convenient way to get an installation of LuaJIT 2.1 on our test machine. LuaSci was easy to install and we would be amenable to switching over to LuaSci instead of vanilla gsl-shell. See https://github.com/JuliaLang/julia/issues/14222#issuecomment... for a timing comparison.

I never looked at SciLua till just now. I have always liked Lua and have made several scripts for my desktop and servers. I just never thought of Lua for data science. My main issue is that I haven't hit a data set too big for R in my own use cases.

Thanks for the note. Appreciate the transparency.

Julia, in contrast to most languages, can run very fast or very slow for roughly the same program, because of type information and garbage collection.

I would be willing to bet there is a lot of fat to be trimmed if you can't get close to LuaJIT unless SIMD is involved. Have you tried profiling your Julia code to see where the problem is?

I think it would be really interesting to understand exactly why Julia underperforms LuaJIT on some benchmarks. I'm pretty skeptical that Julia loses to LuaJIT uniformly; I suspect it depends on the benchmark taking advantage of LuaJIT's superior discovery of run-time type information.

I find benchmarking to be of little value when the percentages are small. The differences are tiny when I see them.

Even more interesting is the document on GitHub with technical details about the port:


Very interesting, particularly the Challenges section:

"Differences between the behavior of MATLAB and Julia’s core linear algebra libraries led to many roadblocks in the development of DSGE.jl. Julia uses multithreaded BLAS functions for some linear algebra functions. Using a different number of threads can change the results of matrix decomposition when the matrix is singular. This indeterminacy caused significant problems for our testing suite, both in comparing output matrices to MATLAB results and in testing for reproducibility among Julia outputs."

Kudos to them for taking this on, I think.

The real news here is not the move to Julia (although that is likely the reason for most of the attention). The important thing is the move to open: an open language, a move to GitHub, and a nice explanation of the details.

No. A central bank moving at all to Julia is absolutely fucking huge.

Agreed. I worked on the FRBNY research floor; this model has real mindshare with policy makers, and porting it is a huge investment in the language. They may be training scads of RAs in writing and using Julia, and many of them will go off and do PhDs later.

This also provides a lot of justification and support for those of us who'd much rather teach (say) macroeconometrics in Julia instead of Matlab.

I suppose it depends on whether you think it's important for the popularity of Julia (something I don't care much about, since it's just a programming language) or because you think open science and policymaking is important (which I do).

[I see you're from Iowa State. I'm posting this from the house I bought from John Crespi a few months ago.]

Small world, his office is the floor below mine :)

I think we probably agree then --- but this announcement isn't exactly a sea change in open science and policymaking. This model (or one very close to it) has been available on various people's webpages for a while [1], the research on estimating and using DSGE models for forecasting and policy-making has been well documented and openly discussed, etc. And the NY Fed has put out a lot of "accessible" information about features of the model and interpretations of its results.

[1]: http://sites.sas.upenn.edu/schorf/pages/working-papers

For those of you in this field...how influential is the NYFed trying out (and apparently being satisfied with) Julia to other government agencies, computational economists, and/or academics? Is it "OS X is Unix/BSD-based" influential, or "Whitehouse.gov uses Drupal [1]" influential?

[1] http://buytaert.net/whitehouse-gov-using-drupal

As far as votes of confidence go, Julia landing successfully at any of the 12 Federal Reserve Banks is a strong one. They all do a whole mess of complex modeling. The NY Fed in particular is a "first among equals" in the Federal Reserve system, so it's always good to be there.

That said, I have no idea if this will be that influential on economists changing their tools, or the tools that they teach to their students. People tend to stick with tools that they know and like. But it's very possible that this will help Julia in the eyes of people getting into the field choosing their first programming language.

The DSGE models are influential up until somebody comes up with a valid alternative, at which point they will be dropped at somewhat over the speed of light :)

To be brutally honest, this is not a big achievement. You should be able to rapidly speed up any Matlab model by porting it to pretty much any other language, due to well-known issues with Matlab. Being able to run DSGE models more quickly is important to economists because it allows them to be better 'calibrated', which is to say that the economists essentially tweak the models for the country they are looking at until they fit the known data over an acceptable time period. (They don't try to hide this, btw; there are scores of papers on model calibration approaches.)

A good example of what this leads to is the one for Iceland:


Notice that this was published in 2010, but for some strange reason the charts (p. 57 onwards) all stop in 2005.

It's overfitting to the curve on computer-assisted steroids.

Is there a reason calibration isn't automatic? Say with some sort of genetic algorithm?

Do the simulations take too long?

This is the critical thing: they're not simulations. They are mathematical models based on a mix of existing macroeconomic formulas, with upwards of 200 parameters that can be tweaked. There is no particular recipe, btw; if you start digging into the configuration files for these 'models', you'll find all sorts of things, with the occasional hilarious comment (my favourite: 'this seems to work in Sweden').

Note that at the same time macro economists are talking about the business cycle, and the purported fit of one of these models to it, they are cheerfully ignoring the applicable Nyquist limit for whatever period cycle they believe they've identified.

Again, to use Iceland, because it's a diddly little country and very easy to study: they stop the charts in 2005 because after that there is a complete deviation from the model, which the model, needless to say, completely failed to predict. However, anyone who spent half an hour looking at the Icelandic monetary statistics would have been able to identify clear warning signals from 2003 onwards, and some pretty clear analyses were being issued on that within the financial community by 2005.

Rumour has it the only use Wall Street has for DSGE models is to try and predict whatever crazy thing the central banks will do next.


Calibration here means simulating to find the right parameter values. A genetic algorithm isn't particularly useful here; it's a more general high-dimensional optimization problem.

But if you can cut down the simulation time from e.g. a weekend to a few hours, that is huge in terms of allowing you to find a good specification.
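A toy Python sketch of calibration as moment matching (everything here, the one-parameter AR(1) "model", the target variance, and the grid, is invented for illustration; real DSGE calibration involves hundreds of parameters and far costlier simulations):

```python
import numpy as np

def simulate_variance(rho, n=5000, seed=0):
    """Simulate a hypothetical one-parameter AR(1) 'model' and return
    the variance of the simulated series (our single 'moment')."""
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + shocks[t]
    return x.var()

target_variance = 4.0  # the "observed" moment we want the model to match

# Brute-force calibration: pick the rho whose simulated moment is closest
# to the data. Real calibrations use smarter high-dimensional optimizers.
grid = np.linspace(0.0, 0.95, 96)
losses = [(simulate_variance(r) - target_variance) ** 2 for r in grid]
rho_hat = grid[int(np.argmin(losses))]
print(rho_hat)  # near sqrt(1 - 1/4) ~ 0.87, the value implied by var = 1/(1 - rho^2)
```

Cutting the per-simulation cost, as the Julia port does, directly multiplies how much of that grid (or of a 200-parameter analogue of it) you can afford to search.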

I work on research close to this area -- time series w/ macro applications, but not on these specific models. These specific models are very very important for estimating the effects of monetary policy decisions, forecasting, etc. because they embed a fair bit of economic theory with a decent amount of estimation theory so that the models can address questions about "causality" in a meaningful way: if I intervene by changing the interest rate, what is the distribution of outcomes that will result. (The stuff I work on uses less economic theory so it can't address those questions.)

That's not to say that the NY Fed takes the model's recommendations uncritically and assumes that the models are perfect or the recommendations are the truth, because that would be nuts. Just that this class of models is one of the few ways to use "the data" to provide guidance about policy decisions.

The people who've developed the NY Fed's DSGE model are some of the absolute top people working on these models. It's based on a lot of academic research and a lot of research published in top journals by people at the NY Fed involved in building the model. Other banks have their own models, but I suspect this one is the highest profile. (Again, not precisely my field of research, so I could be missing one or two.)

So, to get around to answering your question. For academic research in economics, this is as "production level" as it gets. Is this the same as using Julia for algorithmic trading? No. The NY Fed's not running this model in real time, and if something crashes then they can restart it. But I think this is huge for Julia's adoption in economic research.

(For a pdf with more about the modeling strategy itself: https://www.newyorkfed.org/medialibrary/media/research/staff... but that is 2 years old now.)

To be honest, it will have very little influence. Economists don't like to change their tools.

Economists don't like to change their tools, but Julia is getting a lot of attention among grad students who are picking between Julia, Python, and Matlab as a first language.

I think you'd be surprised. I put "I ran my simulations in Julia, talk to me about it" on a slide at a talk in the spring and was really surprised by who was already using it.

I'm not sure how the comparison stacks up for you. Drupal nabbing the whitehouse site was pretty huge in enabling it to land a bunch of other government and higher ed sites/mindshare. I'm not sure which end of the order of magnitude scale you place that though.

The Fed isn't a government agency. It has private shareholders. It was named The Federal Reserve specifically to breed the confusion that most people have - it was supposed to sound like a federal agency while it is not.

How's the probabilistic programming use case in Julia now?

I don't like Python as a language that much (I prefer Lisps or MLs), but I reckon its libraries are fantastic: PyMC, Theano, NumPy... And loads of general-purpose stuff.

Doesn't this make the code more difficult to work with, since these characters require uncommon sequences of keystrokes?

Single character Greek variables don't seem better than single character ASCII for readability.

Julia's REPL, and many editor plugins, support tab-completion of Unicode characters via their LaTeX names, e.g. typing \alpha then Tab yields α.
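A minimal sketch of what this looks like in practice (the variable names here are illustrative, not from the Fed's model):

```julia
# At the REPL, \alpha<Tab> becomes α, which is then an ordinary identifier:
α = 0.05            # e.g. a significance level
σ = 1.5             # \sigma<Tab>
println(2α + σ)     # numeric-literal coefficients multiply: 2α == 2 * α
```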

Whoa that's awesome, are there any other languages that allow that?

Javascript too. D3 uses it quite extensively.

See here: https://github.com/mbostock/d3/blob/master/src/geo/centroid....



Common Lisp

I've never written a line of Julia in my life, and I'm usually somewhat adept at figuring out what is going on when I read the source of unfamiliar languages. But I am having one helluva time trying to differentiate comments and code.

EDIT: downvotes? wtf?

I haven't either, but they just start with a #, right? What's the problem?

EDIT: Oh, I see there are multiline comments.
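For reference, both of Julia's comment forms in one hedged sketch:

```julia
# a single-line comment

#=
  a multi-line comment,
  which can even #= nest =#
=#

x = 1  # trailing comments work too
```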

I see things that look a lot like english sentences that are neither preceded by #, nor surrounded by #= =#.

edit: with examples. this code, all the way up to line 48 looks like some comments, but also a bunch of sentences without the comment characters.


In the example you link, it looks like the top half is all comments, starting with """ on a line and ending with """ on a line. What's confusing is that code examples in the comments are surrounded by ```, and they are long enough to look like actual code.

Also, in this markup, the top comments are blue only.

Then, the actual code begins, and it is syntax-colored. In code, comments seem to be preceded by pound.

After a big chunk of code, there is another set of comments in """, and then some more code, ending with `end`.

The ``` code samples in the comments might be part of some automated comment documentation extraction standard or some markup convention. The comments near them seem very formal, defining or explaining the code.

That's the best I can do not knowing the language.

You're looking at a markdown-formatted docstring. Note that it's enclosed in """, similar to Python except that it precedes the method definition. Within the markdown formatting, you can use ``` fences to format code examples.
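A tiny example of the convention (the function here is made up for illustration):

````julia
"""
    double(x)

Return twice `x`.

# Examples
```jldoctest
julia> double(3)
6
```
"""
double(x) = 2x
````

Everything between the `"""` pairs is one string, attached as documentation to the method that follows; the inner ``` fence is just Markdown formatting inside that string.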

GitHub's coloring here doesn't quite make it as obvious as it can be. Here's how it renders at the REPL with the help system: http://imgur.com/rlAnpSK

cool. thank you.

Notice that that is between a pair of triple quotes (and GitHub even syntax-highlights it in a different color).

It's all one string, more specifically a docstring (see https://en.wikipedia.org/wiki/Docstring ). They are also used in other languages, including Python.

I'm not sure how someone can clear up your confusion without a link to the specific things you are seeing.
