Hacker News | danielmason's comments

This might be broadly true. But I think in the context of what tptacek is talking about -- that is, getting low-ceremony mileage out of React -- it's not. The reason is that React's main abstractions, declarative components and one-way data flow, are already well suited to describing most of the UI problems you'll face.

(I'm not sure if you mean "design patterns" in the general naming-of-abstractions sense, or if you mean Gang of Four specifically. If the latter, I'll point out that many of the GoF patterns really aren't relevant to JavaScript unless you're writing it in a very OO style.)


As someone who's pretty far down the rabbit hole on a bunch of these fronts, I think this is wonderful advice. Many of these are tools to solve very specific problems, and you should make sure you have those problems first.

The only thing I'd differ on is the semantic HTML advice. What you're saying is true modulo accessibility, but accessibility is (well, should be) table stakes for professional work. And while the community has come up with some really nice individual abstractions around tricky a11y concerns, I haven't seen a way to get good results without at least some knowledge of HTML semantics. I'm not implying you don't care about a11y -- it's totally possible I've missed some better ways of doing things.


Reason describes itself as "a meta language toolchain to build systems rapidly", which seems a little vague to me. Am I understanding correctly that Reason is both a (kind of) alternative compiler for OCaml's build toolchain, and a dialect of OCaml in its own right?

To tie this back to Bucklescript, does this loosely describe the process of using Reason syntax to author Javascript? Reason file -> (Reason) -> OCaml executable code -> (Bucklescript) -> Javascript


If you'll forgive the shameless plug, I talked a bit about ReasonML here; hopefully it helps clear things up: https://youtu.be/QWfHrbSqnB0?t=29m34s

Basically, it's just a new syntax and a blessed-stack approach that really, really emphasizes developer experience. Which is to say, ReasonML is merely a cosmetic and DX change; it remains compatible with OCaml and is not a fork of it at all.


Thanks, that's very helpful!


Think Elixir for Erlang.

Currently it's a syntax on top of OCaml. But we'd like to polish the OCaml ecosystem tooling too; we're calling the umbrella project "Reason".


That would be great. OCaml is a great language, but not that user-friendly.

Elixir is pretty mature nowadays. Reason still has a long way to go. I hope it gets there.


> Currently it's a syntax on top of OCaml.

Does this just mean it compiles to OCaml?


There's a crucial distinction! OCaml's compilation command takes a -pp (preprocessor) flag, which accepts a command that takes in a file and outputs an OCaml AST. We're basically using that. In that sense, the official OCaml syntax is really just that: another syntax like Reason, but one that is official and goes back two decades.

Because of the clean mapping from one syntax to another, you can toggle between the two fairly easily: https://github.com/facebook/reason/wiki/Newcomer-Tips-&-Tric...


Interesting. Thanks for explaining that.


I'm sympathetic to the worst-of-both-worlds problem, but can this actually be true? In the case of types, the constraints actually are the benefit, right? Do you mean that if you don't have compile-time assurance for 100% of the code, whatever percentage of type coverage you do have isn't worth much?

As someone who's evaluating Typescript and Flow for a team of JS devs, my intuition is that having some typed code would give you beachheads of type safety, which seems like a reasonable win and an incremental path to improvement. I'm curious if this intuition is incorrect from the perspective of those who have used gradual typing in the real world.


When in Rome, do as the Romans do.

If you're programming in a dynamic language, there are things you do that would not easily fly in a static language -- things like tests for truthiness. At first blush, from a static background, something like "var p = x && x.y && x.y.z || w;" looks terrible, but it's a pretty standard way of cascading through different options without causing null pointer exceptions in a language like JS. When you start introducing type constraints, you are committing yourself to getting rid of all those dynamic shortcuts.
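As a concrete sketch of how that cascade behaves (the variable names come from the quoted snippet; the types are added here purely for illustration):

```typescript
// The truthiness cascade: each && step proceeds only if the previous
// value is truthy, so no property access ever throws on null/undefined.
interface Outer { y?: { z?: number } | null; }

const w = "fallback";

// Every link in the chain is truthy, so we get x.y.z.
const x: Outer = { y: { z: 42 } };
const p = (x && x.y && x.y.z) || w; // 42

// A missing link doesn't throw; we just fall through to w.
const x2: Outer = { y: null };
const p2 = (x2 && x2.y && x2.y.z) || w; // "fallback"
```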

Yes, shortcuts can sometimes get you in trouble, but they can also get you down the road so fast that trouble doesn't have a chance to find you. That's the tradeoff you make: we can be super fast 90% of the time, at the cost of occasionally having difficult-to-debug problems.

On the other hand, when you have a requirement to have type constraints in your code, any other code you interact with has to also have type constraints. But in JS, that's not a lot of projects. There is the DefinitelyTyped repository, but it's incomplete and of unknown quality.

I had thought the same thing as you: "beachheads of type safety". The problem is that dynamic code tends to infect static code. I eventually gave up on TypeScript after not being able to wrangle the combination of a few code-generation and graphics tools. My project was already reasonably OO-organized and not dynamic, but there was just no easy way to handle the boundaries of the APIs. To deal with it, I had to... start using the dynamic features of JS, giving up on the type safety.

For example, you can't do function overloading in JavaScript, because you don't have any information about types on which to differentiate functions. Well, it turns out that means you also can't do function overloading in TypeScript. The only reason Math.min works in TypeScript is that JS has only one number type: 64-bit IEEE floats.

But some libraries in JS, and especially in the DOM, do overload functions: you test the types of parameters at runtime and decide what each positional parameter actually means. So if you want to use such a library in TypeScript, you need a type definition that uses "any" for each of the parameters, meaning you've lost type safety and you're back to testing the types of things.
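The runtime-dispatch style described above looks roughly like this (the function and its behavior are invented for illustration, not taken from any particular library):

```typescript
// JS-style "overloading": one function that inspects the runtime types
// of its arguments and decides what each one means, as many DOM-era
// libraries do.
function describe(value: unknown): string {
  if (typeof value === "number") {
    return "number: " + value.toFixed(2);
  }
  if (typeof value === "string") {
    return "string of length " + value.length;
  }
  if (Array.isArray(value)) {
    return "array of " + value.length + " items";
  }
  return "something else";
}

describe(3);        // "number: 3.00"
describe("hello");  // "string of length 5"
describe([1, 2]);   // "array of 2 items"
```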

So it just leaves you in this limbo zone where you don't get to use the features of the native language that make up for it being so crappy, nor do you ever get to use the features of the transpiled language that promises to keep your code easy to modify over time. So that's what I mean about "worst of both worlds".

It's like trying to introduce rules to a group of anarchists: you're more likely to get burned at the stake than to make more productive anarchists. Or try letting a bunch of corporate lifers work without a direct boss: you're more likely to end up with a lot more donuts eaten than code written.

What ES6 gets right is that it doesn't try to bolt a type system onto an ecosystem that just won't tolerate it. It has features for making the things people are already doing in ES5 easier, less error-prone, and more ergonomic. In particular, people are already trying to make classical classes with inheritance out of the prototype-based class functions in ES5, so ES6 introduced a new syntax for just that use case that gets rid of all the repetitive and goofy "Object.create" and "MyClass.prototype.myMethod = function(){ blah blah blah }" stuff.


  So it just leaves you in this limbo zone where you don't get to use the features of the native language that make up for it being so crappy, nor do you ever get to use the features of the transpiled language that promises to keep your code easy to modify over time. So that's what I mean about "worst of both worlds".
I have to disagree with you on this. As you mentioned, you can always use the "any" type: you lose type safety, but you can then do more or less anything "type unsafe" you could do in JS. So to my mind it's almost more like the best of both worlds: you get strong typing wherever practical, but you also have an escape hatch for the times when it's too hard or impossible to type something.
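For instance, the boundary between typed and untyped code can look something like this sketch (the "legacy API" value here is invented):

```typescript
// A typed core function...
function total(prices: number[]): number {
  return prices.reduce((a, b) => a + b, 0);
}

// ...and an untyped boundary: typing the parsed value as "any" opts it
// out of checking, so the dynamic tricks still work at the edges.
const fromLegacyApi: any = JSON.parse('{"prices": [1, 2, 3]}');
const sum = total(fromLegacyApi.prices); // 6, but with no compile-time
                                         // guarantee the shape was right
```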


Sorry, moron4hire, but you are underestimating TS's type system.

You can write quite crazy expressions abusing Boolean operators, and it successfully keeps the static type.

You can also have different method overload signatures, as long as you have a more general implementation that does the dynamic checks at run time manually. Most of the time you can use union types anyway.
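That pattern, roughly (the pad function is a made-up example, not from either comment):

```typescript
// Overload signatures: callers see two distinct typed forms...
function pad(value: string, width: number): string;
function pad(value: number, width: number): string;
// ...backed by one general implementation that, as described above,
// does the dynamic check at run time.
function pad(value: string | number, width: number): string {
  const s = typeof value === "number" ? String(value) : value;
  return s.length >= width ? s : " ".repeat(width - s.length) + s;
}

pad("ab", 4); // "  ab"
pad(7, 3);    // "  7"
```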

The big collection of definition files for almost any JS library out there is living proof that it's possible to describe almost any JS API statically.

Let's face it: dynamic languages are about laziness, not about "unconstrained creativity".


Laziness is the root of computer science. There is no virtue in hard work for hard work's sake.


I'm not sure I understand what you mean by "the client loses most of the advantages of your library being in TypeScript in the first place." Your library still exposes a more clearly defined API by making types explicit. And saying that the downstream consumer doesn't get the benefit of compile-time type checking if they've chosen not to use type checking is... kind of a truism?


The problem is with the author's assertion that all libraries should be using TypeScript and not, say, Flow, or some other alternative that achieves the same goals, perhaps better.


Does Flow offer type definition files?


Yes. Also, there isn't a need to maintain a separate definitions file[1] if the library was developed as Flow-typed code. I've made a few small libraries[2] that were written with Flow-typed code, and are built so that the types are automatically enforced if you use the library in a Flow-typed project.

[1] A hand-maintained definitions file: https://github.com/facebook/immutable-js/blob/master/type-de... . Sure, it's nice as documentation, but that's a lot of redundant effort if you already have separate documentation, plus types and documentation in the original source code too.

[2] https://github.com/AgentME/contain-by-screen is one simple example.


Are you being serious? That's exactly what's being implied. Which part of tangentially involved in tech juxtaposed with unlike women with Math, CS, or EE degrees would lead you to read that differently?


I am honestly as confused by your interpretation as you are by mine. I don't have any of those degrees, and I don't infer that Supercomputing is dismissing me as "tangentially" involved in tech. They are suggesting that Adria Richards' "Developer Evangelist" role wasn't a real technical role, which I couldn't really comment on. I don't think they are implying that ksenzee's "Senior Software Engineer" role doesn't count as a technical role.


Certainly, and your interpretation is logical. I agree the original commenter didn't necessarily mean to imply that. However, they appear to have bought into a kind of false dichotomy that gets applied to women and minorities all too often. It's a mindset where there are "fake" woman developers (or gamers, or whatever), and there are "real" woman developers, and you only get perceived as one of the "real" developers if you meet an unusually high standard. It's tiresome to be on the receiving end of it.


You just said you agree the OP didn't mean to imply it, yet you're making a rather big statement about a dismissal that only you seem to see. Which is just as problematic.


Doesn't the OO approach just move the work of the switch statement into the class declaration of the model? In practice, OO is about encapsulation and FP is not, so it's probably not too surprising that the author chose not to mix metaphors.

This is also modeled after an architecture in a statically typed language that expresses the possible actions as a single union type, of which every possible case must be handled. There's no similar compile-time guarantee in JS, but a switch statement with constants is a reasonable approximation of the idea.
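In TypeScript terms, that architecture can be sketched as a discriminated union plus a switch (the action names here are invented):

```typescript
// A union type of possible actions; the "type" tag plays the role of
// the constant in a plain-JS switch statement.
type Action =
  | { type: "increment"; by: number }
  | { type: "reset" };

function reduce(count: number, action: Action): number {
  switch (action.type) {
    case "increment":
      return count + action.by; // narrowed: "by" exists only here
    case "reset":
      return 0;
  }
}

reduce(1, { type: "increment", by: 2 }); // 3
reduce(5, { type: "reset" });            // 0
```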


There's a reasoning behind this. You may not agree with it, but it's good to fully understand the problem it's trying to solve.

There are two problems with your suggested approach -- specificity and coupling. The rules of CSS are such that .collection > li is a more specific selector than .collection-item. So if you want a particular item in the list to be red, you can't just give it a class name .warning-item and style against that -- you have to match or exceed the existing specificity. In the simple case, this isn't too bad, but it's surprisingly easy to end up representing deeply nested structures in your CSS which are very difficult to override.

The coupling problem is really just a way of saying that it might not be a good idea to describe the specifics of your HTML implementation in CSS. Class names are like an interface. One refactoring I've actually done a lot is switching out lists like the above for a combination of nav and anchor elements. It's great to be able to do that without needing to rewrite all the corresponding CSS, too.

These are tools designed to help you manage complexity and increase flexibility, not dogma. If you don't find your HTML needing to change much, or you don't need to make styles overrideable, then YAGNI. Hopefully it makes sense why framework authors, whose work is explicitly designed to be overridden, would choose this approach.


Performance is penalized too. Since CSS selectors are matched from right to left, and you probably have a lot of "li" tags, the rule would be very inefficient. Of course, this doesn't matter on small pages.


I was going to ask why that is as it seems counterintuitive at first glance. Instead, I Googled it and found the following (excellent) answer at StackOverflow: http://stackoverflow.com/questions/5797014/why-do-browsers-m...

I'm just providing the link here in case anyone else was wondering the same thing.


It does not matter on any real-life pages.


It did for Trello, to name an example (search for speed in http://blog.trello.com/refining-the-way-we-structure-our-css...).


I'm sorry not to add value to the discussion, but I wanted to say the back-and-forth has been a very educational look at some of the practical trade-offs that framework designers have to make.


I might be considered a larval Designineer. I started off as a visual designer, then learned HTML and CSS, then JavaScript, then out of necessity began maintaining an old ASP Classic codebase. As soon as I started to understand the code I was reading, things got easier. Then I built a small data-backed web app and I was totally hooked. In the last two years, I've learned to build nontrivial web apps from the ground up: SQL, MVC, Backbone, UI, design. I'm pretty proficient at each level of the stack, but only insofar as it's related to the web. E.g., I know C#, but I wouldn't have the first clue about how to write a native Windows application. So I end up feeling like my knowledge is the proverbial mile wide and inch deep.

I'm looking for jobs right now, and it's been an exercise in frustration. The coding jobs require CS degrees, 5 years of experience, tech interviews with big-O notation and data structures (trying to teach myself basic CS theory, but need a job now). The UI and design jobs require a smidge of front-end knowledge, but are mostly mocking and wireframing. I want to be able to employ all of my tools, but I feel like the hiring market makes me pick between being a front-end or a back-end guy, and I don't currently have enough specialization at either to get a reasonably good job.

How can I find companies that could use someone like me, when their job descriptions are specialized? Any thoughts or advice?

