
> Suggestion that threads are way better than other asynchronous implementations.

I don't think that's true. The author mentions several languages that use different solutions:

* Java uses threads

* Lua and Ruby use cooperative multitasking with coroutines and fibers, respectively

* Go uses goroutines, which are coroutines multiplexed onto threads

There are definitely similarities, but not all are threads. The common element is that each solution involves keeping separate call stacks. (The author mentions this too.)

> You still have disjointed code.

I disagree. In Go, I can write an HTTP server that never uses the "go" keyword, yet it's still concurrent. This is because the net/http package handles the goroutines for me. With Node, I have to use callbacks no matter what.


> Java, Lua, Ruby

These all fall under the general term threading. You still have to pass messages or set up semaphores to use them. The code is still split between a main routine and sub/worker threads.

> Go HTTP server

Every HTTP server lib handles requests in this way.

You cannot write an HTTP client, however, without explicit asynchronous code like callbacks, threads, or what have you.


If you're agreeing not to sue, then the company can use any criteria they like to hire or not hire you and there are zero consequences.

Imagine you check the box, you don't get the job, and your feedback turns out to be "You're a man and we don't like to hire men." You have no recourse because you agreed not to sue.

You can argue that those are obviously illegal hiring criteria and that you should be able to sue anyway, but then you're back to square one.


There has been recent talk of it. See here for a link and (brief) HN discussion: https://news.ycombinator.com/item?id=8149810


Maybe something like "Do you have more older siblings than younger siblings?" would work. One twin has to be born before the other.

That question has some problems, but I feel like it could serve as the basis for a more specific question that would define how an only child or middle child would answer without introducing much bias. It would also have to define how half-siblings are counted and probably some other things.


So you are going to burn an entire question that is worthless for the vast majority of Chinese people born recently?


Interesting read: "One Child Policy and Arising of Man-Made Twins". http://paa2013.princeton.edu/papers/130113


It's entirely possible that the twins would not know which one was born first.


It's entirely possible the person could not know the answer to any of the questions. I don't think that really counts.


I'm pretty sure it's the other way around: he wouldn't be launching a cruise deal aggregator site if he weren't so gung ho about working on cruise ships. http://tynan.com/cruisesheet


Dmitry Vyukov keeps a great resource for learning about lock-free algorithms (and other interesting things) here: http://www.1024cores.net/home/lock-free-algorithms/introduct...


Isn't the very first example in that article flawed?

    void decrement_reference_counter(rc_base* obj)
    {
        if (0 == atomic_decrement(obj->rc))
            delete obj;
    }
This is classic test-then-act, isn't it? What happens if another thread bumps obj->rc after the comparison to 0, but before the deletion? That other thread could find itself referring to a suddenly-deleted object, or am I missing something?
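The window in question can be sketched in Go (the article's snippet is C++; the type and function names here are illustrative). The comments mark where the questioned race would sit:

```go
// A sketch of the decrement-then-delete pattern under discussion.
package main

import (
	"fmt"
	"sync/atomic"
)

type rcBase struct {
	rc int32
}

// release atomically decrements the count and reports whether the
// caller should free the object. The worry above: if some other thread
// could increment rc after the count hits 0 but before the free, it
// would be left holding a reference to freed memory.
func release(obj *rcBase) bool {
	return atomic.AddInt32(&obj.rc, -1) == 0
}

func main() {
	obj := &rcBase{rc: 2}
	fmt.Println(release(obj)) // false: one reference remains
	fmt.Println(release(obj)) // true: last reference gone, safe to free
}
```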


New Zealand offers a similar visa, but you'll be better off in Australia if you want to make money.


Agreed, I'm in AU now with my brother and he makes $20 an hour working retail at a bookstore. Housing and the rest is expensive though, so count on finding a shared house and a commute. And living cheaply while you save + work on your coding abilities.


> Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide an library-style API that IDEs can use to inspect executables in rich and subtle ways.

LLDB[1] can do this, but I'm not sure how far along it is.

[1] http://lldb.llvm.org/


Judging by the version that's in Xcode, lldb is not ready yet.

The command syntax is a bit inconvenient compared to gdb. Perhaps I should overlook that: it's presumably designed to be the lowest level of the debugger, so you'd normally interact with a GUI or something rather than with lldb directly. But Xcode's expression window is such utterly unreliable junk that I end up having to use lldb by hand all the time anyway.

I've had problems with variable substitution. You can (say) ask it the value of $rip to find out what the value of the RIP register is, but if you try to set a breakpoint at $rip, it doesn't know what $rip is.

I've had problems with lldb telling me that a struct is forward-declared, even though I'm trying to print a local value of that struct type.

Sometimes lldb just gets confused, as in this exchange, which I reproduce verbatim, just as it happened:

    (lldb) p this
    (const CustomTrafficStyle *) $2 = 0x200cf4e0
    (lldb) p *this
    error: invalid use of 'this' outside of a nonstatic member function
    error: 1 errors parsing expression
lldb doesn't have gdb's @ operator for printing arrays; instead, there's a separate command for dumping memory. So instead of "p x[0]@100", you might do "me r -ff -c100 `x`", which is obviously a big pain because it's a lot more verbose. I don't even know offhand how you'd use an expression for the count argument (more inconvenient syntax). (I also don't believe the me command does exactly the same thing, since I don't think it prints structs, but it usually does the job just about well enough.)

Finally, and most tiresomely, lldb will sometimes just print nonsense. So you might end up with something like this, which is all reasonable enough (a made-up but representative example, not a copy and paste):

    (lldb) p x
    (int *) $2 = 0x1234578
    (lldb) p i
    (int) $3 = 100
    (lldb) p x[100]
    (int) $4 = 123
    (lldb) p &x[100]
    (int *) $5 = 0x12345998
But then...

    (lldb) p x[i]
    (int) $6 = 0
    (lldb) p &x[i]
    (int *) $7 = 0xc4231777 
Maddening. Absolutely maddening.


As you hint at, Xcode is a part of the problem here. The debugging parts of Xcode (ignoring the horrible things wrong with Xcode as a whole) have always been a pile of junk, even before they introduced lldb. This is complicated by the fact that in my experience Xcode was always built upon a quirky Apple fork of outdated third-party tools.

So, I guess what I'm saying is: Even if lldb isn't ready yet (it probably isn't), you shouldn't judge it based on your experience with Xcode. Being integrated with Xcode would make any piece of software look like junk even if Knuth wrote it.


I use LLDB with some regularity. The last time I checked, lldb was essentially at feature parity with gdb in user space (I'm not sure about kernel debugging), and it has additional features beyond that aimed at fixing some of gdb's mistakes.

As a small example of something LLDB got right, say I want to put a breakpoint in this C++ template function:

    template<int X>
    int factorial() { return X * factorial<X - 1>(); }

    template<> int factorial<0>() { return 1; }
Because it is a template, this one function in the source maps to multiple program locations (factorial<1>, factorial<2>, ...). gdb will create a separate breakpoint for each location, and each must be managed separately. But lldb maintains a logical/physical breakpoint separation: one logical breakpoint maps to multiple breakpoint locations, so you can manage all of them as a single unit and more easily treat factorial as if it were a single function.

(Maybe more recent gdbs do this too - my version is rather old.)

One downside of lldb is that, while its command-line syntax is very regular, it's also quite verbose. gdb's `attach myproc` becomes `process attach --name myproc`. gdb's `up` becomes `frame select --relative=1`.

lldb does have a set of macros and abbreviations that cut down on some of the verbosity, but they haven't always worked well for me, and I hope they overhaul the syntax.


GDB still does not support a logical breakpointing syntax, sadly.


Wikipedia describes a compiler as "a computer program that transforms source code written in a programming language into another computer language".

That certainly seems to describe what's going on here.


Actually, Wikipedia's disambiguation page says that compilation is: "In computer programming, the translation of source code into object code by a compiler."

On the main Wikipedia page, you cut off the full definition: "A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program."

Note how it says "the target language, often having a binary form known as object code." and "The most common reason for wanting to transform source code is to create an executable program."

If you then go down into the description, you'll see: "The front end checks whether the program is correctly written in terms of the programming language syntax and semantics. Here legal and illegal programs are recognized. Errors are reported, if any, in a useful way. Type checking is also performed by collecting type information. The frontend then generates an intermediate representation or IR of the source code for processing by the middle-end.

The middle end is where optimization takes place. Typical transformations for optimization are removal of useless or unreachable code, discovery and propagation of constant values, relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context. The middle-end generates another IR for the following backend. Most optimization efforts are focused on this part.

The back end is responsible for translating the IR from the middle-end into assembly code. The target instruction(s) are chosen for each IR instruction. Register allocation assigns processor registers for the program variables where possible. The backend utilizes the hardware by figuring out how to keep parallel execution units busy, filling delay slots, and so on. Although most algorithms for optimization are in NP, heuristic techniques are well-developed."

So, just stating that it is "a computer program that transforms source code written in a programming language into another computer language" is inadequate. There is more to it than that, and unfortunately so many just don't get it.


Since it will build on top of CodeMirror, I assume it will run on any OS that supports a modern browser.


