methodology's comments | Hacker News

Isn't it painful to alternate between typing "her/him", "him/her", "her/his", "his/her", "she/he", "he/she"?


I wrote a browser plugin in Go (using gopherjs) that will humanize words like "it", "it's", and "its" when you press a hotkey while the cursor is on them. It will randomly pick between his/her and her/his. I could put it on GitHub if you like, but it would need a bit of cleanup and work first.
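The core of it is just a table lookup plus a coin flip; roughly like this (a simplified sketch with made-up names, not the actual plugin source):

    // Simplified sketch of the replacement logic (hypothetical, not the
    // actual plugin code).
    package main

    import (
      "fmt"
      "math/rand"
    )

    // humanize maps a neutral pronoun to a randomly ordered gendered pair.
    func humanize(word string) string {
      pairs := map[string][2]string{
        "it":   {"he", "she"},
        "it's": {"he's", "she's"},
        "its":  {"his", "her"},
      }
      p, ok := pairs[word]
      if !ok {
        return word // leave every other word untouched
      }
      if rand.Intn(2) == 0 { // coin flip so "his/her" and "her/his" alternate
        return p[0] + "/" + p[1]
      }
      return p[1] + "/" + p[0]
    }

    func main() {
      fmt.Println(humanize("its")) // e.g. "his/her" or "her/his"
    }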


It is certainly painful to read, particularly when the author is actually talking about a real person, who presumably is either a "her" or a "him."


HN subsumes reddit (on hacker topics).


Not very long because I'd be bored and run down from not having any important work to do. Also, social isolation would be a problem.


All he asked about was the income stopping. He didn't suggest not working (important work can yield zero income too...). Nor did he suggest social isolation.


If it's important, someone will pay you for it. That's even more true in today's crowdfunding era, with Kickstarter and co.


OCaml is more practical than ML and Haskell because it has objects, for loops, more edge cases in the language, a built-in mutable keyword, and extensible records.


No, it is not. OCaml's objects make it less practical, not more. That is why they are virtually completely unused. At best, for loops are irrelevant; I'd say they are closer to a negative than irrelevant, though. What do you mean by "more edge cases"? That the language is less safe? How is that practical? Haskell has mutable references too, with the added benefit of them being type safe. And Haskell has extensible records, they are just a library like anything else: http://hackage.haskell.org/package/vinyl


> That the language is less safe?

Not necessarily.

> And Haskell has extensible records, they are just a library like anything else:

And OCaml has monads, they are just a library like anything else.


Monads are arguably a library in Haskell, too... though one that the standard guarantees is present, that the Prelude exposes, and that a lot of code relies on.


>Not necessarily.

Then what? You made the vague statement, make it not vague.

>And OCaml has monads, they are just a library like anything else.

And? I did not claim OCaml lacks monads. You claimed Haskell lacks extensible records. You do understand that my post was a direct reply to what you said, right? Not just some random things I felt like saying for no particular reason.


That's a very interesting website. It also features, for example, analysis of companies and which domains they did or should buy. It really shows how there is an entire economy around domain names, not just in buying/selling but also in secondary fields like regulation.


The fact that Microsoft puts so much care into creating elaborate structures to preserve legacy is why I trust them with securing my identity.


I am sure I will get downvoted to all hell, but were you serious? And do other people trust MSFT for similar reasons?

(By the way, as a full-time IT guy of 6+ years, bad Windows software makes me continuously feel the opposite; I did not even open a Hotmail/Live account until I was forced to in order to report a bug through MSFT Connect, which requires one.)


I remember, once upon a time, you could just phone up Microsoft (UK) for support. This was early 1995 when I was battling with video driver stability problems on an Olivetti M series PC running a beta of Windows 95. The nice helpful person on the other end of the phone asked me to get updated drivers and lo and behold everything magically started to work.

On a serious note, MSFT Connect is basically a ghost town. There are so many well-documented and easily reproducible issues logged on Connect that languish there for months and months with no one paying attention. I logged a fairly crippling issue with IIS7, complete with steps to repro, and it was ignored.

The only way to get any kind of support is to pay for a PSS incident. If the root cause does lie firmly in MS's backyard, they do at least refund your money. I even got a knowledge base article written up about one of my PSS-logged issues :)

(ex-Gold Partner IT guy here)


I hope this is sarcastic.


Terminate and stay resident.


USB devices need to be regulated to ensure this kind of thing can't happen. Perhaps the FCC's arm reaches far enough that they could tackle this issue; otherwise, maybe a separate task force should be created for it.


> There are no buffer overflow vulnerabilities in Go applications

Actually there are in certain cases [1], but there's a good reason for that.

1. http://stackoverflow.com/questions/25628920/slicing-operatio...


But that's not a buffer overflow. You can't access uninitialized memory (well, without unsafe anyway).


Without unsafe or a race condition. http://research.swtch.com/gorace


Go's syntax and semantics seem ad-hoc, hard to remember, and inconsistent, but that's because Go was designed from use cases and experience by prominent thought leaders such as Rob Pike and Ken Thompson, and Google. For example, sometimes you'll get Unicode code points, but sometimes you'll get bytes of UTF-8. The language was designed to give you the right one in the right case. UTF-8 is coupled with the language because it's the most useful choice. Another example is that pointers are automatically dereferenced, but not when you have pointers to pointers, because that's less useful. If any thread runs in an infinite loop that doesn't call built-in functions, the entire runtime will freeze. But this is by design. You're meant to do actual useful stuff in your loop, such as calling IO functions (which will yield to the scheduler) or calling runtime.Gosched().
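Concretely, the pointer rule looks like this (my own sketch, not from the parent): selectors auto-dereference one level, but not two.

    // My own illustration of the auto-dereference rule described above.
    package main

    import "fmt"

    type T struct{ n int }

    func main() {
      v := T{n: 1}
      p := &v
      fmt.Println(p.n) // auto-dereferenced: shorthand for (*p).n

      pp := &p
      // fmt.Println(pp.n) // compile error: selectors only go through one level
      fmt.Println((*pp).n) // you have to dereference explicitly
    }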

It was designed to compile fast, and it compiles faster than most languages while still having near-C speed. It doesn't have a GIL, so it has better concurrency (Java-style memory model). It has no complicated features like generics; instead you just use type assertions when you need them. It also has impeccable tooling.


> If any thread runs in an infinite loop that doesn't call built in functions, the entire runtime will freeze.

That's not entirely true. Go 1.3 introduced pre-emptive scheduling, so now only the most trivial (useless) of infinite loops will hog the scheduler. But if you're doing real CPU-bound work you won't block other goroutines from executing.


Correct: pointless programs like this freeze, but that's by design, since they're pointless:

    package main
    import "fmt"
    func main() {
      go func() {
        for {
        }
      }()
      for {
        fmt.Println("still here")
      }
    }
Even bigger pointless programs also freeze:

    package main
    import "fmt"
    func f(v int) int {
      return v + 1
    }
    func g(x int) int {
      if x > 100 {
        return 1
      }
      return 0
    }
    func main() {
      go func() {
        for {
          if g(f(111)) == 0 {
            break
          }
        }
      }()
      for {
        fmt.Println("still here")
      }
    }
This is of course by design, because Go was designed only to support useful programs. The programmer trying to figure out how the concurrency model works may whine that it's inconsistent and he can't figure it out, but that's only because threading is hard. Go makes it easier by only supporting useful cases.


>For example, sometimes you'll get Unicode code points, but sometimes you'll get bytes of UTF-8.

This isn't true: you only get code points when you convert to []rune. I'm not aware of any situation where you would magically get code points.

>If any thread runs in an infinite loop that doesn't call built in functions, the entire runtime will freeze.

This is not true; since 1.3, all function calls can potentially yield. Also, it only happened if you had as many goroutines running infinite loops as you had kernel threads.


If you range on a string, you get runes.

http://play.golang.org/p/CqLnT4m4GI
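Or inline, something along those lines (re-typed, not copied from the playground):

    // Ranging over a string decodes runes; indexing it gives raw UTF-8 bytes.
    package main

    import "fmt"

    func main() {
      s := "héllo"
      for i, r := range s {
        fmt.Printf("%d: %c (%U)\n", i, r, r) // runes, with byte offsets as indices
      }
      fmt.Println(s[1]) // 195: the second byte of the UTF-8 encoding of 'é'
    }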


> Go's syntax and semantics seem ad-hoc, hard to remember, and inconsistent, but that's because Go was designed from use cases and experience by prominent thought leaders such as Rob Pike and Ken Thompson, and Google. For example, sometimes you'll get Unicode code points, but sometimes you'll get bytes of UTF-8. The language was designed to give you the right one in the right case. UTF-8 is coupled with the language because it's the most useful choice. Another example is that pointers are automatically dereferenced, but not when you have pointers to pointers, because that's less useful.

How about Go's syntax and semantics seem ad-hoc, hard to remember, and inconsistent because _they are_ ad-hoc, hard to remember and inconsistent? What does your thought leader have to do with your genuine opinion?


Any language seems inconsistent and has hard-to-remember syntax for people who are new to it. Unlike most other languages, you can learn Go in a weekend and get beyond the newbie problems. There was a great article lately about Go's syntax - http://robnapier.net/go-is-a-shop-built-jig

The crux of the matter is that there are inelegant parts to the language because they are usefully inelegant, and when you're actually writing software, useful is better than elegant.


How are the Go and Java memory models similar?


For example: if you have one thread polling `x` and another thread writing to `x`, there's no guarantee that the polling thread will ever see any updates to `x`.
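Concretely, something like this (my own sketch; the broken version would poll a plain variable with plain reads instead of these atomics):

    // With a plain flag and a plain read in the loop, the memory model gives
    // no guarantee the polling goroutine ever observes the write; using
    // sync/atomic restores that guarantee.
    package main

    import (
      "fmt"
      "sync/atomic"
      "time"
    )

    func main() {
      var done int32 // the "x" being polled

      go func() {
        for atomic.LoadInt32(&done) == 0 { // a non-atomic read here may never see the update
        }
        fmt.Println("saw the write")
      }()

      atomic.StoreInt32(&done, 1) // a non-atomic write here may never become visible
      time.Sleep(100 * time.Millisecond)
    }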


> It doesn't have a GIL

On platforms like Go and the JVM, the garbage collector acts like a GIL, which is why they'll never replace C/C++.


Most concurrent C programs of a reasonable size eventually have some kind of big global lock. The Linux kernel had one for a long time, and it took a lot of effort to add more fine-grained locking. Similarly, most large C programs have a garbage collector of some kind.


> most large C programs have a garbage collector of some kind

[citation needed] - the vast majority of Linux software is C or C++, and I can't immediately think of any with GC. It's not normally assumed; everyone does explicit deallocation, possibly ending up writing their own slab or pool allocators.


That's a good insight. Thanks!


How did learning how Algebraic Data Types are algebraic make you "get" Haskell?

When I learned Haskell, I just read Learn You a Haskell in a few days, and immediately understood the practical value of Algebraic Data Types. They simply are like the types in C, Java, etc., but more general, because unions can have "fields". This alone means you can represent things like Maybe without any trouble, whereas in C/Java you'd have to use casting, explicit tagged unions, or the visitor pattern. ML has largely the exact same features as Haskell, except you have to explicitly say which module you're using, instead of the typeclass system doing it for you. Do you also "get" ML? It sounds more like you are claiming to "get" some sort of theory, rather than Haskell or ML, which are trivial, but you make it sound like you need to be somehow enlightened to understand them.
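For contrast, this is roughly what the "explicit tagged union" workaround looks like in a language without sum types (a hypothetical Go sketch; Go, like C and Java, lacks them):

    // Hand-rolled Maybe-like type: the tag is an ordinary field, and nothing
    // stops you from reading val while ok is false -- the compiler can't
    // enforce the cases the way an algebraic data type does.
    package main

    import "fmt"

    type MaybeInt struct {
      ok  bool // the tag
      val int  // only meaningful when ok is true
    }

    func firstPositive(xs []int) MaybeInt {
      for _, x := range xs {
        if x > 0 {
          return MaybeInt{ok: true, val: x}
        }
      }
      return MaybeInt{} // "Nothing"
    }

    func main() {
      if m := firstPositive([]int{-3, 0, 7}); m.ok {
        fmt.Println("found", m.val)
      } else {
        fmt.Println("nothing")
      }
    }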

The only hard part I ever found in Haskell is lazy evaluation, which to me is just another kind of magic that nobody gets, just like the Java Memory Model. The other hard part of Haskell is the extensions to type classes, which make you start thinking about open research problems.

edit: Which part of the above is downvoting material? Please explain so I can avoid being downvoted in the future.


I think there's reason to believe that ADTs are both a major component of "getting" Haskell and a major topic of significant size in their own right, beyond Haskell itself.

I've been plugging it a lot lately, but here's a post I wrote trying to emphasize this: http://tel.github.io/2014/07/23/types_of_data/

