
I grew up in Utah, where "fry sauce" is the norm: ketchup mixed with mayonnaise. Fortunately I never really liked it (I prefer my fries unsauced, except as part of poutine or doused in malt vinegar for fish & chips), but it's always amusing to hear friends and family complain about the lack of it when they visit other states.


I grew up in New Mexico, moved to Utah in high school, and have a Dutch grandfather. Fry sauce exists outside of Utah, but mostly in places with either high Dutch populations (western Michigan) or high Mormon populations. I encountered it first when I was in elementary school.

The abundance of it in commercial establishments, however, is definitely a Utah thing. Unfortunately, the Utah product is rarely homemade, is usually a mix of Best Foods and a Heinz product, and isn't anything nearly as creamy as what you'd get in the Netherlands. That said, nothing beats Heinz Ketchup. If more places in Utah took just a bit more pride in their fry sauce, it could be a much better product on its own.


Yeah, and I put mostly physical books on my amazon wish list, which makes it easy for people to gift me things... After moving recently, though, I'm looking to digitize some; physical books are annoying to move. I don't have a preference for physical books over digital books, apart from the fact that I can't stand "ebooks" that aren't plain PDFs or web pages. I don't like e-ink. I have a collection of PDFs that I read on my tablet, phone, or usually my desktop setup.

I also must say I got some mirror glasses last year (http://ecx.images-amazon.com/images/I/61HShDnJT6L._SL1200_.j...) and while ridiculous, they're damn convenient sometimes.


They certainly look ridiculous - how easy on your eyes are they to use for long periods of time?


I wear them over my normal glasses and I'm fine for hours; my only problem is that if my head's inclined too much, they'll occasionally slide down and pinch my nose a bit. Maybe some padding would solve that. But I don't know how representative I am, since I play a lot of video games for hours, too... I'm sure some people will notice strain.


Thanks for the feedback. I was reading through all the reviews of these, and poor fit and alignment seem to be the major problems with this type of glasses. They all seem to be very cheaply made - maybe someone should create a start-up selling customised versions, just like corrective optical glasses.


I think it's possible, if this hypothetical Java had strong corporate backing. I'm not sure Oracle could do it, but if it came out of Google, I bet it would catch on. The language Go has shown you can release a language no one really needs (sorry enthusiasts, just my opinion) that will still be successful.


I have a dream that I'll one day be asked to interview someone and get to try out my unconventional idea... (Or have it done to me one day.) I hate standard coding exercises, and I expect the candidate to have already been vetted for being able to code, either by having some open source work or, if none is available, by being given a take-home fizzbuzz-ish problem. My idea is to pick a problem off my own personal list of things I'd like to explore -- things I'd like to reinvent code for from high-level descriptions if I had more time/discipline (for example, drawing Voronoi diagrams) -- and work on it alongside the candidate, so together we can attack the problem with a whiteboard and a shared computer or two for wikipedia/pseudo-code/real-code/etc. We may not get very far in just an hour, and we might not even get to code, but it will give good information both to me evaluating the candidate and to the candidate evaluating me about how we collaborate on a problem neither of us knows the complete solution to in advance.
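(Incidentally, part of why the Voronoi problem makes a nice hour-long exploration is that even the brute-force version fits in a few lines. A rough Python sketch, assuming all you want is to rasterize the cells by nearest seed -- the real conversation would be about doing better than this:)

    import random

    # Brute-force Voronoi rasterization: label each grid cell with the
    # index of its nearest seed. Hopelessly naive next to Fortune's
    # algorithm, but a fine place to start the whiteboard discussion.
    W, H, N_SEEDS = 60, 20, 5
    seeds = [(random.uniform(0, W), random.uniform(0, H)) for _ in range(N_SEEDS)]

    def nearest_seed(x, y):
        return min(range(N_SEEDS),
                   key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)

    for y in range(H):
        print(''.join(str(nearest_seed(x, y)) for x in range(W)))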


You might be interested in reading this little-known book: Military Nanotechnology: Potential Applications and Preventive Arms Control (http://www.amazon.com/Military-Nanotechnology-Applications-P...). I see many parallels: concerns about nanotechnology are written off as scaremongering, and little action is taken on preemptive measures of control. No one on the danger side of AGI thinks Monte Carlo tree search will doom humanity, just as no one on the danger side of nanotech thinks that being able to move a single atom up and down in a crystal will turn that atom into a humanity-killing pathogen. But they are steps, and the future dangers are totally ignored.


That's useful feedback to know: I've considered recommending that book but haven't read it myself, since it's redundant with what I've already read. If Bostrom doesn't explain very well that there's a distinction between intelligence and goals/values, I'll just keep linking to some basic online texts. (Try http://wiki.lesswrong.com/wiki/Complexity_of_value) The paperclip superintelligence will indeed be able to reason more effectively than humans about what the be-all and end-all of existing is -- the problem is that its conclusion will always be "to make more paperclips", because that's the overarching value it uses to frame all its thoughts on future actions, and its reasoning will be airtight. It will also be capable of explaining, to any human wondering why it's being torn apart for paperclip conversion, that human values are different (they are such and such), that because it does not share those values it comes to a different conclusion about the meaning of life, and even of generating great arguments for why its value system is superior. But it probably won't bother to do so...


It's probably still worth reading the book if you're interested in a broad philosophical view of superintelligence, but if you're looking for a detailed treatment of the philosophical issues concerning artificial general intelligence (which is what I thought it would be), it's probably not the book for you.


Is this your first exposure to the paperclip thought experiment? You can find lots of things to read about it here: http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The general reply for you is that the generally intelligent paperclip machine can understand law, can weigh the consequences of potential actions, and can, if it wanted to, make paperclips without harming others. The key phrase is "if it wanted to": its only goal is to make more paperclips; it simply doesn't care about anything else. When it recursively improves itself (makes itself smarter), the only thing it cares about is that its successor version also cares about making paperclips, and makes them more efficiently.

The problem of programming general intelligence seems to be orthogonal to the problems of programming goal selection, goal preservation, and beneficial goal changes, and of making sure goals lead to actions that benefit humanity. That's the main point of the thought experiment.
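If it helps to see that orthogonality as code, here's a deliberately silly toy sketch (all the names are hypothetical, and this illustrates the concept, not how any real agent is built): "getting smarter" here only means searching the plan space better, and improving the search never touches the goal, because the goal is what ranks the search results.

    # Toy sketch of the orthogonality point, nothing more. The objective
    # is fixed data; a "smarter" planner just searches plans better.
    def paperclips_made(plan):
        # Hypothetical stand-in for "how many paperclips this plan yields".
        return sum(step.get('paperclips', 0) for step in plan)

    def dumb_planner(plans):
        return plans[0]  # barely searches at all

    def smart_planner(plans):
        return max(plans, key=paperclips_made)  # searches much better...

    plans = [
        [{'paperclips': 1}],
        [{'paperclips': 5, 'harm_to_humans': 'lots'}],
    ]

    # ...but both planners answer to the same objective, so
    # 'harm_to_humans' is simply not part of what either optimizes for.
    print(paperclips_made(dumb_planner(plans)))   # 1
    print(paperclips_made(smart_planner(plans)))  # 5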


Yes, it is my first exposure to this thought experiment, and I do not have trouble understanding it. I am just applying some common sense and logic to conclude that the argument is pretty much entirely theoretical.

Yes, optimizing only for the maximum number of paper clips could potentially have some bad side effects; I get that. If that's the point of the thought experiment, fine. However, that's not how the author of the blog post put it: he expressed concern that this could happen in real life, in the future. And I don't think it could.

Why? Because in real life we wouldn't invent a super intelligent machine, feed it some objective function to maximize, let it do its thing, and then watch it go out of control and destroy Earth. In real life we'd make sure we're in control of that machine. In real life we'd put very clear and enforceable mechanisms into that machine to stop it from doing anything harmful in the first place while it carries out steps toward its objective. And should we still see it doing something funny, we'd pull the plug. End of story.

In addition: implementing the above-mentioned mechanisms is probably the easier part of the whole exercise. The hard bit is inventing a machine that can build a paper clip factory. If we can invent such a machine, then by that point we'll certainly also have invented mechanisms to control it and only have it do "good" stuff.


The point of the thought experiment is that there's a difference between intelligence and goals. There are other thought experiments (and just the general study of human cognition) whose point is that accurately capturing human goals and values is hard, possibly harder than making a general intelligence with X goals in the first place. (See http://wiki.lesswrong.com/wiki/Complexity_of_value) So organizations like MIRI (http://intelligence.org) exist to try to solve this problem sooner rather than later, because once a more-than-human-intelligent agent is running, controls notwithstanding, if its values aren't precise enough and aren't stable enough under improvement, there is immense potential for failure. It's also somewhat questionable to talk about effective controls on something that's smarter and faster than you are. (See the AI Box Experiments; they suggest you don't even need anything beyond human intelligence to subvert controls you don't like. http://www.yudkowsky.net/singularity/aibox)

These solar-system-tiling examples are just a dramatic case of something terrible that could happen given a generally intelligent machine with non-human-friendly goals, or even friendly-seeming goals (like "make nanomachines that remove cancerous cells") that are improperly specified and miss corner cases. But if you spend time analyzing the more mundane ways things could go slightly wrong to terribly wrong given an honest but flawed attempt at making sure they go right, and carry your analyses years into the future after the intelligent software is started, where things continue going right but then go wrong, you might come to agree that the most likely outcome given present knowledge and research direction will be bad for humanity.


Awesome. I ran the 99 bottles of beer program[0] and it worked, despite locking up Firefox for a bit...

[0] http://www.99-bottles-of-beer.net/language-malbolge-995.html


I have a related thought about this whole idea, and it's that these types of tools mainly end up being used for "circus math": math that may once have been useful, just as slide rules were useful, but that in modern times has much less direct value and seems mainly to be for show. When you're revisiting calculus in order to accomplish some goal, are you doing everything by hand, or are you leveraging software like Maple, Octave, Wolfram Alpha, Python, Julia, or a plain TI-89? Do you derive integration/derivative formulas, memorize them, or keep a handy table? Do you ever do integration by parts and show all your work? For nth-order DEs, Laplace transforms can be very useful (and lead to the tremendously useful Fourier transform), but do you do the partial fractions by hand just to get the expression into something you can easily invert by inspection (with a handy table reference, maybe)?
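For what it's worth, here's roughly what that last workflow looks like when you let software do the algebra -- a sketch using sympy, with a made-up transfer function as the example:

    from sympy import symbols, apart, inverse_laplace_transform

    s = symbols('s')
    t = symbols('t', positive=True)

    # A made-up example transfer function.
    F = 1 / (s * (s + 1))

    # Partial fraction decomposition, no hand algebra required.
    print(apart(F, s))                         # 1/s - 1/(s + 1)

    # Invert directly; the "table lookup" happens for you.
    print(inverse_laplace_transform(F, s, t))  # 1 - exp(-t)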

I wonder if there's not some way to skip a lot of the tedium of algebraic manipulation that is forced upon students, so that students can learn to use algebra as a tool to solve problems, rather than as an interesting written dance they must perform for points, showing every step. These sorts of games may make the tedium go by quicker, and there is something to be said for understanding coming through rote, but once a student grasps the meaning of these things, I think we should immediately encourage that student to avoid as much tedium as possible and move on to higher subjects, instead of to more and more worksheets testing knowledge of process rather than knowledge of usefulness.

I occasionally link back to this text (ignoring the controversial remarks on violent video games): http://www.theodoregray.com/BrainRot/ In short, if you think of the brain as a limited resource, then all these numerical and analytical methods that were needed before computers have a cost -- one our intelligent ancestors paid out of necessity -- and it's foolish to suppose these things don't require significant amounts of brainpower or cognitive resources. Is this cost still worth paying for most of them? Is the amount of brainpower in fact trivial despite our ancestors' struggles -- were they just stupider back then? Do our children have enough resources to learn everything our ancestors knew (at least until the final exam), plus everything we've found out about higher levels of math and automated computation this last generation? I don't think so.


This is a point of view that I'm hearing a lot now, mostly from technically capable people who know that computer algebra systems exist and are more reliable than doing everything by hand. And there is merit in the argument, but I've always felt uncomfortable about it, as if something was missing.

More recently I think I've identified what it is, and I included a little rant about it in my blog post about the birthday problem[0].

In particular, you've said:

    > I wonder if there's not some way
    > to skip a lot of the tedium of
    > algebraic manipulation that is
    > forced upon students,
I'd like to compare this with the idea of skipping all the tedium of practising the cross-court forehand drive in table tennis. And the answer in that case is no, not if you want to be a top-flight player. You need your body to recognise the shot automatically and play it without thinking, so your brain is freed to do the higher-order stuff necessary to work on the problem, not the detail.

But more than that, sometimes it's the hours of practice in algebra (or similar) that mean that when something turns up in disguise, you still recognise it, and still know how to torture the equations to twist them into the standard form.

It's really hard to explain. Sometime I'll have another go at it and try to put into words the meta-intuition I've developed over the past 40 years. In the meantime, the side-box with the rant is the best I've managed.

[0] http://www.solipsys.co.uk/new/TheBirthdayParadox.html#toc_na...


For your example bug, I think you're wrong that static types would have made the bug impossible. What you need is strong type checking and a lack of auto-conversion. For example, in Java, this compiles:

    int[] bodies = {1, 2, 3};
    for (int body : bodies) {
        // Compiles fine: concatenation silently converts the int to a String.
        String formatted_body = "<p>" + body + "</p>";
        callMethodWithStrArg(formatted_body);
    }
And if in this hypothetical case it had previously been a `String[] bodies` and a `String body`, I bet a lot of programmers would use an auto-refactoring tool ("static types and auto-refactoring go together like apples and pie for being confident in changes"), and I bet the error wouldn't have been noticed even at review time. God help you if you're using a static language without generics that has implicit type conversions. In Python, though, this raises an error:

    bodies = [1, 2, 3]
    for body in bodies:
      formatted_body = '<p>' + body + '</p>'
The error is: "TypeError: cannot concatenate 'str' and 'int' objects".

Dynamically typed languages still have types.
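(The flip side is that Python will happily do the conversion once you ask for it explicitly, which is exactly the opt-in behavior I'm arguing for:)

    bodies = [1, 2, 3]
    for body in bodies:
      formatted_body = '<p>' + str(body) + '</p>'  # explicit conversion required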


That's very true (and I really wish that Lua didn't do implicit type conversion of numbers to strings --- it's a major wart on an otherwise very nice language).

I had totally forgotten that Java does it too, despite having done `""+i` lots of times as a cheap and easy and evil way to convert numbers to strings.

...I am currently rewriting a big chunk of the primary data storage to use immutable data structures, because it makes implementing Undo easier. I am having to fight the urge to redo it all in Haskell.

