>If the only change is that it loads faster, and he doesn't grow his audience, then he hasn't really achieved anything.

Sure he has achieved something. He has made his readers a little bit happier. Presumably, he has also made himself a little bit happier. It's called progress.


I guess it has also increased the rentability of his hosting (rentability = page views / hosting cost). Servers are still damn expensive.

How do you suggest we prevent any technology we create from getting used for everything it is suitable for? We'd have to start by not publishing any scientific results and never open-sourcing anything. Then we'd have to prevent industrial espionage, as well as democratically elected lawmakers forcing us to reveal our secrets. It's different from nuclear technology because AI is so versatile and so cheap.

And to wrap it all up, we'd have to stand by and watch as a medieval death cult like Islamic State overruns an entire region.

War is very rarely the right solution. Arguably, wars played a large part in creating IS in the first place, and I was very much against most of the recent wars. But for reasonably enlightened democratic nations not to have the capability to win a war if need be ultimately means being at the mercy of any insane, barbaric ideology or religion that comes along.


I think you're misunderstanding ilaksh's point. Yes, knowledge is social potential energy. But the potential energy lacks moral quality until it is put to a purpose by a person in society. I, as the person who knows how to build the killer robot, have a moral obligation not to build the killer robot.

The criminal in this scenario is the government.


I don't think this is a valid argument when it comes to AI. Sure, you can say you're not going to be the one who builds that missile or that gun. But a killer robot is just a thin layer on top of a general AI, or on top of a variety of specialised AI functions. The more general the AI becomes, the more "dual use" it is necessarily going to be.

In other words, should we not build intelligent robots because someone might give them a gun to make them killer robots?


I'm imagining that in the case of an AI general enough to be trained for any arbitrary task, but not general or physically able enough to choose its own tasks, you still need technologists to teach the warfighters how to install and train the AI. So in that case, build the AI; don't turn it into a killer robot.

If it doesn't take expertise to train the AI though, the goose was cooked a long time ago.


That's true, but the scenario you are thinking of is one in which the higher-ups order a massacre and the soldier on the ground refuses to carry out the order. I believe that the more frequent case is when soldiers on the ground have to make split-second life-and-death decisions, quite possibly panicking themselves. If they get it wrong, they're dead.

An AI can make split-second decisions without panicking and without any consideration for its own "survival". For any mistakes an AI makes, those higher up will get the blame (ideally), not some 20-year-old who was scared for his own life when he made the decision to kill everyone on that overloaded truck because it wouldn't stop 10 seconds earlier.

That said, I shudder at the thought of a world in which people get killed by machines that will never be whistleblowers, that do not go home with post-traumatic stress disorder, telling everyone how horrible war really is.


> I believe that the more frequent case is when soldiers on the ground have to make split-second life-and-death decisions

That's OK, but I think you are assuming that the "good guys" are the only ones who would have access to this. The truth is that once one government starts doing this, all the others will follow, and even organized crime will have access to that technology.


Even if both sides use machines, that doesn't make it any worse than both sides using humans; on the contrary. But as I said, the wider consequences are a different matter.

I think I didn't explain myself clearly... What I meant is that once one side has it, it will spread beyond conventional warfare pretty quickly.

You are thinking of Gov. A vs. Gov. B, both using machines. My concern is what comes after that: Gov. B using machines against humans C inside the same country, or in the neighborhood. In those cases, "technical errors" can be used as an excuse after a tragedy. And that's only one of my concerns... add organized crime and terrorism to the mix and you have a very explosive soup.


I think your assumptions are very realistic and I share your concerns. But looking back at the history of war or combat, I don't feel that human nature has been a mitigating factor. On the contrary.

I don't know anything about the French constitution, but many constitutions contain very vague or general language and some are very old. Courts often have to rule based on analogies, especially where new technologies are involved.

Automakers will have to make sure that they can update their software remotely, or this is going to become really expensive very soon.

What's to say this wasn't precisely that? Maybe at the talk we will learn that Uconnect was accidentally listening for the FW update messages from the Sprint network.

Often the board that handles the radio acts as a bridge in modern cars. It tends to be the beefiest computer hardware in the vehicle, so it sits on the high-speed CAN bus and is also a node on the more star-shaped low-speed bus.

When they updated the FW, they removed the bit of code that prevented dangerous message IDs from leaving the controllers on the radio board, and possibly added code that could put arbitrary CAN bus messages on the buses as a relay from the Sprint network.
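
As a purely hypothetical sketch of the kind of filter that may have been removed (we won't know the details until the talk; all IDs and names below are invented):

    from dataclasses import dataclass

    # Hypothetical message-ID filter on a bridging radio board. The IDs
    # are made up; a real deny list would cover safety-critical frames.
    @dataclass
    class Frame:
        arbitration_id: int
        data: bytes

    DANGEROUS_IDS = {0x1F0, 0x2E4}   # e.g. brake/steering frames (invented)

    def bridge(frame, high_speed_bus):
        # Relay frames to the high-speed CAN bus unless the infotainment
        # side should never be allowed to originate them.
        if frame.arbitration_id in DANGEROUS_IDS:
            return                    # drop it at the bridge
        high_speed_bus.send(frame)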

I am very eagerly awaiting the talk.


They probably already can, if they use the right exploit.

What continues to amaze me is that there does not seem to be any convincing scientific evidence that predictive patterns can be found in past price movements, and yet many reputable investment firms are throwing a lot of money at this problem.

I understand that not all quantitative approaches depend on finding patterns in past prices, but some do.

I am completely undecided on this one. It baffles me. It's not like every company applying materials science also experiments with witchcraft somewhere in the basement, so what's going on here?
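
For what it's worth, a toy version of the kind of test I mean checks whether yesterday's return predicts today's; here synthetic random-walk data stands in for real prices, purely for illustration:

    import numpy as np

    # Does yesterday's return tell you anything about today's?
    # Synthetic random-walk returns stand in for real price data.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.01, size=2500)   # ~10 years of daily returns

    lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
    print(f"lag-1 autocorrelation: {lag1:.4f}")  # ~0: nothing to exploit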


There are predictive short-term patterns, but once knowledge of the patterns spreads, they start to disappear. There is also constant change in long-term patterns because the world outside the financial markets changes in unexpected ways.

Financial markets are self-reflexive systems where the people who invest are inside the system and change it.

If you discover a pattern that makes money, you must keep it secret and establish a firm that tries to exploit it. If you are successful, others notice, start to analyze your game, and the edge begins to disappear. You may use machine learning to constantly discover new patterns, but that does not change the situation, because others also use machine learning to detect new patterns.

Portfolio managers seeking high alpha are often poker players, because poker and alpha seeking are very similar games.


But banks also employ technical analysts who use well-known technical signals like support and resistance. Your theory and lumberjack's do not apply to them, and yet they get paid.
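
For readers unfamiliar with the jargon, "support" roughly means a price floor the market has bounced off repeatedly. A deliberately naive sketch of such a signal (the window and tolerance are arbitrary choices of mine):

    import numpy as np

    # Deliberately naive "support level" detector, only to illustrate
    # the jargon; window and tolerance are arbitrary.
    def support_level(prices, window=20, tolerance=0.01):
        recent = np.asarray(prices[-window:])
        low = recent.min()
        touches = int(np.sum(recent <= low * (1 + tolerance)))
        return low if touches >= 2 else None   # a floor touched twice holds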

>But banks also employ technical analysts

No, they don't. Technical analysis is to professional trading what astrology is to astronomy, and big banks don't use it. Those books are snake oil sold to hobbyists and small-time traders.

Large banks and investment management firms use completely different statistical methods.


It's because if you do believe you have found such evidence, you don't publish it. You start a hedge fund.

Multithreaded programming is not inherently more complicated than everything else, but finding bugs caused by multithreading is inherently more difficult than finding any other kind of bug.

It all comes down to this: You can't reliably find concurrency bugs via testing. You have to prove they don't exist or you will never know. "Don't do it!" is one way of proving that multithreading bugs don't exist.

But I find it annoying when people act as if it were always a matter of choice. Sometimes we need shared mutable state for logical reasons. We can only avoid it where it is an implementation detail.
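
As a tiny illustration of why testing can't be relied on, here's the classic lost-update race; whether any single run catches it is down to scheduling luck:

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            tmp = counter        # read-modify-write is not atomic:
            counter = tmp + 1    # two threads can both write back tmp + 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)   # sometimes 400000, usually less; a green test proves nothing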


I travelled to Croydon a couple of times recently and it was never just 20 minutes. The trains are hardly ever on time. Also, "quite affordable" still means around £900 for a 1-bed flat, plus another £110 or so for council tax.


My problem with that argument, which I have heard quite often, is that we cannot do anything useful with a term (awareness/experience of yellow in this case) that has neither extension nor intension.

What you experience when you see yellow is what you experience when you see yellow. Nothing more can be said about it, and it cannot be compared to anyone else's experience of yellow, human or machine.

So if you make that experience a condition for calling another being intelligent, you are effectively saying that nothing but yourself can ever be intelligent.

You need to break down that thing you call the experience of yellow, or awareness of yellow, and explain what properties it has (intension), or you need to be able to show that it influences the behaviour of humans in a way in which it cannot influence the behaviour of machines (extension).


"So if you make that experience a condition for calling another being intelligent, you are effectively saying that nothing but yourself can ever be intelligent."

I think you are confusing the question of whether an observer can determine that an entity has an attribute with the question of whether it does in fact have that attribute. Those are entirely different issues because, for one thing, the observer may simply not yet have the tools to make such a judgement.

Matter is made of atoms. For a long time we did not have the tools to determine that. But that didn't change the fact that it was, and is, true.

Now suppose the human race died out before it ever developed such tools. Obviously, it would still be true.

Now suppose that, due to our limited intellectual abilities or for some other reason, it never was and never will be possible to develop such tools. To me, it's obvious that this has no bearing on the truth of the proposition, any more than if we simply hadn't had time to develop the tools.

Similarly, the question of whether you have awareness is independent of whether I or any other third party can prove you do.


I don't necessarily disagree with that, but your argument is nothing more than an appeal to believe in the existence of such properties based on how you yourself feel when you look at something yellow. I don't find that very convincing.

But I do agree with you in that the question you asked is worth asking. Not even mentioning the issue is certainly a frequent mistake made by authors taking a purely technological perspective.


"I don't necessarily disagree with that, but your argument is nothing more than an appeal to believe in the existence of such properties based on how you yourself feel when you look at something yellow. I don't find that very convincing."

No... I'm not inviting you to be convinced based on my experience. I'm inviting you to notice how you yourself feel when you yourself look at something yellow. My bet is that you'll experience that which is referred to in the literature as "qualia".


My own experience isn't any more convincing than yours because I don't know what properties are necessary for me to feel the way I feel. I also don't know if my feeling is in any way similar to yours or that of a fly on the wall.

So if someone built a very complex machine that was able to communicate with me like a human, and if that machine claimed to feel something very specific at the sight of yellow, it would not feel right for me to say "no, you're not feeling anything like a human would feel", because I don't know how other humans feel. I have no way of comparing my own feelings with those of other humans or non-humans.


Everyone knows qualia exist. The dispute is over whether they're components of the mind's perceptive systems, or whether they're metaphysically queer. The latter position is obviously wrong and requires reams of magical thinking, but being magical, it's more interesting to talk about, so human philosophers, being dumb and untrained humans who don't get the most basic principles of naturalistic wizardry, concentrate on it, to the neglect of the obviously correct and far more useful former line of inquiry, which would actually augment their spellcraft quite a lot.

:-p


Can you point me to the source of your claim about memory overhead? The Go team specifically states their goals for the 1.5 garbage collector as follows:

"Hardware provisioning should allow for in-memory heap sizes twice as large as reachable memory and 25% of CPU cycles". http://llvm.cc/t/go-1-4-garbage-collection-plan-and-roadmap-...

As far as I know, that collector is not a copying collector. I also know that all JDK GCs have significant memory overhead (not sure about the Azul one).


They are still using a simple and slow GC: non-copying, tri-color mark & sweep (M&S), but at least incremental.

So they need to scan the complete heap, while a good copying collector (e.g. a two-finger Cheney with forwarding pointers) would only need to scan the stack and some roots. My GC needs ~4ms on normal heap sizes; the fastest M&S GCs need ~150ms. But I haven't found a good version besides the Azul one, which works fine threaded.

Memory overhead: A non-copying GC has none. Look up a GC book or explanation. Refcounts have plenty: one word per object. malloc has plenty for its free-list management, growing with the heap size.

A semi-space GC is different, as it reserves a mirror segment for every heap segment. A fast version reserves the max heap and divides it by two, so it can use at most 2GB of 4GB. A normal version can do that incrementally.
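
Roughly, a semi-space collector works like this minimal sketch (my own illustration, not anyone's production collector):

    # Minimal semi-space (Cheney-style) copying collector. Live objects
    # are evacuated into the reserved mirror segment; everything
    # unreachable is simply left behind, which is why it needs 2x space.
    class Obj:
        def __init__(self, *refs):
            self.refs = list(refs)   # outgoing pointers
            self.forward = None      # forwarding pointer, set once copied

    def collect(roots):
        to_space = []                # the mirror segment

        def copy(old):
            if old.forward is None:          # not evacuated yet
                old.forward = Obj(*old.refs) # shallow copy; refs fixed below
                to_space.append(old.forward)
            return old.forward

        new_roots = [copy(r) for r in roots]
        scan = 0                             # to_space doubles as the work
        while scan < len(to_space):          # queue: no recursion needed
            to_space[scan].refs = [copy(r) for r in to_space[scan].refs]
            scan += 1
        return new_roots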

Java has a huge memory overhead from the kernel and runtime alone, not from the GC, but since they have various swappable GCs you need space for those. You can write better and smaller GC'd runtimes that run circles around Java, .NET, Go or Ruby. As I said, mine needs ~4ms; the fastest Java is ~20-150ms. V8 has a good one, but I don't know their stats off the top of my head.


>Memory overhead: A non-copying GC has none

Do you have any idea what could make the Go folks ask for "in-memory heap sizes twice as large as reachable memory"? This seems to be completely at odds with what you are saying.


Why? Reserving virtual memory has nothing to do with actual memory usage.
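
A quick way to see this on Linux (assuming default overcommit settings; the sizes are just for the demo):

    import mmap

    # Reserving address space is not the same as using physical memory.
    # Resident set size stays small until pages are actually touched.
    GB = 1 << 30
    region = mmap.mmap(-1, 2 * GB)        # map 2 GB of anonymous memory

    with open("/proc/self/status") as f:
        rss = next(line for line in f if line.startswith("VmRSS"))
    print(rss.strip())                     # tiny, despite the 2 GB mapping

    region[:4096] = b"\x00" * 4096         # touching a page commits it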


It would be great if they really meant virtual memory, but I very much doubt it, as that makes no sense as a goal and is inconsistent with the mention of "hardware provisioning".
