Hacker News | new | comments | show | ask | jobs | submit | archgoon's comments

You mean something like this?

http://i.imgur.com/e9s29Jl.png


The actual guts of the algorithm are in compiled javascript

https://github.com/imeckler/diffgeo/blob/master/Native/ODE.j...

However, this simply seems to be using the numeric.js library.

http://www.numericjs.com/documentation.html

Which uses 'Dormand-Prince RK'.

https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method
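For reference, numeric.js documents this solver as `numeric.dopri(x0, x1, y0, f)`. The method itself is a 5th-order Runge-Kutta scheme with an embedded 4th-order solution for error estimation. A minimal hand-rolled sketch of a single Dormand-Prince step (an illustration of the method, not the library's actual code) looks roughly like:

```javascript
// One Dormand-Prince (RK45) step for a scalar ODE y' = f(x, y),
// returning the 5th-order solution and an embedded error estimate.
function dopriStep(f, x, y, h) {
  const k1 = f(x, y);
  const k2 = f(x + h / 5, y + h * (k1 / 5));
  const k3 = f(x + 3 * h / 10, y + h * (3 * k1 / 40 + 9 * k2 / 40));
  const k4 = f(x + 4 * h / 5, y + h * (44 * k1 / 45 - 56 * k2 / 15 + 32 * k3 / 9));
  const k5 = f(x + 8 * h / 9, y + h * (19372 * k1 / 6561 - 25360 * k2 / 2187
      + 64448 * k3 / 6561 - 212 * k4 / 729));
  const k6 = f(x + h, y + h * (9017 * k1 / 3168 - 355 * k2 / 33
      + 46732 * k3 / 5247 + 49 * k4 / 176 - 5103 * k5 / 18656));
  // 5th-order solution
  const y5 = y + h * (35 * k1 / 384 + 500 * k3 / 1113 + 125 * k4 / 192
      - 2187 * k5 / 6784 + 11 * k6 / 84);
  // 4th-order solution, used only for the error estimate
  const k7 = f(x + h, y5);
  const y4 = y + h * (5179 * k1 / 57600 + 7571 * k3 / 16695 + 393 * k4 / 640
      - 92097 * k5 / 339200 + 187 * k6 / 2100 + k7 / 40);
  return { y: y5, err: Math.abs(y5 - y4) };
}

// Integrate y' = y from x = 0 to 1 with a fixed step; exact answer is e.
let y = 1;
for (let i = 0; i < 10; i++) y = dopriStep((x, yy) => yy, i * 0.1, y, 0.1).y;
console.log(y); // agrees with Math.E to high accuracy (5th-order method)
```

In the real library, `err` drives adaptive step-size control; the fixed step here just keeps the sketch short.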


I was expecting them to put forward an existential risk (rogue AI), but this seems much more mundane. Granted, they might be downplaying that angle to be taken more seriously. However, from a mundane perspective, the main issue with arms races is not both sides having a technology, but one side having it and being willing to go to war to prevent the other side from getting it (see, for example, Cuba, Iraq, Iran).

Furthermore, as they point out, you don't need access to special materials or laboratories. The main reason nukes are controllable is not primarily that the science is secret (it's mostly not at this point) or hard to rederive (it's really not), but that you have to build up a ton of ore refinement plants to get enough U235 (or other fissile material) to actually build the bomb. And it's really hard to do that in secret (or cheaply). Nothing makes the Manhattan Project any cheaper today: it cost about 23 billion in today's dollars and involved 130,000 people (twice the size of Google).

However, with autonomous weapons, you don't need anywhere near as many people (Google X has on the order of 250 people [1]) or resources, and it can (as the article itself points out) be done much more cheaply. In a few years, all the necessary components could conceivably be cobbled together from GitHub projects. Any nation could easily fund it, likely without being detected, or without it even being clear that they knew their "R&D" dollars were being used that way.

Given that, banning it seems like it would actually lead to more warfare, as the US would take it upon itself to enforce the ban and declare 'pre-emptive' strikes on nations with secret autonomous-weapons research programs.

[1] http://www.fastcompany.com/3028156/united-states-of-innovati....


They already did the rogue AI thing: http://futureoflife.org/AI/open_letter



> For future HN bug bounty/black market threads: note the absence of Facebook XSS vulns on these price lists. Nobody is paying tens of thousands of dollars for web vulns. Except the vendors. :)

Is this because the value of a Facebook XSS vuln is very low, or because of the high likelihood that Facebook will notice the vulnerability (possibly from another source) and patch it before a profit can be realized?


see: https://news.ycombinator.com/item?id=7106953

Makes sense to me.

It's because Facebook, Google, and some others run generous bug bounties / white-hat programs. Without committing a crime, people can make a lot for disclosing it directly and confidentially. Vulnerabilities can pay 5-15k and there have been 30-40k payouts. Occasionally, you'll see a blog post explaining the process from detection to payment, like: http://homakov.blogspot.com/2013/02/hacking-facebook-with-oa...

FB paid out 1.3m in 2014. http://www.zdnet.com/article/facebook-bug-bounty-program-pai...


No. Facebook is probably not outbidding the black market for their vulnerabilities. I think 'grugq is exactly right: the market for serverside vulnerabilities with hours-long half lives is very thin. Facebook could pay $500 for RCE, and so long as they do everything else they currently do for security, a thriving black market would not emerge for their vulnerabilities.

It's interesting to me that there's a real market price for an Adobe Reader flaw, but that Facebook flaws have (generous) fiat prices set by Facebook.


I didn't say they're outbidding the black market, but they are quite generous and it is a legal action at that point. Makes sense to me, because they actually care about security and it's worth paying out to protect users and the brand. It also serves as a unique recruiting tool.

Seems to me you could "quite easily" double dip.

Sell your exploit on the black market. A day later, sell it to the vendor. "Sorry, they must have found and patched it"? Just ask Facebook not to disclose your information when highlighting vulnerability payouts.


If you read this whole post, you'll see that buyers expect this behavior, and payments are escrowed or tranched to account for it.

Google and Mozilla pay bounties for vulnerabilities in their browsers, too. Yet the market for these bugs is thriving.

Is selling a vulnerability a crime? Does that depend on whom you sell it to?

1. No. 2. Not currently, unless you know they're using your bug to commit a crime, in which case you can assume liability for that crime by actively helping them.

My heart wants to agree with Thomas, but I think we should remember this is a single data point and not generalize too hastily.

The catch tends to be whether you're able to detect when you're in a more complicated situation and bail out accordingly. One very useful thing is auto-labeling, where, it turns out, getting it wrong is more problematic than one would expect.

Take for example the 'Gorilla' issue that Google encountered.

http://nyti.ms/1JlTBab [www.nytimes.com]

Then there was the issue with the bookmarks and suicide categories

http://bit.ly/1JFkmCr [productforums.google.com]

(This one isn't as big of a problem, as the frustration was probably more due to the compulsory usage of the new bookmarks, but it still shows some hazards of auto-categorization).

With social things, it can sometimes be very hard to solve a problem that works for 95% of the population, but doesn't piss off the remaining 5% (often with good justification). Unfortunately, detecting when you're in that situation in the first place is sometimes equivalent to the problem you're trying to solve.


Please don't use link shorteners. They just obfuscate the link target.

I don't understand why a big company like Google would put a feature online if it's not thoroughly tested or thought through. Is it because it's a web product and they want the users to be the testers?

What would the testing plan look like?

Simple procedure: take a large sample (1M pictures or so), tag them with the system and then show each picture with the label to a person (using Mechanical Turk or similar) and ask "is this label offensive?"

Use simple statistics to identify trouble spots.
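As a hypothetical sketch of that "simple statistics" step (the label names and counts below are invented for illustration): given, for each label, how many raters flagged it as offensive out of how many times it was shown, rank labels by the lower bound of a Wilson score confidence interval, so that a label shown only a few times isn't ranked purely on its raw flag rate.

```javascript
// Lower bound of the Wilson score interval for a proportion.
// z = 1.96 corresponds to a 95% confidence level.
function wilsonLower(flagged, shown, z = 1.96) {
  if (shown === 0) return 0;
  const p = flagged / shown;
  const denom = 1 + z * z / shown;
  const center = p + z * z / (2 * shown);
  const margin = z * Math.sqrt((p * (1 - p) + z * z / (4 * shown)) / shown);
  return (center - margin) / denom;
}

// Invented example counts: rank labels so the likeliest trouble spot comes first.
const labels = [
  { label: "graduation", flagged: 1, shown: 500 },
  { label: "primate", flagged: 40, shown: 120 },
];
labels.sort((a, b) => wilsonLower(b.flagged, b.shown) - wilsonLower(a.flagged, a.shown));
console.log(labels[0].label);
```

Sorting by the interval's lower bound is a common trick for exactly this kind of "find the worst offenders from noisy counts" problem.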


I do not believe this is a serious question; rather, it seems to be a parody of this one:

https://news.ycombinator.com/item?id=9928556

If it is serious, could you please explain what you mean by "web-focused" in a non-technical sense?



> The product does much better than any other I've used, it's easy to clean, and it takes all manor of abuse.

Dyson! It takes on all manors. It'll take on yours. ;)


  > This application was developed, with permission,
  > from the Java circuit simulator by Paul Falstad.
Original author approves? Check.

  > Thanks to the kind permission of Paul Falstad 
  > the source project for this version of the 
  > application is now available on GitHub under a 
  > GPLv2 license.
  >
  > https://github.com/sharpie7/circuitjs1
Released source code for modification? Check.

Denigrating the hard work of others is not cool, especially when they make it freely available to others.


Facebook's inline video player uses it. On linux, I get a popup saying that I must install Flash.



In Safari on OS X, Facebook does use the native <video> player.

I'm guessing that Facebook encodes video as h264, which isn't natively supported in Firefox; rather, Firefox relies on support in the operating system. I'm not sure whether Chrome on Linux supports h264, but since Chrome also bundles its own Flash player, I'd guess Facebook may be falling back to its Flash player there anyway.



Pretty sure there are some settings you need to enable in about:config to get H264 working in Firefox (assuming you have the right gstreamer stuff installed).



Firefox can also install the free Cisco h.264 codec.

