Hacker News | zaroth's comments

It's amazing how much this line of code actually makes happen:

  http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
A full HTTP and TLS stack ready to start dispatching requests to your callbacks. I know there are other languages that can do the same, but I still think it's impressive. I've written just a few thousand lines of Go, and while I'm pretty sure I'm not really "doing it right", I was able to port some decently complex C# and very easily had all cores processing data at a higher overall rate with the Go code. High-level libraries like this, and managed memory, make it almost feel like a scripting language, but with all the advantages of a compiler.

Operational question... If you're terminating client-side TLS like this, then does that mean either it's a single server or you have L4 (or lower) load balancing in front of it? I assume it's more common to have haproxy or nginx or something like that in front terminating TLS, with the API servers sitting behind.

The way I see it, this is an open source shim layer. I don't see any copyright issue whatsoever. Whoever chooses to use the shim is using ffmpeg anyway, so they are already exposed to the license in exactly the same way. This just makes it easier to use ffmpeg 'the right way' on various devices, if I understand correctly.

If somehow MS can't make and open source clean compatibility layers for their own software, then that's a big problem.

Wildcards are important and LE should support them, but it will perhaps take some more work on the validation rules. Dynamic subdomains are powerful stuff, and even a real-time automated cert request is a poor substitute for just having the wildcard. If you're doing a subdomain per customer, the wildcard cert is definitely preferred, particularly if you're properly multi-tenant all the way down the stack.

  "All trademarked and copyrighted liscences are property
   of their repsective owners."
Obviously some magic YouTube incantation to protect against take-downs :-/

With 1m+ views, it's nice to see it's made it so long. Don't feel good about its prospects for much longer though!

The 'magic' is called Content-ID: it looks for video and audio matches against content uploaded to a private (and unviewable) system by copyright holders, and takes automated action if a match is detected.

Theoretically any matches are reviewed by a human, but they make it quiet easy to automate the 'review' process. Automated actions are typically Claim-Remove (claim content for copyright holder, remove from YouTube) for some content types (Movies, TV shows, some music) or Claim-Monetize (claim content, ad advertisements if there were not already on, and if they were already on all revenue is redirected to claimant).

This automated system is actually what causes the majority of 'this content is not available in your country' messages: if content gets claimed by a studio / copyright holder who only has the rights to distribute in the USA and UK, for example, everyone else would see that error message after it has been claimed-monetized.

In case anyone was curious.

Edit: realized I had a contextual homonym typo... and it has been too long so I can't edit. (Why does HN do that... gah)

> or Claim-Monetize (claim content, ad advertisements

ad --> adds

Which is an interesting distinction, because it means content that had no in-video ads (pop-over text ads, pre-roll ads, TrueView ads) can get ads added to it when it is claimed, if that is what the claimant chooses to do.

How does a group get upload access to ContentID? Can a small independent record label or filmmaker use ContentID to protect their works, or does one have to be a member of an AA?

You can apply. Approval is manual and doesn't seem to be the most consistent.


This is considered best practice for languages where you can't trust your "constant time" comparison won't be optimized out from under you.

Performing a timing attack requires control of the bytes being compared. If you can control the bytes of the output of a SHA256 then there are some Bitcoin miners who will pay you a lot of money.

If you want to be over-the-top about it, you can get some secure randomness and add it to the values being compared before hashing; then the attacker would have even less control over the bytes being compared.
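A minimal sketch of the hash-then-compare idea in Go (function name is mine, not from any particular library): both sides are hashed first, so the attacker no longer chooses the bytes the comparison walks over.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// hashCompare hashes both inputs before comparing, so an attacker
// supplying `provided` does not control the compared bytes.
func hashCompare(secret, provided []byte) bool {
	a := sha256.Sum256(secret)
	b := sha256.Sum256(provided)
	// Constant-time compare of the digests as belt-and-suspenders.
	return subtle.ConstantTimeCompare(a[:], b[:]) == 1
}

func main() {
	fmt.Println(hashCompare([]byte("s3cret"), []byte("s3cret"))) // true
	fmt.Println(hashCompare([]byte("s3cret"), []byte("guess")))  // false
}
```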

You don't need to control every byte for this to be catastrophic. You can't recover every password like you could with the previous comparison, but if you can generate a rainbow table that contains the password you're trying to crack, you can just run the timing attack against the hashes instead. My intuition is that this might even require fewer attempts than the original comparison, assuming a reasonable password length, but I haven't done the math.

I suppose you could theoretically deduce a large enough part of the hash to perform a brute-force attack, though.

You don't need that much of the hash to perform wordlist attacks and find likely candidates.

Yes, this is true if you are talking about unsalted passwords, or if the attacker knows the salts. If there's an unknown salt, having the salted hash of a given plaintext input match the first few bytes of the target salted hash does not help you narrow the word list at all. If there is no salt, or the salt is public, then that's a case where you could append some ephemeral CS-PRNG output to both sides before hash-comparing... but probably better to fix the underlying issue.

I mean, it's funny to be far enough down the rabbit hole to be doing hash-compare. To then say we want a randomly keyed hash-compare is the final step. The nice thing is that adding some random bytes imposes a fairly minuscule performance hit, and it's purely computation, no additional storage. Probably still too slow to use for every comparison, but for a constant-time-critical comparison it literally can't hurt.
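The randomly keyed variant could be sketched like this (my own illustration, not any particular library's API): MAC both values under a fresh ephemeral key before comparing, so the attacker controls neither compared byte string.

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// keyedCompare MACs both values under a fresh random key. The key is
// ephemeral: generated per comparison, never stored, purely compute.
func keyedCompare(a, b []byte) bool {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err) // no entropy available: fail closed
	}
	ma := hmac.New(sha256.New, key)
	ma.Write(a)
	mb := hmac.New(sha256.New, key)
	mb.Write(b)
	// hmac.Equal is itself a constant-time comparison.
	return hmac.Equal(ma.Sum(nil), mb.Sum(nil))
}

func main() {
	fmt.Println(keyedCompare([]byte("token"), []byte("token"))) // true
	fmt.Println(keyedCompare([]byte("token"), []byte("nope")))  // false
}
```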

You are directly contradicting the table presented in the article as well as in the main site: http://flintwaterstudy.org/2016/01/the-flintwaterstudy-resea...

Did you not read the article, or do you have some better source for your data?! Going by the article, the GoFundMe page, and the project site itself, it seems perfectly clear what money they spent, how they spent it, and how much volunteer time was donated on top of that.

I don't understand why you would launch a personal attack against Professor Edwards and his team by just making up numbers and trying to stir up controversy where there is none.

The volunteers are not expected to be paid. Their time was freely donated and it's clear in the GoFundMe where the funds are going and how they will be used.


  Windows 10 hosts running RDP services fail to prevent remote logon to accounts that have
  no passwords set.

  An attacker could exploit this vulnerability by using an older version of the RDP client
  to connect to the Windows 10 host. Once connected, the attacker could generate a list of
  user accounts on the host and attempt to log on as those users. If one of the user
  accounts has no password set, then the attacker is allowed to log on as that user, 
  despite the default system setting that restricts access to accounts without passwords to
  local logon only.
Is the user enumeration a feature and not a vulnerability of its own?


Windows displays a "Welcome" screen with all user accounts listed when you connect via RDP. I don't know how to automate enumeration, though. Perhaps scraping a screenshot after connecting and then OCRing the usernames.


It might be even easier than that, since RDP remotes the screen at the GDI level, so if you're looking at the protocol, you'll see a regular pattern of DrawText calls.


Out of interest, is there something like a Wireshark dissector that'd show this level of detail?


yuhong is correct: RDP rides inside a TLS connection (if I recall, pre-Vista didn't use TLS but there was some other encryption scheme). I'm not sure how much work it would be to sniff/log session keys and display the decrypted traffic in a nice format. I've never seen a tool for it. I'd be surprised if Microsoft doesn't have at least some netmon.exe tooling for it but it may not be released.


There is a dissector, but no, it doesn't show this level of detail.


It is encrypted I think.


I think the problem is with your $1.2M valuation. You have a company with $2M in preferences or debt, which is first in line ahead of any common stockholder; most likely their liquid assets are less than $2M, and they have negative net income. That makes the common stock effectively worthless at this time.

Also, if you keep reading:

  Once our valuation rises and the cost becomes prohibitive, we’ll move to an
  extended exercise period model instead, where you will have 10 years to
  purchase your options. By that time we’ll either have had an exit (in which
  case you can do a cashless exercise), or we will have arranged some other
  form of liquidity.


I'm not suggesting that the valuation is truly worth $1.2M but if they did a 409A valuation or had their board decide on the FMV to determine strike price, the value would be at least 15% of the post-money valuation. My point is simply that if they've issued a reasonable number of options (over 0.25% of the total authorized shares), there is no way that their employees would only pay a few hundred dollars to exercise.

In an extended exercise period model, there's still the issue of Alternative Minimum Tax on exercise, and of long-term vs. short-term capital gains. There will also likely be lockups on those shares, whether inherently built in or imposed in an eventual IPO.


RSSI -- Received Signal Strength Indicator -- is reported in dBm, which is a log-scale unit of power referenced to one milliwatt. +20 dBm is approximately the max transmit power for WiFi and equates to 100 mW. 1 nW is equivalent to -60 dBm, which is a pretty decent signal strength for WiFi but won't provide the max data rate. By comparison, the minimum power to operate at 1 Mbps is about -98 dBm.

Then again, receiving at 60 mW (+17 dBm) would overdrive any of the WiFi devices I ever tested, and a path loss of just 3 dB (assuming +20 dBm transmit power) is not achievable over the air, even with a near-field antenna. Keep in mind many devices actively tamper with the RSSI value reported up through NDIS, so while it's possible for a card to report that it is receiving at +17 dBm, unless you have hard-wired the Hirose connectors between AP and STA it's not actually the case. Nor would a receiver work very well with so little path loss in any case; it would be completely overdriven.


Comparisons after hashing are naturally resistant to timing attacks, because you are not in direct control of the bytes being compared.

Just ask Bitcoin miners how hard it is to pick an input which results in a hash with a desired n-bit prefix.

But as belt and suspenders, you often see an attempt at a fixed-time comparison of the digests in any case.

Incidentally, hashing before comparing can be used in scripting languages, where the compare function will often be optimized out from under you, making a constant-time compare difficult or impossible to actually guarantee.



Applications are open for YC Summer 2016
