
Money is mostly used to compromise civilians who are in debt, since that pressure makes them more vulnerable, especially if the debt is to organized crime. A common tactic is to give a desperate person a medium chunk of cash to do something minor that the handler can later spin as treason; that hooks the informant and lets the handler pay them relatively small amounts until they are burned. Money alone doesn't work as well higher up, because the people in a position to command large bribes usually have emotional loyalties that money can't overcome (but M+ICE -- money combined with ideology, coercion, or ego -- sometimes can).

I don't know many details about the CIA's black budget, but I'd imagine it can be used to quickly gather hundreds of thousands or millions of dollars for especially promising informants.


Wow, savage burn! Epic win!

Gambit is a fantastic Scheme system. The performance you get from it exceeds even SBCL in my limited use cases. Additionally, Gerbil Scheme (http://cons.io), built on Gambit, has expanded syntax like Racket's #lang feature for DSLs. Several packages have also been created for it, covering most of the things I need.

Agreed. The Google Ad Machine will never stop running. But I think they will diversify their revenue streams in the coming years. Forces outside of Google's control, such as governments and competition, will begin to push back on the "all you can eat buffet" of personal data that Google has enjoyed in recent years.

Based on the numbers in the article, like the doubling of their "large business" user base in the past year, I think Diane Greene will lead them to generating huge revenue in the enterprise sector. I also think freemium versions of Google software, directed at consumers, will be increasingly popular in the future.


According to the US State Department, anyone who visits Iran and stays for longer than a year requires an exit permit to leave the country. I don't know much about the politics of Iran, but I'd imagine that applies to everyone born there. Due to that exit permit requirement, it's probably much easier for the women and children to leave than for the father.

You can, and should, knock out the really obvious stuff with regex validation, but you're smart to be concerned about one thing that most people miss in this conversation.

A confirmation email doesn't solve the problem if the user mistyped the address to begin with. You're right: they just wait for a confirmation email that never arrives.

Using a third-party service like BriteVerify's REST API is the only way to get actual verification that the inbox exists.
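
A minimal sketch in Python of the regex-level sanity check I mean (the pattern is illustrative and deliberately loose, not a full RFC 5322 validator):

    import re

    # Intentionally loose: catch obvious garbage, don't try to enforce the RFC.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def looks_like_email(address):
        # Cheap syntactic check; actual deliverability still needs a
        # confirmation email or a verification service.
        return bool(EMAIL_RE.match(address))

    print(looks_like_email("user@example.com"))    # True
    print(looks_like_email("user@@example..com"))  # False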


This sort of situation can have tragic consequences too. In high school, a friend of mine was in the same situation: mother and kids in the US while the father was stuck in Iran. They were wealthy, so they didn't go through the added stresses of poverty in the US, and my friend seemed to everyone to be a happy-go-lucky guy. He was popular, involved in the community, and did very well academically, but we found out too late that the stress in the family over the separation had developed into a drinking problem. A few days before his father finally managed to get a visa and arrive in the US, he got too drunk at a party and stopped breathing. The psychological toll on the entire family is hard to overstate, even when life is otherwise good.

This kind of separation happens quite a bit within immigrant communities, especially among nationalities that have long waiting lists for green cards, let alone undocumented immigrants. Due to some quirks in the visa system, you have to leave the country in order to change your status, which runs the risk of delays or outright rejection. When the parents are on separate visas, one can be let through with the kids while the other is stuck indefinitely. Often, that means the parent not granted a visa becomes an undocumented immigrant in the host nation, and the other is forced to return home because all of their belongings and financial obligations/jobs are in the US. If they can't find a job in the host nation, the stuck parent often moves back to their support network in their birth country, further complicating things.


This is a gross exaggeration. As far as I can see, what "leaked" was the "shared driver source kit" that nearly any hardware vendor (like a chipset manufacturer) can get: basically anyone who puts up a few thousand bucks and signs an NDA.

Good point.

So this would only apply when a specific sequence is required.

By analogy: when making a jigsaw puzzle, the first piece could be cut randomly. However, each following piece would have to fit with the preceding pieces, and each additional piece would grow more constrained in its specification.

So for a cell, it is true that there could potentially be a large number of proteins that could prove useful. However, when that one protein requires 99 other protein types that "fit" with it in the puzzle of a single cell, you have a required specific sequence that becomes more and more precise with each additional protein.

In addition, since thousands of copies of each protein type are needed, there is also a need for factory proteins of even greater complexity.

In the case of every life form on Earth, they all have these protein factories built in, which read the RNA to create specific proteins.

Although there might exist other protein combinations that could create an alternative functional protein factory, any protein factory would have many interacting parts, each requiring increased specification as it is included in the design.

In addition, it would be hard for a protein factory to function without a healthy cell holding all the necessary parts close together.

---

Another analogy would be sudoku.

With a blank board, it is possible to put any number anywhere.

However, as the game gets closer to completion, it requires a specific answer for each square.

If you randomly put a single digit in each square, the odds against getting a correct solution would be (a quick numeric check follows the cell analogy below):

- 9^81 ~ 2e77 (possible random configurations of 1-9 across the 81 squares)

Divided by

- ~6.7e21 (the number of valid completed grids)

i.e. roughly 1 in 3e55.

So it would be like this for a random single cell:

- Number of possible amino acid combinations for ~100,000 proteins of average length ~50.

Divided by

- Number of protein combinations that would function as a living cell
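
A quick back-of-the-envelope check of the sudoku numbers above, in Python (the ~6.7e21 figure is the commonly cited count of valid completed grids):

    from math import log10

    total_fillings = 9 ** 81   # every way to drop a digit 1-9 into 81 squares
    valid_grids = 6.67e21      # known count of valid completed sudoku grids

    print(f"total fillings ~ 1e{log10(total_fillings):.0f}")          # ~1e77
    print(f"odds against ~ 1 in {total_fillings / valid_grids:.0e}")  # ~1 in 3e55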


Another good data science post from OkCupid. The reason I stopped using the site was the low response rate, so this is very salient.

Your understanding is mostly correct. Whether or not traffic directed to any of the /24s involved "would have eventually reached the destination" is undetermined (and possibly irrelevant): it may or may not have, and we simply don't know. Speaking strictly about "internet routing" (BGP in this case), it is 100% possible for an announcement to send traffic through an AS which literally dumps it on the floor -- it's happened many times over the years (the Pakistan/YouTube incident was noticed by many): https://en.wikipedia.org/wiki/BGP_hijacking#Public_incidents

The question is whether the /16 (which most of the advertised/withdrawn /24s fall within) was used by actual devices, or whether it's address space NTT has yet to use. If it is assigned to NTT but unused, then effectively no harm done. If it's actively used IP space, then that would be very inappropriate.


AS15562 appears to belong to an employee of NTT out of the Netherlands. I can't tell if AS15562 is personal or work-related; maybe both: http://bgp.he.net/AS15562

I couldn't care less about the IPv6 prefixes, but the IPv4 ones are all /24s made from 209.24.0.0/16, which is registered to NTT (AS2914). 209.24/16 is publicly announced (and has been for a very long time), and is routed through NTT Amsterdam routers.

I haven't looked at BGPlay to review all the data, but it looks like many of the /24s that make up that /16 were individually announced through AS15562, then later withdrawn, gradually over 4 months, to make said graph. I would hope this is unused v4 space. That AS announced almost 98% of a /16 (probably 209.24/16): https://stat.ripe.net/AS15562#tabId=routing
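
If anyone wants to reproduce that check without clicking through BGPlay, RIPEstat's public data API can list the prefixes an AS has announced. A minimal Python sketch (the announced-prefixes endpoint is documented by RIPEstat, though the exact response layout here is from memory):

    import json
    import urllib.request

    # RIPEstat data API: prefixes announced by an AS over its default window.
    URL = ("https://stat.ripe.net/data/announced-prefixes/"
           "data.json?resource=AS15562")

    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    prefixes = [p["prefix"] for p in data["data"]["prefixes"]]
    print(len([p for p in prefixes if p.endswith("/24")]), "/24s announced")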

Another user voiced their concerns, particularly if it was actively used: https://news.ycombinator.com/item?id=14621859 -- there's no way any of us could know this; NTT would be authoritative, and jwhois -h rwhois.gtt.net.net -p 4321 209.24.0.0/16 doesn't give any clues.

While the antic made me smirk, it doesn't (publicly) "look good" when we're living in a world that lacks (or has greatly limited) v4 space. What this says is: "NTT has a /16 they're fooling around with publicly", even though it (presumably) is harmless.


You should try TSX (TypeScript JSX). It is 100% first-class code: autocomplete of names and attributes, type safety of attributes, unlimited mixing of TSX and JavaScript, rename refactoring, navigate-to-definition, etc.

It is similar to a function-based approach, but has many advantages that reduce code and increase productivity.

Here is a page of examples from my React TypeScript setup repo:

https://github.com/toldsoftware/boilerplate-react-typescript...


I'm actually surprised that, with such effort, the rate of exposure of hosts was only 40%.

Even so, this seems pretty sensationalist to me. I find it hard to believe an organized service to "scam" Airbnb would be valuable to a host: saving 3-5% at the cost of losing the legitimacy of being part of the Airbnb platform. Furthermore, such a service would at least face the hurdle of violating the site's ToS: https://www.airbnb.com/terms#sec14

Also, all of the numbers in the article assume the Wisconsin data set is representative. The article states that less population-dense areas would be more vulnerable to this algorithm, so why does it extrapolate based on only 84 homes in a more "vulnerable" area? The article also ignores the fact that Airbnb operates in countries that do not have the same laws as the US, so not even all 3 million homes are vulnerable to this method. This article is fishing for a result that will make a good headline.


Finally some Chalmers has graced the front page. Hallelujah!

My parents were born in Iran, and the culture is very family-centric. Much more so than European cultures.

The power of, e.g., MATLAB for engineering disciplines lies in its deep libraries. The optimized performance of the core language notwithstanding, optimizing those libraries takes many people, often graduate students, doctorate holders, and professors.

They said on Hacker News.

Personally, I don't really see the problem. If you want to argue for a different state of things after some event, "post" seems like the correct word, just as you would say "postwar period". You might of course think that the statement itself is pretentious, but at least they argue their case, i.e. the importance of her actions, in the article.


If this is the Jason Kostempski I'm hoping it is - not everything can be as clean as OptiRoute!!

If you're the one that used to own your user@gmail, please drop me a line, email in profile.


It seems to me that accusations of harassment can be as damaging as harassment itself, and that the presumption of innocence applies to both (or all) parties.

hey, author here

Thanks for the feedback.

"Something's not working properly here" - I disagree. The model will overtrain (i.e. perfectly reconstruct the original waveforms of a small training set), which indicates it's capable of learning the necessary transformation. The problem lies in the limited amount of training time I had. To reiterate from an earlier comment, I trained on only 10 epochs, while the paper this is base on claimed to train on 400. Much more training is required for this model to generalize well without degrading the signal-to-noise ratio.


With a minor Emacs Lisp compiler patch (the addition of %return, %goto and %label intrinsics), it is now possible to output Lisp that is optimal.

Possible implementation (about 20 lines of code): https://github.com/Quasilyte/goism/issues/57

Not sure if "defadvice" around "byte-compile-form" is acceptable for all users.


hey, author here

Thanks for the feedback.

"applying a similar technique in the frequency domain", "Maybe training an image reconstructor on the short term spectrogram" - This is what I originally thought to do. However, this approach suffers from information loss whenever you transform from the frequency domain back to the time domain. Since the goal was super-resolution in the time domain, working in the time domain is more sensible.


hey, author here

Thanks for the feedback.

"the reconstructed audio sounded terrible" - I think this is referring to the amount of static noise in the reconstructed waveform. Indeed, the SNR clearly shows the reconstruction is slightly worse than the downsampled waveform. As mentioned in the post, I strongly believe this is due to the limited amount of training I performed. The number of epochs of training data in my case was only 10 while the paper this project is based on trained for 400 epochs. During training I noticed a strong dependence on training epochs and perceptual performance.


I'll do that. Thanks for the answer.

Realised it now. Won't happen again!

So after seeing the profile, I immediately deleted it. Do you think Google will reindex and remove it from search results, or will the stale link still exist?

I am a little curious as to how this factors into fundamental information theory.

In my mind, you are simply taking a 0-2 kHz signal and combining it with an entirely different 0-8 kHz signal that is generated (arbitrarily, IMO) based on the band-limited original data. I can see the argument for having a library of samples as additional, common information (think of many compression algorithms), but it is still going to be an approximation (lossy).

"The loss function used was the mean-squared error between the output waveform and the original, high-resolution waveform." - This confuses me as a performance metric when dealing with audio waveforms.

I think a good question might be - "What would be better criteria for evaluating the Q (quality) of this system?"

THD between original and output averaged over the duration of the waveforms? Subjective evaluations (w/ man in the middle training)? etc...
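
For what it's worth, the two metrics under discussion are cheap to compute side by side. A minimal numpy sketch, assuming original and reconstructed are aligned, equal-length arrays:

    import numpy as np

    def mse(original, reconstructed):
        # Mean-squared error: the training loss described in the post.
        return float(np.mean((original - reconstructed) ** 2))

    def snr_db(original, reconstructed):
        # Signal-to-noise ratio in dB, treating the residual as noise.
        noise = original - reconstructed
        return float(10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2)))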


Throwaway account for obvious reasons. Does anyone have a link to the leaked data?

At this point, avoiding links is pointless, as the source code will be essentially public knowledge in a matter of days/weeks. Damage control is the only strategy left. The sooner security researchers outside Microsoft can start analyzing and reporting vulnerabilities, the better.


That, or they will have someone else read the emails for them... like the NSA. Also, all "cloud" emails older than 6 months can be obtained without a warrant [1].

1. http://www.mcclatchydc.com/news/politics-government/congress...


