I don't know many details about the CIA's black budget but I'd imagine it can be used to quickly gather hundreds of thousands or millions of dollars for especially promising informants.
Based on the numbers in the article, like the doubling of their "large business" user base in the past year, I think Diane Greene will lead them to generate huge revenue in the enterprise sector. I also think freemium versions of Google software, directed at consumers, will be increasingly popular in the future.
A confirmation email doesn't solve the problem if the user mistyped the address to begin with. You're right, they wait for a confirmation email that never comes.
Using a third-party service like BriteVerify's REST API is the only way to get an actual verification that the inbox exists.
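For illustration, a minimal sketch of what calling such a verification service looks like. The endpoint URL, query parameters, and response fields below are hypothetical placeholders, not BriteVerify's actual API; check their documentation for the real contract.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical verification endpoint and response shape -- NOT the real
# BriteVerify contract; consult their API docs for actual URLs and fields.
VERIFY_URL = "https://api.example.com/v1/verify"

def is_deliverable(response_body: str) -> bool:
    """Interpret a (hypothetical) JSON verification response."""
    result = json.loads(response_body)
    # Assumed convention: "valid" means the mailbox exists and accepts mail.
    return result.get("status") == "valid"

def verify_email(address: str, api_key: str) -> bool:
    """Ask the (hypothetical) service whether the mailbox exists."""
    query = urlencode({"address": address, "apikey": api_key})
    with urlopen(f"{VERIFY_URL}?{query}") as resp:  # network call
        return is_deliverable(resp.read().decode())

# The response-parsing logic can be exercised without the network:
print(is_deliverable('{"status": "valid"}'))    # True
print(is_deliverable('{"status": "unknown"}'))  # False
```

The point being: only the mail server for the domain (or a service that probes it) can tell you whether the mailbox actually exists; a well-formed address that passes regex checks proves nothing.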
This kind of separation happens quite a bit within immigrant communities, especially nationalities that have long waiting lists for green cards, let alone undocumented immigrants. Due to some quirks in the visa system, you have to leave the country in order to change your status, which runs the risk of delays or outright rejection. When the parents are on separate visas, one can be let through with the kids while the other is stuck indefinitely. Oftentimes that means the parent not granted a visa becomes an undocumented immigrant in the host nation, while the other is forced to return home because all of their belongings and financial obligations/jobs are in the US. If they can't find a job in the host nation, the stuck parent often moves back to their support network in their birth country, further complicating things.
So this would only apply when a specific sequence is required.
By analogy, if you were making a jigsaw puzzle, the first piece could be cut randomly. However, each following piece would have to fit with the preceding pieces, and each additional piece would grow in specificity.
So for a cell, it is true that there could potentially be a large number of proteins that could prove useful. However, when one protein requires 99 other protein types that "fit" with it in the puzzle of a single cell, a specific sequence is required, and it becomes more and more precise with each additional protein.
In addition, since there is a need for thousands of copies of each protein type, there is also a need for factory proteins of even greater complexity.
Every life form on Earth has these protein factories built in, reading the RNA to create specific proteins.
Although other protein combinations might potentially create an alternative functional protein factory, any protein factory would have many interacting parts, each requiring increased specification as it is included in the design.
In addition, it would be hard for a protein factory to function without a healthy cell holding all the necessary parts close together.
Another analogy would be sudoku.
With a blank board, it is possible to put any number anywhere.
However, as the game gets closer to completion, it requires a specific answer for each square.
If you randomly put a single digit in each square, the likelihood of getting a correct solution would follow from:
- 9^81 ≈ 2e77 (possible random configurations of the digits 1-9 across all 81 squares)
- ≈ 6.67e21 correct solutions
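These odds can be checked directly, assuming the standard figures for a blank 9x9 board: 81 squares, 9 choices per square, and the well-known count of roughly 6.67e21 valid completed grids.

```python
# Checking the sudoku odds (assumed standard figures for a blank 9x9 board).
total_fills = 9 ** 81  # each of the 81 squares can hold any digit 1-9
valid_grids = 6_670_903_752_021_072_936_960  # known count of valid solutions

probability = valid_grids / total_fills
print(f"total random fills: {total_fills:.2e}")
print(f"valid grids: {valid_grids:.2e}")
print(f"chance a random fill is a valid solution: {probability:.1e}")
```

So a blind random fill succeeds with probability on the order of 1 in 10^56.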
So it would be like this for a random single cell:
- Number of possible amino acid combinations for ~100,000 proteins of average length ~50.
- Number of protein combinations that would function as a living cell
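A back-of-envelope sketch of the first of those two magnitudes, assuming the 20 standard amino acids and the figures quoted above (~100,000 protein types of average length ~50). Note it only sizes the sequence space; the second quantity (how many combinations would function as a living cell) is unknown.

```python
import math

# Assumed figures: 20 standard amino acids, ~100,000 protein types,
# average protein length ~50 residues.
AMINO_ACIDS = 20
AVG_LENGTH = 50
PROTEIN_TYPES = 100_000

# log10 of the sequence space for one average-length protein (20^50)
per_protein = AVG_LENGTH * math.log10(AMINO_ACIDS)

# log10 of the combined space across all ~100,000 protein types
whole_cell = PROTEIN_TYPES * per_protein

print(f"one protein: ~10^{per_protein:.0f} possible sequences")
print(f"all protein types combined: ~10^{whole_cell:,.0f} possible sequences")
```

Working in log10 avoids computing numbers with millions of digits; the exponents alone make the comparison.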
The question is whether the /16 (which most of the advertised/withdrawn /24s fall within) was used by actual devices, or whether it's address space NTT has yet to use. If it is assigned to NTT but unused, then effectively no harm done. If it's actively used IP space, then that would be very inappropriate.
I couldn't care less about the IPv6 prefixes, but the IPv4 ones are all /24s carved from 209.24.0.0/16, which is registered to NTT (AS2914). 209.24/16 is publicly announced (and has been for a very long time), and is routed through NTT Amsterdam routers.
I haven't looked at BGPlay to review all the data, but it looks like many of the /24s that make up that /16 were individually announced through AS15562, then later withdrawn, gradually over 4 months, to make said graph. I would hope this would be unused v4 space. That AS announced almost 98% of a /16 (probably 209.24/16): https://stat.ripe.net/AS15562#tabId=routing
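For scale: a /16 contains 256 /24s, so "almost 98%" works out to roughly 250 individually announced prefixes. A quick sketch with Python's ipaddress module (209.24.0.0/16 here is the guess from the data above, not confirmed):

```python
import ipaddress

# The /16 in question is presumably 209.24.0.0/16 (unconfirmed).
block = ipaddress.ip_network("209.24.0.0/16")

# Enumerate the /24s contained in the /16.
subnets = list(block.subnets(new_prefix=24))
print(len(subnets))                # 256 /24s in a /16
print(round(0.98 * len(subnets)))  # ~251 prefixes at "almost 98%"
```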
Another user voiced their concerns, particularly if it was actively used: https://news.ycombinator.com/item?id=14621859 -- there's no way any of us could know this; NTT would be authoritative, and jwhois -h rwhois.gtt.net -p 4321 209.24.0.0/16 doesn't give any clues.
While the antic made me smirk, it doesn't (publicly) "look good" when we're living in a world that lacks (or has greatly limited) v4 space. What this says is: "NTT has a /16 they're fooling around with publicly", even though it (presumably) is harmless.
It is similar to a function-based approach, but has many advantages that reduce code and increase productivity.
Here is a page of examples from my React TypeScript setup repo:
Even so, this seems pretty sensationalist to me. I find it hard to believe an organized service to "scam" Airbnb would be valuable to a host: saving 3-5% at the cost of losing the legitimacy of being part of the Airbnb platform. Furthermore, such a service would at least face the hurdle of violating the site's ToS: https://www.airbnb.com/terms#sec14 .
Also, all of the numbers in the article assume the Wisconsin data set is representative. The article states that less population-dense areas would be more vulnerable to this algorithm, so why does it extrapolate based on only 84 homes in a more "vulnerable" area? The article also ignores the fact that Airbnb operates in countries that do not have the same laws as the US, so not even all 3 million homes are vulnerable to this method. This article is fishing for a result that will make a good headline.
Personally, I don't really see the problem. If you want to argue for a different state of things after some event, "post" seems like the correct word, just like you would say "post-war period". You might of course think that the statement itself is pretentious, but at least they argue their case, i.e., the importance of her actions, in the article.
If you're the one that used to own your user@gmail, please drop me a line, email in profile.
Thanks for the feedback.
"Something's not working properly here" - I disagree. The model will overtrain (i.e., perfectly reconstruct the original waveforms of a small training set), which indicates it's capable of learning the necessary transformation. The problem lies in the limited amount of training time I had. To reiterate from an earlier comment, I trained for only 10 epochs, while the paper this is based on trained for 400. Much more training is required for this model to generalize well without degrading the signal-to-noise ratio.
Possible implementation (about 20 lines of code): https://github.com/Quasilyte/goism/issues/57
Not sure if "defadvice" around "byte-compile-form" is acceptable for all users.
"applying a similar technique in the frequency domain", "Maybe training an image reconstructor on the short term spectrogram" - This is what I originally thought to do. However, this approach suffers from information loss whenever you transform from the frequency domain back to the time domain. Since the goal was super-resolution in the time domain, working in the time domain is more sensible.
"the reconstructed audio sounded terrible" - I think this is referring to the amount of static noise in the reconstructed waveform. Indeed, the SNR clearly shows the reconstruction is slightly worse than the downsampled waveform. As mentioned in the post, I strongly believe this is due to the limited amount of training I performed: only 10 epochs, while the paper this project is based on trained for 400. During training I noticed a strong dependence of perceptual performance on the number of training epochs.
In my mind, you are simply taking a 0-2 kHz signal and combining it with an entirely different 0-8 kHz signal that is generated (arbitrarily, IMO) from the band-limited original data. I can see the argument for having a library of samples as additional, common information (think of many compression algorithms), but it is still going to be an approximation (lossy).
"The loss function used was the mean-squared error between the output waveform and the original, high-resolution waveform." - This confuses me as a performance metric when dealing with audio waveforms.
I think a good question might be - "What would be better criteria for evaluating the Q (quality) of this system?"
THD between original and output averaged over the duration of the waveforms?
Subjective evaluations (w/ man in the middle training)?
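For reference, a small sketch of how the two metrics already in play here (MSE as the loss, SNR as the evaluation) relate on a toy waveform pair, using the usual 10·log10 power-ratio convention for SNR in dB:

```python
import math

def mse(original, estimate):
    """Mean-squared error between two equal-length waveforms."""
    return sum((o - e) ** 2 for o, e in zip(original, estimate)) / len(original)

def snr_db(original, estimate):
    """Signal-to-noise ratio in dB: signal power over error power."""
    signal_power = sum(o ** 2 for o in original) / len(original)
    noise_power = mse(original, estimate)  # error power IS the MSE
    return 10 * math.log10(signal_power / noise_power)

# Toy example: a sine wave and a slightly noisy copy of it.
original = [math.sin(2 * math.pi * t / 100) for t in range(1000)]
estimate = [s + 0.01 * math.cos(t) for t, s in enumerate(original)]

print(f"MSE: {mse(original, estimate):.2e}")
print(f"SNR: {snr_db(original, estimate):.1f} dB")
```

This also shows why MSE can mislead perceptually: SNR is just a rescaled MSE, so both penalize all errors equally across frequencies, whereas human hearing does not.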
At this point avoiding links is pointless, as the source code will be essentially public knowledge in a matter of days/weeks. Damage control is the only strategy left. The sooner security researchers outside Microsoft can start analyzing and reporting vulnerabilities, the better.