> AT&T does have a plan where any provider can sign up to pay for their customers' data use, which is different from waiving costs for certain large, established sites.
It’s like a customer-acquisition cost shouldered by the company, and presumably an option available to all companies.
Zero-rating and the other schemes being discussed privilege a fixed, arbitrary list of companies/websites, which is why they’re bad. Also, the extra cost is shouldered by either the ISP or the customer, so the company/website in question effectively benefits from a negative externality.
It's not great, but it's open to everyone who "just" has money, reducing it to the previously not-quite-solved problem of giving capital to potential upstarts. As a Spotify competitor, you can't just sign up for Binge On, unless perhaps you have connections with people who work at T-Mobile.
> As a Spotify competitor, you can't just sign up for Binge On, unless perhaps you have connections with people who work at T-Mobile.
Binge On is video. The zero-rating program for music is Music Freedom.
If you are a Spotify competitor and want to be included in Music Freedom, you contact T-Mobile at the email address documented on the Music Freedom page. You don't need any inside connections. T-Mobile's stated policy is to get as many music streaming services as possible covered.
For video providers who want to be included in Binge On, there is a different T-Mobile email address to write to, also documented on the T-Mobile site.
For Binge On, the video service can actually choose one of four ways to participate:
1. Do nothing. Their content will not be zero-rated. If a T-Mobile customer has enabled Binge On, T-Mobile will try to optimize the bandwidth usage if it can detect the video.
2. Be zero-rated. T-Mobile detects the video and optimizes bandwidth usage.
3. Be zero-rated. The video service detects when the customer is on T-Mobile and handles the bandwidth optimization itself (see the sketch after this list).
4. Disallow Binge On. Their content will not be zero-rated, and T-Mobile will not try to optimize its bandwidth use for customers who have Binge On enabled.
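To make option 3 concrete, here is a minimal sketch of service-side detection; the carrier check, IP prefixes, and rendition names are all hypothetical placeholders of mine, not anything T-Mobile documents:

    # Toy sketch of option 3: the service detects the carrier and self-limits,
    # instead of letting T-Mobile throttle or transcode the stream.
    TMOBILE_PREFIXES = ("203.0.113.",)   # placeholder, not real carrier ranges

    def on_tmobile(client_ip: str) -> bool:
        # Assumption: in practice this would be an ASN or IP-range lookup.
        return client_ip.startswith(TMOBILE_PREFIXES)

    def pick_max_rendition(client_ip: str) -> str:
        return "480p" if on_tmobile(client_ip) else "1080p"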
I’ve tried joining Stream On (the term for T-Mobile's Binge On and Music Freedom outside the US), but the requirements are insane.
Four weeks before I make any changes to any service I run via Stream On, I need to inform T-Mobile, and they have to approve the changes, or they can end my Stream On participation immediately.
All Stream On content needs to be served from separate hostnames, and the hostname has to be transmitted in cleartext (see the sketch at the end of this comment).
I have to use some form of adaptive streaming, and bandwidth will be throttled so that the user can watch at most 480p video.
A EUR 50,000 fine for each violation on my side; no fines if T-Mobile just kicks me out.
And so on and so on. It’s ridiculous.
Give me a single form where I enter a URL, and it’ll be zero-rated – or don’t zero-rate anything.
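For what it's worth, the separate-hostname requirement is presumably what makes the metering possible at all: the ISP can only classify on what it can see, i.e. the cleartext hostname (e.g. in the TLS SNI field), so zero-rating can only ever be per-hostname, never per-URL. A toy sketch of that accounting decision (the hostname and registry are invented for illustration):

    # The ISP matches the cleartext hostname against a registered list.
    ZERO_RATED_HOSTS = {"streamon-media.example.com"}   # hypothetical registry

    def is_zero_rated(sni_hostname: str) -> bool:
        # Paths and query strings are encrypted, so per-URL rating is
        # impossible; hence the dedicated-hostname requirement.
        return sni_hostname in ZERO_RATED_HOSTS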
> T-Mobile's stated policy is to get as many music streaming services as possible covered.
While that may be a stated policy, it is also obviously a lie.
Obviously, they only want specific kinds of services covered, or else they would just drop this crap in the first place, as that is the only truly straightforward way to make sure that all services are covered.
If I run a bittorrent-based streaming service, they do not want to cover that. It's pure propaganda that they want to cover as many services as possible.
So?! Yes, it probably would be ... but how is it my responsibility that they chose to identify services in a manner that inherently discriminates against certain services? If it were true that they intended to cover as many services as possible, they would not have chosen an identification method that obviously doesn't work for some services ... or, for that matter, they would not have introduced any distinction at all, and instead just increased the data cap, which would automatically work for all services.
Let S/D denote a service that offers a speed of S Mbit/s with a data cap of D GB per month.
Let P(S,D) be the resources required to support that service. In general, for positive x, both of the following hold: (1) P(S+x,D) > P(S,D), and (2) P(S,D+x) > P(S,D).
Let's say a particular ISP has everyone on a 40/10 plan, so they need P(40,10) resources.
Now suppose they decide to offer something like Music Freedom. A person streaming a 256 kbit/s stream 24/7 would use about 90 GB/month.
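A quick sanity check on that figure (256 kbit/s around the clock over a 30-day month comes to roughly 83 GB, which the ~90 above rounds up):

    # 256 kbit/s, 24/7, over a 30-day month
    seconds = 30 * 24 * 3600             # 2,592,000 s
    gb = 256e3 / 8 * seconds / 1e9       # bits/s -> bytes/s -> GB
    print(round(gb, 1))                  # 82.9, i.e. "about 90" above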
The resources required to support their customers are now approximately P(40,10) + P(1/4,90).
If instead they just raise the cap of everyone by 90, the resources required are P(40,100), which is about the same as P(40,10) + P(40,90) [1].
The general cap increase will use around P(40,90) - P(1/4,90) more resources than the Music Freedom approach.
In general, for a given total amount of data transferred per month, the more smoothly that data is spread throughout the month, the fewer resources are needed to handle it.
Music streaming is both smooth and low-speed, so it doesn't require much in the way of additional resources. Video streaming requires more, because it needs more speed, but it is still less than is required to support the same total amount of data as arbitrary file downloads, because arbitrary file downloads have no built-in rate limit.
[1] Not quite. I don't think it is quite true that P(S,D1) + P(S,D2) = P(S,D1+D2). I think the combined will be a little less than the sum of the parts. That's because the resources needed are a function of the average data used, the variation in that, and how often you can have slowdowns due to congestion without getting in trouble with regulators. So I think it is like adding distributions: the variation in the combined distribution will be less than the sum of the variations in the individual distributions (I think...).
For one, internet service generally is, and is accepted to be, oversubscribed. Not to a degree that customers normally notice (well, sometimes it is noticeable, but that's not really acceptable), but there is no guarantee that the nominal bandwidth is available at all times to all customers, in particular on mobile networks. The burstiness of the traffic of individual users is not really a problem for network capacity planning, as a large enough collection of users will have a much smoother traffic pattern than any given individual. Yes, one user's file transfers throughout the day are very bursty. The combined file transfers of a few thousand users are not.
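A toy simulation of that aggregation effect (my own illustration, not anyone's real traffic model): give every user the same bursty on/off pattern and watch the peak-to-mean ratio of the aggregate, which is what capacity planning has to provision for, collapse as users are added.

    import numpy as np

    rng = np.random.default_rng(0)
    slots = 1000                              # time slots in a "day"
    for n_users in (1, 100, 10_000):
        # each user transmits at full rate in a random 5% of slots
        active = rng.random((n_users, slots)) < 0.05
        aggregate = active.sum(axis=0)
        print(n_users, round(aggregate.max() / aggregate.mean(), 2))
    # the peak-to-mean ratio falls from ~20x toward ~1x as users aggregate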
What remains is variation throughout the day, but that also affects streaming services. When no one is transferring files, no one is listening to or watching streams either. So with or without zero-rating, you still have to build more infrastructure than if the same amount of data were transferred completely smoothly throughout the month.
Also, if your goal were to smooth out traffic, certain file downloads should actually be treated preferentially, namely those scheduled for late at night. You should give people free podcatcher downloads at night, when the network is otherwise idle, so they can fetch the stuff they want to listen to during the day, shifting load away from peak hours.
But what I think really makes this a bad argument is the fact that the argument is in no way specific to the approach of treating certain services (or their operators) preferentially. If your goal is to incentivise smooth bandwidth utilization, there is no need to require specific streaming technologies and a contract between the service and the ISP and all that; all you need to do is say that bandwidth use below 256 kbit/s (or whatever the appropriate rate is) is not counted against the cap. That's it.
You simply put a price on the actual network load that you want to (dis-)incentivize and leave it to customers to decide how to make use of the resources you are selling them. There is no reason why Spotify's 256 kbit/s needs to be treated differently from my own server's 256 kbit/s in order to price smooth network load cheaper than non-smooth load. If you want to make things more transparent for the customer, provide them with an app that shows their current data rate, maybe with a switch that enables a "safe mode" (i.e. limiting their data rate to what is not counted against the cap, with some token bucket built in so that occasional web page loads aren't slowed down).
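A minimal sketch of that accounting rule, assuming a token bucket refilled at the free rate; the class and parameter names are mine, purely illustrative:

    import time

    class ZeroRateMeter:
        """Charge against the cap only the traffic above the free rate."""

        def __init__(self, free_rate_bps: float, burst_bytes: float):
            self.rate = free_rate_bps / 8        # free bytes per second
            self.burst = burst_bytes             # allowance for short bursts
            self.bucket = burst_bytes
            self.last = time.monotonic()

        def account(self, nbytes: int) -> float:
            now = time.monotonic()
            self.bucket = min(self.burst,
                              self.bucket + (now - self.last) * self.rate)
            self.last = now
            free = min(nbytes, self.bucket)      # covered by the free rate
            self.bucket -= free
            return nbytes - free                 # bytes counted against the cap

    meter = ZeroRateMeter(free_rate_bps=256e3, burst_bytes=512 * 1024)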
Or, for that matter, if they were actually serious about the smooth-load thing, they could create an internet standard that smartphones (or any other devices) could implement to mark flows that are to be subjected to low-bandwidth shaping. Apps could then just request a "cheap, low-bandwidth socket", and the operating system would make sure that, on any ISP supporting a category of slow, cheap bandwidth, that socket's data transfer is zero-rated, without any need to sign contracts between service providers and (thousands of) ISPs, and without any discrimination against small services or self-hosted stuff.
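The closest existing building block is probably DSCP: RFC 8622 already defines a "Lower Effort" codepoint, so a hypothetical "cheap socket" could be as thin as marking the flow; whether any ISP actually zero-rates on that mark is of course the missing piece.

    import socket

    DSCP_LE = 0x01   # "Lower Effort" per-hop behavior, RFC 8622

    def cheap_socket() -> socket.socket:
        # Hypothetical: mark the flow for slow-but-unmetered treatment.
        # (IP_TOS carries the DSCP in its upper six bits; Linux/macOS.)
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_LE << 2)
        return s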
The collateral damage of the approaches that ISPs are taking is unnecessary for reaching those goals, and everything about how they do it tells you that that is fully intentional.
I always thought Russell's paradox was basically just a formulation of the Liar's paradox in the language of set theory at the time, and that this was no secret. Its significance was not that it shed light on fundamental questions of philosophy, but rather that it poked a hole in what was supposed to be a foundation to all of mathematics.
I'm not familiar with the formal definition of Kolmogorov complexity, nor its related theorems, but informally, it appears to be about the length of the specifying program, not the time it takes to run.
Given that we have a finite domain and a 1:1 function, we should be able to specify f^-1 with some constant overhead on top of embedding f.
Something along the lines of:
    # Embed the definition of f; its description length is the dominant term.
    def f_inv(x):
        for a in DOMAIN:       # the domain is finite, so the search terminates
            if f(a) == x:      # f is 1:1, so the first match is the answer
                return a
Formalizing this would involve specifying what description language you are using and how you encode functions.
Technically not murder, as that requires premeditation. However it is apparent that this is a no-win situation for the victim, the officer is clearly intent on creating a situation for a "justifiable" homicide.
What's just as horrifying is that the jury acquitted.
Having read a bit further, you're absolutely correct; it seems the US has a more nuanced definition of murder, and any of the three definitions of second-degree murder could potentially apply in this case[1]:
- A killing done impulsively without premeditation, but with malice aforethought
- A killing that results from an act intended to cause serious bodily harm
- A killing that results from an act that demonstrates the perpetrator's depraved indifference to human life
As for my original point, that just makes the acquittal even more shocking!
Note that the US does not have a single definition of murder; there are loose common guidelines, but each jurisdiction within the U.S. has its own, slightly different, definition of each form of murder. The specific statutes matter a lot in actual cases even if they get ignored in general discussion, as do the actual charges filed. The fact that another form of murder exists that could have been supported based on the trial evidence will not save a conviction on appeal if the appellate court feels the evidence cannot reasonably support a conviction for the specific form of murder actually charged.
I have seen use-of-force training exercises that look very similar except there is actually a weapon being drawn. I don't think it makes the officer who pulled the trigger innocent, but I believe he behaved exactly the way he was trained, and holding him personally responsible would not prevent this kind of thing from happening again.
Note in particular that the officer talking in the video, who created the whole situation, is not the one who pulled the trigger.
Fair enough. The horrifying part to me was not the moment the shots were fired, but the escalation and screaming of inconsistent instructions by an unmistakably bloodthirsty officer to a man who was clearly scared out of his mind and literally begging for his life.
I watched some random streamer with 3 viewers play Skyrim an hour or so a day for several days, asking him what he liked/disliked about it, before purchasing it.
I feel like this is a very recent change; I've definitely created "burner" gmail accounts in the past and I don't remember providing my cell phone number.
It's pretty easy to imagine a counter-example that would have you traverse the length of the map, when the best route would be to detour on an existing traversal. Consider:
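One concrete instance, using the standard xy2d conversion (adapted from the pseudocode in the Wikipedia article on Hilbert curves): on a 256x256 grid, the spatially adjacent cells (127, 0) and (128, 0) lie roughly 54,000 steps apart in curve order, so visiting points in curve order can send you across the whole map between two near-neighbours.

    def rot(n, x, y, rx, ry):
        # rotate/flip a quadrant as the curve recursion requires
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        return x, y

    def xy2d(n, x, y):
        # index of cell (x, y) along the Hilbert curve filling an n x n grid
        d, s = 0, n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            x, y = rot(n, x, y, rx, ry)
            s //= 2
        return d

    print(xy2d(256, 127, 0), xy2d(256, 128, 0))   # 5461 vs 60074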
I tried describing here https://www.reddit.com/r/programming/comments/6oxra8/using_h... why, even though I think this is theoretically a TSP, I don't consider that necessarily the best way to treat it. tl;dr: at least for me, at the time, for this problem, the speedup of an optimal solution wouldn't have outweighed the additional thinking time needed to get to that optimal solution (and, most importantly, to actually make it useful; after all, I wasn't actually drawing a path, I was outputting a list of names).
You can even see that in the argument I added to the post; while a zig-zag line might have been, in theory, a stupider, easier way to solve the problem, in practice it would have meant spending some time thinking about the right discretization. With a Hilbert curve, I could just choose some n that is definitely large enough and be done with it, and the additional cost of the more complicated curve doesn't really factor in, as I could just copy-paste it anyway.
But yes. The theoretical problem is a TSP and with the right set of tools, I could've added some efficiency to the search by viewing it as such.
I think it is a TSP only if you start at the first point in the ordered list. If you start at any other point, you will be going in only one direction (top-bottom/left-right or bottom-top/left-right) (see how the space is filled).
Given the precision of the points, lexicographical order on (x,y) would essentially be lexicographical order on just x, which then induces a lot of jumping around the map.
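A tiny illustration: with real-valued coordinates, ties on x essentially never happen, so y never gets a say, and consecutive points in the ordering can be anywhere vertically.

    # three points with nearly identical x: lexicographic order ignores y
    points = [(0.12347, 0.80), (0.12345, 0.90), (0.12346, 0.05)]
    print(sorted(points))
    # [(0.12345, 0.9), (0.12346, 0.05), (0.12347, 0.8)] -- y jumps wildly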
Basically, what others have replied is true. But, in essence, yes, I now believe that this would've worked too, if you choose an appropriate discretization of the grid. Hilbert curves probably were simply the hammer I had lying around. In the end, it didn't really matter, given that the actual Hilbert curve part was mostly copy-pasted, but at least it gave me the opportunity to actually use them (and learn from it).
Yes, but we wouldn't actually be walking the whole snake, just as we are not walking the whole Hilbert curve. We are visiting the point cloud in the order dictated by either curve, and those orders will, in general, differ.
Perhaps for now, but the ubiquity of smartphones could also bring them back in new and interesting ways. The most recent runaway hit was Pokemon Go, and though it was clearly a fad, it may have had more staying power if the social aspects were stronger, e.g. direct PvP battling.
Caveat: Take what I say with a grain of salt; I've never built a game in my life.
The mouse genome is only 160 megabytes, and contains the instructions for building the brain as well as building everything else, so the "secret sauce" of how to make an intelligent brain should not be extremely large, once you figure out how to do it. :) A lot of the actual connections must be either random, or encoding things the mouse learnt while growing up.
There are four bases, so one base encodes two bits of information. Eight bits are one byte, so four bases are one byte. 2500 megabases = 625 megabytes. So yeah, Parent was off by a factor of about 4 :) . But still, that fits on one CD.
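The conversion, spelled out:

    megabases = 2500                  # mouse genome, ~2.5 Gbp
    bits_per_base = 2                 # log2(4 bases) = 2 bits
    megabytes = megabases * bits_per_base / 8
    print(megabytes)                  # 625.0 -- fits on a 700 MB CD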
Except that currently genomics requires even more information to be encoded - such as quality scores, allele frequencies, phase information, ... - so, depending on the format, this estimate is off by either one or two orders of magnitude still.
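For a rough sense of scale (these numbers are illustrative assumptions, not from the thread): FASTQ-style raw storage keeps about a quality byte per base and typically 30x or more coverage, which already puts you a couple of orders of magnitude above the 2-bits-per-base figure.

    genome_mb = 625          # 2-bit-packed mouse genome, from above
    coverage = 30            # typical sequencing depth (assumption)
    bytes_per_base = 2       # base char + quality char, roughly
    raw_mb = 2500 * coverage * bytes_per_base
    print(raw_mb / genome_mb)          # 240.0, i.e. ~2.4 orders of magnitude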
Take a bunch of source code. Compile it, obfuscate it, compress it, and encrypt it with AES such that the result is a 160MB blob. Now see how long it takes to figure out what it does, given a computer that costs a lot of money just to load your program and a long time to give you a result. The upper bound on the complexity of DNA as it relates to the complete expression of an organism phenotype is insanely high.
Most developers think of the genome as a big load of source code: if only we could work out where the if statements and for loops were, we could read it. This is an extremely naive and overconfident point of view; the analogy between source code and genomes is very poor. The genome codes for proteins (by way of RNA). Those proteins are subject to all of physics (think: electrostatics, hydrophobic interactions, ...), whereas your code is an abstract entity designed to run on a rather simple analogue of a Turing machine. The complexity of life is much greater, I am afraid. Though that never seems to stop developers assuming that they can create a crude analogy which explains it. Also, the size is totally wrong; see the previous comment.
It's true that genes and proteins is nothing like code, but in the context of understanding the brain, I think that should be cause for optimism, because it means that nature has its hands tied behind its back. The genes can't just contain a description of how the brain should be wired together, because the description also has to be "self-executing"; the entire object must robustly self-assemble just from proteins physically interacting. So although 700 megabytes of mouse genes could potentially contain a lot of stuff, it might be possible to do the same thing much more simply if we can program a digital computer instead.
Like, the connectome for C. elegans has been mapped out; it can be written down as a 2 megabyte ASCII text file. The connectivity alone is not enough to actually reproduce the behavior of the worm, you would also need data about the weight of each connection, but it's still a lot less data than the worm genome (about 25 megabytes---I hope I got the number right this time!). The worm genes also need to contain a lot of additional stuff to build functioning cell internals, etc., stuff which hopefully is irrelevant to the actual cognition.
> whereas your code is an abstract entity designed to run on a rather simple analogue of a Turing machine.
I cannot adequately put the insane laugh required as response to that into text form. So I will only write this and be just as right: going by physics the brain of a mouse can be adequately approximated by a perfect sphere.
The definition of a Turing machine is mathematically perfect. No threading, no IO, no error correction, no errors, no asynchronous events, no processes fighting over shared resources, no resources that might disappear in the blink of an eye; in short, no nothing. In that, it is equivalent to a spherical brain: any complexity relevant to the problem at hand has been removed.
You're making things far too complex, and confusing the issue, and yourself, as a result. Let's turn to the first sentence from Wikipedia:
"A Turing machine ... manipulates symbols on a strip of tape according to a table of rules"
Whichever programming language you are fond of ultimately reduces to this mode of computation. However, with DNA, RNA, and Proteins, that is not the case. The way that we compute is simplistic compared with the way that biology computes. Thus: the crude analogy in fact hinders understanding, and should be discarded.
the "learning" can come from things like the compounds in various food and so on! "learning" is, in a generalized sense, any non-genetically-bootstrapped environment->body information transference...
Wait, how is that different?