Doing `html('<div>Hello {name}</div>')` would be possible. I have a system that's based on it. Two issues:
- No tooling in Python will do anything with the string, so DX suffers.
- Evaluating the variable requires frame evaluation (inspecting the caller's locals), which is... problematic.
You could do `html(f'<div>Hello {name}</div>')` and get f-string coding assistance. But you'd also get immediate evaluation. There's nothing the `html` function can do with the `{name}` part.
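For what it's worth, here's a minimal sketch of how the frame-evaluation version can work; the `html` name matches the example above, but the implementation is my illustration, not the actual system:

```python
import sys
from string import Formatter

def html(template):
    # Resolve {name} placeholders against the *caller's* variables.
    # This is the frame evaluation mentioned above: it depends on the
    # CPython-specific sys._getframe, and no editor or type checker
    # can see into the plain template string.
    caller = sys._getframe(1)
    namespace = {**caller.f_globals, **caller.f_locals}
    return Formatter().vformat(template, (), namespace)

name = "World"
print(html('<div>Hello {name}</div>'))  # <div>Hello World</div>
```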
If you do that, people outside the package can also do `Username(x)` conversions instead of calling `NewUsername`. Making the value package-private means that it can only be set from outside the package through the functionality the package provides.
I don't think most people make a special trip just to get gas. It's usually a quick stop added to another trip, which is less of a nuisance than having to make two small walks to move and plug/unplug your car.
The GP was responding to a point about the closure of gas stations accelerating the decline of ICE vehicles. If all the stations near where you normally travel close, then you are forced to make a special trip to fill up.
The last gas stations standing won't be in remote areas far from everything else. They'll be in heavily trafficked areas that everybody goes to regularly, particularly near grocery stores.
Even if like 80% of gas stations close, the remaining 20% will be situated along major transportation arteries. I expect there will be a gas station somewhere along someone's regular weekly driving route for all but the most remote dwellers.
And in those cases plugging in your EV amounts to no net gain or loss, but there absolutely are people who already have to make a dedicated trip to the gas station because their own town where they do their shopping simply doesn't have one.
No, we're not. We're talking about Norway as a whole, a country where, for the vast majority of people, "urban centers" are few and far between, with an anecdote that highlights that even in urban settings there are areas where it's apparently easier to drive an EV than an ICE vehicle.
They show the DFA for it on the site; it's 3 states. There's a starting state for the first `.`, and then two states that transition back and forth depending on whether `z` was the last character or not.
I think what's actually happening here is that they're taking the intersection of the two DFAs and then producing a regex from the resulting DFA. The construction of a regex from a DFA is where things get ugly and weird.
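The intersection itself is the easy part: the standard product construction just runs both DFAs in lockstep. A minimal sketch, using a hypothetical dict-based DFA representation (not whatever the site actually uses):

```python
def intersect(dfa1, dfa2):
    # Product construction: states of the result are pairs (s1, s2),
    # and a pair accepts iff both components accept. Each DFA is a
    # dict with 'start', 'accept' (a set), 'alphabet', and a 'delta'
    # map from (state, symbol) to the next state.
    start = (dfa1["start"], dfa2["start"])
    delta, accept = {}, set()
    todo, seen = [start], {start}
    while todo:
        s1, s2 = state = todo.pop()
        if s1 in dfa1["accept"] and s2 in dfa2["accept"]:
            accept.add(state)
        for a in dfa1["alphabet"]:
            nxt = (dfa1["delta"][(s1, a)], dfa2["delta"][(s2, a)])
            delta[(state, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return {"start": start, "accept": accept,
            "alphabet": dfa1["alphabet"], "delta": delta}
```

Converting the resulting DFA back into a regex (e.g. by state elimination) is the step that produces the ugly, weird output.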
> Edit: it is trivial to construct an algorithm with this property.
It's actually impossible to construct an algorithm whose runtime decreases as the input size grows. You can do this for finitely many examples, but we're talking about asymptotics here. With discrete runtimes and a lower bound of 1 operation, the runtime has to stop decreasing at some point and just stay at the same constant value for all but finitely many exceptions. That makes it a constant-time algorithm.
A constant runtime is a counterexample to that caption though, as the problem doesn't take longer as the input grows. An example would be checking whether a number is divisible by 2: you only need to check 1 bit, so it doesn't take any longer as you add more bits that the algorithm never touches.
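Concretely (Python integers are arbitrary-precision, so the input size genuinely grows in bits):

```python
def is_even(n: int) -> bool:
    # Only the least-significant bit is examined; adding more high
    # bits to n adds no work.
    return n & 1 == 0
```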
I guess even if you allow probabilistic algorithms and expected runtime, you still can't have an algorithm that runs faster as the input size grows. Because... it takes O(log n) time to sample a random variable whose expected value has a denominator of size n? I'm not entirely sure, but that seems right to me.
With probabilistic algorithms, you still have the fundamental limitation that there's a lower bound on how fast an algorithm can run. You can't keep getting faster indefinitely.
Having a constant runtime means there's a constant bound on the runtime, not that the runtime is an exact constant value. 100 + 1/n would still be constant, as it's bounded above and below by constants.
To have sub-constant time it would have to go to 0 as n increases, as otherwise we could bound the time below by a constant. This isn't possible, as a computer will take at least 1 time unit on every input. So the expected time will always be at least 1.
Maybe not! With O(log N), such as binary search, you're assuming some sort of structure to the data. It's not necessary for an algorithm to scan the whole input (otherwise O(N) would be as good as it gets).
What if we devise a data structure that becomes more efficient as it grows at performing a certain (perhaps useless, but that's OK!) operation?
For example, at N=0 we have a data structure with a million disconnected nodes, one of them being the target node.
As N increases, add a node with a pointer to the target node. The algorithm to find the target node is a random search through the nodes until it finds one with a pointer, then it halts. As N increases, time on average decreases, with complexity O(1-N/(N-K)) where K=1000000.
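A quick simulation of that setup (K and the trial counts are my own choices) shows the average probe count falling as N grows; the expected number of probes is (N+K)/N = 1 + K/N:

```python
import random

K = 1_000_000  # the disconnected nodes

def probes(n):
    # Probe uniformly at random until we hit one of the n nodes that
    # carry a pointer to the target, then halt.
    count = 1
    while random.randrange(n + K) >= n:  # miss: a disconnected node
        count += 1
    return count

for n in (10_000, 100_000, 1_000_000):
    avg = sum(probes(n) for _ in range(200)) / 200
    print(n, round(avg, 1))  # tends to 1 + K/n, which approaches 1
```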
You can't have that, because when N=K the expression isn't valid, and when N>K you're putting a negative number inside the big-O notation. You can't have negative runtime.
All algorithms have a lower bound on runtime of 1. If a sequence is decreasing and bounded below, it converges to a value [1]. If the sequence is discrete, that means it eventually reaches that limit exactly and never deviates from it afterwards.
I'm not sure what your big-O expression is supposed to mean, but if a sequence is decreasing and bounded below (which runtime is) then it has a limit. Since the sequence is discrete, it can't get arbitrarily close to the limit without reaching it. So at some point the runtime will stop decreasing.
You can approximate the runtime with a function that decreases forever, but the actual runtime will be constant for large enough n.
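A toy illustration of that last point: a non-increasing integer sequence bounded below has to flatten out, even if a formula describing it decreases forever:

```python
# max(1, 100 - n) decreases for a while, then sits at its floor of 1
# for every n >= 99: constant runtime with finitely many exceptions.
runtime = [max(1, 100 - n) for n in range(1, 200)]
assert all(a >= b for a, b in zip(runtime, runtime[1:]))  # non-increasing
assert all(t == 1 for t in runtime[99:])                  # constant tail
```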
“Tend to” is a mathematical term meaning “get closer to”. Formally, for any e as small as you like, there exists an N beyond which the sequence stays within e of the value being approached.
This is possible because we are sampling a probability distribution. A die is discrete, but the probability of rolling a 1 tends to zero as the number of sides increases, even though there is a whole “1” face on any given die.
An algorithm can't have that property for its expected runtime, as it can only look at a sub-constant amount of the input if it runs in sub-constant time. It's not possible for it to behave differently as n increases, because it doesn't have enough time to distinguish between different values of n once n is large enough.
You could use just the asymptote and call it "constant time".
But that's an extremely limited and misleading analysis, so you should not do that. If the time taken goes from a quadrillion to 7 as the problem size grows, don't call it constant.
"constant time" in complexity theory just means there's a constant bound on runtime. It doesn't have to actually have the exact same runtime down to the instruction for every input. Here, the bound would be a quadrillion.
Of course, showing a constant upper bound doesn't rule out it being even faster than constant, as in the proposition I was responding to. That's why I focused on the constant lower bound.
I know what it means, and I stand by it being limited to the point of misleading in a case like this.
Running any (halting) algorithm on a physical computer is constant time, because you're bound by the number of states you can fit into some terabytes, but nobody should actually try to use that as a final analysis.
> Running any (halting) algorithm on a physical computer is constant time, because you're bound by the number of states you can fit into some terabytes, but nobody should actually try to use that as a final analysis.
That's the sort of thing I would usually say to explain why we do things like use asymptotic analysis and ignore polynomial factors. If you have a different solution, that's great, but you're no longer talking about the kind of runtime the article is about.
It depends on how abstract you want to be. You shouldn't always shove the lever all the way to the most abstract single value.
My suggested solution is "don't delete all nuance". If the runtime doesn't behave nicely, give that a mention. Most things do behave nicely, so that's not much of a burden.
> the kind of runtime the article is about
The article spends plenty of time talking about computational feasibility, which is not about the limits as you approach infinity.
It even equates P with "trivial", which is... not good.
I think the key thing you're missing here is that they were talking about behavior as the number of flips goes to infinity. That long improbable upper end becomes more and more improbable over time. As time goes to infinity, the probability of being in it goes to 0. With a finite population, everyone is ruined.
I ran the same simulation as you. After 100 iterations, the richest player had $71. After another 1,000 iterations, the richest had $78 (of a total $133). After another 10,000 iterations, the richest had 10^-165 dollars. They were still an extreme outlier, holding 95% of the total wealth, but they had almost nothing.
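I don't know the parent's exact rules, so here's a minimal sketch of the usual multiplicative version (+50% on heads, -40% on tails per flip; those payoffs are my assumption), which reproduces the same pattern: a positive expected value per flip, yet the typical trajectory collapses, leaving one outlier holding most of a vanishing total:

```python
import random

def richest(n_players=1000, flips=10_000, seed=0):
    # Each flip multiplies a player's wealth by 1.5 or 0.6.
    # E[multiplier] = 1.05 > 1, but the typical product over many flips
    # is (1.5 * 0.6)**(flips / 2) -> 0, so almost everyone is ruined.
    rng = random.Random(seed)
    wealth = [1.0] * n_players
    for _ in range(flips):
        wealth = [w * (1.5 if rng.random() < 0.5 else 0.6) for w in wealth]
    top, total = max(wealth), sum(wealth)
    return top, top / total

for flips in (100, 1_000, 10_000):
    top, share = richest(flips=flips)
    print(f"{flips} flips: richest ${top:.3g}, {share:.0%} of the total")
```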
The first half is wrong too. GPS has nothing to do with satellites being able to see lots of things from high up. The whole thing is just nonsense that looks plausible as an explanation until you try to decipher it.
Achieving simplicity by glossing over details is distinct from achieving simplicity by stating something incorrect.
Which one is more appropriate and/or typical will probably depend on what questions the five-year-old is asking, but I think it's reasonable to say the former is usually preferable to the latter.
I didn't even notice that bit (maybe GPT is a flat-earther?).
Something GPT discourse has been demonstrating to me is that I'm not usually a very careful reader. I apparently skim a lot. Or maybe I skim GPT outputs because I'm biased in my expectations already?