This is an uninformed take which is just ahistorical, as evidenced by the fact that just 30 years ago a lot of European cities were very much car-centric and were much more awful by pretty much every metric than they are today.
About one in every two households in Amsterdam owns a car. I would bet many that don't still use taxi services now and again, and almost everybody relies indirectly on the roads used for deliveries, tradesmen, and emergency services. And that's about the best case for a European city. It's comparable to New York City.
Paris, London, Prague -- higher.
The reality is that these romantic notions people have of cities not relying on cars or roads are an unrealistic fantasy. Yes, it's certainly nice if you have good public transport that many people can use for most of their travel. The reality is, though, that even in the very best cases like Amsterdam and Tokyo, personal car ownership rates are still enormous, and the cities would cease to function without small private vehicles for commercial operations, let alone without the roads used for garbage collection, buses, emergency vehicles, etc.
I haven't walked around those cities. I've walked around certain areas of London and Tokyo as a tourist, mostly central areas that are well served by public transport and very dense, without any real understanding of what it actually takes to live, work, raise children, or anything else in those places. I certainly saw a lot of roads and cars, particularly when looking out the windows of trains and buses at the places people actually live.
> Cars certainly exist, but they are clearly not used by default for everything.
Sure, I've noticed that a bunch of American cities I've been to are more difficult to walk or take public transport in than any big ones in Europe or Asia I've been to.
They're all automobile-centric though. You'll never get rid of cars, taxis, trucks, buses, or roads -- not in any of them. Better walking, cycling, and public transport is great; it's just never going to do away with the car, nor is doing away with cars and roads ever going to solve any of the problems cities have. There should be more honesty and pragmatism around this.
Everybody likes to drive. It's far more private, personal, and free than even the best public transport, which is why a country like Japan, with some of the best public transport on earth, has a car ownership rate of about one per household -- and even in Tokyo it's 33% of households, with a large taxi industry.
It astounds me that so many on the left seem to be converging on the opinion that reducing, eliminating, or taxing private transportation for the common person should be a goal.
Public transport is great and should be improved or encouraged, and road area usage should be as efficient as possible. But private transportation should be made cheaper and more convenient and accessible to as many people as possible. That is a great way to improve quality of life and expand opportunities for people.
I hate driving but every time I get on my bicycle I remember how fun it is... until a car starts driving behind me impatient to get somewhere a few seconds faster. I can bike to many places in my town faster than the drivers can drive if I put some effort into pedaling.
I don't mean everybody likes the act of driving; I mean everybody likes what driving [a private vehicle] can do for them. I didn't mean even that in the absolute literal sense, but anecdotes from people who don't have a car, never get driven by friends, and never take taxis or Ubers anywhere would be interesting, and would probably help us understand why virtually everybody likes driving.
I don't think anyone literally likes driving. Some people like racing; some people like drifting. But who the heck likes getting in their car, waiting at stop signs and lights, staring down a road for hours, and wondering whether the oncoming vehicle veering left is going to turn on its turn signal or straighten out?
I guess it is nice to know that he is also not perfect. But it’s still the case that his accomplishments outshine my own, so my imposter syndrome remains intact.
I don't know what they were doing, but I tried o1 with many problems after I solved them already and it did great. No special prompting, just "solve this problem with a python program".
When I worked as a postdoc, I wasn't paid much, but I got a direct return on extra work: another paper, etc. When I got my first job as a data scientist, I was paid much more, but there was seemingly little response to extra work. It was distressing. But later I learned that making a good impression on people would pay off long term, through recommendations for new jobs and so on.
The first thing it makes me want to do is qualify success rates among individuals, e.g. investors. Some are quite successful, but are they more so relative to what you'd expect given equal randomness?
>> Some of the problems don’t matter as much if your goal for the model is just prediction, not interpretation of the model and its coefficients. But most of the time that I see the method used (including recent examples being distributed by so-called experts as part of their online teaching), the end model is indeed used for interpretation, and I have no doubt this is also the case with much published science. Further, even when the goal is only prediction, there are better methods, like the lasso, for dealing with a large number of variables.
I use this method often for prediction applications. First, it’s a sort of hyperparameter selection, so you should obviously use holdout and test sets to help you make a good choice.
Second, I often see the method dogmatically shut down like this in favor of the lasso. Yet every time I have compared the two, they give similar selections — so how can one be “evil” and the other so glorified? I prefer the stepwise method, though, as you can visualize the benefit of adding each additional feature. That can help guide further feature development — a point that I’ve seen significantly lift the bottom line of enterprise-scale companies.
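For anyone who wants to run the comparison themselves, here's a rough sketch of the kind of experiment I mean, on synthetic data. It's my own toy setup, not anyone's real analysis: scikit-learn's SequentialFeatureSelector stands in for classic stepwise, and all the sizes and parameters are illustrative.

```python
# Toy comparison of forward stepwise selection vs. the lasso on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LassoCV, LinearRegression

# 20 candidate features, only 5 of which actually drive the response.
X, y, coef = make_regression(
    n_samples=500, n_features=20, n_informative=5,
    noise=5.0, coef=True, random_state=0)

# Forward stepwise: greedily add features by cross-validated fit.
sfs = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward", cv=5)
sfs.fit(X, y)
stepwise_selected = set(np.flatnonzero(sfs.get_support()))

# Lasso: let the L1 penalty zero out uninformative coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
lasso_selected = set(np.flatnonzero(np.abs(lasso.coef_) > 1e-6))

true_features = set(np.flatnonzero(coef))
print("stepwise:", sorted(stepwise_selected))
print("lasso:   ", sorted(lasso_selected))
print("truth:   ", sorted(true_features))
```

In my experience the two lists tend to overlap heavily when the signal is this clean, which is the point I was making.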
> Yet every time I have compared the two they give similar selections — so how can one be “evil” and the other so glorified?
Frequentist and Bayesian approaches often yield similar results but are philosophically different. In general I favor and recommend the lasso because I see it perform as well as or better than stepwise at variable selection without all the baggage.
The lasso avoids the multiple-comparison problem by applying a regularization penalty instead of sequentially fitting multiple models and performing hypothesis tests. This also helps prevent overfitting. If you want to see which variables would be included or excluded, you can turn the regularization up or down (it is pretty easy to spit out an automated report).
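A minimal sketch of what such an automated report could look like, using scikit-learn's lasso_path on a made-up dataset with an illustrative alpha grid:

```python
# Sketch of an inclusion/exclusion report along the lasso regularization path.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

# Synthetic data: 8 candidate features, 3 truly informative.
X, y = make_regression(n_samples=300, n_features=8, n_informative=3,
                       noise=10.0, random_state=1)
X = StandardScaler().fit_transform(X)

# Compute coefficients at a few penalty strengths, from strong to weak.
alphas, coefs, _ = lasso_path(X, y, alphas=np.logspace(2, -2, 5))
for alpha, col in zip(alphas, coefs.T):
    included = np.flatnonzero(np.abs(col) > 1e-8)
    print(f"alpha={alpha:8.2f} -> features included: {list(included)}")
```

Turning the penalty down admits more variables; turning it up squeezes them out, which is the knob I was describing.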
Stepwise selection comes in different flavors: forward, backward, or bidirectional; R-squared, adjusted R-squared, AIC, BIC, etc. These often lead to different models, so the choices must be justified, and I rarely see any defense of them.
Of course, if the point is prediction over coefficient estimation and interpretability then neither of these are great choices.
> I use this method often for prediction applications. First, it’s a sort of hyper parameter selection, so you should obviously use a holdout and test set to help you make a good choice.
What the article is talking about is inference, not prediction. It's a different problem domain, it's not about telling a company whether design A or B leads to more engagement, it's about finding out about the (true!) causal drivers of that difference. The distinction may seem subtle but it's important. The key problems outlined all talk about common (frequentist) statistical tests and how they get messed up by variable selection. Holdout sets don't address this, because if the holdout set comes from the same distribution as the test set (as it should), the biases would be the same there. Bayesian inference isn't a panacea either, the core problem is structuring the model based on the data and then drawing conclusions about their relationships (Bayesian analysis gives you tools to help avoid this, but comes with its own set of traps to fall into, such as the difficulty of finding truly non-informative priors).
Yeah, the title is a bit hyperbolic. I have not used selection methods that much, but it's not too surprising they would give results similar to the LASSO as a selection or predictive method for people who think of it in terms of "feature development".
The distaste for step-wise selection comes from its typical use. If one reads Harrell's complaints quoted in the blog post carefully, quite many of them are less about the selection method than about what the analyst does with it, namely the interpretation of inferential statistics. When you see step-wise in the wild, the practitioner has often run step-wise or another selection method and then reported the usual test statistics and p-values for the final fitted model ... statistics derived under assumptions that don't take the selection steps into account. It is quite unfortunate in fields where people put a lot of faith in coefficient estimates, p-values, and Wald confidence intervals when writing the conclusions of their papers.
With LASSO and its cousins, the standard packages and literature strongly encourage the user to focus on predictions and run cross-validation right from the beginning.
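To see why reporting the usual p-values after selection goes wrong, here is a small simulation of my own (not from the post): pick the "best" of many pure-noise predictors, then compute the naive t-statistic for it as if no selection had happened.

```python
# Simulation: selection step + naive significance test on pure noise.
import numpy as np

rng = np.random.default_rng(0)
n, k, sims = 100, 20, 500
t_crit = 1.984   # approx. two-sided 5% critical value for t with 98 df
hits = 0
for _ in range(sims):
    X = rng.standard_normal((n, k))
    y = rng.standard_normal(n)                 # y is unrelated to every column
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(k)])
    r = corrs.max()                            # "selection": keep best predictor
    t = r * np.sqrt((n - 2) / (1 - r**2))      # naive t-statistic for it
    hits += t > t_crit
false_positive_rate = hits / sims
print(f"naive 'significant' rate after selection: {false_positive_rate:.2f}")
```

Without the selection step the rate would sit near 5%; picking the best of 20 noise features first inflates it to roughly 1 - 0.95^20, around 64%, even though nothing is real.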
Neat, I wasn’t aware data like this was available. I recently bought a used Smart car — lots of fun, but I admit I worried it was a death trap. It’s not in this list, but a Google search showed they are actually not much worse than average.
I’ve only seen one of those little guys on a freeway before. I suspect they log many, many more miles in inner cities, where speeds are lower and major impacts are less common.
They do sound fun to rip around in. You used to be able to rent them with an app around my city; I think it was helpful for a lot of people.
Class action suits as the only moral option to protect consumers (which are not common in Germany); citing Kissinger as a god-like authority on inter-country relationships; lack of knowledge of pro-market/competition regulation (very strong in Germany and the EU [for different reasons]).
I regularly read German and Swiss newspapers. The arguments are very different (and in many cases more nuanced).
I never cited Kissinger as a "god like authority", and to insinuate that I did is offensive.
Furthermore, you made an earlier false statement about bias -- claiming I had lived in the US all my life -- and backed it up by claiming that you had read my LinkedIn. My LinkedIn features my European high school. You're either lying or lazy; you can tell me which.
Making bombastic and trivially false statements doesn't help your arguments. Good luck with your "nuance".
Love this. Very interesting that the same amount of compressed data (samples) can give ever more accuracy if you do a bit more work in the decompression — by taking higher-order fits to more of the sample points.
This is pretty much the core principle underlying modern machine learning. More parameters means more faithful fit for the data, at the cost of over-fitting and generalizing poorly on unseen data from outside the range of data that was used to tune the parameters. In this particular application, we aren't that worried about overfitting because we know the actual function used to compress the data in the first place, so we know that our decompression function is "correct" and we know the range of the data. So we can keep adding parameters to reduce reconstruction error. Meanwhile in applied ML and stats, cubic and even quadratic models should be used and interpreted only with extreme caution and detailed knowledge of the data (how it was prepared, what the variables mean, what future data might look like, etc).
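As a toy illustration of that trade-off (my own example, not from the article): reconstructing sin(x) from one fixed set of samples, the in-range error keeps shrinking as we spend more parameters on the fit.

```python
# Reconstructing sin(x) from a fixed sample table with fits of increasing order.
import numpy as np

xs = np.linspace(0, np.pi / 8, 9)        # the "compressed" table: 9 samples
ys = np.sin(xs)
fine = np.linspace(0, np.pi / 8, 1000)   # dense grid to measure error on

errors = {}
for degree in (1, 3, 5):
    p = np.polynomial.Polynomial.fit(xs, ys, degree)
    errors[degree] = np.max(np.abs(p(fine) - np.sin(fine)))

for degree, err in errors.items():
    print(f"degree {degree}: max reconstruction error {err:.2e}")
```

Because we know the true function and the range, "more parameters" is purely a win here — exactly the situation applied ML is usually not in.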
This also seems to be a difference between interpolation and extrapolation. The table doesn't just fit a polynomial to theta between 0 and pi/8 and expect you to extrapolate for theta > pi/8; that would have catastrophic results. It has always seemed to me that one of the big problems with ML is knowing whether a given inference is an interpolation or an extrapolation.
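A quick sketch of why that extrapolation goes wrong (my own toy example): fit a polynomial to sin on [0, pi/8], then evaluate it both inside and far outside the fitted range.

```python
# A polynomial fit to sin on [0, pi/8]: fine in range, bad outside it.
import numpy as np

xs = np.linspace(0, np.pi / 8, 9)
p = np.polynomial.Polynomial.fit(xs, np.sin(xs), 5)

inside = np.pi / 16                       # interpolation: within fitted range
outside = np.pi / 2                       # extrapolation: far outside it
err_in = abs(p(inside) - np.sin(inside))
err_out = abs(p(outside) - np.sin(outside))
print(f"interpolation error: {err_in:.2e}")
print(f"extrapolation error: {err_out:.2e}")
```

The extrapolation error is orders of magnitude worse than the in-range error, and it only gets worse the further out you go — the same failure mode as an ML model queried off its training distribution.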
https://www.betonit.ai/p/cars-could-be-even-more-convenient