Hacker News | chimtim's comments

Uber currently doesn't maintain a large (or any) fleet either. I wonder how much of the marginal gain from not paying drivers is shaved off by the cost of maintaining a fleet of expensive cars.


Nope. They shift all of the costs of owning a vehicle onto their drivers.

They will be happy to lease a vehicle to you as the driver (with a predatory, abusive contract), but even this is almost entirely operated by third parties.


I've been working in the software industry for around two decades now, and I've seen a distinct shift over the last five years: everyone is optimizing all their actions towards their next promotion.

Before that, and even more so in the decade prior, there was a lot of focus on mastery of software skills, and even on random exploration and learning, and all of that appears to have been lost. I personally feel this has accelerated with the rise in big tech compensation, and with information from levels.fyi and Blind.

The challenge with a greedy approach is that a career spans multiple decades, and a lot of careers plateau at local maxima. One may have to down-level or restart a career to find one's true calling and excellence, and the career growth that follows. This is better done at the beginning of a career, when the risks are lower, than at a late stage.


Deep software skills are critical for success. But we've found that many software engineers are surprised by how many other skills they need to build in order to advance in their careers (and I'd argue these skills have always been important).


Unfortunately, one in ten times is far from good enough (and this is with good prompt engineering, which one starts to do after using large language models for a while).

I feel like the current generation of AI brings us close to something that works once in a while but requires constant human expertise ~50% of the time. The self-driving industry is in a similar state of despair: millions have been spent on labelling and training, but something fundamental is amiss in the ML models.


You are correct. I feel this is why the service is called Copilot, not Pilot :)


I think 1/10 is incredible. If it holds up, it means they may have found the right prior for a path of development that can actually lead to artificial general intelligence. With exponential improvement (humans learning to hack the AI, and the AI learning to make better suggestions), this may in theory happen very quickly.

We live in a very small corner of the space of possible universes, which is why finding a prior in program space within it is a big deal.


I keep wondering how much time it could possibly save you, given that you're obligated to read the code and make sure it makes sense. Given that, the testimonials here are very surprising to me.


Caching a computation result is called memoization.
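For anyone unfamiliar with the term, a minimal Python sketch of the idea, plus the standard-library equivalent:

```python
from functools import lru_cache

# Hand-rolled memoization: cache each result keyed by the arguments,
# so repeated calls with the same inputs skip the computation.
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# The standard library ships the same idea as functools.lru_cache.
@lru_cache(maxsize=None)
def fib2(n):
    return n if n < 2 else fib2(n - 1) + fib2(n - 2)

print(fib(30), fib2(30))  # both print 832040
```

Without the cache, the naive recursive `fib` is exponential; with it, each value is computed once.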


For anyone not from a Scandinavian country, or not used to the low sunlight and mostly cold weather, the long-term side effects of living in Sweden, such as depression and low vitamin D levels, significantly outweigh any short-term happiness.


Most people in Sweden, even those close to the Arctic Circle, have sufficient vitamin D levels [1].

Regarding depression, it would be great if you could cite a source for the claim that long-term residency in Sweden causes depression.

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4432023/


I'd say the opposite of that, as an Australian who moved to Norway.


I am glad I cancelled my subscription early this year and never bought its stocks.


Well, you'd have made a packet on the stock, and could've watched a lot too.

Does the value of the subscription really vary according to the company's HR practices? I'm not even sure that the value of the stock will.


This may appear to "work", but I doubt it is anywhere near as good as a 300K FPS camera. It will not work perfectly across all scenes, it cannot interpolate uncommon movements, and it cannot handle many low-FPS cases, like 5-20 FPS, where significant information is actually missing between frames.
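To make the missing-information point concrete, here's a toy sketch (not the paper's method) of the naivest possible interpolation, a linear blend of two frames: whatever happens *between* the frames, e.g. a ball that bounces and returns, is simply absent from the inputs, so any interpolator has to hallucinate it.

```python
# Naive frame interpolation: linearly blend two frames, represented
# here as flat lists of pixel intensities. A moving object produces a
# ghosted double image rather than an object at the midpoint, because
# the true in-between state was never captured.
def lerp_frames(a, b, t):
    """Blend frames a and b at time t in [0, 1]."""
    return [(1 - t) * pa + t * pb for pa, pb in zip(a, b)]

frame0 = [0.0, 0.0, 1.0]  # object at the right
frame1 = [1.0, 0.0, 0.0]  # object at the left
mid = lerp_frames(frame0, frame1, 0.5)
print(mid)  # [0.5, 0.0, 0.5] -- a ghosted blend, not true motion
```

Learned interpolators do far better than this by estimating motion, but the fundamental limit (information that never made it into any frame) remains.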


Those interpolations look significantly worse to me in cases of highly volatile volume-affecting reactions, like the balloon and the immediate jelly post-impact sequence with the racket. There are clearly still hard limits to what you can believably hallucinate without knowledge of the higher-frequency processes that have no representation in the source material and are too chaotic to be learned from watching random 240fps videos during training.

Also, for a research output, the ground-truth comparison of real slo-mo vs. super slo-mo is missing, which makes a proper quality assessment impossible. The paper itself only lists some fairly low-frequency in-between frames as a reference for comparison with other methods.

https://arxiv.org/abs/1712.00080


Have an expectation that the work-life balance is a circle, i.e., it is zero.


Is it just me, or are these really poorly designed courses? I took the RL course, and it started by describing papers in RL with little context (under applications of RL). The course then jumped to bandits, and I was hoping it would cover the foundations, but it assumed students know about the regret minimization framework. Researchers are traditionally not the best educators; I usually pick tutorials at conferences that are given by professors rather than researchers, regardless of the content.


*but it assumed students know*

You jumped straight in at the 8th course in the programme and are complaining that it assumes you know something?

So many people on this thread are looking for any nitpick they can find. We get it, guys, you just hate Microsoft. This isn't for you, then; no one's pointing a gun at your head and saying, take this free course.


I went through all of them and I did not see any regret minimization background. If you think there is, please point me to it and clear my misunderstanding.


Then what is "Auto ML"? It sounds like just another cloud service.


Disclosure: I work at Google on Kubeflow

Correct, it's a cloud service, based on the research Google published on model exploration [1][2]. There are research examples today where this service produced better models than humans were able to achieve by hand or with genetic algorithms (models trained faster and/or with better error rates) [3].

[1] https://static.googleusercontent.com/media/research.google.c...

[2] https://blog.acolyer.org/2017/10/02/google-vizier-a-service-...

[3] https://arxiv.org/abs/1712.00559


Excuse me for saying this, but you don't have to put that disclosure sentence with every comment ...(personally, I find it kind of irritating)


My worry is that people deep link to a comment and think that I'm astroturfing. Trying to balance spam vs. full disclosure.

Note I left it off of this one. :)


Compared to ML Engine, where you provide both the labeled data and the model code, with AutoML you provide just the labeled data (with your custom labels for whichever domain you're working in) and AutoML builds the model for you.
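That division of labor can be sketched in a few lines of Python. This is purely illustrative (the real service searches neural architectures, not toy threshold rules): the user supplies only labeled data, and the system searches over candidate models and keeps the most accurate one.

```python
# Toy "AutoML" sketch: given only (feature, label) pairs, automatically
# search a family of candidate models and return the best performer.
# The threshold-rule "models" here are a made-up stand-in for real
# architecture search.
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # (feature, label)

def make_threshold_model(t):
    # A trivial one-parameter classifier: predict 1 if x >= t.
    return lambda x: int(x >= t)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# The "search": try every candidate, keep the one with best accuracy.
candidates = [make_threshold_model(t / 10) for t in range(1, 10)]
best = max(candidates, key=lambda m: accuracy(m, data))
print(accuracy(best, data))  # 1.0 on this toy dataset
```

The user's only inputs are `data` and a success metric; the model itself falls out of the search.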


Similar to Amazon's ML: https://aws.amazon.com/aml/details/

Eventually useful for building a very basic model to serve as a benchmark for a real model.


This does not sound very useful. The entire market for AI consulting is for that last 5% of accuracy. Is there a reason why Google is investing so heavily in this space? Perhaps a simple model/UI is a good starting point for many customers, and many of them end up renting the classic cloud after they gain some expertise. And the rationale for all this PR is that Google does not want to lose these customers to Amazon.


Disclosure: I work at Google on Kubeflow

There are a few different reasons:

1) Most companies have very limited skill in building advanced models. While some folks are chasing that last 5% of accuracy, MOST folks are trying to achieve ANY accuracy (since they have nothing)

2) Many problems are not very complicated and do not require a custom model. From the blog post, the "cloud" example only requires a small number of changes to classify for a specific domain; having to build an entirely new model for that, or train on millions of images, seems like overkill

3) AutoML (often) is better than humans already[1]. So if you want to achieve that 5%, you MIGHT need to use a machine anyway.

[1] https://arxiv.org/abs/1712.00559


Exactly, most non-IT companies have very limited skills in this area. This is why they outsource. As an analogy, when I want to furnish my office, I look for an interior decorator who takes care of ordering furniture etc., within a budget. I don't screw around with a 3D printing API to make chairs and tables. It is too low level and not my core expertise.

And second, your comment that most folks are trying to achieve any accuracy is strange. Those are not real businesses; they are mostly developers and hobbyists trying to learn. These folks sign up for Kaggle, poke at a few scripts, and watch half a class of an ML course on Coursera. They are not real businesses and they have no money. Most of the real businesses are hiring startups, or are large companies that hire data scientists with domain expertise in oil, manufacturing, etc. (the IBM model). ML as an API is a disaster as a business model.

Also, AutoML is not even close to being better than humans, even on a specific problem (across datasets). These click-bait titles don't fly outside of AI conferences.


I think we may be talking to different customers. I talked with ~200 customers last year, and the most common question was "What do I use ML for?" and the second most common was "How do I get started?"

Put another way, the average customer has ~zero ML usage today. I'd guess that 95%+ of all businesses have zero ML usage today. Further guessing: <1% of ML users actually care about levels of accuracy beyond "it's better than the hacked-together set of rules/filters I use today." These are very large businesses with lots of money to spend on a solution.

There are many ways to measure "better", and AutoML does apply here. These include "better == faster to train or develop" [1], "better == you need less data" [2], and "better == lower error rates" [3]. While I agree that many of these measures do not apply across datasets, most customers only have one dataset per problem.

[1] Predictive accuracy and run-time - https://repositorio-aberto.up.pt/bitstream/10216/104210/2/19...

[2] Less data - https://arxiv.org/abs/1703.03400

[3] https://link.springer.com/article/10.1007/s10994-017-5687-8

Disclosure: I work at Google on Kubeflow


Sorry, that total should be:

- Average customer has zero ML

- Nearly no customers are using any ML (difference between median and mode)

- Of those that are using, very few care about better than human perf
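The median/mode distinction in the list above can be made concrete with toy numbers (purely illustrative): when usage is this skewed, the mean can look healthy while the median and mode are both zero.

```python
import statistics

# Hypothetical "ML usage" across ten customers: one heavy user,
# nine with none. The mean suggests everyone uses some ML; the
# median and mode show the typical customer uses none.
usage = [0, 0, 0, 0, 0, 0, 0, 0, 0, 100]

print(statistics.mean(usage))    # 10
print(statistics.median(usage))  # 0
print(statistics.mode(usage))    # 0
```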


I actually have worked with lots of customers in deploying ML. This was my perspective. Thanks for sharing your perspective at Google.


From the article: "There's a very limited number of people that can create advanced machine learning models." Curious if this is really the case? It is certainly the case with my generation of engineers, but half the student interns I interview from top-20 comp sci programs do this on weekends at hackathons.

Is the argument that it is easy to implement stock models but hard to tune the models for specific types of image inputs? Isn't that pretty easily solved with some parameter grid searches? How much specialized skill does it take to rework networks from the traditional Inception architecture, or what not, into something specific for hot dogs, satellite imagery, or medical images?
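For what it's worth, the grid search suggested above is a few lines: evaluate every combination of hyperparameters and keep the best. The `validation_score` function here is a made-up stand-in for "train a model with these settings and score it on held-out data".

```python
from itertools import product

# Hypothetical validation metric: in reality this would train and
# evaluate a model; here it's a stand-in that peaks at lr=0.01, reg=0.1.
def validation_score(lr, reg):
    return -((lr - 0.01) ** 2 + (reg - 0.1) ** 2)

# The grid of hyperparameter values to try.
grid = {"lr": [0.001, 0.01, 0.1], "reg": [0.01, 0.1, 1.0]}

# Exhaustively score every (lr, reg) combination; keep the best.
best = max(product(grid["lr"], grid["reg"]),
           key=lambda params: validation_score(*params))
print(best)  # (0.01, 0.1)
```

The catch, as the replies note, is that grid search only tunes an architecture someone already designed; it doesn't tell you what to build for a genuinely new problem.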


(I build machine learning models professionally)

*half the student interns I interview from top-20 comp sci programs do this on weekends for hackathons*

It's trivially easy to take what someone else has built and modify it slightly for a similar problem, especially in a hackathon environment where you can ignore edge cases, etc.

See if they can build a new model from scratch for a new type of problem. I'm not saying that AutoML can do this either, but I interview large numbers of PhDs who don't know where to start on doing something new.


Hence my use of the word "eventually".

"It's very basic but you can use the model as a benchmark" is actually the pitch you get from AWS.

