Hacker News | matmatmatmat's comments

I don't get the hate for the CCS connector. I use it multiple times a week and it works fine. Now and then I come across a charger that refuses to start; OK, the connector is worn, and I suppose someone will come along and fix it.

I have one in my garage; it does not sag under its own weight.


> I have one in my garage; it does not sag under its own weight.

I agree with you that the CCS connector is perfectly inoffensive, but you're probably thinking of J1772, unless you have a DC fast charging station at home!


Ah, you would be right, it appears I have a J1772 at home. I did not know that the CCS is a superset of the J1772 connector.


> Now and then I come across a charger that refuses to start, OK, the connector is worn.

Except this is about 50%+ of connectors in my area. Doesn't seem to happen to the Tesla connectors.


It so happens I recently took a Pixel 6 Pro and a Canon 80D on a trip abroad. On the Pixel, I used a rebuild of the stock camera app that does away with its automatic over-sharpening, and with the 80D, I used the EF-S 15-85 mm lens that (I believe) used to be the kit lens for the 7D. I also used the EF 70-300 mm non-L lens.

There is, in my opinion, no question that the 80D takes sharper pictures in daylight. It's just hard to beat a sensor that's that much bigger. The lenses, also, just have way, way more light gathering power.

Now, in dark places, at night, I used the P6P more, and that worked better than the 80D. But I'm glad I had the 80D for the big landscape shots and for the tight shots of people's faces.

The A7 III is way lighter and smaller than the 80D, and takes way better pictures. I would suggest considering finding a space for it in your bag. At least take a few pictures with both the P6P and the A7 III and view them at 100% to see if you're happy with the results.


If you're willing to post process your images, the 80D will look way better for night pictures.

The problem is that there is no built-in function for it, so you have to manually process each picture. You might even need more than one tool if you want to take advantage of the same kind of AI fakery that phones have.


One thing I love about my A7S is the ability to tilt the screen and take candid photos of people while we're having a conversation. Also, that thing pretty much shoots in the dark, which I find magical.


I regret not buying a Sony when I got my Canon 6D. Almost all of the lenses I use now are old/vintage, and it sucks not having image stabilization for the extra two stops, or a digital viewfinder to properly focus the lens. I've almost sold my 6D many times, but I'm too attached to it to ever pull the trigger.


Question on the ML side of this post: How are these "parameterizations" used? Is this really just feature engineering with a new name? Are they including this information when training the model?

In the article, they mention using the new labels to build a "more balanced" dataset -- is this a realistic possibility in practice when most teams still have a dearth of data?


Hello! I wrote the article, so I'm happy to answer this. It is partially feature engineering, but only partially: we use feature engineering to curate/correct a dataset, but the actual end model is a neural network trained without explicit input of these features (we call them quality metrics). I abbreviated a good amount of the process in the article so that it wouldn't run forever, but essentially we allowed ChatGPT to select and write its own features, and then used the strategies it came up with to apply those features to improve the dataset.

As to whether it's realistic in practice, the answer is yes. Some teams have a dearth of data, but many AI companies we work with have more data than they can use, and it's more a question of how to sample, curate, and correct the data and labels they have to improve their models, rather than collecting new data. Great questions!
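To make the idea concrete, here's a minimal sketch of that kind of curation loop in Python. The metric names, thresholds, and per-class cap are all invented for illustration (the article's actual metrics were ChatGPT-generated); the point is only that the quality metrics filter and rebalance the training set, rather than being fed to the model itself.

```python
# Hypothetical sketch: compute simple per-example "quality metrics", then
# filter and cap each class to produce a more balanced training set.
from collections import Counter
import random

def quality_metrics(example):
    """Toy quality metrics for a text-classification example (invented)."""
    text = example["text"]
    return {
        "length_ok": 5 <= len(text.split()) <= 200,  # not too short or long
        "has_label": example.get("label") is not None,
    }

def curate(dataset, per_class_cap):
    """Drop examples failing any metric, then cap each class size."""
    rng = random.Random(0)  # deterministic sampling
    kept = [ex for ex in dataset if all(quality_metrics(ex).values())]
    by_label = {}
    for ex in kept:
        by_label.setdefault(ex["label"], []).append(ex)
    balanced = []
    for label, group in by_label.items():
        rng.shuffle(group)
        balanced.extend(group[:per_class_cap])
    return balanced

data = (
    [{"text": "the quick brown fox jumps over the dog", "label": "a"}] * 10
    + [{"text": "pack my box with five dozen jugs", "label": "b"}] * 3
    + [{"text": "hi", "label": "a"}]  # too short: filtered out
)
curated = curate(data, per_class_cap=3)
print(Counter(ex["label"] for ex in curated))  # → Counter({'a': 3, 'b': 3})
```

The curated set then goes to ordinary neural-network training; the metrics themselves never appear as model inputs.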


With Germany, in particular, I think there was also a lot of pressure from Green parties.

In any case, I would agree it looks like a mistake in hindsight.


AFAIU, the German case is complicated: the non-Green government followed half of the Green parties' plan (phasing out nuclear) but half-assed the other half (compensating by expanding the renewable sector), which was just as important to the Green parties.

There can be tons of explanations for that: phasing out nuclear was more popular than building renewables in people's backyards; phasing out nuclear was an easier path than developing the renewable sector; the nuclear phase-out may have served as an excuse to say "see, we do green stuff, no need to do more"; coal/gas generation may have looked like an easier or more profitable path for some politicians; the industry put up less resistance to phasing out nuclear than it would have to phasing out coal/gas; the nuclear phase-out may have been an easy concession to give the Greens while looking good to the public; or the government may even have had little incentive to succeed in the transition, because if it failed they could blame the Green parties ("but it was your plan... see, Green parties don't have realistic ideas")...


The Greens also pushed to install solar and wind as the replacement, not coal and gas. They raised the issue of energy dependency early and constantly. It's easy to blame them, but this is not what they asked for.


Unless you capture it, as Amsterdam has been doing.


CO2 capture is a scam, perpetrated by the petroleum industry to make people think it's okay to burn more fossil fuels.


$10 MM of compute doesn't seem all that out-of-reach for most "mid-size" companies, especially if the result is economical.


I don't think the result is economical purely because if it were, Google would be monetising their own models by now (of course, maybe they could monetise it if they were willing to go with a paid instead of ad model for search).


I think people are assuming that Google hasn't already integrated this type of technology into its search engine. I suspect it has, but conservatively, to avoid changing how its "golden goose" lays eggs.


50% of us are below the median. Depending on the distribution, there could be many of us or few of us below the mean.

Now, does the employer use the median or the mean to evaluate their employees? Interesting question; I don't know.
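A quick numeric illustration of the distinction, with made-up salary figures: in a right-skewed distribution, far more than half of the values can sit below the mean, while roughly half sit below the median by definition.

```python
# Made-up salaries (in $k) with two high outliers, a typical right skew.
import statistics

salaries = [40, 42, 45, 48, 50, 52, 55, 60, 200, 400]

mean = statistics.mean(salaries)      # 99.2
median = statistics.median(salaries)  # 51.0

below_mean = sum(s < mean for s in salaries)      # 8 of 10
below_median = sum(s < median for s in salaries)  # 5 of 10
print(mean, median, below_mean, below_median)
```

So with the outliers pulling the mean up, 80% of these employees earn below the mean but only 50% earn below the median.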


This is neither here nor there, but I've recently been going through the deeplearning.ai course by Andrew Ng and friends, and at the end of each week there is an interview with a luminary in deep learning.

A couple of weeks ago it was Andrej Karpathy. I got about three sentences in when I realized this guy is really, really smart. The way he spoke about neural nets and the problems he was working on suggested to me a deep and nuanced understanding, and a way of thinking that always tries to expand that depth and breadth.

Anyway, I figure if a guy like that couldn't make it work after so many years, even with a team that surely has other strong players, then it's just out of reach for the time being, with the hardware they're constrained to. It's even possible that deep neural nets will just never be able to do FSD at a level that will gain broad acceptance and some new architecture will be necessary.


> I got about three sentences in when I realized this guy is really, really smart. The way he spoke about …

As a non-expert, how would you be able to judge?

ChatGPT also sounds really smart while giving brutally false answers.


Well, if you'd permit not getting into details, I'm not exactly a total non-expert.


Ah, why so gloomy? Solving this probably comes down to 1) sensors, and 2) computational power available in a car. The sensors used in Teslas were a joke last time I looked (low res, bad low-light performance, probably not enough cameras); surely we can do better? And Moore's law is still alive in a way, allowing stunning progress like that demonstrated by ChatGPT. Ever-growing car batteries will also allow a much larger power draw for computing. I expect vast improvements in the next few years. Maybe we can at least get from "drives like a drunk teenager" to "drives like an overly cautious grandparent".


> 1) sensors, and 2) computational power available in a car.

Waymo and Cruise use dedicated hardware and aren't artificially constrained by either preexisting sensors or compute. Yet they haven't fully solved the problem either: after years of work, and with quite expensive sensors, Waymo is still struggling with unexpected (but really, should-be-expected) situations such as road construction, and is only available in specific geographic areas.


Care to link the interview? I love Karpathy's work.



Dang, I did not realize it is from 2015.


Legitimate question.


> In a purely market-driven world Europe would not be investing in solar.

Well, maybe, but there's more to a market than buy and sell price. There is also the cost of external energy dependency, for example, or environmental degradation.

