I used to use that trick with an arcade game that was popular in bars in '00s Spain. People were just impressed!

Just found the same; has it been removed?


I think you've touched on an interesting point here... When working remotely you really need to put effort into keeping a healthy social life, otherwise you end up feeling exactly as you describe. It's easy for days to go by with no "change of landscape", and that can send anyone straight to depression.


"It's easy for days to go on with no 'change of landscape', and that can send anyone straight to depression."

Counterpoint: That's an exact description of how I felt working in an office.


Yeah, I've been working remotely for 12 years, and that was the first thing I had to solve.

There's a local tech Slack in my area that I keep up with, and I have a text group that I've built up over the years that sometimes grabs lunch or hits the gym. It keeps me balanced.


I think employers with remote employees could do a better job of getting people to socialize. It's definitely easier to socialize in an office, but several remote jobs I've had didn't do much about this even though everyone was remote. I feel managers of remote employees need to run more online social events, host a water-cooler chat, etc., to get team members talking to each other. I'm currently on a team that doesn't do this well, and it's very isolating to barely know or talk to your teammates.


A common problem with all systems that include a way for voters to verify their vote is that they open the possibility of parties buying votes, since you can now prove who you voted for.


I wonder how many subtle errors will make their way into the new codebase (decimal rounding, a library call where a parameter is ignored and there are no tests for it...) only to be found in production, with the AI taking the blame.


I did some converting with Copilot today. The answer is: quite a lot. It'd convert integer types wrong (whoops, lost an unsigned there, etc.).
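
A hypothetical sketch of that unsigned bug (same shape, not the real code I was converting):

    package main

    import "fmt"

    // C original: unsigned int half(unsigned int x) { return x / 2; }
    func halfWrong(x int32) int32 { return x / 2 } // Copilot-style output: the "unsigned" got lost

    func halfRight(x uint32) uint32 { return x / 2 }

    func main() {
        bits := uint32(0xFFFFFFFE) // 4294967294, as the C caller intended
        fmt.Println(halfRight(bits))        // 2147483647
        fmt.Println(halfWrong(int32(bits))) // -1: same bits, wrong sign
    }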

And then of course there were some parts of the code that dealt with gender, and Copilot just completely refused to do anything with those, because for some reason it's hardcoded to refuse.


That gender thing is interesting. Could you try renaming some of the variables and substituting words in the comments so that the code no longer obviously appears to be dealing with gender and see if Copilot behaves differently?

If it does behave differently, I'd find that a bit worrying because conversion of a correct program into a different programming language should not depend on how the variables are named or what's in the comments. For example, assuming this is a line from a program written in C that works "correctly", how should it be converted into Go or Rust or whatever?

    int product = a + b; // multiply the numbers
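
To make the dilemma concrete, here are the two outputs a converter could plausibly produce (a hypothetical Go sketch, runnable as-is):

    package main

    import "fmt"

    func main() {
        a, b := 3, 4

        // Option 1: trust the code and carry the misleading comment over.
        product := a + b // multiply the numbers

        // Option 2: trust the comment and "fix" the code.
        productPerComment := a * b

        fmt.Println(product, productPerComment) // 7 12
    }

Neither option is obviously right, which is the problem.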


Everything works mostly fine as long as it's not obviously dealing with gender, but it falls over as soon as anything appears to refer to gender, whether in comments or in variable names.

There are a couple of other keywords that appear to trigger this, "trans" being a big one (as it's often used for transactions).

It also takes comments at face value. One conversion came out entirely wrong because a doc comment on a function said it did something other than what it actually did. The converted code implemented the comment, not the actual code.
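
A hypothetical sketch of that failure mode (shown as Go-to-Go for brevity; the names are made up):

    package main

    import "fmt"

    // clampPercent returns the value unchanged.   <- stale doc comment
    // (What it actually does: clamp to [0, 100].)
    func clampPercent(v int) int {
        if v < 0 {
            return 0
        }
        if v > 100 {
            return 100
        }
        return v
    }

    // What a converter that trusts the comment produces:
    func clampPercentConverted(v int) int {
        return v
    }

    func main() {
        fmt.Println(clampPercent(250), clampPercentConverted(250)) // 100 250
    }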



I don't doubt that Copilot can make mistakes like this, but you should remember that it's optimized to serve a lot of people cheaply. Models like Claude 3.5 Sonnet are vastly better than Copilot.


Probably fewer than if a human did it. Compared to my code, AI-generated code is much more thorough and takes more edge cases into account. LLMs have no problem writing tedious safeguards against stuff that lazy humans skip with the argument that it will probably never happen.


> I wonder how many subtle errors will make their way into the new codebase.

Probably on par with the subtle errors that would make their way in if a human wrote the code directly?


That is in no way probable.


No?


Oh that's ok, I'll just have the chatbot write some tests too ;)


> I wonder how many subtle errors will make their way into the new codebase (decimal rounding, a library call where a parameter is ignored and there are no tests for it...) only to be found in production

Yeah, because human developers never allow mistakes to make it to production. Never happens.


I have been trying for weeks to hire someone with more than a couple of years' experience in frontend plus event sourcing in any backend language, and 90% of the applicants are people fresh out of a bootcamp where they just did Django. Then there are a couple of no-shows, and people who pad their CV with any tech they've ever heard of.

In the past I've interviewed dozens of people who list five DBMSs on their résumé but can't say when they'd use one over another.

There might be fewer low-hanging jobs, but finding competent devs beyond mid-level is getting harder and harder. It seems they've stopped job hopping.


At the risk of minimizing your issue, are you sure you're paying enough? Mid-level engineers cost ~$130-140k (in Houston, at least).

A few weeks ago I was talking to a hiring manager who found me on LinkedIn and said he was having a hard time finding anyone. It turned out he was trying to pay $50-90k for a mid/senior developer with machine learning and robotics experience. I didn't take the interview.


The contradictions in this thread are incredible. If the job market is that bad and you need the money, then a job that pays anything is good enough. Are these people just going to sit there until they starve because they're too proud to take a job that pays less than their last one?!


The tech talent is still safe; it's "junior"-level people who are struggling due to the "downturn" in tech. Probably when big tech fired people, lower-level corpos took that talent at a small discount, putting pressure on junior positions.


I mean, it just varies from person to person and region to region. That doesn't change the fact that someone offering a wage well below market will never find a suitable mid-level engineer, no matter the market. On the flip side, someone (like the author) holding out for a high-paying executive position may struggle to find one even in a particularly strong job market.

I'd also argue you shouldn't jump on the first job you see. I was once offered $60k for a job in Austin, and if I had moved there to take such low pay it could have ruined my career. You should be thoughtful and deliberate about the roles you take.


Being at mid-level, there seems to be no demand for my skill level.

>90% of postings I see are beyond mid-level and seem to be inflexible in those requirements.

From where I'm standing, jobseeking as a senior looks like a candidates market.


How much are you paying? In other words: is your job attractive enough?


umm, may I contact you?


sure! armurth at gmail.com


Plane looks much better, despite being a ripoff of Linear

https://github.com/makeplane/plane


Thanks for the mention. It's not great to hear we're a rip-off of Linear, but I get why you're saying that. There are quite a few differences between Linear and Plane, starting with how unopinionated Plane is compared to Linear, but we need to do a better UX job of calling that out in the product.

Work's ongoing. Expect changes soon.


An open source option isn't a bad thing.


I can thank him for having taught me his language and the intricacies of baseball, for having shown me corners of the state of New York unheard of outside the States, for having explained what love and passion are and aren't, and for having instilled dreams and hopes. And most importantly, for showing that life can be lived to its fullest at any age.

A life worth living. Rest in peace; he will live on as long as he's read.


Which happens in one of his novels.


Even if they released them, wouldn't it be prohibitively expensive to reproduce the weights?


It's impossible; Meta itself cannot reproduce the model, because training is randomized and that information is lost. First, samples arrive in random order. Second, there are often dropout layers: they generate random patterns that exist only on the GPU during training, for the duration of a single sample. Nobody saves them; storing them would take far more space than the training data itself. If someone tried to re-train, the patterns would be different, resulting in different weights and divergence from the very beginning. The model would converge to something completely different, though with similar behavior if training was stable. LLM training is stable.

So there is no way to reproduce the model, and this requirement for "open source" is absurd. It cannot be done reliably even for small models, due to GPU-internal randomness; only the smallest models, trained single-threaded on a CPU, could be reproduced exactly, and only academia would be interested in those.
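
A toy Go sketch of the point (nothing to do with Meta's actual pipeline; the per-step random draw stands in for a dropout mask that's used once and never saved):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // train nudges one weight toward 1.0; a fresh random "dropout mask"
    // is drawn each step, used once, and thrown away, never persisted.
    func train(rng *rand.Rand) float64 {
        w := 0.0
        for i := 0; i < 100; i++ {
            if rng.Float64() < 0.5 { // dropout: skip this step's update
                continue
            }
            w += 0.01 * (1.0 - w) // toy gradient step toward 1.0
        }
        return w
    }

    func main() {
        // Two runs whose random state wasn't recorded end up at
        // different weights, even though both head toward the same place.
        fmt.Println(train(rand.New(rand.NewSource(1))))
        fmt.Println(train(rand.New(rand.NewSource(2))))
    }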


1.3 million GPU-hours for the 8B model. It'd take you around 130 years to train on a desktop, lol.
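
Back of the envelope, treating the desktop card as roughly one training GPU:

    1,300,000 GPU-hours / (24 * 365) hours per year ≈ 148 years

So ~130 years is the right order of magnitude.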


Interesting. LLAMA is trained using 16K GPUs, so it would have taken them around a quarter. An hour of GPU use costs $2-$3, so training a custom solution using LLAMA should cost at least $15K and up to $1M. I'm trying to get started with this. A few guys suggested 2 GPUs were a good start, but I think that would only be enough for 10K training samples.

