Hacker News | AlexanderNull's comments

Neither GitHub Copilot nor GPT-4 is worth your time. At best they partially guess the name of a function you're thinking about typing; at worst they give you an almost-correct answer. I've been shocked by how close those models get to almost understanding what you're attempting to do while still fundamentally getting it wrong. Last month, after realizing I was spending more time correcting suggestions than I was saving, I stopped using them, and I'll need to see some major improvements before I can feel comfortable using them again.


This resonates with my recent experience using Bard (not the latest version of Gemini). It would produce something that initially seemed surprisingly good, but then when I actually tried to run it, it turned out to be totally broken. I'd ask it to fix the error; it would magically do so, but then be broken in a different way. It felt like pair programming with a junior programmer who just didn't quite get it.

This was just me interacting through text prompts. I could imagine a more integrated solution where you provide some basic test cases and the system runs them against the code proposed by the LLM; that could go a long way towards improving this.
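A minimal sketch of that idea (everything here is hypothetical, just the loop I'm imagining, not any existing tool): load the LLM-proposed code, run the user-supplied test cases against it, and collect the failures you'd feed back into the next prompt.

```python
# Hypothetical harness: exec the proposed code, then run each
# (func_name, args, expected) test case and collect failures.
def check_candidate(code, test_cases):
    namespace = {}
    try:
        exec(code, namespace)  # load the proposed code
    except Exception as e:
        return [f"code failed to load: {e}"]
    failures = []
    for func_name, args, expected in test_cases:
        try:
            result = namespace[func_name](*args)
            if result != expected:
                failures.append(f"{func_name}{args}: got {result!r}, expected {expected!r}")
        except Exception as e:
            failures.append(f"{func_name}{args}: raised {e!r}")
    return failures

# A "surprisingly good but totally broken" suggestion, like the output above:
candidate = "def add(a, b):\n    return a - b\n"
print(check_candidate(candidate, [("add", (2, 3), 5)]))
# ['add(2, 3): got -1, expected 5']
```

The failure list is exactly what you'd paste back into the "please fix the error" round trip, instead of discovering the breakage by hand.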

For now it seems mainly useful as a way of getting a quick first draft of some code which I then have to fix up and get fully working myself.


I see a whole lot of potential in these tools, and in some domains they are starting to deliver on some of the promise. But by and large I agree with this statement - they're actually costing me time because I have to do the research to see where they went wrong. I'm better off learning it properly and idiomatically from scratch right now.


I don't get this at all. What kind of code are you writing that you have to literally go and research what it spat out?

In my experience 90% of code is 90% the same as another piece of code in the same repo, with small differences, and copilot will make you fly writing that code.

If you can't read the output code, does it mean the rest of your codebase is similarly unreadable?

The complexity in a codebase or a system usually comes from different parts integrating, or from the overall architecture, but that's totally different from an individual function.


"What kind of code are you writing that you have to literally go and research what it spat out" - so in a recent case I was trying to work with Elasticsearch. I'm not an expert in that, so I asked it to do some things. It hallucinated a bunch, and I ended up having to dive in and learn it deeply anyway.

In that case I think I was better off not relying on the tool. I do find it nice to steer me in a direction, but the things I use tend to be niche enough that I don't get the benefit many others do.

I also have a feeling you and I are using it in different capacities.


This can happen when you are working with new codebases or APIs. For example, I recently tried to build this small GNOME extension [0], but I had zero experience with the API. So I tried ChatGPT.

Even though the structure of the code in the file was OK, it called some APIs that did not exist, and it created a new var `this._menu` for the dropdown that was not needed (`this.menu` already exists). In the end I still had to go through the GNOME extensions docs to figure out how to do it right.

Overall I don't regret using it, but the experience wasn't magical, as I guess we all want it to be.

[0]: https://github.com/onel/keyboard-cat-defense


Agreed that GitHub Copilot is not worth anyone's valuable time. You should check out the new Claude 3 Opus model; it's noticeably better. Right off the bat, it's less 'lazy' with its generations, and for me it has been able to solve bugs that GPT-4 could not.

Just this week we made it available on https://double.bot (VS Code Copilot extension). We've been getting similar feedback from multiple users.


I hadn't explicitly asked that, but it's what I've been curious about too: are these currently good enough? That's why I wanted to start with whichever is currently the best, rather than bang my head against ones that are not.

I had tried a simple experiment generating a basic Go REST service with XML-parsing code, using Bard and ChatGPT, and they were actually not bad. But that was very simple, brand-new code.


The 15 mg/d they used in that study is just the standard dietary intake (top end, for Europeans). I looked at a number of other studies that also used just regular intake levels as "supplementation". Weird how they all stay small when the no-observed-adverse-effect level is much higher.


This research doesn't claim to show any cause of ADHD; it only shows potential exacerbating influences on, or at least confounding factors of, the severity of symptoms those with ADHD present later in life. Children who expressed more negative emotional outbursts were more likely to be mistreated later, those who were mistreated were more likely to express ADHD symptoms, and those who expressed more ADHD symptoms were more likely to be mistreated. There's no causal explanation expressed there, simply correlations that could be used to identify risk factors earlier in life. Everyone expresses their ADHD differently, and identifying life patterns that may either aggravate or alleviate the difficulties of the condition can potentially be helpful in the long run.


Good luck finding high-quality cheap developers in any language, really. Very few good developers are language-specific, so unless your hiring manager is an idiot and actively screens out candidates without 10 years of previous experience in the specific framework you're using, it won't be a limitation. If you want quality developers you need to offer at least one of: 1) lots of money, 2) amazing benefits, 3) interesting problems to work on. For many, an interesting language can provide an edge. I chose my current company because I would be working in Scala here, as opposed to Java with the other offers I had. The quality-of-life improvement of the Scala position was enough to take it over the prospect of writing AbstractFactoryBeanImpl for the next few years.


This is hardly the first time that glucosamine use has been associated with lower cancer/mortality risks:

https://ard.bmj.com/content/79/6/829 - Associations of regular glucosamine use with all-cause and cause-specific mortality: a large prospective cohort study

https://pubmed.ncbi.nlm.nih.gov/33219063/ - Glucosamine/Chondroitin and Mortality in a US NHANES Cohort

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3175750/ - Use of Glucosamine and Chondroitin and Lung Cancer Risk in the VITamins And Lifestyle (VITAL) Cohort

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5870876/ - Glucosamine Use and Risk of Colorectal Cancer: Results from the Cancer Prevention Study II Nutrition Cohort

https://cancerci.biomedcentral.com/articles/10.1186/1475-286... - Glucosamine suppresses proliferation of human prostate carcinoma DU145 cells through inhibition of STAT3 signaling (in vitro)

https://www.pharmacytoday.org/article/S1042-0991(17)30735-1/... - Glucosamine and chondroitin decrease colorectal cancer risk

Some of those still showed a significant correlation even when accounting for NSAID use; however, there have been other studies where the significance of the correlation disappears when accounting for NSAIDs: https://www.nature.com/articles/s41598-018-20349-6

Mortality reduction with glucosamine has been directly shown in more controlled studies on other organisms, such as nematodes and mice, but it may act through a pathway that hasn't been shown to work in humans: https://www.nature.com/articles/ncomms4563

And in an actual double-blind study, glucosamine (+ chondroitin) use was found to lower serum levels of C-reactive protein in men and women: https://journals.plos.org/plosone/article?id=10.1371/journal... (small sample size)


<- a loose associate of this "everyone" here. (10+ years' experience, proficient in FP and OOP languages.) Primarily a Scala dev now, so I write code that's both functional and object-oriented. I'd say my code leans more towards the functional side of things: favoring composition, type inheritance over class inheritance, separation of data structures from the behavior that acts on the data, and reduction of shared state to the bare minimum. However, when it comes to persisting state throughout a program's lifecycle, objects/actors become a great way of handling that data in an easy-to-reason-about structure. They're great for encapsulating data, after all.

Seeing code that goes all in on OOP absolutely makes me cringe, as I know it generally means the unit tests are going to be more complex than they would be in an FP environment, the code's likely going to be harder to extend since no amount of class-inheritance planning can prepare you for all future business requirements, and in most cases it's just going to be so much more verbose than an FP solution would be (I know this last point is a bit of a personal issue).

Code that goes all in on FP I absolutely love on a pure nerd level, but absolutely hate on a practical level. Pure FP is neat in the same way that seeing someone who can calculate out pi to a hundred digits is neat. Sure it's impressive, but like... I got this calculator that will handle it much easier.
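To make the "data separate from behavior" point concrete, here's a toy sketch (in Python rather than Scala, purely for illustration; the names are made up): an immutable data structure plus a pure function, instead of a method mutating shared state.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)      # plain immutable data, no behavior attached
class Account:
    owner: str
    balance: int

def deposit(acct: Account, amount: int) -> Account:
    # Pure function acting on the data: returns a new value,
    # leaving the original untouched (no shared mutable state).
    return replace(acct, balance=acct.balance + amount)

a = Account("alex", 100)
b = deposit(a, 50)
print(a.balance, b.balance)  # 100 150
```

Testing `deposit` needs no mocks or setup, which is the unit-test simplicity the comment above is getting at.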


> most of the time if one assertion fails, rest of the code is useless anyway

You either don't write tests or you're already writing them the right way (sounds like the latter). I've seen my fair share of what I'd call compound tests: multiple asserts where an early failure crashes execution of the test, even though three lines down in that same test a completely different bit of state is being checked. This is hopefully less of an issue in unit tests, but my gosh, I've seen it way too much in integration tests.

It can get worse still: one of these initial assertions starts failing, a lazy dev goes in to address the problem, finds that that one assertion isn't an issue worth addressing for now, labels the whole test as a KnownIssue, and moves on, leaving the other issues covered by the later asserts at risk of breaking without warning at a later point in time! (Only seen this twice, luckily.)
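One generic mitigation for the first problem, sketched in Python (a "soft assertion" pattern, not any specific test framework): evaluate every independent check and report all failures, instead of letting the first assert abort the rest.

```python
def run_checks(checks):
    """Run every (name, predicate) pair; report all failures, not just the first."""
    failures = []
    for name, predicate in checks:
        try:
            if not predicate():
                failures.append(name)
        except Exception:
            failures.append(name)
    return failures

# In a compound test with plain asserts, the first failure would hide the third;
# here all three independent bits of state are still checked.
print(run_checks([
    ("status code is 200", lambda: 500 == 200),  # fails
    ("body is valid json", lambda: True),        # still runs
    ("audit entry logged", lambda: False),       # still runs, also fails
]))
# ['status code is 200', 'audit entry logged']
```

The cleaner fix is usually to split the compound test into one test per bit of state, so each gets reported (and skipped, if need be) independently.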


I was doing the double red donation for a while before running out of hematocrit. Read about this study and decided to go back to donating but this time just doing the standard whole blood so I could dilute plasma a bit. Will update this post with results in 200 years if this works!


It would be interesting if they would try the same research but reduced to just doing blood-donation-level blood extraction from the mice over a period of time.

Unfortunately, without REPLACING your blood with something, the percent concentration of anything harmful in your blood will be the same. I wonder if it requires a quick replacement of a large percentage with the saline solution to have any effect.

Quick links:

https://www.umms.org/-/media/files/ummc/community/blood-fact...

""" The average adult has about 10 pints of blood in his body. Roughly 1 pint is given during a donation. A healthy donor may donate red blood cells every 56 days """

The original article says HALF the blood plasma was replaced. So if I understand this correctly, a blood donation would need to be done 5 times to match the amount taken out.

Is there an option to get a saline solution to replace blood volume when donating blood?


> without REPLACING your blood with something, the percent concentration of anything harmful in your blood will be the same

Not necessarily. If the process that produces the bad stuff is slower than serum production, the concentration will be lower. If it's the same speed (for example, if the problem is with the serum-production processes themselves), it won't.
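A back-of-the-envelope version of that point (the single-compartment model and the numbers are my own assumptions, not from the article): remove a fraction of the blood, let the body restore full volume, and vary how much of the removed solute gets re-produced along the way.

```python
def post_donation_concentration(c0, removed_frac, reproduced_frac):
    # Solute left after removing `removed_frac` of the blood, plus the
    # fraction of the removed solute the body re-makes while restoring
    # volume back to 1 (so the result is also the new concentration).
    return c0 * (1 - removed_frac) + c0 * removed_frac * reproduced_frac

# Slowly produced solute: only 20% of what was removed is re-made -> diluted.
print(round(post_donation_concentration(1.0, 0.1, 0.2), 2))  # 0.92
# Solute tied to serum production itself: fully re-made -> unchanged.
print(round(post_donation_concentration(1.0, 0.1, 1.0), 2))  # 1.0
```

So donation-without-replacement can still dilute a slowly produced compound; it just can't help when the serum-production machinery itself is the problem.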


Ha! I'll be waiting!


I work in a predominantly Scrum division with rigidly set iteration lengths (technically not Scrum, as this should be decided by each team, I know). It is horrible; so much time is wasted trying to play by the book on this.

Every once in a while we have to break out special teams for a cross-group project, a special POC, or something super important with a tight deadline. In those cases the team becomes much more engineer-led than management-led, and in most cases we revert immediately to Kanban style. Unsurprisingly to me, we always get more done, faster, with Kanban.

Things get done when they're done, not when an arbitrary sprint-end day has occurred. Kanban lets you move work along as needed, and instead of focusing on meeting that arbitrary end day, we all get to focus on continuously closing out stories. All we needed for this was a kanban board in whatever tracking system we were using at the time, along with a reliable form of communication between team members.


Yeah, if you're not programming your models in binary then gtfo imposters!!

On a more serious note: yes, understanding this (and anything, really) takes time and investment. The problem for me, not only with ML but originally with engineering back when I was trying to learn (and couldn't afford school), was finding quality sources for getting started on that process of learning. By providing simplified resources like this Google one, the hope is that many beginners can get that one "aha!" moment where they start building the basic understanding that lets them begin tinkering and learning.

People without a decent understanding shouldn't be submitting research papers; I fully agree there. It's basically a waste of everyone's time and harmful to the field as a whole, as it dilutes the overall signal-to-noise ratio. However, there's so much space in the ML world that doesn't involve research, not only for fun hobby projects but also professionally. Resources like this are critical to reducing the knowledge gap between researchers and the programmers in the field who work on little ML projects, for things like sentiment analysis for their company.

These sub-research projects are mission-critical at a lot of companies, yet at the majority of non-FAANG companies they're held up because there's only one data scientist while the teams of engineers are clueless as to how to assist.

