I get what you're saying, but this reasoning has always rubbed me the wrong way.
You will often hear scammers and con artists justify what they do by saying it's the only way they have to support their family. It's like, why is your family more important than any other family in the world? It's still a selfish act.
disagree. feeding your own family IS more important to you. it has to be, because that is your responsibility. you are not responsible for other families, at least not at the expense of your own.
the question is rather: why is unethical behavior the only way to feed your family?
in the case of the suggestion to just quit after 9 months, i would say: if there is no other way to get work, then what choice do they have? this is similar to lying about the intention to have children. some might consider that lie unethical, but it is so important that it is in fact legally protected in some countries.
that's a very philosophical question, and i am happy to engage in a discussion if you like. let me start by saying that i believe it is the very purpose of humanity to contribute to an ever advancing civilization. so yes, i do consider myself responsible for the wellbeing of society. in fact i am making that very choice right now, living with my family in a developing country so that i can make a meaningful contribution there and teach my children important values of life, instead of going back home where i could live on social benefits because caring for my children would prevent me from getting a job.
but at the end of the day i can't let my children starve to achieve that goal, nor am i responsible for feeding all my neighbors, who sometimes struggle to afford their daily necessities. there has to be a balance somewhere, and as i said, the balance is to not resort to unethical behavior. but that's not the same thing as giving other families a higher priority than my own, which is what you are suggesting (you are not explicitly saying it, but the flip side of "why is your family more important than others?" is "why should other families be more important than yours?").
i brought these children into the world, and my responsibility is towards them first. this does not mean that i ignore the needs of others if i can help them. it also does not mean that i focus on giving my children an advantage over others, or that i protect them from their own mistakes. all i am doing is making sure that they are healthy and learn, in school and in life. that is my priority because it is my responsibility alone. no one else is going to do that for me. i am the only one who can make sure that it happens. caring for society, on the other hand, is a group responsibility, not an individual one. i can and do contribute to that, but only as much as my primary responsibility of caring for my children allows.
When the actions you would take purely out of self-interest and the most ethical actions align, it's okay to take them; that's the case here, and that's fine. I just mean that the primary reason shouldn't be self-interest.
There are competing incentives. Governments and politicians want to make it easy for companies to comply, both to encourage economic investment and to gain goodwill among their voters. Legal institutions want to make things appear as complicated and uncertain as possible so that they can make more money selling legal services.
The end result is that you get mixed messages, depending on where the information ultimately came from.
I personally don't know how hard it actually is to comply with GDPR, but I know that it has to be easier than it's made out to be.
It's not about which is harder but about how much error you can tolerate. For many applications, 99% accuracy is enough. But if your self-driving car has a 99% chance of completing each trip without a crash, you're very likely to be dead within a year.
For cars we need at least 99.99% accuracy, and that's very hard.
I doubt most people have 99% accuracy. The threshold of tolerance for error is just much lower for any self-driving system (and with good reason, because we're not familiar with them yet).
I guess something like success rate per trip (or per mile) would be a more reasonable metric. Most people have a success rate far higher than 99% for average trips.
Most people who commute daily are probably doing something like 1,000 car rides a year and have minor accidents every few years. A 99% success rate would mean roughly monthly accidents.
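To put rough numbers on that (a quick back-of-the-envelope Python sketch; treating trips as independent is a simplification):

    # Expected accidents per year for ~1000 trips at various per-trip success rates.
    trips_per_year = 1000

    for success in (0.99, 0.999, 0.9999):
        expected_accidents = trips_per_year * (1 - success)
        p_at_least_one = 1 - success ** trips_per_year
        print(f"{success:.2%} per-trip success: "
              f"~{expected_accidents:.1f} accidents/year, "
              f"P(at least one accident in a year) = {p_at_least_one:.3f}")

At 99% that's about ten accidents a year; it takes something like 99.99% per trip before you're down to roughly one accident per decade.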
It beats Java and C# in many of them, and compared to C++ it's usually not more than 5x slower.
Of course, that's all optimized JavaScript. It's a lot easier to slip up and accidentally make your code much slower in a scripting language than in a compiled language. So yeah, for larger programs it's definitely worth rewriting.
If you run the benchmarks longer (i.e., for more than a couple hundred milliseconds; the above benchmarks use the validation values from the original benchmarksgame, not the ones used for evaluating performance), you will see a different picture:
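I can't reproduce the referenced numbers here, but as a generic illustration of why a couple hundred milliseconds isn't enough (a minimal Python sketch; the workload and iteration counts are made up): warm the code up first, then time many iterations and take the median, so that on a JIT runtime like V8 or HotSpot you measure the optimized code rather than the first, unoptimized passes.

    import time

    def workload():
        # stand-in for the real benchmark body
        return sum(i * i for i in range(100_000))

    def bench(fn, warmup=50, iters=200):
        for _ in range(warmup):            # let a JIT compile/optimize the hot path
            fn()
        times = []
        for _ in range(iters):
            t0 = time.perf_counter()
            fn()
            times.append(time.perf_counter() - t0)
        return sorted(times)[len(times) // 2]   # median, less noisy than a single run

    t0 = time.perf_counter(); workload(); cold = time.perf_counter() - t0
    print(f"first call: {cold * 1e3:.2f} ms, steady state: {bench(workload) * 1e3:.2f} ms")
    # (CPython has no JIT, so the gap here stays small; on V8/HotSpot it can be large.)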
> Perhaps most troubling is how cryptocurrency enables ransomware attacks on critical infrastructure, healthcare facilities, and local governments. Criminal groups can now easily monetize cyberattacks through untraceable ransom payments, creating an entire criminal industry that threatens public safety and national security.
Without commenting on the other aspects of cryptocurrency, I think this is actually a good thing for national security. The more people are able to execute ransomware attacks, the more they expose holes in the cybersecurity of various public systems and push others to actually fix those flaws instead of ignoring them.
The damage that's caused by ransomware attacks can sometimes be tragic, but it's better than the alternative: doing nothing to improve security until a real cyberattack by someone like Russia or China takes down your entire country's infrastructure because you've put no effort into securing it.
It doesn't really matter if it needs to extrapolate. Whether or not you originally called the variable "mungus", a sufficiently powerful LLM could infer the best possible name for that variable to make it understandable to humans. Doing this across the entire codebase, it could produce source code that is actually more readable than the original.
Of course we're a long way off from this, but there's no reason it couldn't be done in theory.
Please just look at my other comments, because I find myself repeating the same thing over and over. My comment went over a lot of people's heads, and they rushed to defensiveness instead of understanding the simple, unarguable fact of life I laid out.
Sorry, I didn't mean to come across as disagreeing with you, I just wanted to point out what I thought was an interesting fact. Yes, the original source code is unrecoverable.
I tend to be username-blind and thought you were the person my comment replied to. Reading your comment on its own, I have no issue with it and I apologize for mistaking you for them!
This is getting downvoted but I would also recommend it. It's much faster than reading papers and, unless you are doing cutting edge research, LLMs will be able to accurately explain everything you need to know for common algorithms like this.
It's getting downvoted because it is very bad advice, one that can be refuted by already known facts. Your comment is even worse in this regard and is very misleading: LLMs are definitely not going to "accurately explain everything you need to know". They are not magical tools that "know everything"; they are statistical parrots that infer the most likely sequence of tokens, which results in inaccurate responses often enough. There are already a lot of incompetent folks relying blindly on these unreliable tools, so please do not introduce more AI-slop-based thinking into the world ;)
You left out the "for common algorithms like this" part of my comment. None of what you said applies to learning simple, well-established algorithms for software development. If it's history, biology, economics, etc., then sure, be wary of LLM inaccuracies, but an algorithm is not something you can get wrong.
I don't personally know much about DHTs so I'll just use sorting as an example:
If an LLM explains how a sorting algorithm works, and it explains why it fulfills certain properties about time complexity, stability, parallelizability, etc., and backs those claims up with example code and mathematical derivations, then you can verify that you understand it by working through the logic yourself and implementing the code. If the LLM made a mistake in its explanation, then you won't be able to understand it, because it can't possibly make sense; the logic won't work out.
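For example, here's a minimal sketch of what "working through it yourself" could look like (the merge sort and the checks below are mine, just for illustration, not anything an LLM produced):

    import random

    def merge_sort(items, key=lambda x: x):
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left, right = merge_sort(items[:mid], key), merge_sort(items[mid:], key)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            # "<=" keeps equal keys in their original order, which is what makes it stable
            if key(left[i]) <= key(right[j]):
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    # correctness: agrees with the built-in (trusted) sort on random input
    data = [random.randint(0, 100) for _ in range(1000)]
    assert merge_sort(data) == sorted(data)

    # stability: equal keys keep their original relative order
    pairs = [(random.randint(0, 5), i) for i in range(1000)]   # (key, original position)
    assert merge_sort(pairs, key=lambda p: p[0]) == sorted(pairs, key=lambda p: p[0])

If the explanation had claimed, say, stability for an algorithm that isn't stable, a check like the last one would fail and you'd notice that the logic doesn't work out.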
Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I meant it also for the (unwittingly) left-out part of your comment. Firstly, by saying this parrot will explain "everything that you need to know ..." you're pushing your own standards onto everyone else. Maybe the OP really wants to understand it deeply, learn about edge cases, and understand how it really works. I don't think I would rely on a statistical parrot (yes, that's really how they work, only on a large scale) to teach me stuff like that. At best, they are to be used with guardrails as some kind of a personal version of "Rain Man", with the exception that Rain Man was not hallucinating when counting cards :)
> Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I'm pretty sure that's exactly how they work.
Depending on the quality of the LLM and the complexity of the thing you're asking about, good luck fact-checking its output. It is about the same effort as finding direct sources and verified documentation or resources written by humans.
LLMs generate human-like answers by applying statistics and other techniques to a huge corpus. They do hallucinate, but what is less obvious is that a "correct" LLM output is still a hallucination. It just happens to be a slightly useful hallucination that isn't full of BS.
Because the LLM takes in inconsistent input and always produces inconsistent output, you *will* have to fact-check everything it says, which makes it useless for automated reasoning or explanations and a shiny turd in most respects.
The useful things LLMs are reported to do were an emergent effect, found by accident by natural-language engineers trying to build chat bots. LLMs are not sentient and have no idea whether their output is good or bad.
Exactly this. The thing which irritates and worries me is that I notice a lot of junior folks trying to apply these machines to open-ended problems the machines don't have the context for. The lawsuits with made-up cited cases are just the beginning, I'm afraid; we're in for a lot more slop endangering our services and tools.
Exactly. Nothing wrong with LLMs, but we're trying to have a human conversation here – which would be impossible if people had all their conversations with LLMs instead.
Standard RAID configurations can only handle two failures, but there are libraries and filesystems that allow arbitrarily high redundancy.
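The usual trick behind "arbitrarily high redundancy" is Reed-Solomon-style erasure coding: split the data into k shards, add m parity shards, and any k of the k+m surviving shards are enough to reconstruct. A toy Python sketch over GF(257) (illustration only; real codecs work over GF(256) and are far more efficient):

    # Toy erasure code: k data shards + parity shards; any k shards recover the data.
    P = 257  # prime field just large enough to hold byte values 0..255

    def lagrange_eval(points, x):
        # Evaluate the unique polynomial through `points` at x, mod P.
        total = 0
        for j, (xj, yj) in enumerate(points):
            num, den = 1, 1
            for m, (xm, _) in enumerate(points):
                if m != j:
                    num = num * (x - xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, -1, P)) % P
        return total

    def encode(data, n):
        # data: k values in 0..255; returns n shards as (index, value) pairs
        k = len(data)
        base = list(zip(range(1, k + 1), data))                # data shards at x = 1..k
        parity = [(x, lagrange_eval(base, x)) for x in range(k + 1, n + 1)]
        return base + parity

    def decode(shards, k):
        # Recover the original k data values from any k surviving shards.
        pts = shards[:k]
        return [lagrange_eval(pts, x) for x in range(1, k + 1)]

    # 4 data shards + 3 parity shards tolerates the loss of any 3 shards:
    shards = encode([10, 20, 30, 40], n=7)
    survivors = [shards[i] for i in (0, 2, 5, 6)]              # shards 1, 3 and 4 lost
    assert decode(survivors, k=4) == [10, 20, 30, 40]

It's the same family of codes behind RAID 6 and the erasure coding in distributed object stores, just with the number of parity shards as a free parameter instead of being fixed at one or two.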