Downvoting on Hacker News literally silences the posters. Not only are their comments hidden, they can't post more comments for some time (say, to respond to the people replying to them).
Downvotes are supposed to be a tool for filtering out spam, off-topic comments, and links to malware and scams. Instead, they're used to cancel people for having an opinion that differs from the group's.
In this case, apparently, the group disagrees about what the function of melanin is.
If you look for racism, you'll find it, even when it's not there. Skin color is a function of two things:
1. Amount of environmental sunlight (latitude).
2. Amount of circumstantial exposure to sunlight (lifestyle).
If the assumption is that early humans lived outdoors more than the modern human, who in practice sits in caves all day, then yes, they'll have had darker skin overall.
Is it racist to just state basic facts, or should we brainwash everything to be uniform and average across time and space?
The essay I read on skin color says that dietary sources of vitamin D and folate play a big role, as do prehistoric population migrations.
Vitamin D has dietary sources and is also produced via sun exposure. Folate has dietary sources too, but its levels are reduced by sunlight exposure. So the two are in conflict with respect to sun exposure.
Because of the mild climate, you can grow grains in northern Europe. That diet is low in vitamin D and high in folate. As a result, Northern Europeans rapidly lost the ability to produce melanin over the last 5,000 years.
On the other hand, there are no black-skinned Native Americans, because the founding population had already lost some of the genes needed.
Clockrate is not a bottleneck on how fast your computer is. It's just a synchronization primitive.
Think about it like the tempo of a song. The entire orchestra needs to play in sync with the tempo, but how many notes you play relative to the tempo is still up to each player. You can play multiple notes per "tempo tick".
The point is that you don't. A clock tick is not the smallest unit of operation. It's the smallest unit of synchronization. A lot of work can be done in-between synchronization points.
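To put a number on the tempo analogy: superscalar CPUs routinely retire more than one instruction per clock tick. A minimal sketch with made-up counter values (not a real measurement) shows the "instructions per cycle" arithmetic:

```python
# Hypothetical counter values for some code region -- illustrative only,
# not measurements from a real CPU.
cycles = 1_000_000          # clock ticks elapsed (the "tempo")
instructions = 3_200_000    # instructions retired (the "notes played")

# IPC above 1 means multiple "notes" per "tempo tick": the clock
# synchronizes the work, it doesn't cap it at one operation per tick.
ipc = instructions / cycles
print(ipc)  # 3.2
```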
Programming is already conversational. I'm telling a computer what I want, it does it, I see what it does, and I elaborate or correct myself where necessary. Repeat endlessly, until the product exists.
That's kinda the case with Copilot, or I'd just type:
// Unify relativity with quantum mechanics.
I'm trying to say Copilot is not a fundamental shift to programming. It's what programming already is, and we already have IDEs assisting us with refactoring and second-guessing our intent with autocomplete (which in some IDEs is powered by AI now, as well).
Programming is like working in a team. You try to communicate with your teammates, and then everyone does what they can according to their skills, and how they understood the task.
The shift to higher-level communication in programming is inevitable; whether it will look like Copilot, I don't know.
> And these are the hand picked examples. This product seems like it needs some more thought.
Everyone's self-preservation instincts kicking in to attack Copilot is kinda amusing to watch.
Copilot is not supposed to produce excellent code. It's not even supposed to produce final code, period. It produces suggestions to speed you up, and it's on you to weed out stupid shit, which is INEVITABLE.
As a side note, Excel also uses floats for currency, so best practice and real world have a huge gap in-between as usual.
So how do you know whether the code that Copilot regurgitates is an almost 1:1 verbatim copy of some GPL'ed code or not?
Because if you don't realize this, you might be introducing GPL'ed code into your proprietary code base, and that might end up forcing you to distribute all of the other code in that code base as GPL'ed code as well.
Like, I get that Copilot is really cool, and that software engineers like to use the latest and bestest, but even if the code produced by Copilot is "functionally" correct, it might still be a catastrophic error to use it in your code base due to licenses.
This issue looks solvable. Train 2 copilots, one using only BSD-like licensed software, and one using also GPL'ed code, and let users choose, and/or warn when the snippet has been "heavily inspired" by GPL'ed code.
Or maybe just train an adversarial neural network to detect GPL'ed code, and use it to warn on snippets, or...
Verbatim isn't the problem / solution. If you take a GPL'ed library and rename all symbols and variables, the output is still a GPL'ed library.
Just seeing GPL'ed code spat out by Copilot and writing different code "inspired" by it can result in GPL'ed code. That's why "clean rooms" exist.
Copilot is going to make for a very interesting law case to follow, because until somebody sues and courts decide, nobody will have a definitive answer on whether it is safe to use or not.
Stack Overflow content is licensed under CC-BY-SA. Terms [1]:
* Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
* ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
In over a decade of software engineering, I've seen many reuses of Stack Overflow content, occasionally with links to underlying answers. All Stack Overflow content use I've seen would clearly fail the legal terms set out by the license.
I suspect Copilot usage will similarly fail a stringent interpretation of underlying licenses, and will similarly face essentially no enforcement.
Have you met programmers? Even those who care about quality are often under a lot of pressure to produce. Things slip through. Before, it was verbatim copies from Stack Overflow. Now it'll be using Copilot code as-is.
Not the parent, but people really like to get riled up on the same topics, over and over again, which quickly monopolizes and derails all conversation. Facebook bad, UIs suck, etc. We can now add to the list, "AI will never reduce demand for software engineering".
Copilot is definitely no replacement for anything except copying from Stack Overflow for juniors.
But in the long run, AI is basically us creating our own replacement. As a species. We don't realize it yet. It'll be really funny in retrospect. Too bad I probably won't be alive to see it.
It's true, I probably wouldn't have laughed quite as loudly if there weren't a chorus of smug economists telling us that tools like this are gonna put me out of a job.
Business types hate dealing with programmers, that's a fact. And these "we'll replace programmers" claims recur with a certain regularity.
Ruby on Rails was advertised as so simple, startup founders who can't program were making their entire products in it in a few days, with zero experience. As if.
If I want random garbage in my codebase that I have to fix anyway, I might as well hire an underpaid intern/junior.
It's easier to write correct code than to fix buggy code. For the former you have to understand the problem, for the latter you have to understand the problem, and a slightly off interpretation of it.
> Everyone's self-preservation instincts kicking in to attack Copilot is kinda amusing to watch
Nobody is threatened by this, assuredly. As with IDEs giving us autocomplete, duplication detection, etc., this can only be helpful. There is an infinite amount of code to write for the foreseeable future, so it would be great if Copilot had more utility.
Excel rounds doubles to 15 digits for display and comparison. The exact precision of a double is roughly 15.95 decimal digits (log10 of 2^53), and those remaining fractional digits cause some of those examples floating (heh) around.
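The effect is easy to reproduce outside Excel. A minimal Python sketch (Python's floats are the same IEEE doubles) of how rounding to 15 significant digits hides the raw double's tail:

```python
# The classic float surprise, and why 15-digit rounding hides it.
x = 0.1 + 0.2
print(x == 0.3)           # False -- the raw doubles differ
print(format(x, '.17g'))  # 0.30000000000000004 -- full precision
print(format(x, '.15g'))  # 0.3 -- rounded to 15 digits, surprise gone
```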
My suggestion was a way to comment or flag, not to kill the product. These were particularly notable to me because someone hand-picked these 4 to be the front page examples of what a good product it was.
I agree with you. This is basically similar to autocomplete on cellphone keyboard (useful because typing is hard on cellphone), but for programming (useful because what we type tends to involve more memorization than prose).
What customers hate is tier 1 support acting as a brick wall between them and the people with the solution to their problem. Or at least that's how they perceive it.
While I don't shout at tech support, since I know what the script is, you'd better believe my blood is quietly boiling as I reboot various devices that have nothing to do with the issue on their instruction, just so I can get an appointment with the technicians who can fix my real issue.
The same thing occurs with GP doctors and basically every other tiered system, which inevitably organizes itself to stall you so people don't overload the higher tiers.
That's a different issue, though. We had problems that literally nobody knew the reason for, and they required formatting the hard drive and reinstalling Windows to fix. They did eventually figure things out, but that was over a year later, IIRC.
In the meantime, the techs would get yelled at for not knowing what was happening. I wasn't surprised when they invented reasons... and they may even have believed them.
As for the rebooting... I've seen too many weird things to deny them the reboots. And a few times it has actually fixed my problem, even though I believed it impossible. (And had the same happen to customers I was supporting.) So I know it's frustrating, but it's necessary in a surprising number of cases.
Also, techs were often the worst customers. They thought they knew everything, even if it was just Dunning-Kruger. Forcing them to reboot was painful, but it actually worked more often than with non-techs, because the non-techs would blindly try things like that after having been told once in the past. There were plenty of times I fell back on "I can't send this up until you reboot it", because I knew there was a good chance the reboot would actually work if the customer was trying to avoid it.
Agreed. Customer service folks feel like they need to give an answer (totally understandable), and... customer service isn't given many tools, so they just make do with what they think might be true.
Most of us here are programmers; we're just as used to hex.
Also, technically, we don't have that many fingers. We have one more finger. We don't have a numeric digit for ten, right? They go zero to nine. But our fingers go zero to ten.
If our numeric base matched our fingers, we should've used base eleven. Not many people think this through :)
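A trivial sketch of the counting argument above: ten fingers can show eleven distinct counts, so one symbol per finger-state would be a base-eleven digit set.

```python
# Ten fingers can display every count from 0 (fist) through 10 (all up),
# which is eleven distinct states -- one more than the ten digits 0-9.
finger_states = list(range(0, 10 + 1))
print(len(finger_states))  # 11
```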
I think the real breakthrough is having "order of magnitude" in numbers. So indeed the Roman Numerals suck and probably wouldn't last regardless.
Right. It lets you see the bits more easily: 0-F is a good representation of 4 bits.
Say I were to name a random hex value like #$9C right now; it would take me a few seconds to convert that to decimal in my head though... 156 took me a few seconds to sort out. I don't have to think about what 156 means in decimal because I just know what it is.
I'm not quite there, but close to being bilingual (binumeral?) between decimal and hex, and I think it's all about developing better intuition for each digit and their relationships.
For instance, you say 0x9C... that's just over half (0x80) of 0x100, and close to 2/3rds (0xAA). Given that in embedded we're often using a byte to represent a quantity, that gives enough of a feel.
I should practice multiplying hex by hand, I reckon that would assist in getting there.
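The byte-as-fraction reading above checks out in any REPL; a quick Python sketch:

```python
# Reading a byte as a fraction of 0x100 (256) gives the intuition:
print(0x9C)          # 156
print(0x9C / 0x100)  # 0.609375  -- just over half
print(0x80 / 0x100)  # 0.5       -- the halfway point
print(0xAA / 0x100)  # 0.6640625 -- close to 2/3
```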
Replying to sibling since we've maxed out comment depth...
> It was $62 degrees Fahrenheit yesterday. I can't just go displaying that in a program. Nor is it meaningful to me without a decimal conversion.
It's just as meaningless to me even if you do the conversion to base10 for display... I don't do deg F intuitively and would have to convert to Celsius in my head. It's all about what we are familiar with.
Right, but the world runs on base 10, is all I'm trying to say. It's needlessly difficult to use anything else (aside from hex or binary in very specific situations). In some college sophomore philosophy class you could argue for base 27, but it doesn't make your system usable or intelligible.
> It's needlessly difficult to use anything else (aside from hex or binary in very specific situations)
Totally agree. I'm a programmer, so I do need to know those, and as an embedded developer, even more. The average person not so much. I thought that's what this particular thread was all about.
Sure, I can work it out, and I do use F when talking to friends from the USA. My only point was that I deal with hex numbers all day, so they're more intuitive to me than Fahrenheit is.
I think that under certain circumstances the reply link doesn't show past a certain depth, but you can still (unless the comment is dead) click on the time link to get the page for the comment and reply there.
It's interesting that octal used to be popular and isn't any more. I'm not familiar with how that culture shift happened, but I remember learning C in the 80s and thinking it was odd that it supported octal when I'd never seen it anywhere else.
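As an aside (my addition, not from the thread above): one place octal still shows up routinely is Unix permission bits, where each 3-bit owner/group/other field maps to exactly one octal digit. A quick Python illustration:

```python
# Unix permissions group into three bits each (owner/group/other),
# so one octal digit per group reads naturally: 0o755 is rwxr-xr-x.
mode = 0o755
print(mode)                # 493 in decimal -- the octal form is clearer
print(oct(0b111_101_101))  # 0o755 -- three 3-bit groups, one digit each
```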
I think their point is that we use our 10 fingers to count to a value of 10. We can use our 10 fingers to represent 0 (no fingers) through 10 (all 10 fingers). This is essentially base 11 if you try to assign a specific digit to each finger.
While I agree with that viewpoint I think it's missing the point. As humans with 10 fingers it's easy for us to group things into increments of 10, so base 10 comes naturally. Think about how you count a quantity over 10: once you run out of fingers you mark down (or remember) that you've already counted one quantity of 10, now you're counting the next quantity of 10, etc...
It's more like a shifted base 10 where we represent digits 1-10 instead of 0-9.
Everyone is thinking about a "shifted" base 10, yes.
But every base starts with 0, there's no such thing as "shifted base" because then you literally can't represent 0.
Also "zero fingers" is still a thing that exists in this shifted base 10. So it remains base 11.
This is like the classic "0-based indexing" vs. "1-based indexing" dilemma. The "first" thing is represented by 1, we think.
But the "first" year of our life, we're zero years old. The "first" hour after midnight is 0 o'clock. Building your "first" million as a business is the period before you have 1 million. And so on.
A base 10 with symbols only for 1 through 10, where zero is represented by the empty string, is a bijective base-10 numeration. The columns in Excel are bijective base 26.
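A minimal sketch of the Excel-column scheme mentioned above (`to_excel_col` is my name for it, not a real Excel API):

```python
def to_excel_col(n: int) -> str:
    """Convert a 1-based column number to Excel letters (bijective base 26).

    Digits run 1..26 (A..Z) with no zero digit -- the 'no symbol for
    zero' property discussed above; zero is the empty string.
    """
    s = ""
    while n > 0:
        # Subtract 1 before dividing so remainders land in 0..25 (A..Z).
        n, r = divmod(n - 1, 26)
        s = chr(ord("A") + r) + s
    return s

print(to_excel_col(1), to_excel_col(26), to_excel_col(27), to_excel_col(703))
# A Z AA AAA
```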
https://news.ycombinator.com/item?id=27722154
What matters is just grabbing some pitchforks and cancelling someone.