This is a discussion that happens in a lot of different fields of practice. In medicine, you might have the argument over to what extent chiropractors or naturopaths are physicians; /r/justrolledintotheshop, one of my guilty pleasures, has recurring posts about embarrassing things that shadetrees have done. Right now they're doing the whole, "hah, car mechanics? Try being a boat mechanic! Hah, boat mechanic? Try being a diesel mechanic! Hah, diesel mechanic?..." It's all pretty light-hearted, one of the reasons that I like that sub, but still, there are similarities.
I get where they're coming from. I've put some effort into teaching programming to other people too: young kids, guys with electronics backgrounds but not software, even a homeless kid. I look at programming as a skill, like dancing, martial arts, or swimming, that can be practiced and improved by anyone who wants to put the time into it.
But then sometimes I find myself on the other side of the fence, where a project is being made a lot more difficult by someone because, "I know Wordpress, so I'll just handle this complicated not-Wordpress-related hosting issue myself." Or, "my system was acting strange recently, and I saw this thing about hackers on NCIS, so I..."
So that's where I start to have a problem with thinking of spreadsheets as programming. Technically, Jacques is right, it absolutely is. People do hilariously incredible things with Excel -- even flight simulators! (https://www.youtube.com/watch?v=AmlqgQidXtk) But it's also not the same as developing an API or wrangling some other more advanced project, and when the people you're working with understand programming to be as difficult as a spreadsheet, it can make for some hopeless no-win situations.
At least in martial arts, if you decide to spar with somebody that's a lot more advanced than you, you'll learn your mistake pretty quickly. If you practice swimming in a backyard pool and then decide to have a go at the ocean, you'll have a pretty sobering experience if you're lucky. But in software, it's possible to muddle along for quite a long time, making a really expensive mess, before you realize that you're in over your head. (Which probably most of us have done at some point.)
Hell, look at this very thread on HN. There are people further upthread essentially saying, "Whatever, I like it because it's convenient."
I have adopted a passive stance on privacy and security: I stay up-to-date on news in this area, I choose for myself products and systems that minimally increase my risk, and I will answer questions from clients or other people. But, I won't evangelize it. Most people really just don't care all that much.
> Hell, look at this very thread on HN. There are people further upthread essentially saying, "Whatever, I like it because it's convenient."
Is that so ridiculous a concept? People routinely trade privacy for convenience.
Sending my location to Google through their maps is ridiculously convenient. Getting around using public transportation, especially in a city I'm unfamiliar with, would be a pretty awful experience without it.
While it's a small convenience, Gmail parsing my airline confirmation emails into an easy-to-read format is pretty cool, and I like that it's done. To do this (and have a spam filter), they must be parsing my private email in some capacity.
I've personally never been a fan of digital personal assistants. I've only used Google Now and found it more annoying than effective. But I can certainly understand why getting up-to-date traffic information when you're about to drive home from work would be a really useful thing to have. To do that, it has to learn your daily habits.
Convenience and privacy are almost always at odds with each other. It's a give and take, so ideally I should be getting more convenience for whatever privacy I'm giving up. That may not be the case here, and I'm not saying where your personal line should be, but don't assume people are ignorant just because they're choosing convenience over privacy. (Not saying you are personally, but others in this thread are.)
Yeah, I don't really think of people with different priorities as "ignorant". I get that there are tradeoffs; it's the same deal with network security.
I'm a little ... frustrated, disappointed, bothered? ... though at the number of people that don't seem to consider at all the consequences of where information systems are headed. It's one thing to look at the benefits and the consequences and say, "OK, I'm willing to trade information on my position in exchange for realtime updated traffic flows and generally perfect maps and directions and nearby points of interest." I totally get that. It's not a choice I've made -- I remember getting around before GPS was everywhere -- but I can shrug and empathize and understand the decision.
But, "meh, I don't really care about this news, I just want more convenience" ... that bothers me a bit.
Unfortunately, I really can't think of any way to convince anyone else they should be bothered. I could cite historical cases where that hasn't worked out so well, I could dream up fictional scenarios where it might not work out so well, I could point to more recent events where things like identity theft are costing some people years of their life to sort out. But none of that really makes much of an impact. I don't think anybody who isn't bothered now will become bothered until it affects them directly.
Why do convenience and privacy have to contradict each other?
Parsing airline confirmations should be possible to do offline. Preferably via some standard data format like iCal, but even if airlines can't agree on such a format, it shouldn't be difficult to compile a scraper that can be run offline, with an updater that ships all the various formats.
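As a minimal sketch of what that offline parsing could look like: the patterns and field names below are invented for illustration, and a real tool would ship a whole collection of them, one per airline's email template.

```python
import re

# Hypothetical patterns for one airline's confirmation format;
# an updater would refresh this collection as templates change.
FLIGHT_RE = re.compile(r"Flight:\s*([A-Z]{2}\d{1,4})")
DEPARTS_RE = re.compile(r"Departs:\s*(\d{4}-\d{2}-\d{2} \d{2}:\d{2})")

def parse_confirmation(body: str) -> dict:
    """Extract flight number and departure time from an email body,
    entirely locally -- nothing leaves the machine."""
    flight = FLIGHT_RE.search(body)
    departs = DEPARTS_RE.search(body)
    return {
        "flight": flight.group(1) if flight else None,
        "departs": departs.group(1) if departs else None,
    }

email = "Thanks for booking!\nFlight: UA123\nDeparts: 2016-03-04 17:45\n"
print(parse_confirmation(email))
```

The point isn't that regexes are the right tool, just that none of this requires a server to read your mail.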
Sending your location should not be required to display a dot on a map of where you are. Even navigation can be done offline to a certain extent; OsmAnd does it, and that's what TomTom and in-car systems have been doing for years.
>>Is that so ridiculous a concept? People routinely trade privacy for convenience. Sending my location to Google through their maps is ridiculously convenient.
If you cannot tell the difference between you sending your location to Google when you need to and half the people in the world sending ALL their information to Microsoft ALL the time BY DEFAULT, I don't know what to tell you.
Hmm. Amazon has a couple of original series with A-list talent, but they don't seem to be getting the kind of excitement that Netflix has been. This kinda looks like Amazon taking aim at Netflix, after already dethroning bookstores and electronics suppliers, dominating cloud hosting, and getting into the services market too.
I've been really happy with Amazon so far but I'm not a big fan of their fingers being in so many different markets.
Oh! This is one of my favorite questions about scientific progress.
People seem to regard scientific progress as this kind of wave function, where some theory comes along and then some other theory or discovery disproves the previous one and so on and so forth.
But if you want to think of scientific progress that way, then you have to think of it as a damped sine wave (https://en.wikipedia.org/wiki/Damped_sine_wave) -- that is, discoveries and theories tend to refine previous discoveries and theories, and scientific progress over the millennia has been rapidly zeroing in on the truths of our reality.
What is left at this point is mostly things at very large scales -- cosmological stuff like dark matter, dark energy -- and very small scales -- stuff like sub-subatomic particles and the Higgs boson and the nature of mass and so on -- and very complex things like biology and climate science.
What isn't left at this point is a revision of the laws of conservation. It is exceedingly unlikely that anything will be discovered at any point in the future which will do anything other than add perhaps the very smallest of exceptions to those laws, and even then, I doubt very much that will happen.
So, essentially, the age of physics discoveries in garages is mostly over. There will be many more discoveries in physics, but they now require the collective efforts of entire nations.
I'm sorry, but your reasoning completely escapes me. The only things that are left are very big, and very small? In addition to this being an incredibly vague catchall (which misses all the possible discoveries of 'black swans', that is, phenomena and occurrences we have never seen), your statement more importantly misses the fact that humans don't even understand what something as common and important as light (or electromagnetic radiation in general) is. Is light really big or really small? We've certainly characterized light's behavior (to some degree), though we still have basically no understanding of what it actually is (though I may be the only one who finds the circular definitions of electromagnetic fields unsatisfying). The same is true of things like matter and gravity. Once we have better understandings of what these are, I may grant that further fundamental insights will be difficult to come by; but we're not there yet.
You might be correct in saying that humans have characterized particles, objects, and phenomena that are closest to their scale and location, but we have done a poor job of understanding their fundamental nature. We also don't understand and have not even characterized things we've never observed (the 'black swan' problem).
One could say the same of a stone, a plant, a planet, or a star, yet we do not simply characterize any of these; we endeavor to understand their origins and constituent parts. In the case of the star, we could have analysed its spectrum, size, trajectory (through space), and surface characteristics, then called it a day; yet we continually try to understand the elements which make up the star, what their state of matter is, how they are reacting, and many other aspects of the star. I disagree with you when you say (of light) that "all we can do is to characterize its behavior".
Well, maybe that's not all we can do. Maybe the abstraction of treating light and the rest of the Universe somewhat separately will break (of course we describe the coupling too). Maybe the abstraction of treating light as a non-divisible entity in the Universe will break. But that would only create other axioms and/or other non-divisible entities; at what point would you be satisfied? It could very well be that you can't divide light any further, and I would be satisfied with that if we can describe its behavior fully. If one says that light is a broken abstraction and you can describe the Universe using only "strings", then you would ask, "Yeah, but what are strings?"
I'm not saying that asking what light is is not an important question. But there is a possibility that there is simply no answer and yet we can have fully working physics. At least nobody will question your theories just because you don't say what exactly your elementary particles are. They are elementary, we just have them; maybe light is not one of them.
I think that your last point gets to the heart of the question, and you are right that we may never achieve a deeper understanding of light, matter, or gravity; but that does not mean that a deeper understanding is not desirable, or that we should stop asking the questions. Further, we will never know for certain when we have reached the bounds of our capacity to understand, so I will continue to ask, "Yeah, but what are strings?" I always like to keep in mind that 'atom' means 'indivisible' in Greek, and yet we found that bound of knowledge to be surmountable (with debatable consequences).
I would also like to be clear that I am not criticizing anyone for making insufficient progress; it is just that there is a lot out there left for us to understand, and not all of it is marginal stuff that's "very big or very small".
> we still have basically no understanding of what [light] actually is.
Finding the true nature of things, and whether such a nature even exists, is the domain of philosophy.
Science builds models. And we do have at this point extremely good models of electromagnetism which can correctly predict results of all experiments done so far.
Whether these models reveal some fundamental truth or not is a matter of opinion (or perhaps religion). It has nothing to do with refining the models or finding experiments that might prove them wrong.
I think you misunderstand my point. Model-building is only one aspect of science, and happens to be the most prevalent in some (though not all) of today's mathematically and statistically based studies of physics (which are often criticized as an exercise in curve-fitting). One example is how string theorists are making a valiant (though perhaps errant) effort to understand matter; they are not simply modelling well-characterized behaviors. There are many physicists and other scientists who are continually trying to understand their subject matter (pun intentional) at a deeper level, such as those who discovered and described DNA.
I am not interested in whether these scientists reveal "some fundamental truth or not", but we cannot say that we understand all there is to know about electromagnetic phenomena because we can make a few simple predictions. It would be tantamount to saying that we understand the sun because we know where it is relative to us, and how bright it is, and can predict both of these parameters for a while. Understanding that it is a star, how it was formed, how it will cease to be, that it is made up of a number of elements, and converts large amounts of matter to energy are all very important discoveries, and we must continue to seek this kind of advance in the frontiers of our understanding in electromagnetic phenomena, matter, gravity, and many other areas.
So, I originally argued the point that progress in the physical sciences generally doesn't involve utterly canceling out previous discoveries. The link to Asimov's article elsethread illustrates this better than I could hope to. Relativity for instance did not find that Newtonian mechanics was wrong, it simply revised it under pathological cases. The Bohr model of the atom isn't completely wrong, and it's still useful in chemistry, it's just that as physicists have learned more they've found details where it's not completely right either. This kind of constant revision is a theme throughout the history of scientific progress.
So that leaves me a little bit stumped at what point you're trying to make here. Are you disagreeing with that view of science? Are you trying to say, "well, that's how it's been for hundreds of years, but at some point in the future I believe we'll all discover that physics is completely and utterly and embarrassingly wrong in every way"?
I've gone back and reread your reply to me at least half a dozen times. You seem to be disagreeing with me, but none of the examples you brought up actually disagreed with me. At best, I chose some poor metaphors and that led you to misunderstand me, but then I still can't figure out what it is that you're actually trying to say.
"we cannot say that we understand all there is to know about electromagnetic phenomena because we can make a few simple predictions"
This is quite an understatement. QED, which is the underlying theory of EM, is the most accurate scientific theory ever. Full stop. Its theoretical predictions match experimentally measured observables out to ten decimal places. By your standards we don't really understand any physical phenomena. To bring up your analogy with the sun: our understanding of QED is like saying that we know the exact chemical composition of the sun to an accuracy of one part in ten billion. I think we would be justified in stating that we understand the sun in those circumstances.
On a related note, QED does provide a lot of context for what "light" (photons) is (are). It's the gauge boson (read: mediating particle) of the electromagnetic force. In my mind it doesn't get much more elegant than that.
I guess you can say that arriving at Maxwell's equations or QED is an exercise in curve-fitting in some sense. In the end, it is the simplest set of rules that matches a set of observations.
Similarly, the current model of the sun is the simplest model that explains some observations (neutrino flux, the fact that some stars seem similar spectroscopically, our knowledge from other branches of science, etc.).
I see no fundamental difference between these two.
Yeah, that specific article was on my mind when I wrote my comment. But I didn't say there were no more discoveries left to be made; I said they weren't happening in people's garages anymore, which is true for the most part.
Laymen tend to underestimate the very vast body of research which supports current physical models. "New physics" would imply that something has been fundamentally wrong with physics for a long time, and that is unlikely.
I have a few little trophies I go back to every once in a while when I'm feeling like a crappy programmer.
- I worked out, on pen and paper, sorting networks on my own a few years before the Wikipedia article on them existed. I was looking for shortcuts in a Quicksort implementation. I hadn't read Art of Computer Programming yet, which is probably the only other place I would've been likely to read about it. It hadn't been covered in any of the other programming literature that I was devouring at the time.
- I wrote a variable interpolator in COBOL. COBOL has no string operators or anything resembling a string data type. This one was tricky. I was working as a programmer/operator at a school district at the time and the central hub of their IT was a Unisys mainframe that ran COBOL and WFL. There weren't any punch cards anymore, but everything ran as if there were; for any given job to run, say, report cards, you had to go into the WFL job and edit a two-digit school code in half a dozen places, in "digital punch cards", which would then be fed one after the other into COBOL programs. This was error-prone and I wanted a way to define a couple of variables at the top of the job file and then have everything work after that.
- I worked for a BigCo that used Remedy for its internal support systems. There were some latent training issues in the internal support department and support requests kept getting modified by unknown people, which would cause the requests to get mishandled and would irritate various other departments. I found a way to sneak some code into the Remedy forms system and I cobbled together a very rudimentary communications protocol between several forms so that all changes to any form got logged to another form, along with the user's id. Remedy had no loop logic at the time. That actually made it to a Remedy developer's group mailing list once and I was a big fish in a very tiny little puddle for a day.
- I reverse-engineered portions of the .dbf format that FoxPro uses, and wrote software that could convert .dbf files into MySQL tables. The date format was tricky. It was an 8-byte field where the first four bytes were a little-endian integer of the Julian date (so Oct. 15, 1582 = 2299161), and the next four bytes were the little-endian milliseconds since midnight. This is not documented anywhere.
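For the curious, a decoder for the layout described above is only a few lines; this is a sketch of the described format (4-byte little-endian Julian day number, then 4-byte little-endian milliseconds since midnight), not FoxPro's actual source:

```python
import struct
from datetime import date, time, timedelta

JDN_UNIX_EPOCH = 2440588  # Julian day number of 1970-01-01

def decode_foxpro_datetime(field: bytes):
    """Decode an 8-byte DateTime field: little-endian Julian day,
    then little-endian milliseconds since midnight."""
    jdn, ms = struct.unpack("<ii", field)
    d = date(1970, 1, 1) + timedelta(days=jdn - JDN_UNIX_EPOCH)
    seconds, millis = divmod(ms, 1000)
    minutes, sec = divmod(seconds, 60)
    hours, minute = divmod(minutes, 60)
    return d, time(hours, minute, sec, millis * 1000)

# Julian day 2299161 is Oct. 15, 1582, the start of the Gregorian calendar
raw = struct.pack("<ii", 2299161, 0)
print(decode_foxpro_datetime(raw))  # (datetime.date(1582, 10, 15), datetime.time(0, 0))
```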
Those are some of my favorites anyway. 30 years of programming, there's been some fun stuff along the way.
unless you have a damn good reason not to (--checksum is essential to prevent corruption/malicious modification, without it you are implicitly assuming the version on the remote machine is exactly how you left it: that assumption is why Linus built git around shasum in the first place).
rsync is even easier than SSHing to git pull, or opening up a pushable repo on a server. For once the simple approach is clearly better!
> (--checksum is essential to prevent corruption/malicious modification, without it you are implicitly assuming the version on the remote machine is exactly how you left it: that assumption is why Linus built git around shasum in the first place)
That's not true (though not completely wrong).
rsync is stateless. It does not assume the version on the remote machine is "exactly how you left it"; rather, it compares file size and file modification time, and if either changed, it will do a transfer -- an efficient delta transfer, usually -- which might be as little as 6 bytes if the contents are exactly the same.
--checksum makes it ignore file size or modification time, and compare the file checksum in order to decide if it's time for a transfer (delta or not).
A malicious actor, or bad memory chips, might change your file's contents, but keep the file size and time/date the same. In that case, --checksum will overwrite that file with your source version, and a --no-checksum wouldn't. So it's not bad advice. Whether the cost in disk activity is worth it depends on your threat model, data size, and disk activity costs. (Though, if corruption is due to bad memory, this is the least of your problems.)
However, a corruption because of a program error / incompetent edit to the file is very unlikely to leave both the size and modification date intact - and a standard rsync will figure that out as well.
One approach that I've found works well (YMMV, etc.) is deploying with Ansible. It has a Git module built in (so it's almost 0 work to configure), and you can set up SSH agent forwarding so you never have to put keys on the server that have access to your source control, nor manually SSH in and pull.
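A minimal sketch of such a task, for reference. The repo URL and paths are placeholders, and agent forwarding is assumed to be enabled separately (e.g. `ssh_args = -o ForwardAgent=yes` in `ansible.cfg`):

```yaml
# Hypothetical playbook task: clone or update the app on the target host.
- name: Deploy application source
  git:
    repo: git@github.com:example/app.git
    dest: /srv/app
    version: main
```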
Amen. Don't use git to deploy code. Use it to version code. Use a script on your CI to compile/test/minify/convert your code into a deployable tarball and stick that somewhere highly durable like S3, Swift, or your own company filestore.
Remember, git providers go down (e.g., a DDoS on GitHub or an internal failure at Bitbucket). Don't depend on your git provider being up to deploy your code, or you'll look like a fool next time a DDoS at GH coincides with a deployment.
I only just recently had to figure that out. I opted for setting up a .kdb KeePass file in a private git repo and giving everyone ("everyone" = myself + one other) access to that. I'm pretty sure that's not a very good solution.