That is apples and oranges. He's talking about a tacit understanding, and not necessarily in the context of upward movement. You're talking about explicitly telling management that you are actively looking. They'd have to be idiots to promote you under those circumstances.
Assuming no relevant facts were omitted from the description of events, it does, in the context of the question it was offered in response to.
> In many, many, cases people start looking for jobs wishing they could stay at their current job.
Sure, they do. But in those cases the answer to the question "Where do you want to be in your career?" would focus on the things that would enable them to stay in, and love, the job with their current employer, not on the fact that they are looking at outside opportunities. The latter might come up in the context of specific desires, and the fact that certain outside opportunities seemed to be the only way to realize them, but even then the outside search would be secondary to the main answer about desired job features, not the main answer to the question.
Just because you are willing/able to leave a company doesn't mean you advertise it. Hence "explicit" vs "tacit" in my comment. What a person says about being loyal is irrelevant, you can't assume anyone will stay long term (kind of the point). But if they go out of their way to say they're looking, then they don't want to be there and you shouldn't waste the time & resources training them up.
OP: "When asked about where I wanted to be in my career [...] I was honest about having my resume out there"
He didn't "advertise" it -- he just gave an honest answer when questioned. If this is "advertising" for you, then your "default" behaviour would be "be economical with the truth", i.e. white lies, i.e. being fundamentally dishonest... which means OP is right.
> He didn't "advertise" it -- he just gave an honest answer when questioned.
"I am actively looking for jobs at other firms" is not an answer to the question of "where do you want to be in your career", except insofar as it can be read to imply an answer of "not here".
So it was honest, but not really an answer to the question asked (except indirectly), and quite likely not the most productive or relevant honest answer either.
If the reason other opportunities were being sought is that those opportunities offered features X, Y, and Z that the employee's current position didn't, an honest but more direct and relevant answer would be "I'd like to be doing more of things like X, Y, and Z". That would directly answer the question, and provide something positively actionable by the employer, and be no less honest than "I've got my resume out and am actively looking at outside opportunities".
There are two possibilities (based on the scenario as described). Either the employee was fed up with the company and really wanted out, in which case the answer given was not only honest but reasonably relevant (if somewhat, perhaps diplomatically, indirect). Or the employee wanted particular things in their career that they weren't currently getting, and failed to give the most relevant perfectly honest answer to the question asked, instead giving an incomplete, tangentially relevant non-answer that implied an unfortunate and inaccurate answer to the question actually asked.
Not a PuTTY fan. I generally install Fedora or other Linux VMs on personal Windows systems (where Windows is mandatory/heavily advantageous and where security policies permit), in large part to have openssh-client, and then use vmhgfs shares to move files between them as necessary. Among other reasons.
Test driven development usually refers to unit testing or integration testing. It would be interesting to know if you went beyond that and the nature of contract work you were doing (testing for an internal service with a RESTful API being very different from a mobile app, or a website).
John Lakos differs from you. In his book "Large-Scale C++ Software Design" he advocates starting at the lowest level of your own code, that is, subroutines that only make system calls or call libraries provided by the system.
Then you unit test the second level - subroutines that only call that first level, or that call the system or libraries.
main() sits at the top level.
Not every program is straightforward to levelize. His point is that one should do so.
Also, the unit tests for any one level should focus on what is new on that specific level, under the assumption that all the lower levels are flawless. In general they won't really be, but when a flaw turns up you write a unit test for the lower level.
This keeps the LOC on the tests about the same as the LOC in the deliverable.
It is unfortunate that his book focusses on C++; really he should have written a separate book on testing that was language-agnostic. There is very little of that section that's really specific to C++.
In addition to unit tests I do integration tests. If there's a file format involved I create lots of input files that contain various edge cases.
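Lakos's examples are C++, but the levelization idea is language-agnostic. Here's a minimal Python sketch of it (the toy parser and all names are invented for illustration, not from his book):

```python
# Level 1: routines that touch only language/system primitives.
def parse_line(line: str) -> list[str]:
    """Split a comma-separated record into trimmed fields."""
    return [field.strip() for field in line.split(",")]

# Level 2: routines that call only level-1 routines (or the system).
def parse_records(text: str) -> list[list[str]]:
    """Parse a whole document, skipping blank lines."""
    return [parse_line(line) for line in text.splitlines() if line.strip()]

# Level-1 tests: prove the bottom layer before anything above it.
def test_level1() -> None:
    assert parse_line(" a , b ") == ["a", "b"]

# Level-2 tests: assume level 1 is flawless and exercise only what is
# new at this level (line splitting, blank-line handling).
def test_level2() -> None:
    assert parse_records("a,b\n\nc,d") == [["a", "b"], ["c", "d"]]

test_level1()
test_level2()
```

If a level-2 test fails because of a level-1 bug, the fix per Lakos is to add the missing level-1 test rather than pile more assertions onto level 2.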
With the caveat I haven't read Lakos' book, there's a pretty big backlash going on in the industry against hyper-granular unit testing. Comes down to people realizing that when their implementation needs to change, they often have to change a bunch of unit tests that were written to the implementation.
One problem is that modularity should let you change implementation without friction; that's the whole point of modularity to begin with. So there's not much profit in writing a bunch of client code (tests) that intentionally break modularity and put friction back on the process. It's not quite as bad as just random breakage, because at least you know where it all is, but it's still painful.
Another problem is that if you're not careful, you end up with a set of tests that don't tell you whether something works as expected; they just tell you when something changed. A test that only tells you something isn't coded the way it used to be is pretty useless: you know you changed it.
And still another problem is that the testing itself has become too invasive. We're architecting things for dependency injection that would never really need it if it weren't for invasive tests. It's fine and well to drop some testing hooks in, but if you're having to completely invert control in your code to do it and that makes things significantly more complex, that may not be great.
I think there were some fairly recent Martin Fowler posts on this, but I believe his point was that we're very possibly doing it wrong: if the point is to guarantee an interface then tests should be to the interface. And maybe injecting test doubles is good in some cases but too invasive in others--especially when it's done to test an implementation--and so forth.
So not sure where Lakos is on the scale there, but what you describe about putting in layers upon granular layers of tests strikes me as setting yourself up for these issues. I'm a pretty big believer in testing to the interface, and do try to draw a line between functions that are there to serve a particular implementation and functions that describe something more abstract. I test to the latter.
Modularity is the key to maintenance; the biggest benefit of this type of testing to me is to validate your modularity, even more than validating the code within the modules.
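To make "test to the interface" concrete, here's a toy Python sketch (all names hypothetical): one contract test runs against two interchangeable implementations, so swapping the implementation causes no test churn.

```python
class ListBag:
    """Public contract: add items, ask how many of each you have."""
    def __init__(self):
        self._items = []          # implementation detail: a plain list
    def add(self, item):
        self._items.append(item)
    def count(self, item):
        return self._items.count(item)

class CountingBag:
    """Same contract, different implementation: a dict of counts."""
    def __init__(self):
        self._counts = {}
    def add(self, item):
        self._counts[item] = self._counts.get(item, 0) + 1
    def count(self, item):
        return self._counts.get(item, 0)

def check_bag_contract(bag) -> None:
    # The test uses only the public interface; it never inspects
    # _items or _counts, so either implementation may change freely.
    bag.add("x"); bag.add("x"); bag.add("y")
    assert bag.count("x") == 2
    assert bag.count("y") == 1
    assert bag.count("z") == 0

check_bag_contract(ListBag())
check_bag_contract(CountingBag())
```

A test that asserted on `_items` directly would break the moment you moved to the dict-based implementation, even though nothing observable changed.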
Ad 2) most groundbreaking projects you know originated as messy ad-hoc personal projects, not in production-sanitized environments (look even at GPG which, embarrassingly for the crypto community, depended on one almost-bankrupt developer). Cryptology/cryptography is an art; someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down and points to obvious flaws that someone experienced could fix in minutes while keeping the new idea intact. Crypto requires such an enormous amount of talent that it is bright individuals, not companies, who make things move there, and quite often the more people involved, the worse the results.
> someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down
I'm sorry, but this is essentially never the case. This is no different than in other fields, for instance math or physics, where complete novices come in every day believing they've had a completely novel idea that will revolutionize the field. 999,999 times out of a million they haven't, and in the one remaining case they've come up with a solution in search of a problem.
"Oh, you've come up with a new cipher? Congratulations. Assuming it is secure, why should we use it? Is it faster than existing ones? Simpler and more likely to be implemented correctly? Resistant to timing attacks? Resistant to CPU power analysis? Resistant to differential cryptanalysis? Suitable for low-CPU and low-memory embedded devices? Oh, none of these things? Gee, how interesting."
> Cryptology/cryptography is an art; someone has a bright idea while lacking in other dimensions, and the crypto community, instead of embracing this idea and helping this person bring something excellent to the world, shoots them down and points to obvious flaws that someone experienced could fix in minutes while keeping the new idea intact.
Crypto is an environment where a single mistake can get people killed. The stakes are very high. We're not talking about a slight rendering error in CSS here. This is not an appropriate place to be universally warm, fuzzy, encouraging, and forgiving of mistakes. This is incredibly serious stuff that must be treated appropriately seriously - and everyone attempting to touch the field needs to understand that.
In addition, stouset is right. The frequency with which apparently novel ideas are actually novel is much, much, much smaller than a naive guess would lead one to expect. I've watched people attempt to introduce ideas that strike them as novel, only to discover that they're just creating exploitable weaknesses, right here on Hacker News.
Maybe you could suggest what might be "more helpful", but I personally could not think of anything more useful than telling people not to do something that puts their customers' and their own critical information at risk.
Good crypto algorithms don't get stale or anything. The point is that designing them is fundamentally difficult.
Perhaps you could give a single decent reason for rolling your own crypto algorithm versus using something as easy to adopt as 4096-bit RSA.
I certainly agree with you. I just think the "don't roll your own crypto" advice is overly ambiguous. Ironically, I think my original comment was ambiguous as well. Let me clarify. I'm not endorsing rolling your own cryptosystem (e.g. a replacement for RSA). Rather, I think the advice should often be paired with additional insight on what "rolling your own" means. When building some sort of software, not everyone (currently) has the luxury of a cryptographic library that handles everything painlessly.
For example, I think most would say that I'm not "rolling my own crypto" if I'm implementing some piece of functionality in my application leveraging the use of some API with "mac", "encrypt", and "decrypt" functions. There are still ways I can screw up using these functions, but I'm arguably not "rolling my own" crypto. So in this situation, the mantra is confusing at best.
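To make that concrete: even when you stick to a vetted "mac" API, a detail like how you compare tags can still bite you. A small Python sketch using only the standard library's `hmac` module (the key and messages are illustrative only, not a real scheme):

```python
import hashlib
import hmac

KEY = b"demo-key-for-illustration-only"  # in real code: a random secret key

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify_bad(message: bytes, tag: bytes) -> bool:
    # Misuse: == short-circuits on the first differing byte,
    # leaking timing information about how much of the tag matched.
    return sign(message) == tag

def verify_good(message: bytes, tag: bytes) -> bool:
    # Correct: constant-time comparison of the two tags.
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $10"
tag = sign(msg)
assert verify_good(msg, tag)
assert not verify_good(b"transfer $9999", tag)
```

Both verifiers return the same booleans here, which is exactly the trap: the "rolled" part isn't the primitive, it's the glue around it.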
Which is part of what's meant. Don't create your own crypto algorithm, and don't write your own implementations of existing algorithms. There are so many gotchas that it would take a lot of effort to get something less buggy even than the much-maligned OpenSSL. Someone (I don't remember who) said that even typing the letters "RSA" is too close to rolling your own crypto.
It will turn into a cloud of asteroids surrounding the earth, intended to prevent the human race from unleashing whatever hell we create with those autonomous mining robots on the ISS (would be my guess, at least). I feel like the agent/patient distinction is pretty blunt foreshadowing against it being a survival/resilience story against an abstract antagonist like a random natural disaster.
That's at odds with what I understand of the celestial mechanics of the earth-moon system. Roughly the same thing happened to Saturn, and even though the rings have thickness they are relatively thin compared to their width.
What reason would there be for the moon debris to form a cloud? That would require a lot of energy added to the pieces after the initial collision to get them into other orbits. (I'm sure Stephenson has researched this, but it makes no sense to me at first sight.)
The options really are quite limited: re-unification in a 'lump', one or more large chunks escaping earth orbit, or a ring. I don't see how a cloud is possible.
All but the second would result in quite a bit of debris landing on earth, as the debris from the collisions would be scattered in all directions.
Energy was added, so I'm sure it will be a dynamic mess.
On the other hand, the moon is beyond earth's Roche limit so I don't think the final result will be rings.
Also, what is the tensile strength of the center of the moon? I'm not sure it's possible to have 7 large non-spheroidal pieces, but Google suggests ~800 km is roughly the size above which rocky bodies pull themselves into spheroids, so maybe it's just possible.
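For what it's worth, a back-of-envelope check (using rounded textbook densities, not figures from the book) shows the moon's orbit sits far outside both the rigid- and fluid-body Roche limit estimates, which supports the no-rings intuition:

```python
# Rough Roche-limit check for the Earth-Moon system.
R_EARTH_KM = 6371.0        # Earth's mean radius
RHO_EARTH = 5514.0         # kg/m^3, Earth's mean density
RHO_MOON = 3344.0          # kg/m^3, Moon's mean density
MOON_ORBIT_KM = 384_400.0  # mean Earth-Moon distance

# Rigid-body estimate: d = R * (2 * rho_primary / rho_satellite)^(1/3)
d_rigid = R_EARTH_KM * (2 * RHO_EARTH / RHO_MOON) ** (1 / 3)

# Fluid-body estimate: d = 2.44 * R * (rho_primary / rho_satellite)^(1/3)
d_fluid = 2.44 * R_EARTH_KM * (RHO_EARTH / RHO_MOON) ** (1 / 3)

print(f"rigid Roche limit ~ {d_rigid:,.0f} km")   # roughly 9,500 km
print(f"fluid Roche limit ~ {d_fluid:,.0f} km")   # roughly 18,400 km
print(f"Moon's orbit      = {MOON_ORBIT_KM:,.0f} km")
```

So the moon itself orbits at about forty times the rigid-body limit; rings would only form from fragments knocked down to within a few Earth radii.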
Typical process might be - and this is at a very high/crude level - 1) a job that checks out the code & runs unit tests, triggering 2) a job that creates a build, triggering 3) a job deploying to an integration/staging environment, and possibly even triggering (or, with an air gap, requiring manually running) 4) a job to promote to a production environment.
Testing can/should be added between various steps there, depending on the type of product you are working with. Automated functional tests can be integrated into a process like that at several points (QA ownership of portions of the unit/integration testing framework, and automated tests that smoke-test new deployments to a staging environment). Knowing those points, the tools that integrate well with your CI system, and how to implement those integrations is probably something you want a QA manager to be familiar with.
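As a purely hypothetical sketch of that chain of triggers (job names invented, each job stubbed out with a print), the gating logic might look like:

```python
# Each job runs only if the previous one succeeded; the final promotion
# step is gated behind a manual approval (the "air gap" case above).

def checkout_and_unit_test() -> bool:
    print("1) checkout + unit tests")
    return True  # stand-in for the real test run

def build() -> bool:
    print("2) create build artifact")
    return True

def deploy_to_staging() -> bool:
    print("3) deploy to integration/staging, run smoke tests")
    return True

def promote_to_production(manually_approved: bool) -> bool:
    if not manually_approved:
        print("4) blocked: waiting for manual approval")
        return False
    print("4) promote to production")
    return True

def run_pipeline(manually_approved: bool = False) -> bool:
    for job in (checkout_and_unit_test, build, deploy_to_staging):
        if not job():
            return False  # a failed job stops the chain
    return promote_to_production(manually_approved)

run_pipeline(manually_approved=True)
```

The interesting design decision is exactly where the smoke tests and the manual gate sit, which is the part you'd want a QA manager to have opinions on.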
This is a pretty good list. It's a bit self-serving in the technology questions, as well as in the general line of questioning toward TestMunk (understandably).
If you are really hiring a QA manager, also consider asking them about their team structure and team-building philosophy, having them elaborate on the technical requirements for those roles, and asking whether they have experience developing a testing process that works with your company's or group's development process - and what experience they have with problem hires and how they've dealt with them.
This is really a list of questions for a principal test engineer. A good manager should be able to answer them too, but it's far more important that they can make a group of QA engineers perform well (otherwise you should be hiring a QA lead).