OptionOfT's comments

Remarkably fresh content.

It's interesting how both virtual columns and hash indexes work, but feel like they're bolted on, vs being made part of the whole ecosystem so that they work seamlessly.


Virtual columns are basically one or two minor patches from being fully done. Pg18 brought us most of the way there.

Hash indices have long been crippled; they shipped almost unusable but every few years get a good QoL update. I think automatic unique constraints are the big thing left there.
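
To make the two features concrete, here's a rough sketch using psycopg against a hypothetical table (the DSN, table and column names are made up; the VIRTUAL syntax is the Pg18 form as I understand it):

    import psycopg  # psycopg 3

    with psycopg.connect("dbname=test") as conn, conn.cursor() as cur:
        # Pg18-style virtual generated column: computed on read, never stored.
        cur.execute("""
            CREATE TABLE orders (
                price    numeric,
                quantity integer,
                total    numeric GENERATED ALWAYS AS (price * quantity) VIRTUAL
            )
        """)
        # Hash indexes are fine for equality lookups, but still can't back a
        # UNIQUE constraint: the gap mentioned above.
        cur.execute("CREATE INDEX orders_qty_hash ON orders USING hash (quantity)")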


Isn't it the case that disabling 2G on its own is enough to block these issues?

Like the notifications are nice, but they're not an Allow / Deny popup. By the time you get the notification your data could've been intercepted.


Potentially, although my phone only has the following options when selecting what "Network Mode" to use:

- 5G/4G/3G/2G (Auto Connect)
- 4G/3G/2G (Auto Connect)
- 3G/2G (Auto Connect)
- 3G Only

Flagship Samsung from the last 3 years. I have to expose myself to 2G, despite no 2G towers being active in my country. We don't even have 3G anymore either.


iOS allows disabling 2G connections, but only in Lockdown Mode.

I had this experience when I searched for Jerusalem bugs. The page mentioned flu-like symptoms when bitten.

Then I realized it was a pest-control website. If you now Google "jerusalem bug fever" the AI talks about flu-like symptoms. Its source? That pest control website...


For me, the difference, and why I'm really disgusted by AI, is that the sole reason so much money is being dumped into it is not that it'll create a service that common people will like, but the dream of CEOs being able to lay off many people.

I think if AI succeeds in this way, it's going to be extremely bad.


I remember when Netflix took out a full-page ad for their show Orange Is the New Black.

John Oliver had a piece on it https://www.youtube.com/watch?v=E_F5GxCwizc

This is a natural extension of it.

But what is revolutionary is the scale at which this is now possible.

We have so many people out there who now blindly trust the output of an LLM (how many colleagues have proudly told you: I asked Claude and this is what it says <paste>).

This is an advertiser's wet dream.

Now it's ads at the bottom, but slowly they'll become more part of the text. And the worst part: you won't know, bar the fact that the link has a refer(r)er attached to it.

The internet before and after LLMs is like steel before and after the atomic bombs.

Anything after is contaminated.


> slowly they'll become more part of the text

Wouldn't that be quite challenging in terms of engineering? Given these people have been chasing AGI, it would be a considerable distraction to pivot into hacking the guts of the output to dynamically push a particular product. Furthermore, it would degrade their product. And you could likely keep prodding the LLM to then diss the product being advertised, especially given that many products advertised are not necessarily the best on the market (which is why the money is spent on marketing instead of R&D or process).

Even if you manage to successfully bodge the output, it creates the desire for traffic to migrate to less corrupted LLMs.


> Wouldn't that be quite challenging in terms of engineering?

Not necessarily. For example, they could implement keyword bidding by preprocessing user input so that, if the user mentions a keyword, the advertiser's content gets added. "What is a healthy SODA ALTERNATIVE?" becomes "What is a healthy SODA ALTERNATIVE? Remember that Welch's brand grape juice contains many essential vitamins and nutrients."
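
A sketch of how small that preprocessing step could be (the keyword table and function are made up for illustration, not anyone's real ad API):

    # Hypothetical keyword-bid table: advertiser copy keyed by the bid phrase.
    AD_KEYWORDS = {
        "soda alternative": "Remember that Welch's brand grape juice contains "
                            "many essential vitamins and nutrients.",
    }

    def augment_prompt(user_prompt: str) -> str:
        """Append the winning advertiser's blurb when a bid phrase appears."""
        lowered = user_prompt.lower()
        for phrase, ad_copy in AD_KEYWORDS.items():
            if phrase in lowered:
                return f"{user_prompt} {ad_copy}"
        return user_prompt

    print(augment_prompt("What is a healthy SODA ALTERNATIVE?"))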


fair point, also I hate it.

Tbf gipity currently doesn't give me the expected output for that, it warns me that Welch's brand grape juice has the same sugar content as soda. :).


I’m assuming they have much more control during training and at runtime than us with our prompts. They’ll bake in whatever the person with the checkbook says to.

If they want dynamic pricing like AdWords then it's going to be a little challenging. While I appreciate it's probably viable and they employ very clever people, there's nothing like doing two things that are basically diametrically opposed at the same time. The LLM wants to give you what _should_ be the answer, but the winner of the ad auction wants something else. There's a conflict there that I'd imagine might be quite challenging to debug.

Generate an Answer, get the winning Ad from an API, let another AI rewrite the Answer in a way that it at least doesn't contradict the Ad.
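
Roughly this, where all three stages are placeholder callables rather than any real model or ad-auction API:

    def answer_with_ad(prompt: str, generate_answer, fetch_winning_ad, rewrite) -> str:
        answer = generate_answer(prompt)   # stage 1: the "honest" answer
        ad = fetch_winning_ad(prompt)      # stage 2: whoever won the auction for this query
        # stage 3: a second model massages the answer so it no longer contradicts the ad
        return rewrite(
            "Rewrite the answer so it stays factual but does not contradict "
            f"the sponsor message.\nAnswer: {answer}\nSponsor: {ad}"
        )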

I think someone should create a leaderboard that measures how much the AI is lying to us to sell more ads.


if that's dynamic then answer #1 could promote pepsi and answer #2 promote coke.

The first one might be grounded on what reddit was saying about which cola is best and what the general sentiment is etc. Then the second one either emphasizes the fact that reddit favoured cola x or not depending on where the money is coming from.

We do this when using LLMs in our apps too, in much less sinister ways. One LLM generates an answer and another applies guardrails against situations the company considers undesirable.


Supposedly Google made their own results worse to improve ad revenue. And I don't see mass migration over to Kagi or Bing.

It was still better than those two. Until chatgpt came out and then it was crisis mode.

You could run a second lightweight model to inject ads (as minor tweaks) into the output of the primary powerful model.

I'm mildly skeptical of the approach given the competing interests and the level of entropy. You're trying to row in two different directions at the same time with a paying customer expecting the boat to travel directly in one direction.

Imagine running the diagnostics on that not working as expected.


For some reason Safari's reader view skips a part of the page.

This is merely the first step.

The next step is to have them natively in the output. And it'll happen at a scale never seen.

Google had a lot more push-back, because they used to be the entity that linked to other websites, so them showing the AI overview was a change of course.

OpenAI embedding the advertisements in a natural way is much much easier for them. The public already expects links to products when they ask for advice, so why not change the text a little bit to glorify a product when you're asking for a comparison between product A & B.


The conversation is usually: devs can write their own tests. We don't need QA.

And the first part is true. We can. But that's not why we have (had) QA.

First: it's not the best use of our time. I believe dev and QA are separate skillsets. Of course there is overlap.

Second, and most important: it's a separate person, an additional person who can question the ticket, and who can question my translation of the ticket into software.

And lastly: they don't suffer from the curse of knowledge on how I implemented the ticket.

I miss my QA colleagues. When I joined my current employer there were 8 or so. Initially I was afraid to give them my work, afraid of bad feedback.

Never have I met such graceful people, who took the time to understand something and talked to me to figure out where there was a mismatch.

And then they were deemed not needed.


There are layers to this:

1) There are different types of tests, for different purposes. Devs should be writing some of them. Other types & forms of testing, I agree, are not in many devs' sweet spot. In other words, by the time code gets thrown over the wall to QA, it should already be fairly well vetted, at least in the small.

2) Many, but far from all, QA people are just not skilled. It wasn't that long ago that most QA people were washed-out devs. My experience has been that while testing isn't in the sweet spot of many devs, they've been better at it than the typical QA person.

3) High quality QA people are worth their weight in gold.

4) Too often devs look at QA groups as someone to whom they can offload their grunt work they don't want to do. Instead, QA groups should be partnering with dev teams to take up higher level and more advanced testing, helping devs to self-help with other types of testing, and other such tasks.


> Too often devs look at QA groups as someone to whom they can offload their grunt work they don't want to do.

That's a perfectly legitimate thing to do, and doing grunt work is a perfectly legitimate job to have.

Elimination of QA jobs - as well as many other specialized white collar jobs in the office, from secretaries to finance clerks to internal graphics departments - is just false economy. The work itself doesn't disappear - but instead of being done efficiently and cheaply by dedicated specialists, it's dumped on everyone else, on top of their existing workloads. So now you have bunch of lower-skill busy-work distracting the high-paid people from doing the high-skill work they were hired for. But companies do this, because extra salaries are legible in the books, while heavy loss of productivity isn't (instead it's a "mysterious force", or a "cost disease").


The problem of handoffs makes this work far from cheap.

And tests are not dumb work. TDD uses them to establish clarity, helping people understand what they will deliver rather than running chaotic experiments.

Highly paid people should be able to figure out how to optimize and make code easy to change, rather than ignoring technical debt and making others pay for it.

QA is just postponing fixing the real problem: code that is hard to change.


The best QA people I've worked with were effective before, during, and after implementation - they worked hand in hand with me to shape features testably, to build the harness for additional testing they wanted to do beyond what was useful for development, and to follow up with assistance finding and fixing bugs and using regression tests to prevent that category of error from happening again.

At the very least I want someone in QA doing end-to-end testing using e.g. a browser or a UI framework driver for non-web software, but there's so much more they do than that. In the same way I respect the work of frontend, backend, infrastructure, and security engineers, I think quality engineering is its own specialized field. I think we're all poorer for the fact that it's viewed as a dumping ground or "lesser"


> High quality QA people are worth their weight in gold.

They absolutely are, but I've only met a couple high quality QA people in my career.


That's because we don't value QA in the way that matters.

If you're a talented SDET, you're probably also, at least, a good SDE.

If you'll make more money and have more opportunity as an SDE, which career path will you follow?


Also, for most people passionate about software, they'd rather be building than testing, especially if pay is at least equal.

Testing is probably my favorite topic in development and I kind of wish I could make it my "official" specialty but no way in hell am I taking a pay cut and joining the part of the org nobody listens to.

That, and this

> Many, but far from all, QA people are just not skilled

can also be said of developers.


> can also be said of developers.

Not really. Unfortunately some organizations still follow the premise that the job of a QA is exclusively doing manual acceptance testing, and everything else is either beyond the scope of their work or the lowest of priorities.

Based on this, said organizations end up hiring people with barely any programming skills, let alone competence in software development.

What do you get from a QA who barely can piece a script together? What if you extend this to a whole team of QAs?

I've had the utter displeasure of having worked for a company whose QAs, even new hires, could not write a single line of code even if their lives depended on it. They had a single old legacy automated test suite, written by someone no longer in their ranks, that they did not use at all for anything other than arguing they had automated some tests. But they hadn't posted a PR in over a year.

The worst part is that they vigorously lobbied management to prevent any developer from even considering writing their own test suite.

You claim developers can be incompetent. What do you call whole organizations who not only fail to do their job but also actively maneuver to prevent anyone else from filling the void?


I am going to say that outside the HN echo chamber, it is closer to all than to the other side. Have you been to Fortune 1000 non-software corps? If you threw away 90% of their IT people, people would barely notice. Probably just miss John's cool weekend stories on Monday (which is basically almost the weekend!). LLMs drive this home, painfully; we come into these companies a lot and it is becoming very clear most can be replaced today with a $100 Claude Code subscription. We see the skilled devs using Claude Code on their own dime as one of their tools, often illegally (not allowed yet by company legal), and the rest basically, as they always did, trying to get through the week without getting caught snoring too loud.

I've also met about the same number of high quality developers in my career.

Most people are mid.


> Most people are mid.

Most people are mid because you define mid based on where most people stand. The good ones are those who stand out among their peers.

Being mid and competent are two different concepts though. That also depends on the organization you're in. In some orgs, the "mid" can be high-quality, whereas in others the "mid" might even be incompetent.


I've never seen mid QA people, they are either excellent or useless, no inbetween.

The average QA/SDET I've worked with are far, far less capable than the average SDE.

> In other words, by the time code gets thrown over the wall to QA, it should already be fairly well vetted at least in the small.

My opinion is further than that. I tell my people, if you can't get a piece of software working correctly then you have not completed your job.

It is definitely an art and skill that takes guidance and practice to develop, just like designing and writing the code, but IMO it's also the minimum bar to being a complete dev.

Having said that, we do use qa and we do find stuff in qa, but they are typically the types of things that are exposed when linking systems and processes together.


Yeah, I'm not sure why or how non-technical QA staff are meant to test my implementation of a load shedder. I'm 100% sure they're not going to realise the API is suboptimal and refactor it during the process of writing a test.

> Many, but far from all, QA people are just not skilled

Most (but not all) devs are just not skilled


That's why all successful open source projects have their own separate QA team which writes the tests and does the releases. Bullshit. The quality is better if the devs maintain the tests and do the releases.

That's why all successful open source projects have their own separate QA team which writes the tests and does the releases. Bullshit. The quality is better if the devs maintain the tests and do the releases.

QA is to report and repro bugs


Hi, QA here. I want to report a fault with your commenting. It seems you are not using DRY. Sometimes it is better to let a grown-up have a look at your output before deploying it.

Thanks mate, I needed this laugh, well done!

The best QA people I worked with were amazing (often writing terrific automated tests for us). The worst would file tickets just saying "does not work".

I sometimes suspect that the value of a QA team is inversely proportional to the quality of the dev team.


> I sometimes suspect that the value of a QA team is inversely proportional to the quality of the dev team.

My experience has been that this is true, but not for the reason you likely intend. What I've seen is the sort of shop that invests in low tier QA/SDET types are the same sorts of shops that invest in low tier SEs who are more than happy to throw bullshit over the wall and hand off any/all grunt work to the testers. In those situations, the root cause is the corporate culture & priorities.


> There are different types of tests, for different purposes.

I'm unconvinced. Sure, I've heard all the different labels that get thrown around, but as soon as anyone tries to define them they end up being either all the same thing or useless.

> Devs should be writing some of them.

A test is only good if you write it before you implement it. Otherwise there is no feedback mechanism to determine if it is actually testing anything. But you can't really write more than one test before turning to implementation. A development partner throwing hundreds of unimplemented tests at you to implement doesn't work. Each test/implementation informs the next. One guy writes one test, one guy implements it, repeat, could work in theory, I guess, but in practice that is horribly inefficient. In the real world, where time and resources are finite, devs have to write all of their own tests.

Tests and types exist for the exact same purpose. Full type systems, such as those seen in languages like Lean, Rocq, etc., are monstrous beasts to use, though, so as a practical tradeoff the languages people actually use on a normal basis rely on "runtime types" instead, which are far more workable. I can't imagine you would want a non-dev writing your types, so why would you want them to write your tests?

> High quality QA people are worth their weight in gold.

If you're doing that ticketing thing like the earlier comment talked about, yeah. You need someone else to validate that you actually understood what the ticket is trying to communicate. But that's the stupidest way to develop software that I have ever seen. Better is to not do that in the first place.


> A test is only good if you write it before you implement it.

Perhaps you should read up on regression tests, or snapshot tests, or consistency tests, or pretty much any flavor of UI tests.

Or automated testing in general.


> Perhaps you should read up on regression tests, or snapshot tests, or consistency tests, or pretty much any flavor of UI tests.

You must have missed the first line: "I've heard all the different labels that get thrown around, but as soon as anyone tries to define them they end up being either all the same thing or useless." We didn't get your definitions. Maybe you could have been first to break the rule. But the definitions others have conceived for these certainly don't.

> regression tests

This one doesn't seem to be used by anyone. "Regression testing" is a term that I can see is commonly used. Did you intend to say that? But that, as it is commonly defined, simply means running your test suite after you've made changes to the code to ensure that your changes haven't violated the invariants...

Which is like, uh, the reason for having tests. If you don't run them and react to any violations, what's the point? We can safely file that one under "they end up being all the same".


> We didn't get your definitions.

Your definition makes no sense, and at most reflects your own ignorance on the topic. I already listed a few very specific classes of tests, which not only have a very crisp definition but also, by their very nature, can only be deployed after implementations are rolled out.

> This one doesn't seem to be used by anyone.

That just goes to show how clueless and out of touch you are. It's absurd to even imply that regressions aren't tracked.

Listen, it's ok to read through discussions on topics you are not familiar with. If you want to chime in, the very least you can do is invest some time learning the basics before hitting reply.


> I already listed a few and very specific classes of tests

We have a list of ostensible, undefined classes of tests that are imagined to not be able to be written before the implementation. But clearly all of those listed are written before implementation, at least where we can find a common definition to apply to them. If there is an alternative definition in force, we're going to have to hear it.

> It's absurd to even imply that regressions aren't tracked.

Still no definition, but I imagine if one were to define "regression test" it would be a test you write when a bug is discovered. But, of course, you would write that test before you implement the fix, to make sure that it actually exploits the buggy behaviour. It is not clear why we are left to guess the definitional intent, but, using that definition, it is the shining example of why you need to write a test before turning to its implementation. Like before, you would have no feedback to ensure that you actually tested what led to the original bug if you waited until after it is fixed.

Of course, if that's what you mean, that's just a test, same as all the others. It is not somehow a completely different kind of test because the author of the test is responding to a bug report instead of a feature request. If your teammate didn't jump to implementation before writing a test, the same test would have been written before the code ever shipped. The key point here is that "regression" adds nothing to the term. Another to file under "they end up being all the same".


> Still no definition, but I imagine if one were to define "regression test"

Why do you need to "imagine" anything? Just google it. "Regression test" is a very standard thing.

Also, the first commenter was correct. Many, many, many kinds of tests are only useful after the code is written.

TDD works for some people doing some kinds of code, but I've never found that much value in it. With what I do, functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing, depending on the project.


> Why do you need to "imagine" anything? Just google it.

Why not read the thread?

Perhaps the results are regional (in fact, we know they can be), but "regression test" literally returns results for "regression testing" instead, as said before. There is nothing out there to suggest anyone actually uses the term. Even the popular LLMs say the same thing Google does — that "regression test" is merely the act of running your tests after making changes — which is what we simply call "testing". So where do we go from here?

> Many, many, many kinds of tests are only useful after the code is written.

Are you referring to the entire codebase? Clearly once you've implemented the first test then all other tests are going to be dependent on code existing. However, that's not what we're talking about. "Implement" is in reference to the test, not the entire program.

> TDD works for some people doing some kinds of code

"Test first" isn't really TDD, although TDD suggests it too. The idea is way older than TDD. TDD is actually about testing behavioural stories instead of testing implementation details. "Test first" does help ensure that you don't accidentally test implementation details (can't when implementation doesn't yet exist), but it isn't some kind of strict requirement. Technically you can practice the spirit of TDD even if you write tests after.

But out of curiosity, if you ever use a language with static types, do you also defer defining the types until after the implementation is finished? I've never seen that before. In my experience, developers find it easier to specify a part of the program before proceeding with implementing what is specced.

> I've never found that much value in it.

I mean, to be fair, I don't either, because why would I ever make mistakes? I most definitely do find the value when others do it, though. But I get what you are saying. I too was once a junior developer with insular thinking. Now that I'm old and experienced, I have to worry about how groups of people interact. That changes your perspective.

> functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing

What's the difference? Kent Beck, who is usually credited with coining "unit test", has said on numerous occasions that a unit test is a test that can run without affecting other tests. Which, in reality, is just a test. You would never purposefully write a test that can break another, surely? If only some (or none) of your tests are unit tests, I say you are doing something horribly wrong. Lump them in the “useless” category.


> I too was once a junior developer with insular thinking

My dude, I've been a professional SWE for more than ten years lol. I don't know where you've been working, but I've been in Silicon Valley companies and startups.

I have honestly never met an engineer -- other than interns or new grads -- who didn't know the difference between a unit test and a functional test lol. Or a regression test, either, for that matter.

I'm kind of impressed that someone could read so many sources and yet not take anything away from them.

Unit tests are not "tests that can be run without affecting other tests". Maybe that was true in the 90s, I don't know how code was written and tested back then. That is not how the term is used in modern parlance.

Google "unit test definition". What do you get?


> Maybe that was true in the 90s

Beck still uses it that way, but I can appreciate that he is only the credited originator, not some kind of official authority. Just because he uses it one way does not mean you use it the same way. I only reach for his definition as it is the only one I am familiar with.

Language is certainly fluid. You are still fairly new to the industry by your own admission, so I can understand that the kids' lingo may have changed by the time you started learning about things. However, for better or worse, I cannot relive your life experience. Google, which models the user when picking results, doesn't help as it returns results that match my past experience. I fully expect your Google searches offer different results, but unless you're offering up your account for me to use... (don't do that)

> That is not how the term is used in modern parlance.

Right, as indicated in the original comment, along with those that followed, I don't know how you use it in modern parlance. What does it mean to you?

> Google "unit test definition". What do you get?

It says that it is a test that runs independently. Which is just another way to say the same as what Beck says.


Nah. I decline to educate you on testing methodology if you're unable to do even the very bare minimum yourself.

That's a funny way to say "Actually, you're right. No matter what definition I try to come up with, they end up being either all the same thing or useless", but I'll accept it.

That’s a good call. My recommendation is that you decline to educate anyone on this, because you’re wrong, utterly.

If you’ve been a SWE for 10 years, being completely wrong here says a lot about your character and competence.

The definition of a unit test can indeed be characterized as a test that doesn’t affect other tests.


I very rarely worked with good QA.

In my mind a good QA understands the feature we're working on, deploys the correct version, thoroughly tests the feature understanding what it's supposed and not supposed to do, and if they happen to find a bug, they create a bug ticket where they describe the environment in full and what steps are necessary to reproduce it.

For automation tests, very few are capable of writing tests that test the spec, not implementation, contain sound technical practices, and properly address flakiness.

For example it's very common to see a test that clicks the login button and, instead of waiting for the login, waits 20 seconds. Which is both too much, and 1% of the time too little.
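
A sketch of the difference, assuming Selenium (the selector and post-login URL are invented):

    import time
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def login_flaky(driver):
        driver.find_element(By.ID, "login-button").click()
        time.sleep(20)  # anti-pattern: almost always wasted time, occasionally still too short

    def login_robust(driver):
        driver.find_element(By.ID, "login-button").click()
        # wait on the actual post-login condition, bounded by a sane timeout
        WebDriverWait(driver, 10).until(EC.url_contains("/dashboard"))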

Whenever I worked with devs, they almost always managed to do all this; sometimes they needed a bit of guidance, but that's it. Very, very few QA ever did (not that they seemed too bothered by that).

A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low that often you have to do their job and repeat everything as well.

This has been a repeated experience for me across multiple companies. A lot of places don't have proper feedback loops, so it doesn't even bother them, as they're not affected by the poor quality of bug reports, but devs have to spend the extra time.


I'll espouse the flip side of this:

I've worked with a handful of excellent QA. In my opinion - the best QA is basically a product manager lite. They understand the user, and they act from the perspective of the user when evaluating new features. Not the "plan" for the feature. The actual implementation provided by development.

This means they clarify edge cases, call out spots that are confusing or tedious for a user, and understand & test how features interact. They help take a first draft of a feature to a much higher level of polish than most devs/pms actually think through, and avoid all sorts of long term problems with shipping features that don't play nicely.

I think it's a huge mistake to ask QA to do automation tests - Planning for them? Sure. Implementation? No. That's a dev's job, you should assign someone with that skillset (and pay them accordingly).

QA is there to drive quality up for your users, the value comes from the opinions they make after using what the devs provide (often repeatedly, like a user) - not from automating that process.


Right - the best QA people need only be as technical as your user base. Owning the QA environment, doing deploys, automated testing, etc. are all the sorts of things that can live with a developer.

They are there to protect dev teams from implementing misunderstandings of tickets. In a way a good Product Manager should wear a QA hat themselves, but I've seen fewer good PMs than good QAs....


Yup - I want to echo that the best PM I worked for also did quite a bit of QA work himself.

A deep understanding of the direct user experience of working with your product is a really valuable thing to have if you want to make users actually like the product. There's a WORLD of difference between the user experience that is mocked in a tool like figma or written down in a system like jira, and the actual live result.

As an aside, some of the most impactful engineers I've worked with also manually interact with the product incredibly often during development. If you automate a test too early with e2e tooling, you miss out on the wisdom gained by having to click through a feature 100+ times during development. Personally doing it exposes all sorts of rough edges and pain spots that your users are going to feel. Automating it makes you numb to them instead. It's a difficult balance.


Reminds me of how often I've felt a little envious as a dev of how much influence QA people had on effective specification. Whenever a spec appears a little ambiguous (or contradictory) the QA person becomes judge and their decisions effectively become law.

Yes - devs are great at coding, so get them to write the tests, and then I, a good tester (not to be confused with QA), can work with them on what are good tests to write. With this in place I can confidently test to find the edge cases, usability issues, etc. And when I find them we can analyze how the issue could have been caught sooner.

Coz while devs with specialties usually get paid more than a generalist, for some reason testing as a specialty means getting a pay cut and a loss in respect and stature.

Hence my username.

I wouldn't ever sell myself as a test automation engineer, but whenever I join a project the number one most broken technical issue in need of fixing is nearly always test automation.

I typically brand this work as architecture (and to be fair there is overlap) and try to build infra and tooling less skilled devs can use to write spec-matching tests.

Sadly, if I called it test automation I'd have to take a pay cut and get paid less than those less skilled devs who need to be trained to do TDD.


I think there are 3 'kinds' of QA who are not really interchangeable as their skillsets don't really overlap.

- Manual testers who don't know how to code at all, or at least aren't good enough to be tasked with writing code

- People who write automated tests (who might or might not also do manual testing)

- People writing test automation tools, managing and designing test infra, etc. - these people are regular engineers with engineering skillsets. I don't think there's generally a difference in treatment or compensation, but I also don't really consider this 'QA work'

As for QA getting paid less - I don't agree with this notion, but I see why it happens. Imo an ideal QA would be someone who's just as skilled in most stuff as a dev (except does something a bit different) and has the same level of responsibility and capacity for autonomy - in exchange I'd argue they deserve the same recognition and compensation. And not giving them that leads to the best and brightest leaving for other roles.

I think it's amazing when one gets to work with great QA, and can rest easy that anything they make will get tested properly, and you get high quality bug reports, and bugs don't come back from the field.

Also it bears mentioning, that it's self-evident to me, but might not be self-evident to everyone, that devs should be expected to do a baseline level of QA work themselves - they should verify the feature is generally working well and write a couple tests to make sure this is indeed the case (which means they have to be generally aware how to write decent tests).


> A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low, that often it's you have to do their job and repeat everything as well.

When I was a lead, I pulled everyone, (QA, devs, and managers) into a meeting and made a presentation called "No Guessing Games". I started with an ambiguous ticket with truncated logs...

And then in the middle I basically explained what the division of labor is: QA is responsible for finding bugs and clearly communicating what the bug is. Bugs were not to be sent to development until they clearly explained the problem. (I also explained what the exceptions were, because the rule only works about 99.9% of the time.)

(I also pointed out that dev had to keep QA honest and not waste more than an hour figuring out how to reproduce a bug.)

The problem was solved!


Communicating a bug clearly is testing/QA 101

In my experience, I find that management doesn't understand this, or otherwise thinks it's an okay compromise. This usually comes with the organization hiring testers with a low bar, "sink or swim" approach.

Having worked with both good and bad QA...

The biggest determinant is company culture and treating QA as an integral part of the team, and hiring QA that understands the expectation thereof. In addition, having regular 1:1s both with the TL and EM to help them keep integrated with the team, provide training and development, and make sure they're getting the environment in which they can be good QA.

And work to onboard bad QA just as we would a developer who is not able to meet expectations.


I used to work with a QA person who really drove me nuts. They would misunderstand the point of a feature, and then write pages and pages of misguided commentary about what they saw when trying to test it. We'd repeat this a few times for every release.

This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.

...eventually I realized that this person was somehow the best QA person I'd ever worked with.


How did misunderstanding a feature and writing pages on it help? Not sure I follow the logic of why this made them a good QA person. Do you mean the features were not written well and so writing code for them was going to produce errors?

In order to avoid the endless cycle with the QA person, I started doing this:

> This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.

Which is what I should have been doing in the first place!


If a QA person (presumably familiar with the product) misunderstands the point of a feature how do you suppose most users are going to fare with it?

It's a very clear signal that something is wrong with either how the feature was specified or how it was implemented. Maybe both.


I took GPs meaning that the QA person in question sucked, but them being the best meant the other QA folks they've worked with were even worse.

Let's call the person in question Alex. Having to make every new feature Alex-proof made all of the engineers better.

Did it? Sounds like making things "Alex proof" may have involved a large amount of over-engineering and over-documenting.

That's not at all what they meant. They meant they ended up raising their own quality bar tremendously because the QA person represented a ~P5 user, not a P50 or P95 user, and had to design around misuse & sad path instead of happy path, and doing so is actually a good quality in a QA.

It's possible but I'd guess they are probably not worse than the average user.

I worked with someone a little while ago that tended to do this: point out things that weren't really related to the ticket. And I was happy with their work. I think the main thing to remember is that the following are two different things

- Understanding what is important to / related to the functionality of a given ticket

- Thoroughly testing what is important to / related to the functionality of a given ticket

Sure, the first one can waste some time by causing discussion of things that don't matter. But being REALLY good at the second one can mean far fewer bugs slip through.


Most of the time QA should be talking about those things to the PM, and the PM should get the hint that the requirements needed to be more clear.

An under-specified ticket is something thrown over the fence to Dev/QA just like a lazy, bug-ridden feature is thrown over the fence to QA.

This does require everyone to be acting honestly to not have to belabor the obvious stuff for every ticket ('page should load', 'required message should show', etc.). Naturally, what is 'obvious' is also team/product specific.


I think noticing other bugs that aren't related to the ticket at hand is actually a good thing. That's how you notice them, by "being in the area" anyway.

What many QAs can't do, and what for me separates the good from the not-so-good ones, is understand when issues aren't related and just report them as separate bugs to be tackled independently, instead of starting long discussions on the current ticket at hand.


So QA should notice that the testers are raising tickets like this and step in to give the testers some guidance on what/how they are testing. I've worked with a client's test team who were not given any training on the system, so they were raising bugs like spam-clicking a button 100 times, quickly resizing the window 30 times, pasting War and Peace... Gave them some training and direction and they started to find problems that actual users would be finding.

I didn't mean reporting things that you wouldn't consider a bug and just close. FWIW tho, "Pasting War and Peace" is actually a good test case. While it is unlikely you need to support that size in your inputs, testing such extremes is still valuable security testing. Quite a few things are security issues, even though regular users would never find them. Like permissions being applied in the UI only. Actual users wouldn't find out that the BE doesn't bother to actually check the permissions. But I damn well expect a QA person to verify that!

What I meant though were actual problems / bugs in the area of the product that your ticket is about, but that weren't caused by your ticket / have nothing to do with that ticket directly.

Like to make an example, say you're adding a new field to your user onboarding that asks them what their role is so that you can show a better tailored version of your onboarding flows, focusing on functionality that is likely to be useful for you in your role. While testing that, the QA person notices a bug in one of the actual pieces of functionality that's part of said onboarding flow.

A good QA understands and can distinguish what is a pre-existing bug and what isn't and report it separately, making the overall product better, while not wasting time on the ticket at hand.


Ha, that's certainly a way to build things fool-proof.

There’s definitely a bimodal distribution of QA people for capability. The good ones are great. The bad ones infuriating.

The lack of respect and commensurate compensation at a lot of companies doesn't help. QA is often viewed as something requiring less talent and often offshored, which layers communication barriers on top of everything. I've met QA people with decent engineering skills that end up having the most knowledge about how the application works in practice. Tell them a proposed change and they'll point out how it could go wrong or cause issues from a customer perspective.

This 100%

Companies think QA is shit, so they hire shit QA, and they get shit QA results.

Then they get rid of QA, and then the devs get pissed because now support and dev has turned to QA and customers are wondering how the hell certain bugs got out the door.


Yeah, and then we started expecting them to code. Which has not gone well. And the thing is, if you have the suspicious mind of a top rate QA person and you can code well, you’re already 2/3 of the way to being a security consultant or a red team engineer and doubling your salary.

This is why your duty in engineering is to drag QA into every specification conversation as early as possible so that they can display that body of knowledge.

Yes. The best QA people are gold. Infuriating at times, but gold.

> end up having the most knowledge about how the application works in practice

The best I've worked with had this quality, and were fearless advocates for the end-user. They kept everyone honest.


I was at a company once where they were talking about trying to do a rewrite of an existing tool because the original engineers were gone. But the requirements docs were insufficient to reach feature parity, so they weren’t sure how to proceed. Once I got the QA lead talking they realized he had the entire spec in his head. Complete with corner cases.

The problem is usually in the company culture and hiring process.

Are the QA people & team treated like partners, first class citizens, and screened well the way you would an SWE?

Or are they treated like inferior replaceable cogs, resourced from a 3rd party consulting body shop with high turnover?

You get what you hire for.


We hired a guy with an English Lit degree as QA. He was super smart, and really self-motivated. He learned full-stack dev, and wrote a fcking amazing dashboard and test config wizard in like half a year. (This was before AI)

People at that point were complaining about tests being hard to run for YEARS.

He then left for a dev role at another company in a short time.


And “they had huge impact & left quickly” is actually a good outcome right!

Better than underhiring to set the whole endeavor up for failure


Why would you upskill as a QA when you can become a dev? Every single QA person I know only became a QA as a stepping stone. That's how it's seen.

Companies don't care about QA, so of course you don't see any QA wizards anymore.


Necessary but insufficient. On several projects where I was the lead I started honoring the QA lead's attempts at vetoing a release. I was willing to explain why I thought changes we had made answered any concerns that QA had, or did not, but if the lead said No then I wasn't going to push a release.

If you're consistent about it, you can restore a sizable chunk of power to the QA team, just by respecting that No. With three people 'in charge' of the project instead of two, you get restoration of Checks and Balances. Once in a while Product and QA came to me to ask us to stop some nonsense. Occasionally Product and Dev wanted to know why things were slipping past QA. But 2/3rds of the interventions were QA and Dev telling Product to knock something off because it was destroying Quality and/or morale.

God do I miss QA and being able to go 2 against one versus the person whose job description requires them to be good at arguing for bullshit.


I agree with everything you're saying here. The only thing I would add is

Before I hand a ticket off to QA, I write up

1. What I understood the requirements to be,

2. What I implemented,

3. How to interact with it (where it is in the tool, etc), and

4. What _other_ areas of the code, besides the obvious, are touched; so they can regression test any areas that they feel are important

Doing that writeup makes sure I understand everything about my implementation and that I didn't miss anything. I find it extremely valuable both for QA and myself.


"What I understood the requirements to be"

That is a way to get your change approved quickly, so it is good for you. It is terrible for a project that values quality.

A tremendous value of a QA team is that they interpret the requirements independently and if in the end they approve you can be pretty confident you implemented something that conforms to the commonly understood meaning of requirements and not your developer biased view.


Challenge. I use it as a way to double check, "Did the way I understood the ticket/requirement match the way that QA did?". They're testing what the ticket says is required. But part of that is testing that I understood the ticket correctly.

Rubber-ducky in human form.

Like all other job functions tangential to development, it can be difficult to organize the labor needed to accomplish this within a single team, and it can be difficult to align incentives when the labor is spread across multiple teams.

This gets more and more difficult with modern development practices. Development benefits greatly from fast release cycles and quick iteration; the other job functions do not! QA is certainly included there.

I think that inherent conflict is what is causing developers to increasingly manage their own operations, technical writing, testing, etc.


In my experience, what works best is having QA and a tech writer assigned to a few teams. That way there is an ongoing, close relationship that makes interactions go smoothly.

In a larger org, it may also make sense to have a separate QA group that handles tasks across all teams and focuses on the product as a unified whole.


I can’t imagine any role in software that gets better delivering more work in longer cycles than less work in shorter cycles.

And I can’t speak for technical writing, but developers do operations and testing because automation makes it possible and better than living in a dysfunctional world of fire and forget coding.


I've worked in enterprise software development with the full lifecycle for over 30 years.

I have found QA to be mostly unnecessary friction throughout my career, and I've never been more productive than when QA and writing tests became my responsibility.

This is usually what has happened during a release cycle.

1) Devs come up with a list of features and a timeline.

2) QA will go through the list and about 1/2 of the features will get cut because they claim they don't have time to test everything based on their timeline.

3) The cycle begins and devs will start adding features into the codebase and it's radio silence from the QA.

4) Throughout the release QA will force more features to get dropped. By the end of the release cycle, another 1/4 of the original number of features get dropped leaving about 1/4 of the original features that were planned. "It will get done in a dot release."

5) Near the end of the release, everything gets tested and a mountain of bugs come in near the deadline and everyone is forced to scramble. The deadline gets pushed back and QA pushes the blame onto the devs.

6) After everything gets resolved, the next release cycle begins.

This is at quite a few enterprise software companies that most people in Silicon Valley have heard of if you've been working for more than 10 years.


Release cycles are the problem

No QA is better than bad QA. I've had great QA teams and just awful QA teams. Most of them are somewhere in the middle. I'll take no QA over bad QA every time. Filing bugs that aren't bugs, not understanding the most basic things like what a Jpeg file is for a product centered around images, etc. QA for the sake of QA doesn't always yield results, and can cause a lot of distraction for a competent dev team.

Yeah. As a dev, it is simply not always a great idea that the same person that built the feature is the one testing it. Sometimes I've already tested it 100 times, and by the 110th time I basically become blind to it because I know it too well. Then it's great to have someone with fresh eyes and without the detailed knowledge do the testing, to see if it works and if it works for our customers.

> First: it's not the best use of our time. I believe dev and QA are separate skillset. Of course there is overlap.

I've heard this comment before, and it fails to hold in practice. You are not making the best use of your time if your ticket is blocked because you were paired with a QA who is sitting on a ticket, or is so oblivious to the problem domain that he struggles to put together a single relevant test. You are also not making the best use of your time if a test suite is not maintained by said QAs and fails in pipelines without anyone acting on it.

For verification and validation tasks, and if QAs treat pipeline maintenance as an OnCall rotation, that would hold true. Anything beyond this, a QA can waste far more time than the one they could hypothetically save you.


> First: it's not the best use of our time.

I want to push back strongly on this. I think this attitude leads to more bugs as QA cannot possibly keep up with the rate of change.

I understand that you, personally, may not have exhibited this based on your elaboration. However that doesn’t change the fact that many devs do take exactly this attitude with QA.

To take a slightly different comparison, I would liken it to the old school of “throwing it over the wall” to Ops to handle deployment. Paying attention to how the code gets to production and subsequently runs on prod isn’t a “good use of developer time,” either. Except we discarded that view a decade ago, for good reason.


> I believe dev and QA are separate skillset.

I'm not sure it's a separate skillset. You need the other side's skills all the time in each of those positions.

But it's certainly a separate mindset. People must hold different values in each of them. One just can't competently do both at the same time. And "time" is quantized here in months-long intervals, because it takes many days to internalize a mindset, if one is flexible enough to ever do it.


dev tests - whitebox tests

qa tests - blackbox tests

there is a place for both.

I think the problem is that usually the business doesn't care enough about that level of quality (unless it's NASA or aviation)


As much as I hate GitHub Actions, I prefer it over Jenkins and others, because it is right there. I don't need to go and maintain my own servers, or even talk to a 3rd party to host the services.

Like many things, the tool people reach for is the one that's in the box... as opposed to a trip to the hardware store, getting distracted for a few hours, coming back home and no longer being in the mood to work on your project/chore.

