Maybe I am being contrarian, or maybe I don't understand; if I am reading input, I am always going to validate that input after parsing. Especially if it is from a user.
I understand that they should be separate, but they should be very close together.
> if I am reading input, I am always going to validate that input after parsing.
In the "parse, don't validate" mindset, your parsing step is validation but it produces something that doesn't require further validation. To stick with the non-empty list example, your parse step would be something like:
parse :: [a] -> Maybe (NonEmpty a)
parse (h:t) = Just (h :| t)
parse []    = Nothing
So when you run this you can assume that the data is valid in the rest of the code (sorry, my Haskell is rusty so this is a sketch, not actual code):
process xs = do
    valid <- parse xs
    -- further uses of valid can assume parsing succeeded; if it didn't,
    -- the do-block has already short-circuited to Nothing and you can
    -- handle that at the call site
    ...
That has performed validation, but by parsing it also produces a value that doesn't require any revalidation. Every function that takes the parsed data as an argument can ignore the possibility that the data is invalid. If all you do is validate (returning true/false):
validate (_:_) = True
validate []    = False
Then you don't have that same guarantee. You don't know, in future uses, that the data is actually valid. So your code becomes more complex and error-prone.
process xs =
    if validate xs then use xs else fail "Well shit"

use (h:t) = do_something_with h t
use []    = fail "This shouldn't have happened, we validated it, right? Must have been called without the data being validated first."
The parse approach adds a guarantee to your code, that when you reach `use` (or whatever other functions) with parsed and validated data that you don't have to test that property again. The validate approach does not provide this guarantee, because you cannot guarantee that `use` is never called without first running the validation. There is no information in the program itself saying that `use` must be called after validation (and that validation must return true). Whereas a version of `use` expecting NonEmpty cannot be called without at least validating that particular property.
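The same guarantee can be sketched in Java. This is a minimal illustration, not a library API: `NonEmptyList`, `parse`, and `use` are hypothetical names. The point is that the only way to obtain a `NonEmptyList` is through `parse`, so any function that receives one can skip the empty-check entirely.

```java
import java.util.List;
import java.util.Optional;

public class ParseDontValidate {
    // Hypothetical NonEmptyList: only constructible via parse, so holding
    // one is proof that the underlying list was non-empty.
    static final class NonEmptyList<T> {
        final T head;
        final List<T> tail;

        private NonEmptyList(T head, List<T> tail) {
            this.head = head;
            this.tail = tail;
        }
    }

    // The "parse" step: validation that also produces evidence.
    static <T> Optional<NonEmptyList<T>> parse(List<T> xs) {
        if (xs.isEmpty()) return Optional.empty();
        return Optional.of(new NonEmptyList<>(xs.get(0), xs.subList(1, xs.size())));
    }

    // `use` has no failure path: the type rules out the empty case, and
    // there is no way to call it without having gone through parse.
    static <T> T use(NonEmptyList<T> xs) {
        return xs.head;
    }

    public static void main(String[] args) {
        // The invalid case is handled exactly once, at the boundary.
        System.out.println(parse(List.of()).isPresent());        // prints false
        parse(List.of(1, 2, 3)).map(ParseDontValidate::use)
                               .ifPresent(System.out::println);  // prints 1
    }
}
```

Contrast with the boolean `validate`: nothing stops a caller from reaching `use` on a raw `List` without validating first, so `use` would have to re-check or risk blowing up.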
Suppose you're receiving bytes representing a User at the edge of your system. If you put JSON bytes into your parser and get back a User, then put your User through validation, that means you know there are both 'valid' Users and 'invalid' Users.
Instead, there should simply be no way to construct an invalid User. But this article pushes a little harder than that:
Does your business logic require a User to have exactly one last name, and one-or-more first names? Some people might go as far as having a private-constructor + static-factory-method create(..), which does the validation, e.g.
class User {
    private List<String> names;

    private User(List<String> names) {..}

    public static User create(List<String> names) throws ValidationException {
        // Check the name rules here, then call the private constructor
    }
}
Even though the create(..) method above validates the name rules, you're still left holding a plain old List-of-Strings deeper in the program when it comes time to use them. The name rules were validated and then thrown away! Now do you check them when you go to use them? Maybe?
If you encode your rules into your data-structure, it might look more like:
class User {
    String lastName;
    NeList<String> firstNames;

    private User(List<String> names) throws ValidationException {..}
}
If I were doing this for real, I'd probably have some Name rules too (as opposed to a raw String). E.g. only some non-empty collection of UTF-8 characters which were successfully case-folded, or something.
Is this overkill? Do I wind up with too much code by being so pedantic? Well no! If I'm building valid types out of valid types, perhaps the overall validation logic just shrinks. The above class could be demoted to some kind of struct/record, e.g.
record User(Name lastName, NeList<Name> firstNames);
Before I was validating Names inside User, but now I can validate Names inside Name, which seems like a win:
class Name {
    private String value;

    private Name(String name) throws ValidationException {..}
}
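Putting the two together, a runnable sketch might look like this. The specific rules are hypothetical (non-blank names, standard exceptions standing in for `ValidationException`, a plain non-empty check standing in for `NeList`); what matters is that Name validates itself once at construction, so User composes already-valid parts:

```java
import java.util.List;

public class NamesDemo {
    // Illustrative Name rule (much simpler than the case-folding idea
    // above): a name must be non-blank, and is stored trimmed.
    static final class Name {
        final String value;

        private Name(String value) { this.value = value; }

        static Name of(String raw) {
            if (raw == null || raw.isBlank())
                throw new IllegalArgumentException("a name must be non-blank");
            return new Name(raw.trim());
        }
    }

    // User composes already-valid Names; the only rule it still owns is
    // "one or more first names" (standing in for the NeList type).
    static final class User {
        final Name lastName;
        final List<Name> firstNames; // invariant: non-empty, enforced below

        User(Name lastName, List<Name> firstNames) {
            if (firstNames.isEmpty())
                throw new IllegalArgumentException("need at least one first name");
            this.lastName = lastName;
            this.firstNames = List.copyOf(firstNames);
        }
    }

    public static void main(String[] args) {
        User u = new User(Name.of("Lovelace"), List.of(Name.of("Ada")));
        System.out.println(u.lastName.value); // prints Lovelace
    }
}
```

Note how User's own validation logic has shrunk to one check; everything about what a valid name *is* now lives in Name.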
I didn't think there would be a gain of speed or productivity, but just a (possibly ominous) idea of 'cutting out the middleman'. Granted, that middleman is very important.
Natural language is not the best language for formal descriptions of projects, but it is the one that humans use most day to day. Perhaps a chain like this would be the start of on-demand programs.
You are right that this is an error-prone process, but these tools, as they are used now, are much like how compilers were used in the past. It is, after all, another layer of abstraction.
Is it foolproof? No. I do think the idea had some merit, though.
This may offend some, but I think the large number of women joining the labor force may be a factor. American society, pre-WWII, usually had only one member of the household at work. More often than not it was the man who went to work, while the woman stayed home to take care of the children. American society, pre-1930s (the Great Depression saw the rise of the female worker), was built on a one-income household.
And yes, there is a big income disparity in the US. However, the fact that the labor force has practically doubled is another matter.
This is surely part of the story historically, but not recently. Women’s labor force participation rate peaked in the late 90s in the US, while total fertility rate is down ~20% since then. https://fred.stlouisfed.org/series/LNS11300002
There could be a rubber band effect, where it takes time to get a feel for things like paying for childcare. The reaction is going to come from those who are observing what's happening to the "early adopters".
The U.S. birth rate was falling from the late 19th century until after WWII, when the Baby Boom began. This historical fact generally invalidates appeals to post-WWII women’s liberation as the primary cause of falling birth rates. They were falling decades before it happened, during the exact time you are citing.
This seems like the type of argument that could be defended or refuted with data analysis. Lots of countries collect data on female participation in the workforce and birth rates. Many countries also collect data that could determine whether this has an impact at the individual household level.
In my opinion, it's this, though I think it's a second-order effect. I believe that the issue isn't so much that women are working, but rather that there is a shortage of household labor. This labor pool is what was traditionally used for childcare needs. When you pair that labor shortage with (terrible) modern parenting standards, there just isn't enough time to raise kids without becoming a zombie.
Edit: To be clear, I think there are multiple contributing factors. It's just that, in my view, the time/labor shortage is the core of the issue. Everything else feeds into it in some way. The factors eventually start stacking and problems that contribute to the time issue get exacerbated by their own contributing factors.
Economic pressures, for instance. Bad housing economics means couples work maximum hours to afford daily expenses, decreasing available household labor. It also fractures extended family systems when people have to relocate for cheaper housing or better jobs, eliminating the traditional labor-pooling arrangements for childrearing. Generally poor median household economics keep parents in constant anxiety too, which then requires time to be spent on coping routines.
Social atomization has further taken away the kind of pooled childcare labor that used to absorb overflow. Media has displaced churches, bars, parks, and bowling alleys with private screen time, shrinking social circles and leaving scarce opportunities to rebuild them. Car-based infrastructure further reduces local community interaction and subtly dehumanizes neighbors into obstacles who steal parking and slow you down. Remote work and online shopping accelerate this deterioration. The result of all of this? Parents who already don't have extended family also don't have friends, neighbors, or community to cover childcare needs. The sort of "hang out at the neighbor's house while I go to my book club meeting" scenario has largely gone extinct because of this.
Even if a couple does better than the average bear in these areas, and they have options, ambient paranoia bottlenecks their outsourcing of childcare anyway. Our media environment has normalized constant fear. Fear that every blade of grass conceals a potential predator, so every adult is regarded as a serious risk to your kid(s). This compounds further because it's gotten to the point where children (and teenagers) can't play outside or otherwise exist independently without supervision. This increases the time parents must spend on daily childcare needs. So not only can they not decrease the time spent, but they now have to spend even more because of it.
On top of all of this, the fraying social fabric creates an effect similar to cellular breakdown. Where those who become disconnected from the larger biological system stop acting for the collective benefit and further prioritize the self, becoming cancerous. This leads to growing numbers of extremist, anti-social individuals with poor mental health. Individuals who both compound the scarcity and isolation of parents, and justify their media-sourced fear of other adults. This is an example of the contributing factors to the contributing factors.
While I think I see myself in the chart, I am not exactly sure what it says, especially the "Controlling for children under 5" part and the time axis.
This seems like a good place for a study using matched subjects. Do 23 year olds of a certain generation spend more time with 7 year old children than another generation? Etc., etc., then you can calculate the baseline and excess for each generation
I would guess it's a lot more to do with globalism and the increasing ability for work to be done remotely (offshore). The US actively encourages American companies using foreign labor, which I have no moral qualms with, but it does make the value of American labor plummet when we're competing with groups of people that will do similar work for 1/10th the price or less.
This is the most milquetoast take I have ever seen.
From what I read of the article, a 'Bad Person' is just someone you have bad vibes with. Quote:
> They tend to be charming, capable, but since the beginning your intuition tells you that something’s not right.
So if someone tells you, "No, that is physically not possible, you will lose money, here is my evidence" and you just have some bad vibe about them, ignore them! They're just a bad person. In fact, remove them!
What a joke. The idiom "If you lie down with dogs you get fleas" comes to mind. Usually this rhetoric comes from people who have been on their high horse for too long, and call many other kettles black.
You see, advice like that is a "guideline": good advice that shouldn't be taken without checking that it applies. Really, it's the same as invoking the "on their high horse for too long" charge whenever moral advice is encountered...
Life is not only black or only white, but staying in a "viper nest" destroys you just by absorbing its way of doing things through unintentional osmosis.
And "bad" people often do make terrible or naive business/tech decisions, sometimes consciously.
Someone will say that good people sometimes make terrible or naive decisions too. Sure, but the way such a decision plays out (even an acquisition/selloff) is far less likely to end in disgust.
I think that a lot of web developers and users have gotten too comfortable with large amounts of memory usage. On one end of the spectrum, web browsers are essentially remaking the idea of the operating system; on the other end is the "HTML is good enough" crowd. Quite frankly, there SHOULD be a balance between these factions, and personally, I lean towards the "HTML is good enough" crowd. I've never been a fan of web apps, and I think systems programming is not beloved enough. A web page should be just that: a web page, like paper. Anything fancier shouldn't be stuck in the browser, IMHO.
I agree fully, up until the end of the sentence - my take is: everything else should be Webassembly/wasm.
If it's not a page, it's an app, and that should be wasm, because you are not allowed to treat the browser's DOM so badly and unnecessarily that it chokes under the pressure of rendering one view.
I guess it would have to provide audio in order to be accessible, and if someone is blind and deaf... well, do most people prepare applications for the blind and deaf?
In the EU and the US, there are laws requiring that things be accessible, and you can be sued if they are not; the EU law is quite new, the US law is not. One of the things that goes into deciding how much you will be fined is how inaccessible you are, and a bare image would be not at all accessible.
If you are rendering a single image of a page: does this page have interactive parts? How are you planning on people actually interacting with the image? If you have solved interaction with the graphical rendering of your application for sighted people, you also have to provide solutions for people who are not sighted, who have mobility issues, or who have combinations of the two...
If you have provided an image that is not at all accessible, how much of that is because you have no understanding of accessibility, and how much is "screw the disabled" thinking? That would also affect how much you get fined.
When you first get fined and the plaintiff is awarded money because your page is inaccessible (this would be the US), it's not over. There are still disabled people who want to use your application and can't, and they will sue too, and you get fined more and more because you were fined once and kept up the behavior. Sooner or later you might receive a running court order: make your stuff accessible or pay this fee until it is. In the EU, it would probably come sooner or later as: you have until this date to make your site accessible, or you are fined a lot.
Since your solution is totally not architected for accessibility you will need to put a lot of work into it.
Finally, providing audio may or may not be considered good enough, depending on a lot of things. But most disabled people use screen readers that interact with the DOM and the Accessibility Object Model (expect as big a variation between screen readers as between IE 6, IE 9, Safari from two years ago, the newest Chrome, and Firefox). If you provide something that they cannot use with their preferred accessibility tool, I would bet it wouldn't be considered good enough.
The fact is, your idea is not going to work as a tool for others to use, because accessibility is a legal requirement for lots of people. You will probably be sued for a lot of money; any company that did use it would probably get sued too, and maybe they would sue you if they didn't like getting sued. And in the end, the AI companies can probably interpret the graphics you generate well enough to scrape them anyway.
The most likely result of following this plan to build a product would be financial ruin; how great the ruin would depend on how initially successful you were. The greater the initial success, the worse off you would eventually be.
On edit: here is the link to the Accessibility Object Model that I forgot to include earlier: https://wicg.github.io/aom/spec/ . It's not exactly something screen readers work with now, but it is under development, so yes, like other specs under development, it appears in early releases in specific browsers. At any rate, it is not designed to work with complete web pages rendered as an image.