I don't have a formal CS background, so the comments on HN that wax about predicate calculus and algebras always raise questions for me.
Is it a higher plane of programmer thinking, or just abstract technical jargon and ideas suited for hardcore CS research that bear little value in practical programming tasks?
Like knowing the Latin names, the full phylogenetic tree, and the exact relation of humans to the animal snuffling outside, when all most people need to know is whether it’s something small like a raccoon or something large like a grizzly bear.
Like, in the course of writing a feature engineering pipeline, tuning an ML model, and then deploying it as an API or as a scheduled batch predictor for another pipeline, at what point should one start thinking of parts of this in terms of “first-order predicate calculus” during the day-to-day tasks?
At what point in working on a feature card does someone generally think of relational algebra?
What tools don’t respect these things, and what happens if you use them anyway?
Like, can anyone give an ELI5 of how explicitly using these concepts guides average day-to-day programming tasks?
It kind of boils down (for me) to choosing a strongly typed functional language and a relational database by default, justified by the theory behind those.
I can be convinced, by myself or others, that a particular project calls for something else.
Because I'm familiar with them and satisfied with the tools, my defaults are F# and SQL Server.
Wait, so are you saying that all that terms like predicate calculus and algebra boil down to in practical terms is "use a strongly typed language and a relational database"? Okay, so aside from choosing databases and languages, do these concepts explicitly come up in day-to-day programming, or are they regularly thought about?
It's more like I don't need to think about these things. F# gives you immutability by default (but lets you use mutability if you really need it for a more efficient algorithm) and eliminates nearly all null values. Nulls can still leak in from an RDBMS. Working in an environment like this gets you close to "if it builds, it works" (once you get used to FP thinking). You have to experience it to believe it.
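A rough sketch of what I mean by stopping nulls at the database boundary (the Customer type and field names are just made up for illustration, not any particular schema):

    // Hypothetical record for a row read from an RDBMS; the Email column is nullable
    // in the database, so we model it as an option instead of letting null leak further.
    type Customer =
        { Id    : int
          Name  : string
          Email : string option }

    // Convert a possibly-null string coming from the data layer into an option, once,
    // at the boundary. Everything downstream only ever sees Some/None.
    let ofNullable (s: string) : string option =
        if isNull s then None else Some s

    // The compiler forces every caller to handle both cases; there is no way to
    // "forget" the missing-email case and get a NullReferenceException later.
    let emailDomain (c: Customer) : string option =
        c.Email |> Option.map (fun e -> e.Split('@') |> Array.last)

    // Records are immutable by default: "updating" a field produces a new value.
    let withName name customer = { customer with Name = name }

    // A null Email from the data layer becomes None instead of a time bomb.
    let fromDb = { Id = 1; Name = "Ada"; Email = ofNullable null }
    printfn "%A" (emailDomain fromDb)   // prints <null>/None rather than throwing

The point is that the compiler won't let any caller ignore the None case, which in practice is a big chunk of what "if it builds, it works" comes down to.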