
I feel like the hard part is flashing the new firmware

I don't think that quote really supports coding and math being equivalent. To me, the quote provides an abstraction of math through a structuralist perspective. Language can also be viewed through this abstraction. I think coding could share the abstraction, but that doesn't make the three of these fields equivalent.


  > coding and math being equivalent
Please see lambda calculus. I mean equivalent in the way mathematicians do: that we can uniquely map everything from one set to the other.
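
To give that mapping a concrete flavor, here is a minimal sketch (my own illustration, not anything from the thread) of Church numerals written as plain Scala functions; the numbers and their addition become nothing but lambda terms:

  // Church encoding: a numeral n is "apply a function n times".
  type Church[A] = (A => A) => A => A

  def zero[A]: Church[A] = f => x => x
  def succ[A](n: Church[A]): Church[A] = f => x => f(n(f)(x))
  def add[A](m: Church[A], n: Church[A]): Church[A] = f => x => m(f)(n(f)(x))

  // Decode back to an ordinary Int to check the round trip.
  def toInt(n: Church[Int]): Int = n(_ + 1)(0)

  val two   = succ(succ(zero[Int]))
  val three = succ(two)
  assert(toInt(add(two, three)) == 5)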


This seems like the setup to a joke about how mathematicians don't know how to communicate with ordinary folks.


Well, a lot of people did wildly misunderstand the Poincare quote. To me it is obviously about abstraction, and I think this is true for any mathematician. I thought it would also be natural for programmers, considering we use "object" quite similarly, if not identically. So... maybe it is, or maybe this is the joke.


"There are those who tell us that any choice from among theoretically-equivalent alternatives is merely a question of taste. These are the people who bring up the Strong Church-Turing Thesis in discussions of programming languages meant for use by humans. They are malicious idiots. The only punishment which could stand a chance at reforming these miscreants into decent people would be a year or two at hard labor. And not just any kind of hard labor: specifically, carrying out long division using Roman numerals." — Stanislav Datskovskiy


No one is suggesting programming with lambda calculus. But it would be naïve to think lambda calculus isn't important. They serve different purposes.

We didn't:

  bring up the Strong Church-Turing Thesis in discussions of programming languages meant for use by humans.


> Not "coding uses math", I mean it is math

> I mean equivalent in the way mathematicians do

That sounds like you're backing off from your original claim, probably because it is impossible to defend.

That you can use mathematics to describe code doesn't seem very different from using math to describe gravity, or the projected winner in an election, or how sound waves propagate.

Isn't the primary purpose of math to describe the world around us?

Then it shouldn't be surprising that it can also be used to describe programming.

In the real world, however, software engineering has nothing to do with mathematical abstractions 99% of the time.


A programmer constructs a function from some data type to another while a mathematician constructs a function from witnesses of some proposition to another?
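
As a small illustration of that correspondence (a sketch of my own, with made-up type parameters), fully generic Scala functions are literally proofs of the propositions their types spell out:

  // "A and B implies B and A": the only way to build the result is to
  // use the witnesses you were given.
  def swap[A, B](p: (A, B)): (B, A) = (p._2, p._1)

  // "If A implies B, and B implies C, then A implies C" (transitivity).
  def trans[A, B, C](f: A => B, g: B => C): A => C = a => g(f(a))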

Though interpreting a CRUD app as a theorem (or collection of theorems) doesn’t result in an interesting theorem, and interpreting a typical theorem as a program… well, sometimes the result would be a useful program, but often it wouldn’t be.


Interpreting CRUD apps (or fragments of them) as theorems is interesting (given a programming language and culture that doesn't suck)! e.g. if you have a function `A => ZIO[Any,Nothing,B]`, then you have reasonable certainty that, barring catastrophic events like the machine going OOM or encountering a hardware failure (essentially, things that happen outside of the programming model), given an A, you can run some IO operation that will produce a B and will not throw an exception or return an error. If you have an `A => B`, then you know that given an A, you can make a B. Sounds simple enough, but in practice this is extremely useful!
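
A minimal sketch of that reading (my own example, assuming ZIO 2.x and made-up names like User, Profile, and fetchProfile):

  import zio._

  final case class User(id: Long)
  final case class Profile(userId: Long, displayName: String)

  // "Given a User, you can run an effect that yields a Profile and cannot
  // fail with a typed error" -- the Nothing in the error channel is the
  // guarantee being described above.
  def fetchProfile(user: User): ZIO[Any, Nothing, Profile] =
    ZIO.succeed(Profile(user.id, s"user-${user.id}"))

  // A plain A => B makes the stronger, pure promise: given an A, you get a B.
  def displayName(profile: Profile): String = profile.displayName

  // Composing the two "theorems" proves User => (effectful) String.
  def greeting(user: User): ZIO[Any, Nothing, String] =
    fetchProfile(user).map(p => s"Hello, ${displayName(p)}!")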

It's not the type of thing that gets mathematicians excited, but from an engineering perspective, such theorems are great. You can often blindly code your way through things by just following the type signatures and having a vague sense of what you want to accomplish.

It's actually the halting problem that I find is not relevant to practical programming; in practice, CRUD apps are basically a trivial loop around a dispatcher into a bunch of simple functions operating on bounded data. The hard parts have been neatly tidied away into databases and operating systems (which for practical purposes, you can usually import as "axioms").


  > It's not the type of thing that gets mathematicians excited
Says who? I've certainly seen mathematicians get excited about these kinds of things. Frequently they study Programming Languages and will talk your ear off about Category Theory.

  > You can often blindly code your way through things by just following the type signatures and having a vague sense of what you want to accomplish.
Sounds like math to me. A simple and imprecise math, but still math via Poincare's description.

  > in practice, CRUD apps are basically a trivial loop around a dispatcher into a bunch of simple functions operating on bounded data
In common settings. But those settings also change. You may see those uncommon settings as not practical or useful but I'd say that studying those uncommon settings is necessary for them to become practical and useful (presumably with additional benefits that the current paradigm doesn't have).


I think we're in agreement. My comment about the halting problem was meant to refer to Rice's theorem (the name was slipping my mind), which I occasionally see people use to justify the idea that you can't prove interesting facts about real-world programs. In practice, real-world programming involves constantly proving small, useful theorems. Your useful theorem (e.g. `Map[User,Seq[Account]] => Map[User,NetWorth]`) is probably not that interesting to even the category theorists, but that's fine, and there's plenty you can learn from the theorists about how to factor the proof well (e.g. as `_.view.mapValues(_.map(_.balance).sum).toMap`).
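
Spelled out as runnable Scala (with my own stand-ins for User, Account, and NetWorth, since the comment leaves them abstract):

  final case class User(name: String)
  final case class Account(balance: BigDecimal)
  type NetWorth = BigDecimal

  // The "small theorem": Map[User, Seq[Account]] => Map[User, NetWorth]
  val netWorths: Map[User, Seq[Account]] => Map[User, NetWorth] =
    _.view.mapValues(_.map(_.balance).sum).toMap

  val alice  = User("alice")
  val result = netWorths(Map(alice -> Seq(Account(BigDecimal(100)), Account(BigDecimal(25)))))
  // result == Map(User("alice") -> BigDecimal(125))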


  > Isn't the primary purpose of math to describe the world around us?
No, that's Physics[0]. I joke that "Physics is the subset of mathematics that reflects the observable world." This is also a jab at String Theorists[1].

Physicists use math, but that doesn't mean physics is math. It's not the only language at their disposal, nor do they use all of math.

  > software engineering has nothing to do with mathematical abstractions 99% of the time
I'd argue that 100% of the time it has to do with mathematical abstractions. Please read the Poincare quote again. Take a moment to digest his meaning. Determine what an "object" means. What he means by "[content] is irrelevant" and why only form matters. I'll give you a lead: a class object isn't the only type of object in programming, nor is a type object. :)

[0] Technically a specific (class of) physics, but the physics that any reasonable reader knows I'm referencing. But hey, I'll be a tad pedantic.

[1] String Theory is untestable, therefore doesn't really reflect the observable world. Even if all observable consequences could be explained through this theory it would still be indistinguishable from any other alternative theory which could do so. But we're getting too meta and this joke is rarely enjoyed outside mathematician and physicist communities.


> No, that's Physics

Going on a total tangent, if you'll forgive me, and I ask purely as a curious outsider: do you think math could have ever come into being if it weren't to fill the human need of describing and categorizing the world?

What would have been the very beginning of math, the first human thought, or word or action, that could be called "math"? Are you able to picture this?


  > do you think math could have ever come into being if it weren't to fill the human need of describing and categorizing the world?
I'm a bit confused. What exactly is the counterfactual[0] here? If it is hyper-specific to categorizing and describing, then I think yes, such creatures could still invent math.

But my confusion is because I'm having a difficult time thinking of where such things aren't also necessary consequences of just being a living being in general. I cannot think of a single creature that does not also have some world model, even if that model is very poor. My cat understands physics and math, even though her understanding is quite naive (also, Wittgenstein[1] is quite wrong: I can understand my cat, even if not completely, and even though she has a much harder time understanding me). More naive than, say, the Greeks, but they were also significantly more naive than your average math undergrad, and I wouldn't say the Greeks "didn't do math".

It necessitates picking a threshold, and I'm not sure that this is useful framing, at least until we have a mutual understanding of what threshold we're concerned with. Frankly, we often place these contrived thresholds/barriers in continuous processes. They can be helpful, but they also lead to a lot of confusion.

  > What would have been the very beginning of math
This too is hard to describe. Mull over the Poincare quote a bit. There are many thresholds we could pick from.

I could say it was when some of the Greeks got tired of arguing with people who were just pulling shit out of their asses, but that would ignore the many times other civilizations independently did the same.

I could say when the first conscious creature arose (I don't know when this was). It needed to understand itself (an object) and its relationship to others. Other creatures, other things, other... objects.

I could also say the first living creature. As I said above, even a bad world model has some understanding that there are objects and relationships between them.

I could also say it always was. But then we get into a "tree falls in a forest and no one is around to hear it" type of thing (also with the prior one). Acoustic vibrations is a fine definition, but so is "what one hears".

I'd put the line closer to the "Greeks" (and probably at consciousness). The reason for this is formalization, and I think that's a point where there's near-universal agreement. "Greeks" is in quotes because I'll accept any point in time that qualifies with the intended distinction, which is really hard to pinpoint. I'm certainly not a historian nor remotely qualified to point to a reasonable time lol. But this also seems to be a point in history often referenced as being near "the birth", and frankly I'm more interested in other questions/topics than really getting to the bottom of this one. It also seems unprovable, and I'm okay with that. I'm not so certain it matters when that happened.

To clarify, I do not think life itself necessitates this type of formalization. I'm unsure what conditions are necessary for this to happen (as an ML researcher I am concerned with this question though), but it does seem to be a natural consequence of a sufficient level of intelligence.

I'll put it this way: if we meet an alien creature, I would be astonished if they did not have math. I have no reason to believe that their math would look remotely similar to ours, and I do think there would be difficulties in communicating, but if we both understand Poincare's meaning then it'll surely make that process easier.

Sorry, I know that was long and probably confusing. I just don't have a great answer. Certainly I don't know the answer either. So all I can give are some of my thoughts.

[0] https://www.inference.vc/causal-inference-3-counterfactuals/

[1] https://existentialcomics.com/comic/245


>Even if all observable consequences could be explained through this theory it would still be indistinguishable from any other alternative theory which could do so.

That's not unique; all quantitative theories allow small modifications. You then select the most parsimonious theory.


Coding is math in the sense that coding is a subset of math.

Mathematics is a very extensive field and covers a vast range of subjects.

For the same reason, it can't be said that mathematics is equivalent to coding, as there are many things in mathematics that are not relevant to coding.

However, by far the most interesting parts of coding are definitely related to mathematics.


It'll be funny when someone's AI assistant buys them a different item than it said it would.


Anyone know if the paper was published?


The post mentioned the following preliminary document: https://drive.google.com/file/d/1Eo4SHrKGPErTzL1t_QmQhfZGU27...


Generating or augmenting data to train computer vision algorithms. I think a lot of defense problems have messy or scarce data.


It could be a man, but most relationships are heterosexual


I can't tell if this is a joke


Genetic algorithms' weaknesses largely boil down to getting stuck in local extrema and premature convergence, which can be resolved by trying different values for parameters like the mutation probability and the offspring/parent ratio, trying different genetic operators, etc.
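
For illustration, a minimal sketch (entirely my own, on a toy "one-max" problem) of those knobs: mutation probability, a crossover operator, and the offspring/parent ratio.

  import scala.util.Random

  type Genome = Vector[Boolean]

  val rng                = new Random(42)
  val genomeLength       = 20
  val popSize            = 30    // parents kept each generation
  val offspringPerParent = 3     // offspring/parent ratio
  val mutationProb       = 0.05  // probability of flipping each bit

  def fitness(g: Genome): Int = g.count(identity)  // count the true bits
  def randomGenome(): Genome  = Vector.fill(genomeLength)(rng.nextBoolean())

  def mutate(g: Genome): Genome =
    g.map(bit => if (rng.nextDouble() < mutationProb) !bit else bit)

  def crossover(a: Genome, b: Genome): Genome = {  // one-point crossover
    val cut = rng.nextInt(genomeLength)
    a.take(cut) ++ b.drop(cut)
  }

  def step(parents: Vector[Genome]): Vector[Genome] = {
    val offspring = Vector.fill(popSize * offspringPerParent) {
      mutate(crossover(parents(rng.nextInt(parents.size)),
                       parents(rng.nextInt(parents.size))))
    }
    // Truncation selection: keep the fittest popSize individuals.
    (parents ++ offspring).sortBy(g => -fitness(g)).take(popSize)
  }

  val initial = Vector.fill(popSize)(randomGenome())
  val evolved = (1 to 50).foldLeft(initial)((pop, _) => step(pop))
  println(fitness(evolved.head))  // best fitness found after 50 generations

Each of those knobs changes how aggressively the search explores versus exploits, which is exactly the tuning being described above.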

Meanwhile, you have a whole separate discipline [1] for potential weaknesses of machine learning algorithms. Of course, they may win when it comes to interdisciplinary ubiquity in CS, but any algorithm that relies on data assimilation and has little analytic formulation will suffer in robustness.

[1] https://en.wikipedia.org/wiki/Adversarial_machine_learning


There is no reason I couldn’t use the same adversarial attacks against an EA. It just doesn’t have a Wikipedia page because EA isn’t as common.


No. The point is that the attack surface is far larger for data-driven models.


What do you mean


To contextualize, I think the tool in this article correctly diagnoses 97% of benign cases as benign but misdiagnoses 22% of malignant cases.


It's trendy to say "it's horrible to use AI for this" without giving specific reasons. Some reasons it could be good to use AI here:

- this can prioritize urgent patients for the severely overworked doctors

- medical error is a leading cause of death; this serves as a second opinion (97% true-positive rate and 79% true-negative rate)

- it can be used as evidence by a nurse or physician advocating for a patient's treatment

