
I tend to think yes. If the initial attack, as I understand it, comes from a DHCP server handing out poisoned options, then a client set up as static from the get-go will never request a lease to begin with, and, well, there you have it.
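
If it helps, a minimal sketch of what I mean on Linux (interface name and addresses are hypothetical); since no DHCP client ever runs, there is no lease exchange for a rogue server to poison:

    import subprocess

    # Hypothetical static setup for eth0; addresses are examples only.
    # No DHCP client runs here, so no DHCPDISCOVER is ever broadcast and
    # no poisoned options (routes, DNS, etc.) can be pushed to the host.
    for cmd in (
        ["ip", "addr", "add", "192.168.1.50/24", "dev", "eth0"],
        ["ip", "route", "add", "default", "via", "192.168.1.1"],
    ):
        subprocess.run(cmd, check=True)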


Another good reason for slow adoption is that the pushers of IPv6 herald it as the death of all NAT, and I wager there are certain types of net admins who really like at least SOME NAT. I have a longer writeup here: http://www.jofla.net/?p=00000113#00000113

Granted, I would love to see more v6 if it yielded a 1-for-1 replacement, with all features.


IPv6 certainly has its own technically legitimate uses, absolutely. Thanks for my next read! (Curious how many things you discuss that I hadn't even considered.)


And now more people will have their passports pinched, as they'll be opening themselves up to more opportunities to have them stolen. It'll be great to get ready for that overseas trip, or be on the way back, and find out you need to visit an embassy because a forged version of your passport is now in use.


Thanks, this was useful clarification.


This totally depends on what is collected. If the requirements are some form of national ID submission, i.e. licenses or passports, then it opens all handlers up to tremendous abuse possibilities. Or at the very least it paints a big sign on their backs that they handle mass quantities of official government biometric ID, something I think would do much more harm than good in the long run, as each company would need to be bulletproof to avoid a breach.


Curious as to what Signal will say about this, or does money need to change hands with users before an organization is subject to this? Sad to think it has happened in America. I shall call, though I doubt anything will come of it.


No argument with the particles/neurons/matter approach to the subject. It is sound, and if you look at us compositionally there is nothing magic about what's going on. There is, though, something about intuition or instinctual behavior which can constantly recombine/reapply itself to the task at hand. I know many will balk at intuition, and maybe it's at very best a heuristic, but I think we need to at least unravel what it is and how it operates before we can understand what makes something classify as human-like intelligence. Is it merely executing a process which we can put our minds into with practice, or is it demonstrating something more general and higher-level?


Well, look: compared to the electrified bits of sand in my laptop, I'd strongly defend pregnancy as something vastly more "magical", if those are the terms we must use.

Organic adaptation, sensory-motor adaptation, somatosensory representation building... i.e., all those things which ooze-and-grow so that a piano player can play, or we can type here... are these magic?

Well, I think it's exactly the opposite. It's a very anti-intellectual nihilism to hold that all that need be known about the world is the electromagnetic properties of silicon-based transistors.

Those who use the word "magic" in this debate are really like atheists about the moon. It all sounds very smart to deny the moon exists, but in the end, it's actually just a lack of knowledge dressed up as enlightened cynicism.

There are more things to discover in a single cell of our body than we have ever known, and may ever know. All the theories of science needed to explain its operation would exhaust every page we have ever printed. We know a fraction of what we need to know.

And each bit of that fraction reveals an entire universe of "magical" processes unreplicated by copper wires or silicon switches.


You make good points. I think it's a typical trait of the way computer scientists and programmers tend to think. Computer science has made great strides over the decades through abstraction, as well as distillation of complex systems into simpler properties that can easily be computed.

As a result of the combination of this method of thinking and the Dunning-Kruger effect, people in our field tend to apply this to the entire world, even where it doesn't fit very well, like biology, geopolitics, sociology, psychology, etc.

You see a lot of this on HN. People who seem to think they've figured out some very deep truth about another field that can be explained in one hand-waving paragraph, when really there are lots of important details they're ignoring that make their ideas trivially wrong.

Economists have a similar thing going on, I feel. Though I'm not an economist.


As an aside, both my parents are prominent economists, I myself have a degree in economics, and I have spent much of my life with a bird's-eye view of the economics profession, and I can emphatically confirm that your feeling is correct.


Economics is zoology presented in the language of physics. Economists are monkeys who've broken into the uniform closet and are now dressed as zookeepers.

I aspire, at best, to be one of the children outside the zoo laughing. I fear I might be the monkey who stole the key...


Remember always, computer science is just discrete mathematics with some automatic whiteboards. It is not science.

And that's the heart of the problem. The CSci crowd have a somewhat well-motivated inclination to treat abstractions as real objects of study; but have been severely misdirected by learning statistics without the scientific method.

This has created a monster: the abstract objects of study are just the associations statistics makes available.

You mix those two together and you have flat-out pseudoscience.


Not sure I agree in this regard. We are, after all, aiming to create a mental model which describes reproducible steps for creating general intelligence. That is, the product is ultimately going to be some set of abstractions or another.

I am not sure what more scientific method you could propose. And we can, in this field, produce actual reproducible experiments. Really, more so than in any other field.


There's nothing to replicate. ML models are associative statistical models of historical data.

There are no experimental conditions, no causal properties, no modelled causal mechanisms, no theories at all. "Replication" means that you can reproduce an experiment designed to validate a causal hypothesis.

Fitting a function to data isn't an experiment; it's just a way of compressing the data into a more efficient representation. That's all ML is. There are no explanations here (of the data) to assess.


I don’t think that’s true either.

Take the research into LoRAs, for example. Surely the basic scientific method was followed when developing it. You can see that from the paper.

Obviously the results can be reproduced. Unlike in many other fields, reproducibility can be pretty trivial in CS.

Training a model isn't really a science, but the work that's gone into creating the models surely is.
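
To be concrete about why it reproduces so easily, here's a rough sketch of the core LoRA idea (my own toy PyTorch version, not the paper's code): freeze the pretrained weight and learn only a low-rank update.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Toy LoRA-style layer: a frozen base weight plus a trainable
        # low-rank update B @ A, scaled by alpha / rank.
        def __init__(self, d_in, d_out, rank=4, alpha=8.0):
            super().__init__()
            self.base = nn.Linear(d_in, d_out)
            self.base.weight.requires_grad_(False)  # pretrained weight stays fixed
            self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    y = LoRALinear(16, 16)(torch.randn(2, 16))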


CS isn't science, it's discrete mathematics.


All sciences are progressively more impure (e.g., applied) forms of math.


lol


Also there’s literally a causal relationship between model topology and quality of output.

This can be plainly seen when trying to get a model to replicate its input.

Some models perform better in fewer steps, some perform worse for many steps, then suddenly much better.

How is discovering these properties of statistical models NOT science?
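
To make that concrete, here's a toy version of the kind of comparison I mean (all numbers made up): hold the data, loss, and optimizer fixed, vary only the hidden width, and measure how well each model replicates its input.

    import torch
    import torch.nn as nn

    def reconstruction_loss(hidden, steps=500):
        # Identical data, loss, and optimizer every run; only topology varies.
        torch.manual_seed(0)
        x = torch.randn(256, 8)
        model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 8))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(steps):
            loss = ((model(x) - x) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return loss.item()

    for h in (2, 4, 8, 32):
        print(h, reconstruction_loss(h))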


I do think there's an empirical study of ML models and that could be a science. Its output could include things like,

"the reason prompt Q generates A1..An is because documents D1..Dn were in the training data; these documents were created by people P1..Pn for reasons R1..Rn. The answer A1..An related to D1..Dn in so-and-so way. The quality of the answers is Q1..Qn, and derives from the properties of the documents generated by people with beliefs/knowledge/etc. K1..Kn"

This explains how the distribution of the weights produces useful output by giving the causal process that leads to training data distributions.

The relationship between the weights and the training data itself is *not* causal.

E.g., X = 0,1,2,3; Y = A,A,B,B; f(x; w) = A if x <= w, else B.

w = 1 because the rule x <= 1 partitions Y such that P(x|w) is maximised. These are statistical and logical relationships ("partitions", "maximises").
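
In code, the entire "fit" is nothing but a search for the w that best partitions the data:

    # Toy model from above: f(x; w) = 'A' if x <= w else 'B'.
    X = [0, 1, 2, 3]
    Y = ['A', 'A', 'B', 'B']

    def accuracy(w):
        return sum(('A' if x <= w else 'B') == y for x, y in zip(X, Y)) / len(X)

    w = max(X, key=accuracy)  # w = 1; a statistical/logical fact about the data
    print(w, accuracy(w))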

A causal relationship, by contrast, relates a causal property of an object (extended in space and time) to another causal property via a physical mechanism that reliably and necessarily brings about some effect.

So, "the heat of the boiling water cooked the carrot because heat is... the energetic motion of molecules ... and cooking is .... and so heating brings about cooking necessarily because..."

heating, water, cooking, carrot, motion, molecules, etc. -- their relationships here are not abstract; they are concretely in space and time, causally affecting each other, etc.


So what do you call the process of discovering those causal properties?

Was physics not actually a science until we uncovered quarks, since we weren’t sure what caused the differences in subatomic particles? (I’m not a physicist, but I hope that illustrates my point)

Keep in mind most ML papers on arXiv are just describing phenomena we find with these large statistical models. Also, there's more to CS than ML.


You're conflating the need to use physical devices to find relationships, with the character of those relationships.

I need to use my hand, a pen and paper to draw a mathematical formula. That formula (say, 2+2=4) expresses no causal relationships.

The whole field of computer science is largely concerned with abstract (typically logical) relationships between mathematical objects; or, in the case of ML, statistical ones.

Computer science has no scientific methodology for producing scientific explanations -- it isn't science. It is science only in the old German sense of "a systematic study".

Scientists conduct experiments in which they hold fixed some causal variables (i.e., causally efficacious physical properties) and vary others, according to an explanatory framework. They do this in order to explore the space of possible explanations.

I can think of no case in the whole field of CSci where causal variables are held fixed, since there is no study of them. Computer science does not study even voltage, or silicon, or anything else as physical objects with causal properties (that is electrical engineering, physics, etc.).

Computer science ought just be called "applied discrete mathematics"


I see where you're coming from, but I think there's more to it than that, specifically with nondeterminism.

So if I observe some phenomenon in a bit of software that was built to translate language, say, the ability to summarize text.

Then I dig into that software and decide to change a specific portion of it, keeping all other aspects of the software and its runtime the same, and then I notice it's no longer able to summarize text.

In that case I’ve discovered a causal relationship between the portion I changed and the phenomenon of text summarization. Even though the program was constructed, there are unknown aspects.

How is that not the process of science?

Sorry if this is just my question from earlier, rephrased, but I still don’t see how this isn’t a scientific method.
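
For what it's worth, in code the kind of controlled change I mean looks something like this (a deliberately toy stand-in, not a real summarizer):

    import torch
    import torch.nn as nn

    # Toy stand-in for the scenario above: change one portion of the
    # system, hold everything else (weights, input, runtime) fixed,
    # and observe whether the behavior changes.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
    x = torch.randn(4, 8)
    before = model(x)

    with torch.no_grad():
        model[2].weight.zero_()  # the one "specific portion" we alter
    after = model(x)

    print((before - after).abs().max())  # nonzero: that portion mattered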


Intuition is a process of slight blind guesses in a system that was built/proposed by a similar organism, in a way that resembles previous systems. Once you get into things like advanced physics, advanced biology, etc., intuition evaporates. Remember those SR/GR things? How intuitive were they? I'd say current AI is pure intuition in this Q'-ZipQA-A' sense, because all it does is blindly guess the descent path.


Intuition is a form of pattern matching without reasoning, so kinda like an LLM.


There are, undoubtedly, many reasons justifying not having to use their equipment. Besides rental cost, having extraneous hardware with unused functionality at least raises the probability that something goes wrong due to the added complexity. The most compelling reason: if I insist on using bridge mode (on such a gateway), and then, after some unforeseen firmware upgrade, that setting is reset, my entire network becomes unreachable. Or at least as unreachable as it once was before the update. With a simple bridge device like a pure modem or a plain old ONT, there is no functionality to reset that could alter the state of the network. It either passes traffic as a bridge or it doesn't.

A lot of the friction, though, at its core comes down to either, drumroll, having full access to the device providing layer 3 NAT or not. As ISPs want to smush together their on-premises equipment, they, due to the nature of the stack, need to take control of the NAT to do so. At that point users who would like to open ports, or do anything more than request a connection from the inside, are out of luck. And it shouldn't be accepted, since ISPs don't NEED gateways to make their networks work, as illustrated by the many smaller ones who do just fine without them.


Yes, this is huge. Of all the places to shim in my 2¢, as this topic could go on forever: I remember talking to a robotics specialist, and his take on efficiency was that despite the obvious speed and strength machines have over us, you need to factor in just how little power we consume when we move our extremities compared to what is needed for the actuators and motors of even the best bots. Now this was years ago and things could have taken a huge turn, but I think the recent statements by Altman about the looming power crunch drive it home: even if we do manage something truly monumental, it'll take a fusion reaction to pull it off.


Maybe Berkshire Hathaway Energy is the smart play


I mean, I wouldn't bet on that either. We built the "computing" empire on silicon, but we might be able to improve on it very significantly with a single discovery/invention. All the computational stuff we built and figured out would move over without trouble.


Yes, that might be too. You just never know.


What I like about the IKEA model featured here is the size; it can fit almost anywhere. I also bought several to replace a couple of 20-year-old footstool-sized units.

Yes, it's not a Rolls-Royce, but it gets 85% of the way there.


If it were like a Rolls-Royce, the spec sheet would simply say "Performance: Adequate"

