lngnmn's comments

> Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

This is an especially beautiful insight given that it has a parallel with molecular biology - proteins and other molecular structures dominate. Enzymes are built from "standard" proteins to transform particular molecular structures.

This, BTW, is also related to the usually overlooked "code is data" principle.

To be a good programmer one has to know molecular biology 101. It seems like the good guys (John McCarthy, the MIT Wizards, the Bell Labs and Erlang guys) did.


By providing a heuristic that doubling could be defined as both x+x and 2*x - that one plus one is the same as one taken two times.


Oh, more than one infinity in the same reality and no contradiction? What a wonderful world we live in..

More than one zero, more than one infinity, why not? More than one nothingness, more than one everything...


Wrong emphasis. It should be read as "flaw in open source Java software".

The problem is Java, not Open Source.


I'd leave it out of the title altogether as irrelevant. The fact that Struts is distributed with a particular license is no more important in this case than the fact that the foundation that distributes it is incorporated in Delaware.


And Ruby [0] and Python [1] and...

Nothing about Java or its community makes it any more prone than most other languages to exposing deserialisation into arbitrary objects.

[0] https://github.com/mazen160/struts-pwn_CVE-2017-9805/blob/ma... [1] https://blog.nelhage.com/2011/03/exploiting-pickle/


Oh, they wrote their own uwsgi, based on what presumably started as nginx2. That's cool.

I hope they can avoid the Second System syndrome...


The real prerequisite for machine learning is domain expertise.

Otherwise the algorithms will learn nonsense - random correlations from datasets of random noise.

Look at finance to see how it "works".


Yup, the ability to understand what's going on and how it fits together is critical.

But also: the ability to work in a team and leverage relationships for insight.


Obviously, practicality should beat purity, and the very clever and reasonable syntax

   (x=1, y=2)
which could be used/generalized for procedure arguments, and which admits an efficient implementation in C (we already have C-based arrays (lists) and hash-tables, so why not generalized records/tuples?), should be accepted.

The problem with a "pure democracy" is that the majority cannot be smarter than the top 5% of individuals, so really good ideas almost never get through the bulk of the bell curve.
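
For concreteness, a rough sketch of what such an anonymous record literal might approximate with what is already in the standard library - types.SimpleNamespace here is only an illustrative stand-in (it is mutable and not a tuple), whereas the proposal is for first-class syntax backed by an efficient C-level record type:

    import types

    # hypothetical desugaring of the proposed (x=1, y=2) literal
    p = types.SimpleNamespace(x=1, y=2)
    print(p.x, p.y)  # -> 1 2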


I think a strong argument against that syntax is that a namedtuple is, essentially, an abstract class (or maybe a metaclass). An anonymous namedtuple doesn't really make sense. You want the type information.

As of Python 3.6, this is already possible:

    class Point2D(typing.NamedTuple):
        x: int
        y: int

    p = Point2D(x=2, y=4)
If you allow anonymous namedtuples you lose one of the big values of a namedtuple, which is that if I want a Point2D, you can be sure I'm getting a Point2D, and not a Point3D. With anonymous namedtuples, there's nothing stopping you from passing a (x=2, y=3, z=4), when you wanted a (x=2, y=3). And maybe that will work fine, but maybe not (or the reverse).

All this is to say, an anonymous namedtuple is an oxymoron. NamedTuples should be named. This isn't a "really good idea".


> The problem with a "pure democracy" is that the majority cannot be smarter than the top 5% of individuals, so really good ideas almost never get through the bulk of the bell curve.

Python is governed as a "benevolent dictatorship" (with Guido as BDFL and the maintainers as your "top 5%"), definitely not a "pure democracy", so your argument doesn't really hold.

When it comes to the syntax, just because something is "very clever" doesn't mean that it's necessarily a good idea. Syntax changes should be carefully considered and introduced only if the benefit clearly outweighs the cost of extra complexity.

If you still believe that it's such a good idea then feel free to draft a PEP: https://www.python.org/dev/peps/


Why is (x=1, y=2) more practical than nt(x=1, y=2)? Also, we have a very efficient implementation for tuples. collections.namedtuple is quite fast and the article shows cnamedtuple (which I am the author of) which is a C implementation that is even faster. None of this needs a change to the language itself.
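
For reference, a minimal sketch of the spelled-out version using collections.namedtuple - the name Point is arbitrary here, and the nt helper from the article presumably does something along these lines:

    from collections import namedtuple

    # the record type is created once, at runtime, with an explicit name
    Point = namedtuple('Point', ['x', 'y'])

    p = Point(x=1, y=2)
    assert p.x == 1 and p == (1, 2)  # still a plain tuple underneath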


> Why is (x=1, y=2) more practical than nt(x=1, y=2)?

It was outlined in the linked article. The order of named arguments should not matter, so new syntax would be better.


Or, perhaps: {"x": 1, "y": 2}


No. Animals don't think, because thinking requires language, the conceptual thinking built upon it, and the related brain circuitry, which animals have not evolved.

Signaling systems and elaborate warning cries do not amount to a language.

Animals only feel. They have emotions, environmental cues and heuristics, everything but a language, and hence no thinking by definition. That is exactly what some call the non-verbal, 'animal' mind. Emotional states, behavioral patterns and learning from experience only. Look at toddlers to grasp what it means.


Unless you use an extremely narrow definition of "thinking" (yours is not commonly accepted among animal researchers, and is IMO too tautological to be useful), this is clearly disproven by decades of animal cognition research.

Animals can form future-oriented plans, engage in risk management activities, can remember inventories of items stored in safekeeping locations, develop "cultural" dialects to their vocalization patterns, and much more. You can find a good start on the current state of the research here

https://en.wikipedia.org/wiki/Animal_cognition


The definition of thinking, like the definition of sex or of any other biological trait, cannot be broadened. That would yield nonsense.

Thinking implies language. Without language it is feeling. Period.

Learning from experience, map making and even planning, as one might see in the case of machine learning and other branches of AI, does not require thinking. It is a lower level of activity, relative to abstract reasoning, like pattern recognition.


You are being downvoted because you are making strong claims without anything to back them up. You are free to have your own definitions of things, but people like to share the same ones. This helps in carrying on a useful conversation.


Yes. To be fair I think most people here want animals to be able to think (and for other humans to think that animals can think) because they want animals to be protected from humans.


Google gives me the following two definitions for "think":

1. have a particular opinion, belief, or idea about someone or something.

2. direct one's mind toward someone or something; use one's mind actively to form connected ideas

Neither seems to mention language as a prerequisite. In any case, you seem to have strong beliefs contrary to my understanding of animal research, so I will not bother you further.

Have a good day.


Google 'belief' and 'idea' too.


Why don't you just call what you are talking about "language", instead of contradicting what literally everyone else means by "thinking"?


Is this the clinical or technically precise definition of what thinking is? Or is it your own definition?


> Thinking implies language.

Just plain incorrect. I will now present as much evidence for my statement as you presented for yours:

As you can see, it's a pretty ineffective technique and since you're starting from the overwhelming minority position, you're going to have to do a lot better.

Everything I'm saying here comes across as passive-aggressive childish bullshit, but in this case it's also true.


A lot of my thinking is pictorial. Are you saying I'm not thinking?


Yes, it is another kind of activity.

To ask "what animals are thinking" is exactly the same as to ask "what a self-driving car or a smartphone are thinking".


You're coming to this discussion with an unjustified assertion that thinking is very narrowly defined and then circularly showing that other kinds of things that neurons can do are not "thinking" because they don't fit the definition.

In the huge space of all possible kinds of cognition, humans have only ever occupied the tiniest sliver. There are modes of thought out there that we can't conceive of.


To ask "what is this mammal thinking" is self-evidently closer to asking "what is that mammal thinking" than it is to "what is that computer thinking".


Those are good questions with unknown answers. Another good question is "what are people thinking?"


If a man is robbed of language by some misadventure, is he then robbed of thought?


> relative to abstract reasoning, like pattern recognition.

You mean like the patterns the puffer fish creates to attract mates?

https://youtu.be/yaPmYYWsixU



Does the wind recognize the pattern? You are missing the part where the female fish sees the pattern and decides whether to mate with the fish...


> Animals don't think, because thinking requires language and abstract brain circuitry which animals have not evolved.

Animals certainly can think. And they have limited language, certainly not as complex as ours, but they are capable of communicating with each other.

Just because humans are much smarter doesn't mean animals don't think.

> Look at toddlers to grasp what it means.

Toddlers certainly think, though not on the same level as older children and adults. And certainly many animals like chimps, pigs and crows can think better than toddlers.

> Animals only feel.

If animals only felt and couldn't think, how can they think to use tools?

https://www.theatlantic.com/science/archive/2016/09/this-cro...

https://youtu.be/DDmCxUncIyc

https://en.wikipedia.org/wiki/Tool_use_by_animals

https://youtu.be/5Cp7_In7f88


Ah, you again. I think I recall you went off on a long thread about this not so long ago. You posit (without proof) that if a creature has no language, it cannot think, because only the parts of the brain that deal with language are capable of thinking.

There is a large amount of experimental evidence showing some animals planning for the future, imagining what other creatures will do based on what they have seen, making tools, saving tools for the future to deal with predicted future situations, solving puzzles they've never seen before (did they learn to solve puzzles they've never seen before from experience?), beating humans in gestalt tests, communicating via sign language (how about those animals? They communicate via sign language; do they think?), and various other things that by any sensible definition are the product of mental processes everyone else here considers "thought".

It seems that as far as you are concerned, "thinking" is defined as something to do with language. Everyone else is using a different definition. If you clearly explained your definition of "thinking", perhaps this will turn out to all be a misunderstanding and you're actually saying something completely different.

Simply stating over and over that without language there is no thought does not help your case, as everyone else here considers that to be trivially falsifiable by the animals they see doing things that must be the product of thought.


It's quite convenient to think that, because we eat animals.

I'm not a vegetarian, I enjoy steak and pork chops a lot.

But I think animals, especially cows and pigs, are more conscious of their existence than we like to think.

I tend to agree that speech plays a central role in how our thoughts are organized, but I think the importance of being able to articulate oneself /with language/ as a measure of consciousness and intelligence is generally overestimated.


I don't see how it would be "convenient", since it doesn't seem necessary that an animal be conscious for us to require ourselves to not harm it out of a moral concern.

Babies are either unconscious or have a very low degree of consciousness, but infanticide is still morally abhorrent.


I agree that there is a moral aspect to eating sentient beings, whether they are conscious or not.

Your argument, however, mixes the aspects of consciousness and cannibalism in the form of infanticide. To me, that doesn't seem fair.


Hey, I didn't say anything about eating the baby! You're the one thinking about it. Check yourself before you wreck yourself.

What do you find unfair about the example?


I find it funny that you're being downvoted to shit for holding an opinion which actually was quite common in the philosophy of mind, and which hasn't been quite yet superseded. Or at least I guess so, because nothing in philosophy ever really is.

The identification of thought (or rather reason) with language was explicitly advocated by Descartes in part V of the Discourse on the Method, where he suggested a test for distinguishing a perfect automaton from a human being. It's a view more often associated with him, but other early modern thinkers such as Locke, Leibniz and Kant made similar statements.

But in each case they were also more concerned with abstract thought, rather than just the general information processing that all animals do.


No one would have objected to a comment like: "One school of philosophy holds that thinking is inherently linguistic, so in that sense animals probably can't think."

People are objecting to the unsubstantiated, doctrinaire, insistence on this very narrow definition to the exclusion of all others, including the ones used by people who research animal cognition for a living.

I don't see why anyone should exclusively care what Descartes has to say about it when we have an additional 400 years of science after his death that suggests he didn't have the whole picture.


I'm not really biased one way or the other on the upthread conversation; however, I think you'll find that Descartes' thinking continues to underlie a great deal of what we call modern science (or more precisely: our defining interpretation of universal phenomena) and many of the applications that come with that.

You can find great nuggets in modern work just as you can find great nuggets in older work, too. It's not that disposable!


Why should they object to it, though? If an unsubstantiated claim can't be said to be anything other than an opinion, then it's just his opinion. Plus, it doesn't really matter whether he was being "doctrinaire" or not, because nothing was actually being imposed on me. I'm not being forced to subscribe to this view, I'm not being threatened (e.g. with getting a low grade, being fired, being burned at the stake). And I doubt he would have insisted on his view, either, if it weren't for the response it caused.

Furthermore, I don't believe anyone should exclusively care about Descartes' thought on the matter, whether in the light of contemporary science or not. I just tallied my thoughts as they came, really. But I do believe that there's a way to reply to comments which maximizes the likelihood of there being at least some degree of mutual understanding and which minimizes the likelihood of conflict, and that is to have a charitable, unassuming reading of the comment in question, and to not take offense. And that people should either have that, or not discuss at all. Else, they might as well talk to the wind.

The whole exchange of comments we have in mind, for instance, was of little or no benefit to anyone involved, being little else than comments of "uh huh" and "nuh huh" back and forth.


OP literally started their comment with "No." and followed it up with 3–4 blanket assertions.

Slatestarcodex has a comment policy (http://slatestarcodex.com/comments/) that I think is very relevant here too because I think that's how people implicitly judge most comments—"If you make a comment here, it had better be either true and necessary, true and kind, or kind and necessary."

OP's statement was not 100% true so it should at least have been necessary / relevant AND kind / humbly put. It wasn't kind.


I didn't say he was kind.

I am good to people who are good.

I am also good to people who are not good.

Because Virtue is goodness.

People should be thought of as being free to speak their mind in whatever way suits them. Not because they should, but because they will. And when the time comes, it's up to me whether to make an issue of it or not. And insofar as I know, I'd rather not.


It has nothing to do with being kind. It has a lot to do with contributing to the discussion with something useful. Just asserting things without any evidence is not useful, and is thus downvoted to make more room for more useful comments nearer to where people will see them, that's all.


I disagree that a comment has to be sourced to be a contribution to the discussion. Whether or not something can be a contribution depends entirely on what people expect from comments. If all I expect is to have a little bit of thought, it doesn't really need to pack on citations, and even a question might suffice.

Which might sound dilettante-ish, but isn't really out of place for anyone reading a pop-sci article on animal cognition on a Saturday evening, in their spare time.

As for downvoting or upvoting, I've never seen any actual evidence that either does anything useful.


There's a difference between a comment stating an opinion with no citations ("I think that...") and one supposedly stating facts ("You cannot have..."). At least if we want HN to maintain a higher quality of discussion than 4chan.


Exactly. Information processing is information processing, and thinking is a process which is based on a language.

Otherwise it loses all meaning - cells are thinking, tissues are thinking, computers are thinking, bacteria, etc.

People seem to forget that "thinking machines" is still a metaphor which cannot be interpreted literally.


Honestly, I don't think it's such an issue. If there's a real difference between the kind of information processing that occurs within a cell and that which occurs within the brain, it doesn't really matter what it is called.

Of course, the language within any field matters to the practitioners in that field. Things can be as easy or as hard to understand as the strictness and clarity of the language in that field allows. But what I mean is that, if any expression in a field ever fell into such a degree of obscurity as to be said to have "lost all meaning", whatever it described might still be independently rediscovered later on. So meaning isn't something that needs to be formulated into a categorical imperative, lest we lose it forever. Striving towards clarity and towards a discriminating usage of words is a practical rule, not a moral one.

Plus, arguing definitions is never a sound choice.


I'll post an article that is not a serious technical discussion, but I hope you enjoy it anyway because it has a very different perspective. It's a nice story and it is reposted every other year in HN. http://www.terrybisson.com/page6/page6.html (HN discussion https://news.ycombinator.com/item?id=8152131 and https://news.ycombinator.com/item?id=3549320 )


Your ongoing mistake is that you are obsessed with a specific word choice "thinking", and not paying any attention to what the word represents.


I am using the definition from the time of Descartes. Perhaps I missed the moment when the word "thinking" began to refer to arbitrary cognitive processes.

I will try one more time.

The rules of logic require precise definitions (as best as we are currently able to produce them) and following the rule of substituting equal for equal only. Socially constructed memes cannot be substituted for definitions, no matter what hipsters write in blog posts.

Thinking, as in "I think" or "I am" or, as Descartes put it, "I think therefore I am", is based on a language. Language is required for thinking, abstract thinking and reasoning. Language comes before the notion of "I". Before any abstractions.

The argumentation is quite straightforward. Human languages have evolved together with the related brain centers, and this process took a lot of time. There is a fundamental gap between associating sounds or cries with some sensory patterns and the ability to say "I am". How long did it take? Some primates are capable of learning very rudimentary sign language after years of rigorous training, but they are incapable of developing (or even grasping) the notion of "I" on their own. Now you might see the gap.

I would argue that language comes prior to abstract reasoning and abstract thinking, and that they have evolved together in a mutually recursive relationship. Think of the mutual recursion of #'eval and #'apply as the best example of mutual recursion I know.

Thinking, I would say, started with a primitive, rudimentary language, and then the related brain circuitry was selected by evolution. Social evolution and biological evolution together.

One could see the very emergence of self-awareness and self-consciousness in babies as a gradual process parallel to (or rather based on) language acquisition. The fundamental difference is, of course, that all the necessary brain centers are already developed and encoded in DNA (but they still have to be trained).

This line of argument is meant to justify and support Descartes' postulates about the fundamental difference between a human and an animal (creationist nonsense aside). To put it another way - there is not a single scientifically proven contradiction of his definitions.

I will spare you a lecture on basic linguistics - there are much better figures in the field. The only reference I would like to make is to Chomsky's postulate that language acquisition is a process similar to the growing of an organ. So the story about evolution applies to this capacity just as it applies to any organ or subsystem.

So, thinking in its classic definition is a capacity based on a language. It is implemented in the corresponding brain centers and related circuitry, and is impossible as an ability without the underlying lower-level machinery, which took a long and unique path to evolve.

As far as we know, no other species went down the same evolutionary pathway which humans did to evolve even a rudimentary natural language in the linguistic sense (uniform, rule-governed, arbitrary composition).

So, animals cannot think the way humans do.

BTW, in this chain of reasoning I have demonstrated the kind of thinking no animal is capable of and, hopefully, why it is so.


Why does thinking require language? Besides your defining it as such.

Ancient Inuits may have said flying requires feathers.


I believe you never had a cat and did not have a chance to see them expressing complex abstract ideas like "let me out to the patio right now or else".


Citation needed.


Philosophy 101.


More like Tautology 101: Circular Reasoning.


Only a hipster snowflake's magazine could put such a question in a headline.

A crowd is a major hardwired social heuristic. When someone sees a crowd of protestors or supporters, almost automatic mental processes are triggered: estimating its size and, more importantly, taking a side - whether one is with it or against it. It also forms an emotionally charged, long-lasting memory - a hostile crowd is a major life threat. A crowd of supporters is associated with protection and change. This is freshman social psychology 101.

The number of likes, BTW, and numbers related to social events in general work in the same way. Social heuristics are hardwired and hence easily exploited by media or sales departments. The best strategy is to convince a potential buyer that there is a presumed crowd of enthusiastic buyers behind him. Tesla is the obvious example.

A crowd of protesters is a major concern even for Trump.


I really don't get lispm's answer about cl21. What are the hidden treasures of the genuine Common Lisp we are ignorant of?

In my humble opinion Common Lisp is the classic example of the kitchen sink syndrome, where many pieces taken from various dialects were put together in a hurry. The result is not that bad (like, say, C++) but we know, arguably, much more refined versions of Lisp, such as the dialect from Symbolics, and Arc.

So, honestly, what are these hidden gems?

cl21, it seems, was an attempt to unify some syntactic forms rather than re-define what a Common Lisp is or should be. In that respect it is a very remarkable effort. It is also a package - don't use it if you don't like it.

The fair point that Common Lisp is a truly multi-paradigm language, so it includes mutating primitives alongside "pure" functions and supports both lexical and dynamic scoping, is rather difficult to grasp, but there are a lot of possibilities at the level of syntactic forms and embedded specialized DSLs, which, arguably, is what makes a Lisp a Lisp.

You can never have too many embedded DSLs or too much syntactic sugar.


> What are the hidden treasures of the genuine Common Lisp we are ignorant of?

Actually Common Lisp is in many places a very well documented and designed language. Other languages have copied its numeric tower, its macros, its object-system, its error handling, ...

>In my humble opinion Common Lisp is the classic example of the kitchen sink syndrome, where many pieces taken from various dialects were put together in a hurry

The 'hurry' took place from 1981 (when the work on Common Lisp started) until 1994, when the ANSI CL standard finally was published. 13 years during which a hundred-plus people helped to design the language and provided comments, improvements, designs, prototypical implementations, alternative designs, ...

Also, the 'various dialects' were mainly only Maclisp successors, and Common Lisp was designed to be a successor to them, not a summary of various dialects.

CL21 has more problems than lines of code, including security problems. As an experiment, fine. As a library that should be used? Please don't.

Edit: this thread gives a bit more detail discussing a COERCE method of CL21:

https://www.reddit.com/r/lisp/comments/6snw5d/questions_for_...


I think that a future Common Lisp could profitably default to lower-case instead of upper-case, and unify argument order. It probably shouldn't default to generic functions due to the performance implications. It could more-fully specify pathnames for current platforms. It could specify something like Go's channels & goroutines.

None of that's probably worth the bother of a new spec.

CL21 was, as you note, an interesting experiment, but I wouldn't run with it.


I remember reading the Symbolics manuals from bitsavers, and how impressed I was by the clarity and unity of the language, and by how they incorporated CL into Zetalisp just by moving their own stuff into packages.

At least in the documentation everything looks wonderful. Compared to Symbolics, CL looks like a mess, at least from reading the books.


> Compared to Symbolics, CL looks like a mess, at least from reading the books.

Which books, may I know?

The Common Lisp Hyperspec is one of the best reference manuals for a programming language, ever. But it is not a tutorial nor a how-to guide!

Try "Pratical Common Lisp". Or "Land of Lisp" if you want to have lots of fun.

simple but refined, guaranteed to blow your mind...

https://youtu.be/HM1Zb3xmvMc


> Which books

Symbolics Common Lisp - Language Concepts (volume 2A)

It also covers parts of Zetalisp.

I am also a big fan of pg's On Lisp and ANSI Common Lisp.


That does not make sense to me. Common Lisp provided lots of improvements over Zetalisp.


The major goal of Common Lisp was commonality, and that means incorporating forms from various dialects into one and maintaining compatibility, like you said. That goal has been accomplished.

I think that carefully designed dialects, with an emphasis on the right principles and attention to detail, led by people like David Moon, would be aesthetically better than a good kitchen sink. It is no coincidence that Common Lisp took most of its stuff from Zetalisp, according to CLtL2.

I think it is a general heuristic that a small group of disciplined devotees will produce a better artifact than a vast assembly of... the general public, or a passionate and productive but ignorant individual. Think of Python 3 vs Ruby, Go vs early C++, or Scheme vs CL. Attention to detail and perfectionism work - look at Haskell's syntax and prelude. (All this just to illustrate the validity of my heuristic.)


> The major goal of Common Lisp was commonality, and that means incorporating forms from various dialects into one and maintaining compatibility, like you said. That goal has been accomplished.

That's misleading. The 'various dialects' were mostly three very similar successor implementations of Maclisp.

Common Lisp was designed as a single successor language to Maclisp. There were successors of Maclisp for different computers under development: NIL (on a super computer and the VAX), Spice Lisp (for a new breed of workstations), Lisp Machine Lisp (developed for MIT's Lisp Machine systems), ... We are talking about new implementations of basically very similar languages, with Lisp Machine Lisp as the major influence.

DARPA wanted these and future successor implementations to stay compatible. Not every Lisp application for the military should come with its own incompatible implementation of the language. NIL then looked a lot like CL. Spice Lisp evolved into CMUCL. Lisp Machines then got a CL implementation integrated with Lisp Machine Lisp.

From CLTL:

> Common Lisp originated in an attempt to focus the work of several implementation groups, each of which was constructing successor implementations of MacLisp for different computers.

Common Lisp was not designed to be compatible with other Lisp dialects like Interlisp, Scheme, Standard Lisp, ...


You are right - Common Dialect of MacLisp Successors. Thank you!


> In my humble opinion Common Lisp is the classic example of the kitchen sink syndrome, where many pieces taken from various dialects were put together in a hurry.

Can you give a particular example?

I have read many times people speaking about the amount of "cruft" that was left in Common Lisp, but frankly I've yet to find any cruft. Some people use as an example the fact that there is "CAR" and "CDR", but on this particular topic:

1) You could use "first" and "rest" if you want, instead of "car" and "cdr";
2) Using "car" and "cdr" is good because it shows that you explicitly want to work with cons cells;
3) "car" and "cdr" are to Lisp what "*" is to C; that is, part of the charm!!


It's mostly about inconsistent naming, order of arguments and standard idioms. Obviously most of these are historical artifacts, like nconc and friends, or the primitives for working with hash-tables.

Car and cdr are small miracles because they give us caddr or caadr and friends. It is a beautiful accident which should be appreciated and preserved.


> It's mostly about inconsistent naming, order of arguments

Do you have some examples of inconsistent order of arguments? I'm not really doubting you, I just can't think of any examples.


One of the goals of Common Lisp was backwards compatibility. The stuff from McCarthy's Lisp is mostly present in Maclisp, Zetalisp and Common Lisp. That's why they are Lisp languages. Every language existing for a longer period of time accumulates different design approaches. A language purist may complain, but a programmer will get his stuff ported and will get work done.

