Pyret – A language exploring scripting and functional programming (pyret.org)
306 points by gknoy on Dec 15, 2016 | 267 comments



I've never understood the idea of a 'teaching' programming language. (Modulo examples like Logo and Scratch, which offer much more than a language as part of a wider teaching/computing system.)

You spend all this time ramping up on a language, toolchain, library, etc, that you will eventually be unable to leverage beyond a certain point, since it is not one used by everyday programmers. The delta in "quality learning" between a teaching language and a non-teaching language would have to be astronomically high, imho, to offset the difference in capability and the opportunity cost of learning a skill applicable to a wider problem set. Something that is just a slightly better Python or a slightly better Clojure for the purpose of illustrating computer science concepts seems like something you'd be doing a disservice to students by teaching, vs the "real thing", which they could carry for the rest of their lives into future projects ad infinitum.


Languages such as C, Java, Python, and Rust are usually designed by experienced programmers in industry, for experienced programmers in industry. Pyret, on the other hand, is designed by computer science educators, for computer science education.

This does not preclude Pyret from being a useful language for general-purpose programming; it just means that language design decisions are driven foremost by pedagogy. For example, Pyret trades performance for mathematical familiarity by using exact representations of rational numbers as its primary number type. Languages like C, Java, Python, and Rust are all distinguished by different priorities that have guided their language design, too.

I disagree with your characterization of any language as being the "real thing" because it privileges some languages above others, but let's run with it for a moment: A systems programmer might call Rust or C the "real thing", in contrast to Python, and perhaps they would even call the delta "quality performance". A sysadmin might call Python the "real thing" in contrast to C, and call the delta "quality productivity". Assuming learning one programming language does not preclude you from learning another, why shouldn't a computer science education call "Pyret" the "real thing", and why shouldn't the move to a language that isn't designed with pedagogic interests in mind be the one that has to justify its "delta learning"?

(Disclosure: I am an occasional developer of Pyret, and this comment might not represent the views of the rest of the team. If skrishnamurthi or jpolitz replies, read their comment instead!)


I don't think you've responded to the most important point made by the parent, which is that any pedagogical value provided by a pedagogically-designed language over an engineering-designed language must be very substantial to offset the specified costs.

You have drawn privileged lines of your own across programming languages by suggesting that languages designed for education are better in the education context, and that languages designed for engineering are less fit for education. Do you think there is a pedagogical outcome difference arising from programming language difference?

So how big is that difference? How does one demonstrate the additional pedagogical value? Conversely, how does one demonstrate the relative disadvantages of using Python in education? Is the difference from Python -> Pyret an interesting barrier to students of computer science?

Meanwhile, the argument about the cost of maintaining tools and environment stands unchallenged, and the argument about the inertia of moving to a completely different ecosystem remains unchallenged.

PS: What pedagogical literature guides the pedagogically-focused design of this language? I couldn't find any on the website.


1. Python has strange scoping rules. They are certainly not clean and nicely orthogonal. They complicate the ability to build basic tools and make them correct (e.g., see the appendix of http://cs.brown.edu/~sk/Publications/Papers/Published/pmmwpl...).

2. Python does not offer a neat integrated testing story. (Pyret's is sketched below.)

3. Python does not have a type-like annotation mechanism along with a type-checker and a type system that is not extremely complex. For some people who want to teach introductory programming, this really matters.

4. Python does not have a clean functional event-loop construct, which is central to the pedagogy we use in many applications of Pyret.

5. Python does not offer a typed interface to external data. Pyret's table support is designed with types in mind, but you can also use it just fine without types. [Quick table demo: https://twitter.com/PyretLang/status/773605473824145408]

6. Pyret runs entirely in the browser. With a Stop button in the IDE. There's tremendous complexity involved in making that run. Just about no other language does that in the browser. That's a really valuable feature to a beginner whose programs may go haywire.

That's not all, but that's a start.
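
To make point 2 concrete, here's what the integrated testing syntax looks like (a minimal, made-up example):

    fun double(n :: Number) -> Number:
      n * 2
    where:
      double(2) is 4
      double(0) is 0
    end

The `where:` examples run every time the function is defined, so writing tests is part of writing the function, not a separate library ritual.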


On 3) Why do you care about types when teaching programming? (Not being facetious, this is a genuine question).

I've always thought that the advantage of Python (or even JavaScript) as a first language is that you can defer thinking about (to the layperson) unintuitive concepts like types, and focus on the core concepts behind telling a computer to do what you tell it to. (Functions, variables, control flow, maybe classes if you're feeling fancy).

I've talked to a super-smart Hack Reactor graduate who was blown away by the concept of static typing when I explained it to him in detail; it was simply irrelevant to his understanding of functional programming, data structures, and algorithms, subjects with which he was familiar at quite a high level.

On 2), what's missing in Python beyond the unittest package?

On 4) and 5), are these things that you introduce early in your curriculum? They seem like quite niche aspects of programming. I've found that most python programmers haven't had cause to use an event loop in production, for example; only really if you're implementing a web server or networking stack, something that frameworks usually abstract from you.


> Why do you care about types when teaching programming?

Because programmers are always aware of types. You may not annotate that the function's `licensePlate` parameter is a string, but if you are thinking of taking a substring or string-appending it, of course you're aware of the type.

If a beginner is only vaguely/intuitively thinking that the licensePlate is a string (sometimes confusedly thinking it's a number), then there'll be problems. So allowing a beginner to mention a type is important.
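
In a language with optional annotations (Pyret, say), the beginner can write that intuition down directly; a minimal made-up sketch:

    fun plate-prefix(license-plate :: String) -> String:
      string-substring(license-plate, 0, 3)
    end

The `:: String` makes the vague "it's text" intuition explicit, and the language can check it.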

So the question is more: when learning to program, do you require every name to be annotated with a type? Some people feel this is overkill, and others think it helps, for CS1. And languages with implicit conversion-rules can confuse some beginners, while converting explicitly can annoy others.

Once somebody has internalized the rules, then yes, of course you want the language to make "obvious" conversions and/or declarations for you (e.g. true-ish expressions, and (for Java) auto-boxing and auto-toString'ing). But just because experienced programmers find it indispensable doesn't mean it's appropriate for beginners.

(Btw, in addition to learner-vs-competent programmer, there is a separate issue, about the strength of the students: some people pick up programming-way-of-thinking easily, and others need MUCH more guidance before they become competent. Different language-choices might be appropriate for those two audiences, as well.)


3. We teach types to students as young as 12. They have no trouble with the concept, because they're taught from a young age about classifying things, and types are a form of classification. So there's nothing "unintuitive" about it. The people who have the most trouble with types are adult programmers who were raised on languages with awful type systems, like Pascal, and cannot envision any other kind.

Furthermore, we use types as a central notion in our program design curriculum. The structure of the data (aka, "types") provides a hint as to the structure of the program. This helps students who are stuck get unstuck. That is, they are most helpful to the students least able to get started, which seems like quite a valuable place to focus.

Of course, having annotations and enforcing them statically are two different things. That's why Pyret lets you check annotations dynamically instead. And that makes it even simpler and more intuitive to introduce the concept.

2. Direct integration into the syntax of the language makes a huge difference. There's no library and additional complexity. For rank beginners, every new "moving part" is a source of problems. This is why most Python curricula do not emphasize writing tests early. In our curricula, students begin with writing examples of their program's behavior — expressed as tests — and use that to help figure out the way the program must work. (Just like you use examples to help figure out a more general structure in other disciplines.)

4. Event loops are central to reactive programming, i.e., animations and games. HtDP 2nd edition begins with writing an animation. The Bootstrap curriculum is focused around teaching students to write a game — but in the process learn algebra. It's niche to you because you have a particular model of programming education, which may well resemble what people were teaching in the 80s. That doesn't mean curricula haven't moved on. And we're talking about education, not production, which are two different things.

5. Combine the earlier points for this one.

Whenever making "I haven't seen", "I don't see", etc. comments, it may be helpful to keep in mind Paul Graham's Blub Paradox [http://www.paulgraham.com/avg.html].


Regarding 3, I believe it's a matter of doing it sooner rather than later. You can pretend that things don't have a type, but it's a lie.

The only thing that languages like JavaScript do differently is to hide the types of variables. This forces the person reading/writing the code to keep track of what is in which variable at all times, instead of having a compiler do it for them.

I understand people have difficulty with pointers and such, but not types. Without that knowledge you can't go anywhere.

From my point of view there is absolutely no advantage in hiding types when you always end up using them. Better to face the fact and teach them early on.


One thing that comes to mind is that a language that prioritizes pedagogy would probably have a vastly different perspective on backward compatibility. And historically, that's a huge constraint on how "real-world" PLs can evolve in practice. Every language I've spent any amount of time with is either deeply encumbered by suboptimal design decisions made in the past or has experienced extremely painful version transitions. A language whose first priority is pedagogy can presumably evolve much more freely. I don't know enough about Pyret to say whether that is the case here.


Not enough upvotes for this. And yes, Pyret is free to break things and move on when we figure out better ways of doing things. We've done that a few times already.


Wouldn't the value of education-oriented languages be that educators can avoid specific complexities that get in the way of teaching core concepts?

Once a student has the core concepts, they have something they can build on with other languages, which have more (potentially complex) issues stemming from practical problems. This prevents needing to learn two complex things at the same time.

Does Pyret have easier-to-understand errors around syntax and typographical mistakes than Python? Most real languages won't search for identifiers similar to a misspelled one and make suggestions; does Pyret do anything like this? (Other teaching tools do.)


I rephrase your post in my head:

The value of an education-oriented programming language is a boost to X education outcome relative to competing alternatives, with "avoiding specific complexities" as the mechanism by which to achieve that outcome.

Therefore,

(Pyret intervention) -> (educators don't struggle with language complexities) -> (education outcome)

But first I want to know that there's even a difference between (Pyret intervention) and (Python intervention) on (education outcome).


I think you are intentionally simplifying my statements, and the intents of educators, in an intentionally disingenuous way. If you are not being malicious, then I apologize, but the only way I can interpret you seems to indicate you have an axe to grind.

Let me provide an example of a language complexity that hinders education, presuming education cares about things common to many languages, like functions, types, control flow, classes, and objects.

In C++ there is a strong separation of declaration and definition; this is required for silly, technical, and mostly historical reasons. Teaching all of the concepts needed to make sense of why the compiler cannot elide definitions included twice is not useful when trying to teach about anything other than C++.

Another example, in Java there is much boilerplate and an implicit package scope when scope specifiers are omitted. Most languages need little boilerplate and in order to understand why the implicit package level specification is bad (or good if you disagree with me), one must understand the concept of public and private already.

For each language I know I can highlight such things. There is plenty of room for simplified teaching languages then extra courses to cover language specific things.

None of this should be taken as a general defense of how programming is taught. I just mean that whatever problems there are, seem not to stem directly from the use of education oriented languages. I even think that classes focusing on specific languages should more closely resemble the real world, but not all people are ready for such detail.


> I think you are intentionally simplifying my statements, and the intents of educators, in an intentionally disingenuous way.

You're telling me that:

(Pyret intervention) -> (educators not struggling with complexity) -> (education outcome)

was so bad you think I'm dishonest? And you think that's the "only way" you can interpret the situation? And just in case you're wrong -- sorry?

1. Proposing a causal relationship: (independent variable) -> (dependent variable)

2. Proposing complexity: (independent variable) -> (mediating factors) -> (dependent variable).

This is a fucking good start to any policy conversation. Where else do you find this online?


I wanted to give you the opportunity to rebut, but instead you have degenerated into swearing, red herrings, and an inappropriate breakdown into formal-style rational logic.

This discussion clearly has too many factors to be argued by formal proof. We cannot prove our premises nor prove an argument is formally sound. Therefore we must rely heavily on induction, as many trades do; both software development and teaching are trades in many regards, each often requiring action without the time or ability to formally prove, relying instead on large-scale informal experimentation. The very existence of unit tests demonstrates how clearly practical programming cannot be formally proven, and the existence of the whole fields of psychology and sociology ought to serve as a basis for why teaching is impossible to formally prove. This seems so obvious I hadn't felt the need to state it previously.

I presumed you capable of understanding and knowing this. I inferred that someone choosing to represent the argument in such a way must either see that the situation is too complex to break down like this and be misrepresenting the facts, or must not be competent to discuss this topic. Because I presumed your competence, I felt you must be misrepresenting the situation; I now see I was mistaken.


Your point wasn't completely absurd in the beginning, but now you're wilfully misunderstanding the arguments. It seems you're harboring some sort of grudge against educators?


Asking for a statement of the relationship between a proposed intervention and a proposed outcome is not even close to absurd. Pedagogical researchers in policy do this all the time.

For example, Fiona Phelps has done research into gaze aversion in multiple contexts, one of which is pedagogical, and there she proposes an intervention for educators and discusses outcomes. She is interested in making a causal link between intervention and outcome. It should also be noted that the causal model she discusses is simple, discussable, falsifiable, affirmable, and has few factors and few measurement instances.

Discussing simple causal models is standard professionalism in pedagogical policy literature... or any other empirical field involved in policy.

Is it that scary to say that a Pyret or Python intervention should have a relationship to CS graduation rates, upper division performance, GPA, or some other stated policy outcome?

Not stating your intervention outcomes allows you to go statistical fishing.

One can say the same thing about iPads in schools. It's not wrong to ask what outcome variables the school intends to affect with an iPad intervention. Maybe the effect is there, but we can't begin discussion unless the proposed intervention is willing to state its predicted outcomes.

Why are you *not* pushing for a discussion of outcomes?


Since you keep coming back to Python: I already responded to you, I believe, on a comparison to Python. In case not, here it is again. https://news.ycombinator.com/item?id=13189513 Lots of educational outcomes (e.g., just about everything in the Bootstrap project [http://www.bootstrapworld.org/]) are linked to these differences, but since most people here are hackers, not educators, I'm sticking to the technical issues that lead to those differences.


> Languages such as C, Java, Python, and Rust are usually designed by experienced programmers in industry, for experienced programmers in industry. Pyret, on the other hand, is designed by computer science educators, for computer science education.

Honestly, looking at the examples on the Pyret main page makes me think that this isn't a good approach at all. I have been in software engineering for many years now and worked with quite a few languages, yet I find the code practically incomprehensible, and the language looks very complex. So likely someone who "learned" using this language will have a lot of trouble actually working with real languages, assuming that he actually managed to learn this language as his first, which is rather unlikely I believe.

I was also involved in CS education - in first-semester programming courses. And the language is really the last problem there (C was used; I think the dept. is moving to Python for that course though). Python especially is a good language overall and is already widely used in CS education.

In short, I don't see why the world needs this, and I actually think that using it would be actively harmful.


I think that's taking it a bit far. I especially don't see how being explicit about the problem you're solving (teaching computer science to newcomers in a collegiate setting) and focusing the effort towards that problem and iterating on the methods used to teach it (building better tools and iterating without importing unnecessary baggage) is "actively harmful."

Simply looking at text in a different language is not going to impart magical understanding. Things you haven't actually used in a non-trivial program will seem complex at the beginning. And sure, you can rationalize that away by treating this like a special case, but it's just a general bias in our brains. And depending on your background, certain languages may seem more complex than others simply due to lack of exposure.

I'd rather propose putting it up to a damn test. Compare the computer science knowledge of outgoing students to other students in similar circumstances who were taught using a practical, heavily-used-in-industry language. Use a short test maybe, along with interviews to get at qualitative unknowns. The results of that could then be used to pick the best strategy to teaching going forward.

But also, in general, why not try to solve the problem better, without pre-supposing all the sunk costs and existing ideas? And not everything needs to be on the freaking altar of industry either. It's not like your college implementation of heap sort is gonna be mainlined in a company and used forevermore. What're you optimizing, the couple of weeks it'd take for someone who already knows this to learn Python? You already saved it, probably, by skipping the b.s. until it actually became relevant to learn about.


> I think that's taking it a bit far. I especially don't see how being explicit about the problem you're solving (teaching computer science to newcomers in a collegiate setting) and focusing the effort towards that problem and iterating on the methods used to teach it (building better tools and iterating without importing unnecessary baggage) is "actively harmful."

It's pretty clear that I was specifically referring to the language at hand and not to general idea/concept.

> ... needs this ... using it ...


  yet I find the code practically incomprehensible and the
  language looks very complex
Well, then I wonder what these 'quite a few languages' that you worked with are, because the language really doesn't look that uncommon or complex to me.

  I was also involved in CS education [..]
  And the language is really the last problem there
The people that created Pyret have been in CS education for decades and have done a lot of research on 'how can we make CS education better'. Their conclusion was that the idiosyncrasies of the various industrial languages very much are the problem. You give too much weight to your own prejudices and subjective observations. The experimental results prove otherwise.


There are so many examples of people who have learned a Lisp-family language in school and gone on to learn an industrial language later. This is a Lisp-family language with some syntactic sugar to make it look more Pythonic, basically.


Since when are Lisp-family and industrial languages not overlapping sets?


I was trying to come up with something less loaded than parent's "real languages" phrasing, to describe the sorts of languages commonly used in industry, like Java and whatnot.


Yup, I second that. Lua is much simpler by comparison. Python is pretty easy as well. You can do lots of stuff with those two alone, not only on desktop/mobile platforms but on embedded devices as well (think eLua, Arduino, Pinguino).


What characteristics makes a programming language a "real" programming language?


The most important distinction would be that you don't have to make a transition from education contexts to production contexts.

I mean, why isn't it possible to design a language that is both quite fit for education & production? Is Python vs. Pyret really that big a difference in education outcomes?

This is important for professors & TA's, educators, or enthusiast parents who wish to teach their children. All have to master a toolset and environment prior to teaching others, and maintaining the environment is also an ongoing cost. Some of this cost will be assuredly transferred to students, so we want to be sure that a "pedagogical" language without production traction has such powerful gains that it offsets the losses.


Yes, it is a big difference, depending on what you're trying to teach. If you want to teach a conventional CS curriculum recognizable from the 80s, Python is great (though some people really prefer teaching with static types, which Python does not have and which are absurdly hard to add now). Here are several other differences: https://news.ycombinator.com/item?id=13189513


- has a standard library

- has library support for the usual stuff: strings, math, web, database, encoding, etc.

- written using a lower level language (like C, Rust, C++), self hosted (Go) or a mix of those (Dlang, Perl6)

- is actually used to build applications, has some sort of community around it (Rust, Perl, Python, Lua etc.) or some backing (Swift, Java, Go)


It's a real language if there are programs written in it where the main goal is what the program does when you run it, and not what the author learned writing it.


This is kinda like learning CS theory - kinda useful to keep in mind for other things, but never actually used, i.e. set theory, finite automata, Turing machines, etc.


Except there are numerous things Python can't do [https://news.ycombinator.com/item?id=13189513], and at least as much harm comes from trying to push the round Pythonic snake into a very square hole.


"For example, Pyret trades performance for mathematical familiarity by using exact representations of rational numbers as its primary number type."

(Disclaimer: I haven't used Pyret.) But maybe the language allows the developer to work out problems differently. With that in mind, perhaps asserting that the language was built for education [so as to imply that it'll make you a better programmer in general, even in 'bloated or expert' languages] doesn't necessarily shed light on its actual benefits -- but I'll check it out later on to find out for certain.


This paper (http://cs.brown.edu/~sk/Publications/Papers/Published/fffkf-...) linked from "Why Pyret" (http://www.pyret.org/pyret-code/) is a long answer to "why teaching languages."

It may also be useful to know that Pyret exists as part of a larger project and community. We're using Pyret to develop curricula for Bootstrap (http://www.bootstrapworld.org/), which integrates computing curricula into existing middle and high school classes, like math and physics.

Part of the idea is to integrate computing into these settings to reach students who may not self-select for computing.

A lot of Pyret's design, like a dead simple animation framework (https://www.pyret.org/docs/latest/reactors.html#%28part._to-...) and interfacing with real spreadsheets (https://github.com/brownplt/pyret-lang/wiki/TM028-Pyret-and-...), supports these low-friction uses of computing in these curricula.

Having these features and concepts baked into the language, supported by the IDE, and easily accessible goes a long way to building something that a non-CS teacher can pick up and use in an existing class.


I also belong to the skeptic crowd as far as fancy CS teaching languages are concerned. I think there's much greater value in a first language that's three things: dead simple, useful, and which lets you poke through the abstractions.

A lot of us hackers here today cut our teeth on BASIC, even me as a 90's kid. I remember the joy at having automated away rote chemistry or physics homework, and in hindsight realise that writing a working BASIC program made me learn stuff better and deeper and quicker than the homework I avoided.

I have a four-year-old now, and the other day she asked if I could count to a thousand million. I answered "Nope, but I'll show you Mr. Fortran, he can." Give kids a DO loop, some arrays and WRITE statements and they will conquer the world. For learning, it's hard to beat the simplicity of

    program count
      implicit none
      integer :: i
      do i = 1, 1000000000
        if (mod(i, 1000000) == 0) then
          write(*,*) i, " Million"
        end if
      end do
    end program count
And I'm sorry to report, your examples don't convince me further of Pyret. (I'm a Python fan, BTW.) The docs you link for the "dead simple animation framework" assume I understand templates; that's hardly dead simple. And while cloud integration is all cool for spreadsheets, if you did the same with Pandas in Python it would take no more lines of code, be no more complicated, but would start you on a tool that tons of people use for Real Work every day in a wide range of industries and sciences. Not just limited to interacting with half-baked Google Sheets data, you could read anything from HDF5 produced at CERN to SQL data from NASDAQ.


There are no "templates" in the language, so the docs couldn't possibly have required you to "understand templates". You may not have understood what you read, I'll grant you that. That may or may not be a fault of the documentation or the language. It could even be the fault of a hasty, pre-judging reader…


Thanks for the feedback.

> The docs you link for the "dead simple animation framework" assume I understand templates; that's hardly dead simple

The idea of "templates" comes with a lot of baggage from C++-land. If we were doing this with C++ templates, I'd agree, because the errors are abysmal and the types are difficult to write.

As an example, we teach students to pick "the type of their reactor's state" as a useful pedagogic step in physics simulations, which is almost always a (maybe nested) tuple of numbers. But writing out the types (the instantiation of "a" in to-draw, on-tick, etc) is a useful and quite concrete activity. Since we have them working with types in their contracts already, they're equipped to think through data definitions that way.

That kind of type-based reasoning is a kind of thinking that isn't just loops and arithmetic; it also needs language and curricular support, and is something we try to support early and often.
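
As a rough sketch of what that looks like (simplified to a single Number as the reactor's state, rather than a nested tuple; see the reactors docs for the exact details):

    import image as I

    # A reactor whose state is one Number: the current tick count.
    ticker = reactor:
      init: 0,
      on-tick: lam(n :: Number): n + 1 end,
      to-draw: lam(n :: Number): I.circle(n, "solid", "blue") end
    end
    # ticker.interact() starts the animation in the IDE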

> if you did the same with Pandas in Python...

We could do more to integrate with existing, powerful libraries. Since we've found it useful to be browser-based to make things widely accessible, we'd probably be looking for similar things in JavaScript, but the point is well-taken.

That said, tables do more than work with just "half-baked" Google spreadsheets. The interface is generic to support other formats – the Google Sheets import is a library, and a Pyret/JS programmer can write library code to import from other sources (this is of course work, and won't magically appear, but the language supports more than Google Sheets by design). In addition, tabular data is built-in to the language enough to use it for _testing_ example tables (and functions over tables) with the same primitives as testing other functions. And the animation framework can spit out tables that trace execution of the animation, so students can generate their own data from simulations.
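
For a flavor of the table syntax, a minimal made-up example (the tables documentation has the full set of operations):

    animals = table: name, species, age
      row: "Sasha", "cat", 3
      row: "Rex", "dog", 5
    end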

All of these things live together inside the same language that has other features we've found useful for beginners, like more authentic math (not just bignums but rationals), easy image creation, example/testing support, carefully-crafted error messages targeted at beginner vocabulary, and so on.

> I have a four-year-old now, and the other day she asked if I could count to a thousand million. I answered "Nope, but I'll show you Mr. Fortran, he can."

That's just awesome :-)


Thanks for the response!

> The idea of "templates" comes with a lot of baggage from C++-land. If we were doing this with C++ templates, I'd agree, because the errors are abysmal and the types are difficult to write.

You're spot on. I suspect most people familiar with template-ish expressions come from C++, where you encounter quite a few projects that are "templates all the way down" (looking at you, OpenFOAM).


So Pyret is attempting to help solve the very real problems of engaging kids and supporting teachers, through creating a toolchain and teaching materials based on real research.

Is there a way to make a donation to this project?


Yes! You can contribute to the Bootstrap project directly:

http://www.bootstrapworld.org/contribute/index.shtml


What about a 'teaching' OS/compiler/toolchain vs. the 'real' thing? E.g., a stripped-down OS kernel with minimal clean code optimized for teaching vs. throwing someone into the Linux source? Or a teaching-oriented compiler vs. wading into GCC/LLVM?

Teaching languages/OSes/compilers/toolchains (if done well) lower the barrier to entry while providing a conceptual transition path to learning 'real' stuff later on the job. Plus, the 'real' thing that we may be teaching in school now may not even be relevant in 4 years in industry when students get out into the working world, and they will have to pick up new languages/toolchains/etc. constantly on the job anyways. There is definitely a good argument for authenticity, but also one for paring down extrinsic complexity to illustrate core computer science concepts. e.g., try explaining parsing using full-blown C++ as a case study -- that seems pretty daunting. But one of these teaching languages (if done well, of course) is much easier to write parsers for.


In my OS course we were taught using Linux.


Sure, there are advantages to both approaches, if done well. But I bet that diving into Linux first would make it really hard for beginners to get a deep grasp of core concepts; may be fine for more advanced learners. e.g., the scheduler is probably 10,000,000,000 lines of complexity when the core scheduling algorithm probably fits on one page of code.


The idea is that the concepts of CS are universal, regardless of programming language. So instead of using a programming language that has been designed for real world use (carved with the footprints of a thousand language hackers, full of awkward inconsistencies, warts, and good-enough-for-nows), teach in a teaching language - a programming language that is very regular, with very simple semantics, not necessarily designed for speed.

However, this doesn't exclude it from being a Real Language. Scheme is both a very popular teaching language (perhaps the prototypical teaching language, being used in SICP), and has seen actual use, with a dedicated community around the various implementations of the language. For a teaching language, there are a lot of things built atop it. Like, say, this site (PG's Arc runs atop an old version of Racket, from back before Racket diverged quite so much from Scheme, and was still called Scheme).


If they are universal, what is wrong with using a real-world language?

>"full of awkward inconsistencies, warts, and good-enough-for-nows"

I can't agree with 'full'. But is there a better teaching language? I don't think so.


A teaching language doesn't have mounds of legacy code constraining its evolution.


Just mounds of legacy documentation and teaching materials.


Legacy teaching materials can be as much a curse as a boon. Look at all the people here who can't seem to wrap their heads around why some of the features of Pyret would be necessary for teaching programming. That's because their model of teaching programming is stuck in the 80s.


True, but the language that documentation and those materials teach is still a cleaner language than most.


Compare CL to Scheme. One was designed for Real Work, the other for PL research and teaching. The difference is stark.


And yet Racket grew out of Scheme and is now every bit as much a Real Work language as CL, while avoiding CL's warts. So your point is…?


Racket is still immature, still has a focus on education, and has more than enough warts of its own.

For that matter, so does Scheme: it just has fewer of them. Which has its problems.

Oh, and Racket is in no way as good for Real Work as CL. It doesn't have the libraries, and it sure as heck doesn't have perf.


Says you, and many disagree, including Racket's commercial users. But of course it's pointless arguing with a CL fanatic (I learned that two decades ago on c.l.s, and looks like nothing has changed), so please do go ahead and have the last word.


Gladly.

>including Racket's commercial users.

A language doesn't need maturity to have commercial use. Rust had commercial adoption pre-1.0, and that adoption is increasing. Rust is also nowhere near mature.

Racket being built in part for education is also undeniable. Look it up.

Racket's library support vs CL's is highly debatable. Neither is ideal, but both are "good enough," so let's leave it at that.

As for Racket's perf, that's just a fact. Look at Racket, then at SBCL. Then at Racket. Then at SBCL. Sadly, it isn't SBCL. But if it started doing native code compilation, maybe it could be a bit more like SBCL.

This is also true of Scheme: Some of the Schemes are much faster than Racket, for many of the same reasons (many of them aren't as fast as SBCL, as I understand it, although I haven't run the benches, so I can't be sure).

This doesn't make Racket a bad language by any stretch. I don't happen to like it, but that's just preferential: there's nothing unambiguously bad about it AFAICT. In fact, it's quite well designed.

>a CL fanatic

Crap. You got me. All that time I spent arguing with lispm that yes, Scheme is a Lisp, explaining to people what Scheme is and why it's cool, explaining that yes, Hygienic Macros are a good idea, and no, that doesn't just mean declarative macros ala syntax-rules, there are imperative systems that do the same thing, discussing the differences between Racket and Scheme, and why I think they matter, but you can choose either, depending, writing CHICKEN Scheme macros, barely tested (because I was busy), on my cellphone, and so on, was just a ruse. I've really been a CL user all along. /s.

While there are things I love about CL (and I've even considered moving to it for various reasons, despite it being an uglier language - although in some ways also a more practical one), and things I hate about the Schemes - CHICKEN in particular (familiarity really does breed contempt! At least a little...), I still think that Scheme is a beautiful language.

While I don't think I'm a fanatic for anything, if I am a fanatic, I'm a Scheme fanatic, not a CL one. I have, as I've mentioned above, argued a lot with CL fanatics on This Very Forum.

Idiot, I'd take with a grin. Incompetent, with a smile (I may well be). Possibly misinformed? I'll downright admit it.

But a CL Fanatic? That's not just a stretch, it's flat-out wrong.


> Racket being built in part for education is also undeniable. Look it up.

You realize who you said that to?


[flagged]



[flagged]


> Did I mention I'm not a CL fanatic?

The converted ones are usually the most fanatic.


> designed for real world use (carved with the footprints of a thousand language hackers, full of awkward inconsistencies, warts, and good-enough-for-nows)

Everything you mention here is a consequence of age, not intent for real use.

An old teaching language would be just as bad if those are your only criteria.


But this doesn't happen to teaching languages. Because they don't have to work with real applications, they can change, do their best to stay clean, and be very minimalist (like Scheme), specifying only the bare minimum and ensuring a clean learning environment.


The first language that I learned was a "teaching" language: BASIC. That was in ~1981. A potential advantage at the time was that BASIC tended to be a completely self-contained environment, meaning that you didn't have to assemble things like build tools and libraries. "Hello world" was one line of code; you entered it, and it ran. And at that time, just dealing with the tools -- typically on a mainframe -- was the first couple of weeks of the typical introductory programming course.

This is not entirely unusual, even today. I was curious about one of the toolchains for phone app development. For one particular tool, the tutorial for "Hello World" was many pages of instructions: Installing the tools, creating an empty application, setting its characteristics, adding code, building, etc. Even as an experienced programmer, I found it to be rather forbidding.

Another possible reason for a teaching language might be to excite the interests of particular age or social groups. For instance, Logo was oriented towards making interesting graphics. Scratch eliminates the need to learn syntax, and has easy-to-program graphical output.

On the other hand, some languages come pretty darn close to that old BASIC in terms of the ease of getting started, and one of those languages is Python. It seems to me that the sheer size of Python shouldn't be a deterrent to using it for teaching: Just ignore the advanced features at first.


"Ignore" is easier said than done. Students make mistakes. They type the wrong character. They put something in thew wrong place. If you don't enforce your sub-language, you are effectively throwing them into the open, and they suffer. We've seen numerous examples of this, which is why DrRacket offers teaching languages that are subsets of the full language, with customized errors that match the student's vocabulary and level of understanding, etc.


I first started getting into programming using flash and actionscript. I think having a self-contained environment is incredibly important for beginners. When I started trying to learn rails a few years later I got completely hung up on getting a dev environment up and running since I was new to unix-like environments, and I didn't come back to it for several more years. I can't imagine trying to get into front-end development as it is today with all its constantly shifting dependencies while also trying to learn how to code and how to stand up a server.


Basic is pretty cool. Very easy to learn and quite low level. It served as the OS of older computers like the ZX Spectrum, and some later-era handheld calculators were programmable in Basic. I used it later to write code for AVR microcontrollers. You can also program current Maximite computers or Duinomite clones using Basic. Lua and Python are comparable modern alternatives.


Languages take a while to develop. Pyret will develop over a long time. Over time, it will surely grow to be much more than what it is now. But while it's growing, we need to chart a growth path. That path is dictated by the constraints of teaching computer science to beginners and slightly-more-than-beginners.

Every language you name and love and use had a growth path. This, ironically, includes Python, which also evolved from ABC: a teaching language.


It does have one benefit: it forces the student to learn a second programming language sooner. I wouldn't want to work with someone who only knows the language they started out with, and likely still carries many bad habits from.


I've had some experiences where it really helped. My supervisor uses a teaching stack-based VM for his compilers class - it is slow as all hell but very simple, easy to get running, and it gives very good debugging info. Similarly, I can see the value in a simplified graphics API that lets you focus on the basic theory, or a simplified language with Hoare logic for writing loop invariants in an algorithms class.


I can't speak for Pyret per se. But learning a language can drown a newbie. The PLT folks, back in the PLT Scheme days and even in Racket, had the notion of tiny interpreters that made you grow the set of possibilities of a language, from arithmetic to booleans, then lexical scope, functions, lambdas, etc.

Each level allows you to think to the limit of that world. Only then do you see where a new trait fits and how it relates to others. You don't just drown in heavy manuals right off the bat.


> You spend all this time ramping up on a language, toolchain, library, etc, that you will eventually be unable to leverage beyond a certain point since it is not one used by everyday programmers.

Teaching languages that are even modestly successful in that role tend to achieve significant use outside of teaching as well, so that tends not to be true. Lots of languages that end up having broad general use were created with a narrow or specific motivating use, and teaching is no less legitimate than any other motivating use. Pascal, for instance, was a teaching language, and for quite a while it had enormous industrial uptake.


I think teaching LISP (Clojure) is an excellent option to demonstrate how easily you can solve complex problems by chopping them up into simple idempotent functions. Teaching students to use a complex OOP language with lots of gotchas and tricks is much harder in my opinion. Focusing on the concepts of data structures, recursion, and time complexity is great for most newcomers to software.


Yes, this should not be one of those training wheels, but a fully fledged solution.

Even Pascal grew up into Object Pascal. Yes, it is still used in production.


Just curious, are you still programming in your first language then?


Example from the OP:

        fun to-celsius(f):
          (f - 32) * (5 / 9)
        end

What's the justification for allowing the use of hyphens/dashes in reference names, when underscores would seemingly provide the same purpose? One of the most common problems I notice in newbie code is inconsistency in using whitespace, e.g. `a-b` vs `a - b`, though in that situation, for Python and virtually every other language I've used, those both result in subtracting b from a. But in Pyret, it seems that `a-b` would be interpreted as a reference? That seems needlessly confusing for no benefit in readability over using snake_case.

Beginners frequently have issues with grammar; that is, they are pretty much unaware of it when trying to figure out the many other things they have to learn about programming. Even in SQL, I forget that it must be quite confusing for beginners to parse out the meaning of the `*` in something as simple as:

     SELECT mytable.*, (a * b) AS myproduct,
          COUNT(*) AS the_count
     FROM mytable ...
Doesn't seem worth it to make the hyphen, which is so frequently associated with subtraction and negative numbers, have just a decorative meaning when used between word tokens (as opposed to subtraction of values represented by variables).


Pyret inherits hyphens-in-identifiers from Scheme/Racket. One of Pyret's largest users, Bootstrap [1], has long used a dialect of Scheme, and has recently introduced a curriculum that uses Pyret. For these users, "virtually every other language" allows for hyphens in identifiers.

Scheme mitigates the spacing issue with s-expressions. Pyret attempts to mitigate it by enforcing that operators be separated from their operands with at least one space.
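
Concretely (made-up bindings):

    a = 5
    b = 2
    difference = a - b   # subtraction: the operator needs spaces around it
    # difference = a-b   # parses as one identifier and is reported as unbound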

(Disclosure: I am an occasional developer of Pyret.)

[1] http://www.bootstrapworld.org/


Scheme and Racket are both LISPs, with very different syntax from Pyret's. Hyphens in identifiers aren't a hazard there, because binary operators don't go between two other identifiers, and operators must be separated with spaces; e.g. subtraction looks like `(- 2 1)` instead of `2 - 1`. This means that using hyphens in identifiers is quite safe.

But Pyret is using more traditional syntax, where binary operators go between their operands, e.g. `2 - 1`, which means using hyphens in identifiers is a hazard as the parent comment describes.


The problem is overblown. It's not going to cause any deep, hard-to-debug errors. If you accidentally type `a-b`, you'll get an error message that says "a-b is not defined". Fix it and you're done.


While I agree that the problem is not a very big deal, I think that you might be surprised how difficult understanding compiler errors can be for newcomers.


The Pyret design team has done perhaps the most in-depth research into error messages of anyone [e.g.: http://cs.brown.edu/~sk/Publications/Papers/Published/mfk-me..., http://cs.brown.edu/~sk/Publications/Papers/Published/mfk-mi...]. We're deeply aware of the effect of error messages. This isn't a problem.


I regard that as a valuable teaching moment: "This, kids, is a great example of what you'll have to deal with all the time. Computers are not smart and neither are compilers. Forget a semicolon or some whitespace, and it's your job to clean up the mess after your compiler shits its bed."


Having been a tutor of first-year beginner programming students many years ago, my experience has been that it's not quite as simple as this.

The number of people who forgot even the most simple syntax rules towards the end of the semester, even after having shown them during each lab session, was surprisingly high.

You tell them that the computer is super dumb and pedantic, and they understand this. So you tell them the rules (a semicolon at the end of a line, perhaps) and they understand. Then an hour later they're staring blankly at the screen wondering why they got the exact same error again.

Obviously not all students are like this, and almost all did get it after a while, but you'll be repeating your statement to them enough times that you wish the language didn't have whatever wart is causing the confusion.

I found Java particularly bad, because you have to tell beginners to ignore 90% of the hello world program (class ... public static void main ... - you have to tell them "hey, ignore class, public, static, void etc for now, we will cover it later"), which isn't satisfying at all and in my experience caused more confusion still...

Yes, you've got to treat it as a learning experience, but the students usually see it as a roadblock and a cause of frustration. Often one that doesn't make sense to them, and they question why it's even a thing. ("Why can I use '_' in identifiers and not '-'?" is not too hard to explain, of course, but it's yet another bit of incidental complexity to them.)


I don't recall having real trouble with category errors like this as a beginner. Do you think that is a serious problem?


Well as dumb as it sounds, one reason to allow hyphens instead of requiring underscores is that the former takes one keystroke to type, whereas the latter takes two. This is kind of nice when dealing with longer function names.

It's also worth noting that Pyret requires spaces between all math operators, not just subtraction. In other words, 3+4 is not valid; you would need to type 3 + 4. So at least it's internally consistent in that sense.


Not dumb at all... it's just that, assuming we spend more time re-reading code than we do in the initial writing of it, saving the keystroke by choosing hyphen over underscore seems like premature optimization :).

Good to know that Pyret requires spacing between the other operators. That does help a lot, and an argument could be made that it forces folks to not do `a-b` when they mean `a - b`.


Personally I find that hyphen-separated identifiers are more readable than ones with underscores. When dabbling in Perl 6 I got accustomed to them very quickly. And forcing people to put spaces around binary operators is definitely a plus.


It's not dumb at all. It's one of the things we had in mind when we designed the syntax!


If you're used to the syntax of identifiers from Lispy languages, you'll love the hyphens in the names (what some people call kabob-case). Because the syntax requires spaces between operators, there is no ambiguity. We have had several hundred Pyret programmers, and there has literally never been any confusion about the syntax: in fact, people didn't even _notice_ they were using `-` in two different ways until it was pointed out to them. Empirical data rocks!


As a Clojure programmer, I learned to love hyphens; so much more readable. But since Clojure is a Lisp, it does not suffer this whitespace problem/confusion.

    (- a b)


An infix language could also require whitespace around operators, so that a-b could be an identifier.

Like in a Lisp infix macro: (infix (a + b * c) / d).

This kind of thing is quite habitable: we hardly lose sleep over (3.4) the list of one float versus (3 . 4) the dotted pair of integers.


We're discussing an infix language that does exactly that. In Pyret, binary operators must be surrounded by spaces. Therefore, `a-b` is not ambiguous.


I'd say that white space requirements around operators even seems implicitly consistent with the Pythonic idiom of semantic white space and the Pythonic dictum of only one way to do things.


Python has many ways to do just about everything (e.g., list comprehensions are completely redundant). It's a nice marketing slogan, though.


Yes, there's even two Pythons.


I'd say there is some room for confusion, even if it's not in the grammar.

I have spent quite some time chasing bugs of the type (- a -1), where the second minus ended up there because of either a copy/paste error or a brain fart, because the original expression was a - 1...


I'm not sure how (- a - 1) would work, unless you've named something "-"?


- is a function. Because Clojure is a Lisp-1, the subtraction function being passed as an argument like that is very probable. In Common Lisp you'd pass #'-, so it's clear that a function is being passed.


I love hyphenated symbols. It's so much easier to read than camel or Pascal-casing. As another poster pointed out, using whitespace between subtraction variables and the minus sign is just a non-issue in practice.


Part of the reason whitespace is inconsistent around operators is because nothing about most languages forces you to care. It's a lot like case sensitivity in that regard.


There's more to it; making whitespace significant makes code harder to read in print and when presented in non-monospace fonts on-screen. Even without these caveats, it's still a bit harder to visually process. It's like the = vs == problem: it's easy to get right, you almost always get it right, but it's close enough to cause errors and requires extra concentration.

Another Lisp-ism: gratuitous abbreviations. It's 'lam' instead of 'lambda'. Given its teaching claim, that's two layers of understanding required of students: to know what a lambda is, and to connect that 'lam' stands for 'lambda'.


It's kind of interesting, because originally my comment was a stronger defense of hyphens; then, when I compared it to case sensitivity, I actually backtracked.

Ada, for instance, specifically considered then rejected case sensitivity, because they had at least one study that said case sensitivity was a source of errors.

I don't know how well instrumented the Pyret courses are, but I would be very interested if they have studies on this.


Pyret's implementation is by design very privacy-preserving (we don't just quietly make a copy of everybody's code, people have to explicitly share copies — we don't want to freak out students by surveilling them). But we do have some data that students have given us. In those data, we do not find evidence of case-sensitivity being a problem.


Pyret enforces what everyone agrees is good style anyways, and consistently requires whitespace in other places similar-looking languages don't.

This therefore is perfectly natural; non-programmers are used to using spaces to delimit words and this is simply doing the same for tokens.


Hyphens are much better for names than underscores, and easier to type.


> What's the justification for allowing the use of hyphens/dashes in reference names, when underscores would seemingly provide the same purpose?

Hyphens are commonly used to join words into compounds in English; underscores are not. The latter are used in many programming languages as a result of the fact that not allowing hyphens to serve that role, when they also serve as an operator in other contexts, simplifies automated parsing (and allows syntax that supports more compact expressions, and there was a time when bytes of source code had a greater cost than they do now).


People mess up the other way as well, thinking "a-b" is a valid identifier in Python.


On the other end, I love that Calca (a Mac and iPhone app and language) allows you to use spaces in function or variable names. It's so natural.


As someone who teaches coding to beginners for a living (I founded One Month and I teach Python to business students at Columbia University), this language looks really intimidating to beginners.

Maybe Pyret isn't for beginners, and it's intended to teach people who already have some basic knowledge more advanced concepts like functional programming. That's fine.

But to a total beginner, the syntax of Pyret is definitely a step back in terms of readability.

In what world is:

  fun square(n :: Number) -> Number:
    n * n
  end
...easier for a beginner to understand than:

  def square(n):
    return n * n
I get that the intention of this language seems to be to help beginners avoid some of the more common pitfalls that they may run into (ex. unexpected parameters and return values), but it seems like they do it at the expense of the total beginner's ability to understand and keep track of lots of new concepts when they're just starting out.


Pyret is inspired by Racket, which starts at step 1 with functional programming. It's not about readability or getting people to write basic code as fast as possible, but about trying to teach core program design principles easily and build a solid base within a single semester. The syntax is not as clean as Python, but it offers much more clarity in terms of testing and signatures, and offers a very interesting method of writing loops with map/filter/foldr.

I have a bias here, as I help teach an introductory course using DrRacket, and many of my colleagues are very aware of Pyret, some even helping to develop it. In one semester, however, we have students with a full understanding of recursion, linked lists, many other common data structures, anonymous functions, functions as data, and understanding of map/filter/fold and how to use them. The class is specifically aimed at people who do not have prior programming skills, and works very well in my experience. Yes, it is a lot, but it builds an incredible base, and if taught to build on itself slowly, is actually very minimal conceptually. The optional features in Pyret are probably allowed with that in mind.

One of Racket's weaknesses, arguably, is that it doesn't look like much else out there in common use, since it's a Scheme. Pyret's syntax is an attempt to make the same ideas translate more easily to other languages, such as Python/Java.


> arguably, is that it doesn't look like much else out there in common use, since it's a Scheme.

That is a beauty of Lisp, especially Racket: every thought is explicitly inside a bracket.

I am a self-taught programmer who started with Z80 assembly, Pascal, etc. I mostly work with R nowadays, and I was struggling to get to the next level with R. I picked up a book and learned Racket, and the convergence of R and Racket was totally unexpected. I had missed all the Scheme influence in R, and most R programmers in the past didn't use that part of the language. So as a learner, those brackets are golden, especially for learning concepts.


If the goal is to teach functional programming with testing and signatures, is the awkward syntax (that isn't really very close to C#, Java, or Python) better than a set of macros on top of typed Racket?

It seems like it'd be easy to define a typedracket-derived #lang where you had to write:

    (define (sum a b) : [-> Integer Integer Integer]
      (where (= (sum 0 1) 1)
             (= (sum 2 2) 4))
      (+ a b))
And make the type signature and where clause non-optional with at least a single test.


I think that's a very valid question. I think Pyret is attempting to reach out to people who may not view Lisps favorably and still convey many of the valuable concepts. Whether or not it is the right approach, or whether it will translate properly, is yet to be tested.


It's not better [1], it's different. There's Typed Racket for people who want that. But many people suffer from visceral negative reactions when confronted with any kind of parenthetical syntax. Pyret was initially designed for them, and has since added on several of its own innovations.

[1] I, personally, love parenthetical syntax.


The syntax looks really Haskell-like to me, so it's not that awkward.


Can we stop assuming that the syntaxes we are accustomed to are the only "readable" options?


This needs to be higher. So many people in these comments claiming that X, Y, Z are "more readable" or "friendlier to beginners" and providing no argument or evidence other than the implication that it's what they are used to because of their personal background.


Whether something is readable is always going to depend on who's reading it. That's simply how readable/non-readable works. Now, you can learn to read new languages (human or machine), but I suspect someone who knows a Romance language will find other Romance languages far more readable than Chinese. Romance languages share words, alphabets, and many syntactic features. You can draw from an existing body of knowledge and patterns when learning a new Romance language. There's a lot less to draw from if you're learning Chinese.

The same applies to programming languages. The fact is most programmers learn popular languages. Languages that deviate from popular languages tend to become less readable.

So the answer is no. If you are accustomed to a syntax, it naturally follows that languages adopting those syntactic features will initially seem more "readable" to you, because things only become readable once you learn how to read them.


However, when you apply your "I know these programming languages and their syntax" bias to evaluating the readability of a teaching language, you're rather missing the point.


Pyret draws from traditional algebra notation, Python, and OCaml/Haskell. If you do not know any of these, you must be a pretty weak programmer. If you know most of these, you ought to be able to recognize almost every part of it. Anyway, the good news is that programming beginners — who don't know any languages — have no real trouble with it, so any difficulties may say more about the reader's blind-spots than about the language itself.


Annotations are optional. This is equally valid:

  fun square(n):
    n * n
  end
Not that different after all! And I say this as a lover of Python and significant whitespace, but not having it at first would be easier.


There should be optional syntax that allows:

    fun square(n):
      party in here
    unfun
    // no longer having fun here


Perhaps `unfun` can be incorporated into this:

http://blog.brownplt.org/2014/04/01/var-vs-yar.html


"def" is an abbreviation non-programmers are more likely to understand than "fun".


OK: opinion time. Some folks in this thread don't understand how pedagogy works.

When these people talk about easy-to-learn, they don't mean what a lot of people in this thread mean. Def / fun is such an easy thing that it's a non-issue. Anyone can learn programming language syntax pretty quickly.

It's the semantics that's hard. When you first start learning, the things that actually trip you up and that you can't just look up in a cheatsheet in two seconds are things like: Unexpected behavior, bad error messages, hidden rules, hidden complexity, complicated control flow, code that doesn't do much but must be there anyway, arcane things that you must do to make the compiler happy.

In that way, you don't have to assume that beginners are stupid. A lot of these things happen because languages are badly designed, or because they evolve to have too much cruft over time in an effort to be general purpose and catch up with the times. For learning, this just gets in the way. With better tools, learners can tackle much more complex concepts much quicker. Once those take place, you can introduce the arcane weirdness of other programming languages, and it makes a lot more sense.


You're right! To DEFine a FUNction, you should naturally type `def`, not `fun`. By the way, I tried to define a variable and got

  >>> def x = 3
    File "<stdin>", line 1
      def x = 3
            ^
  SyntaxError: invalid syntax
What's up with that? Is that not also a DEFinition?


Yeah, that should work (well, "def x: 3" should work). In my preferred language it does work.


But that's no fun. ;-)

Seriously, fun is obviously short for "function" to anyone. (Also, defun in some other languages fits the bill.) In contrast, def doesn't explain what is being defined. (See Groovy, where it is a real problem, or JavaScript, where you can use var for everything.)


I remember that Apache Groovy's use of the verb def (matching the verbs and adverbs of the other statement-level keywords, e.g. switch, if, while, return) instead of a noun like any (matching the type names it stands in for, e.g. int, bool, String, null) was, er, discussed at length on their mailing list back around 2005-06. The Nabble interface to those discussions has since been obfuscated by being redirected and embedded within a page from the groovy-lang.org website, so I can't easily find a link to those discussions.


> In contrast, def doesn't explain what is defined. (See Groovy where it is a real problem or JavaScript where you can use var for everything.)

Does that matter? I'm used to Scala where `def x = 4` is just as good as `val x = 4` as far as a beginner's concerned.


How dare you suggest non-programmers can't understand fun! XD (I'm only teasing of course.)


The type annotations are optional, so you can write:

    fun square(n):
      n * n
    end
In fact, I believe the major motivation for making the type system optional was to not force beginning students to write types.


In what world is:

  def square(n):
    return n * n
... easier for a beginner to understand than:

  square n = n * n


Is square a function, or an object/data structure resulting from an operation?

Welcome to the Real World. Please make different entities look different.


Usually, by the time one starts to learn to program, one has already learned some math, and so knows only this notation for functions: square(x) = x*x.

"Return" seems meaningless.

So that is the "real world" function declaration (if that phrase makes any sense for a math notation convention) for a non-programmer.

Is that what you mean? Because then I agree...


In the same world, it is easier to understand for advanced developers.

You know n is a number and the result is a number. You don't need to type-check in your head.

Remember Pascal? Being explicit is good for education and for regular sanity.

P.S.: I like Python very much, but all the time I'm wondering "OK, what is the meaning of this code? Let me look elsewhere so I can remember what this is supposed to return."

This also happens to me with F# (type inference is weird: the compiler knows, but it hides from me what it knows!).


When that's a problem, use an IDE that runs the same type inference as the compiler. You can't do that in Python, though.


Programmers use a variety of editors these days. They often don't want to be locked down to one single IDE. So "let the IDE do it" is not a universally satisfying answer either.


Don't fear, or hasten to assume a lack of expertise: Pyret is designed by professional educators. Those type annotations are optional, so this is not a fair comparison.


>> Maybe Pyret isn't for beginners, and it's intended to teach people who already have some basic knowledge more advanced concepts like functional programming <<

I often hear people say that functional programming is an advanced concept. I think students are able to learn functional programming as their first model of computation, before they are even exposed to Turing-machine computing. It's not like you have to first learn C, then some Java, and only after you become familiar with OO do you have enough knowledge to grasp functions as first-class citizens.


Many of our functional programming students are ~12 years old, and have not yet been exposed to any other textual programming language. They handle it just fine. They don't write lambdas, but they do actually use functions-as-values (without fully realizing it). It's really not a big deal, except for people who were taught otherwise and don't realize the fog that permeates their thinking!


The Pyret style much more closely follows the abstract (divorced from any particular PL), analytical, contract-based problem-solving approach taught as central to programming, and as properly preceding actually writing code, in, e.g., How to Design Programs (which uses Racket as the concrete programming language).

So, for a curriculum based on that basic approach, one that starts with the analytical methods and then adds concrete programming on top of them, the Pyret approach -- which requires, and gives effect to, elements of the analytical product that Python would neither require nor provide a convenient means to cleanly express -- is probably both simpler (as the two-way correspondence between code and analysis is better) and more productive.

It's probably not particularly useful to discuss the quality of a pedagogically-focussed language outside of the context of an approach to pedagogy.


You saw one example, misunderstood it, and jumped to various unsubstantiated conclusions from there. So there's not much to reply to here, except to point out that the annotations are optional, and the following is an equally valid way of writing that same function in Pyret:

  fun square(n):
    n * n
  end
[Speaking of beginners and new concepts and so on: they wrote functions in school in math all the time, and didn't write `return` there either.]


The first version looks more like something you'd see in a math book, which is where a non-programmer's notion of what a "function" is would likely come from (assuming they have a notion at all).


Yarr! Pyret always seemed like a great idea to me. It's still quite Scheme-y, and so can introduce functional programming ideas easily, without the hangups people normally have about syntax. It also has minimal weird-language-cruft that is hard to explain to beginners, where you'd normally have to say "OK, ignore the int argc, char *argv[] for now". And it runs in the browser. Awesome. Can't wait to get a chance to dig in further.

Also, try code samples in here: https://code.pyret.org/editor

Also, scroll down to see the comparison vs other languages: http://www.pyret.org/#examples


I really like this sensible policy:

> Some thoughts on syntax

> We believe indentation is critical for readable code, but we don't want the whitespace of the program to determine its meaning. Rather, the meaning of the program should determine its indentation structure. Indentation becomes just another context-sensitive rule.

> Unambiguous syntax (the reason for explicit end delimiters) means you can copy-and-paste code from email or the Web, and its meaning won't change. Your IDE can help you reindent code without worrying that doing so will change the meaning of the program.


So, as someone without too much experience in language development, the concept of having unit tests be an extension of the functions themselves seems very interesting. Clearly this doesn't work for all scenarios—multiple functions interacting, any sort of GUI interaction—but it's a very intriguing concept.
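
For concreteness, in Pyret that looks like a where: block hanging off the function definition, with tests written using the is operator. A minimal sketch:

  fun double(n):
    n * 2
  where:
    double(2) is 4
    double(0) is 0
  end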


I've written a lot of tests using Clojure's with-test, which is slightly less elegant, but the same style:

https://clojuredocs.org/clojure.test/with-test

I've found it wonderful and productive for low-level bits of functionality, especially working on small data structures and primitives. With Emacs set up to reload the namespace and run all the tests on every save, it's a great feedback loop with very little cognitive overhead.

It breaks down a bit for more involved tests, as you say, and it can be taxing knowing when to refactor, how much test setup code to include along with the unit under test, and also once you start splitting things into other namespaces, where to look to get a full view of a module's functionality.


They have a check clause which is detached from the functions themselves. This is where they would want you to do tests with multiple functions interacting.
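
Roughly like this, if I'm reading the docs right (double and half here are hypothetical functions assumed to be defined elsewhere):

  check:
    double(half(10)) is 10
    half(double(7)) is 7
  end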


Agreed. It probably helps enforce pure functions, which are one of the big pros touted about functional programming. Ideally most functions should be pure and isolatable from the rest of the program.


Actually, it does work for interaction if you program interactions Pyret-style, which effectively separate model from view. Models are updated functionally, so it's trivial to write tests for the model-updaters. Animations and games are obtained by having the underlying event loop infrastructure automatically compose these functions. This is the same methodology we use in Racket and in the Bootstrap curriculum [http://www.bootstrapworld.org/].
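
A sketch of that style (the function and tests are mine, not from Bootstrap; the "model" here is just the player's x-coordinate):

  fun on-key(x :: Number, key :: String) -> Number:
    if key == "left":
      x - 10
    else if key == "right":
      x + 10
    else:
      x
    end
  where:
    on-key(50, "left") is 40
    on-key(50, "right") is 60
    on-key(50, "up") is 50
  end

The event-loop infrastructure calls the updater for you, so the only code you write, and test, is pure.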


My MPWTest[1] Objective-C unit testing framework has unit tests as categories of the class under test. For me it

- avoids the duplicate class hierarchies you get with xUnit,

- avoids having to expose class internals for testing,

- makes tests an integral part of code, instead of an external add-on.

[1] https://github.com/mpw/MPWTest


It's pretty awesome. This is the feature that stands out to me for Pyret. Also see how Elixir does it: http://elixir-lang.org/getting-started/mix-otp/docs-tests-an...


Is there something that prevents you from making functions that only check how other functions interact?


No, not at all. The point is that there's specific syntax in the language for making this part of the function definition, as it were. New concept for me.


Scoping rules probably get in the way in some cases.


I agree


Honestly, IMO Python is the best language to start with for learning.

It has all the basic constructs of both functional and object-oriented programming, in plain English, without a bunch of syntax to worry about. I personally think it's the most readable language. It's also quite intuitive, IMO. Lastly, there's a fantastic community, and instant help is available for all skill levels, especially beginners, on #python on Freenode IRC.

Pyret... isn't very readable, doesn't seem very intuitive, has somewhat odd syntax, and generally seems like a terrible choice for learning. I doubt it has the quality resources Python has (community, docs, tutorials).

I don't get it.


The readability, unintuitiveness, and "odd syntax" comments are somewhat subjective, though I'd be curious to hear what parts are odd and unintuitive for you.

I linked to a perspective on where Pyret fits in in another similar answer:

https://news.ycombinator.com/item?id=13187379

We also may have different definitions of what a "beginner" needs for community and help.

Pyret is used in middle and high school classes (along with the undergraduate level). Neither those students nor their teachers are likely to know what IRC is or how to seek help on it. We do a lot of curriculum development, and professional development workshops in the summer are a major way we interact with these teachers and set them up with a high-bandwidth way to communicate with us over the year (which is direct email or mailing lists).

Prompt and detailed help is certainly available for teachers (who then help their students), and it's coming from the folks who designed the curriculum and the language together. Those mailing lists are public (pyret-discuss [1] and bootstrap-teachers [2]), and anyone is welcome to ask away.

[1] https://groups.google.com/forum/#!forum/pyret-discuss [2] https://groups.google.com/forum/#!forum/bootstrap-discuss

Of course, Pyret doesn't have the huge community, Stack Overflow presence, documentation maturity, etc. of Python (or other major languages) for general programming. That takes time and consistent effort, but the lack of it doesn't negate the reasons Pyret is good for its current use cases.


Isn't readable?

It has mostly Python syntax, with some bits inspired by Standard ML, OCaml, F#, and Haskell. Nothing odd here. The end marker prevents accidental scope overrun and is from Pascal, I think.

Community is hard to get for any new language. If this got popular, there would be no such problems.

You're arguing that Python is Good Enough. But then, so was assembly.


>has somewhat odd syntax

says the programmer using significant whitespace

runs for cover


i think the pyret syntax is wonderful, personally. it takes a lot of good bits from my two favourite languages, ruby and ocaml, and blends them into a nicely coherent whole. there are a few minor things i'd probably have done differently, but on the whole i find pyret code very pleasant to read.


> there are a few minor things i'd probably have done differently, but on the whole i find pyret code very pleasant to read.

Curious to know what would you have done differently. I'm working on a programming language based on OCaml with Ruby-like syntax.


just bikesheddy stuff like

* |> rather than ^ for the pipe operator (more consistent with several other languages)

* [list| 1, 2, 3] rather than [list: 1, 2, 3] (just for better visual distinction, though the current syntax does look pretty clean and uncluttered)

* having a separate code block for methods of individual variants rather than interspersing them with data definitions (because i think it's important that the basic data definition be as uncluttered as possible)

if you'd like some syntax feedback on your language i'd be happy to take a peek :)


I'd disagree with the second one. `[list: 1, 2, 3]` reads more like English: "I'm going to define a list: one, two, three…". Whereas `[list| 1, 2, 3]` has no obvious vocalization. (Vocalization is an important characteristic for us; it's how `ask` came to have its syntax.) In addition, we usually use `|` in programming languages in a separator kind of way (even `||` for "or" separates expressions), which it certainly is not in the list expression.


"list where the elements are 1, 2, 3", by analogy with set builder notation or list comprehensions :) but i agree the current one reads better as english.

it's the separator aspect i was getting at with my proposal though - visually it's [ datatype | elements ] whereas the current one looks more like the colon is part of the first element - [(list : 1), 2, 3]


Not once you learn to vocalize it, the way I just showed you (-:.

Frankly, I'm really trying, and I'm having a very hard time suffering from the "list: 1" confusion you claim. I admit, that's just me.

Also, `|` as a separator suggests a kind of symmetry. But the two sides are not symmetric at all. Whereas in English, `:` does not imply symmetry.


No, Python doesn't actually have a lot of things you need in education. See https://news.ycombinator.com/item?id=13189513


"We need better languages for introductory computing." and "One of the enduring lessons from the Racket project is that no full-blown, general-purpose programming language is particularly appropriate for introductory education."

Well, not really: we need better teachers and methods for teaching programming and computing. A good language on its own is not going to lift interest.


> Well not really, we need better teachers and methods for teaching programming and computing.

I think most of the Pyret developers would agree wholeheartedly! To this end, the Pyret development team works closely (and in some cases, overlaps) with Bootstrap [1], a nation-wide program to teach programming to middle school students. The feedback we get from these teachers (and the process of teaching teachers) directly informs language design.

Furthermore, at Brown University, where many of Pyret's developers have ties, Pyret is dogfooded on two courses: an "Accelerated Introduction to Computer Science" [2] and "Programming Languages" [3].

(Disclosure: I am a developer of Pyret.)

[1] http://www.bootstrapworld.org/ [2] https://cs.brown.edu/courses/cs019/2016/ [3] https://cs.brown.edu/courses/cs173/2016/


Is there a good quality MOOC on Pyret? I am not in high school, but am not a programmer either. I recently started a DrRacket-based MOOC on EdX and had no idea about Pyret until today. My interest in finding good quality teaching led me to Racket, all the way from the hype of Python/Ruby and everything in between. I am trying to follow Racket, but now I am confused between Racket and Pyret.


There is no MOOC I know of that uses Pyret (yet).

Racket is great! If you're already deep into Racket, I wouldn't say stop to try Pyret unless you particularly have the goal of learning more languages, or there's something about Pyret that really gets you interested.

We taught a programming languages course that uses Racket back in 2012, but all the material is still online:

https://cs.brown.edu/courses/cs173/2012/

So if you're looking for online content that we recommend, that could be an interesting next step that makes use of the momentum you have in following Racket.

The Pyret book PAPL is free online, and it's possible to dive in and follow that if you're curious about learning Pyret, though it doesn't have the infrastructure of assignments with autograders, etc, that a MOOC would have:

http://papl.cs.brown.edu/2015/


Thank you. I am not deep into Racket yet, but it requires concentration, which I lack at the moment. Based on your suggestion, I'm going to stay focused and attempt to complete it, instead of distracting myself.

I started reading HtDP2e before finding the EdX course. The MOOC helped reinforce what I was reading. I realized that learning to program through Racket is a superior experience compared to any of the other (popular) languages people are teaching (or coding on screen). I spent too much time watching people code. People attempt to teach online, but most of them lack traditional pedagogy: they know how to code, but they don't necessarily know how to teach.

The only exception I would make to the above (as far as I know right now) is Harvard's CS50.


Yep. Stay where you are. Work through the MOOC. You'll learn a ton.


Confused? Don't be! If you're happy with Racket, stay there. Racket is absolutely awesome. If you're using Gregor Kiczales's MOOC, you're learning from an absolute master. Pyret also has many things to offer, but you can sample them after you're done with the EdX MOOC. Come over to the Pyret list [http://www.pyret.org/discuss/] and drop us a note, and we'd be happy to guide you through.


Those of us who overlap the Pyret and Racket team have been working on better curricula for over twenty years (e.g., http://htdp.org/).


I strongly believe that people who are learning to program should learn in a language that can actually be used to make things.

This may run counter to the 'work smarter, not harder' ethos I see here but I feel that programming competency in practice is much more a function of literal hours writing code than any other factor.*

*with the caveat that competency tends to follow a sub-linear growth curve, and people vary significantly with respect to their growth rate, so this statement is most accurate when comparing individuals who are early in their growth curve, aka beginners.


People who learn to program have to get used to the idea that programming languages are many, and that they have to attain a certain competence in several of them.

A typical frontend dev uses Javascript, CSS, HTML, likely some occasional bash.

A backend developer probably faces the preferred backend language (Java / Python / Ruby / PHP / whatever), SQL, and has to have at least a nebulous understanding of HTML, CSS, and JS.

A devops person has to face a shell language, the Unix tools mini-languages, have enough Python or Ruby chops to handle things like Ansible / Salt / Chef, and likely some SQL to keep an eye on the databases.

This does not include smaller languages like regexps, or narrow-use languages like XSLT, JSON Schema, etc.

There's no way you learn the One Practical Language That Matters and can be limited to it. You could in the past, with a Spectrum or a TRS-80 and BASIC, but times have changed.


I agree that everyone likely has to learn more than one language in the course of their life, but the difficulty of learning language N is typically more than learning language N+1.

Therefore I believe that it is productive to start beginners off with a language which is:

1) as approachable as possible

2) as useful as possible

Such that the value of that first, most difficult learning experience is maximized and the challenge is minimized.


> There's no way you learn the One Practical Language That Matters and can be limited to it.

Most mainstream languages have very similar semantics overall, differing wildly from FP-like approaches. So learning a mainstream-ish (imperative-ish, OO-ish) language will enable mastering other mainstream languages with greater ease.

If my first language is Lisp-1944, I'll likely have a hard time learning C or Java. If, on the other hand, my first language was C, I'll likely have much more and quicker success with Java, Python, C#...


If the first language you learned was C, and you never rethought programming from different angles, you'll miss the best parts of JavaScript, Python, and a serious part of C# at least.

OTOH if your first language was Lisp (or even Python, to a lesser extent), and it was competently taught, the syntax of Java or C# would feel alien at first, but you'd find out that many of the concepts are already known to you; chances are, you even have an idea how they work under the hood.

(Disclaimer: I majored in embedded systems, my first languages were Fortran IV and PDP-11 assembly.)


I responded to a similar comment in another thread from a pedagogic perspective, I hope this answer gives some perspective on how we approach this point:

https://news.ycombinator.com/item?id=13186216

Also, Pyret can be used to make things – Pyret's compiler is written in Pyret!

As the language matures, the experience of writing small Pyret programs that grow into real applications will only get more authentic.


Seems like a great functional scripting language with a better-than-average type system. This is really cool, no sarcasm.

Just please don't label it as "educational" or "academic", or industry devs will avoid it out of pure machismo :)

And find a way to marry it to a popular ecosystem, maybe a transpiler to JS (for the Node.js ecosystem) or Python or Go. 'Cause it really looks like a language I'd enjoy writing production code in if I could just `require` some of my favorite libs and hit the ground running!


Pyret's runtime is entirely built on JavaScript (seriously: you can load https://code.pyret.org/editor, then turn off the network, and your IDE still works), and Pyret programs compile to JavaScript. At the command-line, it builds standalone JavaScript files that run under node.

It's entirely possible to write pure JavaScript libraries that interface with the language (the compiler expects AMD style and requires that you write some small wrapping code). That's how the image library (https://www.pyret.org/docs/latest/image.html) works, for instance, and how we connect to Google Sheets (https://github.com/brownplt/pyret-lang/wiki/TM028-Pyret-and-...).

Industry devs should be suspicious of Pyret's performance right now, though that will improve as time goes on. We've been spending most of our effort supporting our student and teacher users, so things that really build the traditional out-of-the-box language experience aren't where a typical application developer would hope.

I think that's fine since we've made our intentions and goals clear (the educational audience and a coherent curricular design), and would be less fine if we were offering Pyret as a general-purpose language today. Offering it as general-purpose will become more and more reasonable as time goes on.


Unexplained important things:

* concurrency, is there any nice built in syntax like Python async? Any kind of threading? Multiprocessing support?

* error handling, how is it done? Where are exceptions?

* standard and file io, string operations, serialisation?

* no builtin higher math types (matrix etc.), is math done on decimal floating point numbers?

* foreign functions, interfacing with other languages, embedding?


Great points! These are important things, but not necessarily important to Pyret. Pyret is not designed to be a general-purpose programming language, so the design of the language is directed and constrained by the needs of the teachers who use it.

Pyret's number representation is one consequence of this: Pyret numbers are stored as either exact rational numbers:

  > num-sqrt(4.41)
  21/10
or, when an exact representation isn't possible, as "roughnums" (which are backed by a JavaScript floating-point number):

  > num-sqrt(2)
  ~1.4142135623730951
Pyret's users overwhelmingly use Code.Pyret.Org, an in-browser development environment. Unfortunately, Javascript is a very hostile compilation target. For the time being, this rules out support for threading.

Pyret does not have FFI from an end-user perspective, but several of the standard library modules are implemented in hand-written Javascript.

Pyret has file I/O, but not in its online development environment, since the web page does not have filesystem access.

(Disclosure: I am a developer of Pyret!)


Judging from the red/blue colors in the logo and that the website was generated with Frog, I assume that once you feel too big for Pyret's shoes you'll discover that you've actually been using Racket all along and will start using the Racket language directly.


http://www.pyret.org/pyret-code/

> One of the enduring lessons from the Racket project is that no full-blown, general-purpose programming language is particularly appropriate for introductory education. By the time a language grows to be useful for building large-scale systems, it tends to have accumulated too many warts, odd corners, and complex features, all of which trip up students. The journal paper for DrScheme (the old name for DrRacket) explains this in some detail.

In that respect, the closest fellow travelers of us Pyreteers are the Racketeers (see how that works?). In fact, the first version of Pyret was merely a #lang in Racket. Nevertheless, Pyret represents a departure from Racket (for now and for the near future, at least) for several reasons:


Except note that Pyret is not implemented atop Racket. It used to be (a #lang) but that proved not suitable for our needs, so we built a very sophisticated run-time system on top of JavaScript. Pyret now compiles to JavaScript entirely in the browser (and Pyret's compiler is written in Pyret itself, so it's all bootstrapped goodness).

That said, Racket is indeed a fantastic next language after Pyret. I myself constantly switch between the two.


But why? The best languages are truly general. This should aim to replace Racket, not be bait for a switch.


Racket is both a programming language and a framework for building programming languages.

http://queue.acm.org/detail.cfm?id=2068896

Racket is the most fun I have had in programming, and the more I "get it" the more I enjoy it. I can't say that for any other language.


Racket is a platform for running many languages. It's different from the usual "one language for everything!" mantra. You can use Pyret for one part of a program, Racket for another, Typed Racket for another, etc.


Also:

* metaprogramming - templates, macros, generic programming?


Shhh! That's on the horizon!

One of the frustrations of using Racket—which is macros all-the-way-down—is debugging. When you see an error message or use a stepper-debugger, you want it to be in terms of the code you wrote, not in terms of the code generated by some complicated macro which was glued together by a sleep-deprived graduate student.

One of the graduate students here in the PLT group at Brown, justinpombrio, is working on resugaring: taking a sequence of evaluation steps in terms of a post-macro-expansion program and presenting them back to the user in terms of the surface syntax of the language. (More on this at his website, http://justinpombrio.net/) We're just starting to prod at implementing Pyret's existing syntactic sugar in this manner, which will be a first step toward a proper macro system.


When I was a student, my favorite language syntax was Caml, the ancestor of OCaml. I am surprised there is a comparison with many languages but not OCaml.


Pyret is similar to the ML family in many ways.

Some would say Pyret's syntax is an improvement over that of OCaml. (Others might not.)

A critical difference is that Pyret does not force static typing. The type-checker is optional, so beginning students, in particular, do not need to deal with the overhead of type errors and the vagaries of type inference. But Pyret's design, and pedagogy, aim to push students towards an ML style of programming with ML-like types (e.g., the language does not have untagged unions).

In fact, type inference in Pyret is done not from the code but from the tests the programmer has written. This is a very different philosophy of not just inference but program development.
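
Concretely, annotations can be layered on incrementally; all of these are valid Pyret (a small sketch):

  fun inc(n): n + 1 end                        # no annotations
  fun inc2(n :: Number): n + 1 end             # argument annotated
  fun inc3(n :: Number) -> Number: n + 1 end   # fully annotated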


I was also surprised. The syntax they use is basically Standard ML. Also I'd really like to see a paper on their static type checking, because they mix objects and algebraic data types, which is not trivial at all.


Hi, I'm actually the person who wrote the static type checker. Essentially we get around the difficulties of mixing objects and ADTs by having very limited support for subtyping. This way we can still carry out unification-based inference. The subtyping bounds are not propagated during unification so we don't need to worry about calculating closures. This does mean that type inference is incomplete with respect to subtyping, but I haven't found a satisfactory system that would let us do otherwise.

I would also like to note that we do also have a new feature where types can be inferred if you provide tests in a "where:" block attached to a function. This infers a function's type solely from the examples of usage provided.
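
For example (an illustrative sketch; string-to-upper is from Pyret's string library):

  fun shout(s):
    string-to-upper(s) + "!"
  where:
    shout("hi") is "HI!"
  end

Here the where: examples alone would drive the inference of a (String -> String) type for shout, with no annotations written.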

If you have any other questions feel free to ask!


Hi, thanks. Primarily I wondered about actual mixing of ADTs and objects, which is limited even in OCaml, see e.g.:

http://caml.inria.fr/pub/ml-archives/caml-list/2003/06/bed28...

But does Pyret always translate an ADT to a class hierarchy behind the scenes? This example sure looks like it does:

  data Animal:
    | elephant(name, weight)
    | tiger(name, stripes)
    | horse(name, races-won)
    ...
  end

  fun animal-name(a :: Animal):
    a.name
  end


Good question! Pyret does not translate to a class hierarchy. Pyret dynamically allows dot lookups on both objects and ADTs. In the type checker this means that an ADT permits dot lookup on any field that all variants have (in this case, `name`), as this will always be safe.

Additionally, Pyret's type system internally tracks each of the different constructors as a refinement on the type. With the animal example this would be represented internally as something like

  Animal%(is-elephant)
if the type is known to only be an elephant. Though this is not yet used everywhere it could be, it is part of the system. This means we have room to do things like if-splitting, e.g.

  if is-elephant(x): ...
would be able to treat x as an

  Animal%(is-elephant)
in the body rather than just an `Animal`.
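
For fields that aren't shared across variants, the usual route is a cases expression, where each branch binds that variant's own fields. A sketch on the same Animal data:

  cases (Animal) a:
    | elephant(name, weight) => weight
    | tiger(name, stripes) => stripes
    | horse(name, races-won) => races-won
  end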


This is rather nice. You get a lot of convenience for that one type annotation!


To add to @mkolosick's reply: Pyret doesn't have classes and hence class "hierarchies".


For what it's worth, I'm a student at the university at which J Politz teaches. I haven't taken the programming languages course which is taught in Pyret, but it's worth noting that it's an upper-level course here, and everyone I know who's taken it is a fairly experienced programmer (relative to university CS standards, of course). So, if the fear is that Pyret is an unrealistic/overly complex/non-transferable language to learn, it's definitely not marketed towards beginners here. By the time you're taking this class, you should already be well versed in Python and C++.


Hey fellow Swattie! [Actually, I'm at UC San Diego as of this fall, so I'm a former fellow Swattie, I suppose :-) ]

The programming languages course in question (https://www.cs.swarthmore.edu/~jpolitz/cs91/s15/) indeed used Pyret, and that course is a heavy functional programming implementation course (writing interpreters, type inference, GC, etc).

That's using Pyret for more complex programming, and is the upper end of where we use it pedagogically right now. Ditto for CS019 (the advanced introduction to data structures), and CS173 (PL) at Brown. In all of those cases, students _are_ expected to have some prior programming experience, so they get a faster introduction to Pyret.

In the other direction, Pyret is in active use at the middle and high school level in math, cs, and physics classes, and this is where much of our active design work is tailored (tables, reactors, etc have some explicit curricular goals they target). They have scaffolding and workbooks written for their grade level, use features suitable for the context they are in, and so on.

The use in CS91 in particular is:

(1) appropriate in the first place, since many functional languages could work in that setting,

(2) eating my own dogfood, and

(3) an excellent opportunity to discuss design decisions of a language that students are learning in a PL course, while it is still having design decisions made about it!


What's the point? One would hope they start with a description of why this is better for education than something like Python. I just gave it a quick glance, and it looks like many other programming languages - a bunch of odd text gibberish. I don't think the problem dividing people into "gets it" and "doesn't get it" has anything to do with nuances in syntax or semantics.


Despite what the downvoters think, I actually found this comment useful, because I had the exact opposite reaction. I took a quick look at the examples, saw familiar syntax and features, and quickly concluded that it looks like a nice toy but I'd probably never use it, because there are way too many unknowns (performance, runtime, bugginess, etc.).

I was actually a little flabbergasted when you said it looked like "odd text gibberish", but then I took another look at the first example and saw it.

    data BinTree:
      | leaf
      | node(value, left, right)
    end
Unless I'm familiar with union types, what part of that code tells me this is a parameterized enum? Why should I assume we're even talking about data types?

I'm relatively flexible when it comes to syntax, but clearly syntax is a huge hurdle that almost always gets in the way of adoption. I understand Pyret isn't looking to be the next Python, but for an educational language I would argue it's not living up to Python's Principle of Least Surprise. It's great if you know Haskell, OCaml, Python and other languages, but it's not so great if you're an unsuspecting student who's sharpened their teeth on Java/Python.

I suspect most people could pick up the syntax pretty quickly, but I suspect most people still need some hand-holding on the first page. Most programmers still don't know what this short paragraph means:

    Pyret allows for concise, expressive, recursive data declarations. Type annotations are optional and can be added incrementally, to serve a variety of pedagogic styles and curricular needs.


It isn't designed for students who have "sharpened" their teeth on Java or Python. Nor does it claim to be. And the paragraph you quote is written for educators, not for students.


>> Unless I'm familiar with union types, what part of that code tells me this is a parameterized enum? Why should I assume we're even talking about data types?

I have fond memories of learning to program as a kid. Concepts like trees, nodes, and data types have no place in introductory programming for kids. The initial idea to teach is the notion of getting the machine to follow a set of instructions. IMHO even functions should not be forced until that notion of writing instructions for the computer to follow is mastered. Imperative is the word I'm looking for. You need to start with that. Then move on to loops, arrays (or lists, but don't try to explain the difference for a while), and functions. You don't want to require any of the concepts that we all take for granted just to get started. Type systems? Structures? Objects? Those need to be introduced through examples where they are used to solve specific problems.

I have some examples of how an absolute beginner may think.

I was learning BASIC and had worked up (yeah, up) to using FOR loops, IF, and GOTO. Then I read about the PLOT(h,v) function, which would light up a pixel on the screen - awesomeness was about to ensue. But I had the notion that - as the instructions said - I had to pass the variables h and v to that function, not whatever pair I chose. That would mean copying values from other variables (i.e. h=x, v=y, plot(h,v) and then h=i, v=k, plot(h,v)). Fortunately that notion got cleared up very quickly, but it illustrates the point that a noob may take something quite literally and not understand the simplest abstractions that we all take for granted.

On another occasion I wanted to write Pac-Man, so I had variables (we didn't even have structures available) for the player, and I figured I'd use x1,y1, x2,y2, x3,y3, x4,y4 to track the positions of the ghosts. I could see in advance that I'd have to replicate the ghost logic 4 times using different variables in each instance (or write a subroutine and copy variables around like I suggested above). What I wanted was to put the logic inside a loop (for n=1 to 4) and reference the ghost variables as xn, yn, but that wouldn't work because those are just different 2-letter variables. So I asked someone if there was a way to solve this and was pointed to arrays. The notion that a construct existed that matched exactly what I wanted to do was amazing, and I felt like I really must understand something if my ideas about what should be possible were already there. Try to imagine that type of discovery compared to being presented with a bunch of such concepts and being expected to understand them from the start.

When I got to college I figured I already knew programming. Hah! The data structures course was incredible, I could already see how all of that stuff could be applied to different things. It made perfect sense because I could already imagine how to use it in all kinds of situations.

Even the most simple concepts need to be taught slowly at first, and preferably to solve a specific problem. If your language requires too much boilerplate, type definitions, or anything else it's too much.

So when I said this language looked like a bunch of gibberish, I was looking at it through the lens of a kid who doesn't know shit about any of this. Because for some reason I can remember what it's like to think that way.


I too learned BASIC first. It was fine for what it was. And computing has come a long, long way from there, and our pedagogy has evolved substantially. In fact, kids have no trouble with concepts like lists and trees — I see more trouble with it from people who learned programming poorly before than people who didn't learn to program before at all.

Also, nobody said Pyret is for kids (defined as pre-teen). It's not.


This is useful feedback. I think this pair of comments combines to give a decent answer:

https://news.ycombinator.com/item?id=13186997

https://news.ycombinator.com/item?id=13187379


If you want to better understand the comparison to Python, see https://news.ycombinator.com/item?id=13189513


Every time I see a project pop up touted as being simple and a good "teaching language", it has some weirdness or pet features thrown in that turns enough people off to keep adoption low.

If you want a good teaching language which will receive substantial adoption, do this:

* syntax-wise, use curlies and semicolons like C, Java, Perl, JS, etc.

* use many of the good function names that Perl uses.

* have all variables be refs to objects, like Python does.

* use nice normal lexical scoping like Scheme et al

* this shouldn't have to be mentioned, but provide data literals for lists, maps, and sets

* keep it small, and written in C, like Lua.

* resist the temptation to complicate things by adding this really amazing advanced feature that it's just got to have to distinguish itself.

* plan for it to become used for general purpose programming, as it very likely will.

And some "don'ts":

* don't worry about performance, that can come later

* don't worry about lack of libs --- your implementation is in C, and you should get good ffi down the road. And with the substantial adoption you'll see, libs will come later anyway.

* don't have it be on the JVM, LLVM, or any other existing VM. Keep it simple. Even an interpreter is fine for now.

That's it. That's all you need to do for amazing success in a teaching language that will also see serious adoption.

That said, no one does this. Presumably because if you're skilled enough to implement it, you have some neat but obscure features you'd like to add, or a unique syntax you'd prefer, which ... turns users off and keeps adoption low.


Some of these are good ideas (e.g., nice lexical scoping). Pyret has most of these. Pyret also lacks any complicated, advanced, amazing features, for probably the very same reason you made that suggestion.

Some of these lead to endless bikeshedding (e.g., curly syntax) without much of a way to resolve it. Pyret has one position on these, others may have others.

And some of these are just plain invalid in some contexts (e.g., implement it in C). Our target audience is browser-bound: many of the schools we work with cannot install software on their desktop (so no compiler, IDE, etc.). Implementing in C is therefore a non-starter.

Pyret is therefore built atop JavaScript, targeting JavaScript. However, Pyret is built entirely in Pyret, so if we were to build a different back-end codegen, it would be straightforward to port it.


Thanks for the advice! What are some examples where a teaching language has followed these points and succeeded?

What is the weirdness and/or pet features you see in Pyret?


I'm an old curmudgeonly programmer (more hobby than pro), have kids in middle and high school, and have some meager amount of teaching experience. Here's my take: your students (especially in middle and high school) are generally a captive audience. Maybe later, if they have more interest in comp sci, they may be interested in using a more alternative language, but right now they want you to give them something that looks mostly like the current lingua franca and then get out of their way.

For the language I described above:

* it's like JS but with warts removed

* it's like C or C++, but higher level

* it's like Java but doesn't require types or the JVM

* it's like Perl but without the context rules and other zaniness

* it's like Python but with better names, more familiar syntax, and better scoping

(It's even a little like Scheme but with more conventional syntax, and regular lists, maps, and sets.)

Students can use it in your class, then if they want to pursue programming in earnest, can very easily segue to any of those common languages above. Not only that:

* They can easily re-type their classwork programs in any of the above langs, and not feel like their teacher had them using something that was only applicable to their particular class.

* They can show their classwork programs to any JS/C/C++/Java/Perl/Python programmer and it will be easy for that person to make sense of what they're looking at (with no previous experience with the lang described above).

I have no experience with Pyret, and don't want to cast any judgement on it or the people working on it. Maybe it's great and will benefit students even when they move on from their class which uses it.


Your initial list pretty much exactly describes Pyret. It doesn't have JS's warts, but still runs entirely in the browser. It's higher-level than C and C++. It doesn't require types or a JVM. It isn't like Perl in any of the zany ways. It's a bit like Python, with totally sane scoping. And it's very much like Scheme, but with conventional (infix) syntax and lists, sets, maps… And it's used in several high schools already.


A new language created: branchly


Sure. Alas, I don't know a flex from a bison, so I won't be implementing it any time soon.

Seems like language implementors are like carpenters: once you're a skilled pro, you're not interested in making a plain simple bookshelf. It's got to have dovetail joints, beveled edges, countersunk fasteners with plugs over them, light sanding between multiple coats of finish... then it's great, but not affordable and I'm back to looking for a simple bookshelf.


No offense but I really don't think it's a good idea to teach beginners a language with such a goofy syntax. It's fine if you want to make a language with a unique and "innovative" syntax. But beginners are best served by learning something mainstream, and ideally simple. They can expand into crazy stuff as they are ready for new languages.


The syntax is the converse of innovative. Standard ML is very old, and the end keyword telegraphs Pascal and Algol.

About the only new thing here is the testing, documentation, and contract support, and that is great to have.


These comments surprise me so much. Pyret is designed to have as much of a familiar scripty-looking syntax as is possible while not being a shit language. Just like Rust has lousy C-like syntax so as not to alienate that crowd.


For me, this looks MUCH more like Haskell than Python, and I think this is symptomatic of an erroneous claim that some functional programming proponents seem to constantly make: that declarative, "mathy", "deconstructive" programming is somehow easier to learn than imperative, "algorithmic", step-by-step constructive programming.

In reality, most people in the world struggle with abstract mathematics and find it much more intuitive to think about a program as a series of steps and in terms of control flow. Programming beginners want to draw nice pictures on their computer, not struggle with understanding and implementing some recursive function.

This is of course not to say that recursion is not an important concept or that functional languages have no merits. They are however complicated abstractions over simple concretions that are much easier to understand for a beginner.


This is very difficult to argue empirically, and I don't claim to know what's going on in students' minds. In particular, I don't know what "most people in the world" struggle with, just those students in the middle, high school, and undergraduate courses that I've been involved in.

As a concrete counterpoint of _friction_ between what students may have as background and what many languages do that relates to language choice:

Many math teachers we work with use the word "variable" to refer to a name that can change per-instantiation of a function or expression by substituting a value. But it doesn't make sense mathematically for x to be 5 "now" and 6 "later". So writing

    x = 5
    x = 6
is in some sense expressing an unsolvable system of equations.

Math variables and traditional (mutable) computer science variables are very different things. This provides an _immediate_ tripping point in vocabulary if you want to, say, teach computing in a math or physics class (or leverage that background). So we make a conscious choice in Pyret to dissuade multiple definition of the same variable, or mutability of a variable, to hew close to definitions we can leverage without creating extra conflict and confusion between concepts.
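
Concretely, a small sketch of the resulting behavior:

  x = 5
  # x = 6 would be rejected: x is already defined
  var y = 5
  y := 6    # mutation is opted into explicitly, with a distinct operator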

That particular point may differ in different contexts, and deeper studies are definitely needed to figure out what kinds of "notional machines" are easiest or best for students to build up. There may even be more to say about addressing this particular example. But this is a little perspective on concrete reasons for some of these choices.


Look at the examples.

Math is useful not because it's abstract, but because it's concise and descriptive.

Pattern matching is a much more natural way of thinking for a human than tracing the execution flow; did you ever write Basic on 8-bit computers?

It seems that Pyret is much closer to OCaml (there's a direct comparison in the examples), but with a less mathy syntax. It's also not statically typed (the type checker is optional). So it should feel much more like Scheme, or, well, Python: you can easily write imperative code, with explicit loops when you want to, etc. But it gives you the nicer data model, one that combines the natural feel of "objects" with the conveniences of Hindley-Milner-ish types.
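
For instance, an explicit imperative-style loop (a sketch; for each, range, and print are standard Pyret, if I remember correctly):

  for each(n from range(0, 3)):
    print(n)
  end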


Do you have any measurable (preferably peer-reviewed) evidence that suggests imperative programming is inherently easier? That sounds like a challenging claim to make.


> a challenging claim to make

...or more like the obvious intuition and wisdom of 99% of working software developers. The intuition is not backed up by any peer-reviewed evidence, but by the real-world fact that most production code ends up written in C++, Java, C#, JavaScript, PHP, Python, Ruby, etc., and not Haskell, OCaml, F#, Clojure, etc.

Ever since they made the first stone tools and cave drawings, humans have always defaulted to "how to" reasoning rather than "what is" reasoning. You may have the opposite intuition for the simple reason that 99% of what you read, write, or see on TV is produced by the 0.01% of people with decent communication skills, and a larger percentage of them have better-developed "what is" thinking. Also, "what is" knowledge is easier to communicate in compact form.

Also, most people who tend to learn programming are "maker" types, like me, who have a huge bias towards process-based intuition: for example, for us, intuitively, an "ellipse" is firstly "the abstract equivalent of what you get by drawing a curve constrained by a string attached to 2 pins" (http://www.sems.und.edu/FrameA/SolarEclipses/ellipse.jpg), not "the points whose sum of distances from 2 fixed points is constant", and a "bubble sort" is "what you get by walking over an array and for each..." etc.

And the obvious companion to "how to" or "procedure based" reasoning is extensive mutability, because... this is how the physical world seems to work at our level: even writing an equation on paper is actually just "mutating the position of particles of ink from a pen to precise distinct locations on the piece of paper", and results in "changing the information stored at particular 'addresses' of the paper". Even pointers ended up being used not because they are a very smart or appropriate concept or anything, but because they map obviously onto the intuition of "pointing at things scribbled on a piece of paper or a blackboard"...

But yeah, "how to" reasoning scales horribly, creates huge misunderstanding and communication problems, and is extremely hard to debug, and we should start moving software engineering past the "cave drawing" stage... but this doesn't mean we should ignore where we're starting from :)


Based on ancient literature and holy texts, people from early cultures seemed to be obsessed with their lineage, stating "I am X son of Y, who was son of Z" etc. Do you think they thought about it in terms of procedures for determining lineages or just definitions of relationships? The latter would be easier to model in languages like Prolog rather than C++ or Java.


Their kings were. Imho there's a certain minority of people, maybe 0.001%, who are definitely not what I call "makers", with waaaay above average "communication/leadership/manipulation + sword-wielding" skills, who end up as kings/emperors/etc. (at least the first time; afterwards it's just because they were born into the right bloodline - hence another reason for the "bloodline obsession"). And their scribes (the guys with good communication but bad sword-wielding skills, and of the wrong bloodline).

The rest of the people were more obsessed with "what steps to follow to plant my seeds in the ground so as to increase the probability of crop yield", or "what steps to take when preserving food so as not to get really sick", or "what steps to do in what order when building my house so it does not fall down on me in the next storm", etc.

Oh, and lineage is pretty much retrospective "how-to knowledge" anyway: it describes the sequences of actions that resulted in someone existing today: "X1 and X2 had a baby", then "the son of X1 and X2 married Y0" and "moved to village Q" after "fighting in war Z", etc. History is pretty much recorded procedure.

Not until we got to philosophy, geometry and other types of math did we really have clear "what is" knowledge... (Except maybe for religion, and no wonder that smart priests and monks were pretty damn good at math.)

As an example, go to any remote village and ask "where is X", and what they'll inevitably tell you is "what actions to perform to get from here to place X". (The infuriating part is that some modern humans have kept this thought pattern, and good luck getting directions anywhere out of them... thank god for the "what is" knowledge provided so easily by Google Maps :) )


> obvious intuition

I don't mean to throw it out, but I try to be skeptical of obvious intuition. I just feel like people were once saying "the Earth is obviously flat" (and there are plenty of other examples). And from a practical point of view, for a lot of people, the Earth was effectively flat. I'm certainly not saying "FP-Master-Race!", but I try to be wary of the fact that history is important but imperfect. Imperative programming is terrifically important, but that does not necessarily mean it is inherently more natural.


> Programming beginners want to draw nice pictures on their computer, not struggle with understanding and implementing some recursive function.

One of the links under "set sail": https://code.pyret.org/editor#share=0B32bNEogmncOMTg5T2plV19...


I really like the idea of tests right beside the code. I don't like that they're code, though; it muddles what the actual implementation is a bit. Maybe with more usage the where statements start to blur and you don't notice them as much.

I like Elixir's approach better, where your test cases live as documentation examples in comment form, right above the function declaration. It's fantastic and free: the `mix test` command runs your tests, and also runs any examples you have as tests.

http://elixir-lang.org/getting-started/mix-otp/docs-tests-an...

You put your tests as examples below an "## Examples" header. Oh, and these examples go into your auto-generated documentation as well. Ridiculously effective and free. You feel bad not using them.


Pedagogically, the nice thing about the examples/tests just being code for beginners is that they are... just code!

Once we've taught students about function calls and values, it's a small jump to write an example with "is" in the middle. The syntax errors, static errors about unbound identifiers, and dynamic behavior all act the same as everywhere else in the program, so it's less friction to write the first test.
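
For instance (a toy function of my own, not something from the course materials), the first test is just a `where:` block under the definition, using the same call syntax as everywhere else:

    fun double(n):
      n * 2
    where:
      double(2) is 4
      double(0) is 0
    end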

For getting off the ground writing that first test, I've been really happy with what Pyret lets us do. There is more involved in getting better testing/reporting options as systems scale up (we have lots of test-only files for the compiler), but pedagogically, the testing block infrastructure has worked well.


Again, use an IDE with code folding if that is ever a problem. At least it's in the same file.

You can't do that easily with Elixir-style tests.


I've long thought that Haskell is much more intuitive than any engineering language out there for people with no programming experience, since it builds on mathematics that everyone has already learned. It's great to see functional programming placed at the center of intro-level programming teaching.


> "while exploring the confluence of scripting and functional programming"

Scripting and functional programming go amazingly well together, as long as it's (mostly) dynamic typing. In general, types really just get in the way when scripting or prototyping or trying to do anything fast, especially when you don't know the types of data you're going to be working with ahead of time, e.g. when it comes from a file or from an external API, etc.

But what is BinTree in the following example on the website? Is it a type? Is it a function, or constructor, or what? I'm confused.

    data BinTree:
      | leaf
      | node(value, left, right)
    end


> "In general, types really just get in the way when scripting or prototyping or trying to do anything fast"

I've found the opposite of that claim to be true. Types are absurdly helpful in prototyping or trying to do anything fast, especially when exploring data coming from a file or an external API.

Anecdotes aside, BinTree in the example is both a data type and a type testing function ("detector") of signature `Any -> Bool`. See http://www.pyret.org/docs/latest/Declarations.html#%28part._... for details.


Types are great, unless you are forced to be explicit instead of letting the compiler do automatic type deduction.


My intuition from other languages is that it's a type and a set of constructors. leaf constructs an instance of type BinTree, as does node(4, leaf, leaf). This also matches how it's used in the where clause.

The documentation seems to verify my intuition: http://www.pyret.org/docs/latest/A_Tour_of_Pyret.html#%28par...


Pretty sure it's a union type.


Tagged union type, so no casting.


`BinTree` is a type. There are two kinds of BinTrees. There are `leaf` BinTrees and there are `node` BinTrees. A leaf has no fields. A node has three fields. Thus, here are some BinTrees:

  leaf
  node(1, leaf, leaf)
  node(2, leaf, node(1, leaf, leaf))
and so on.
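
And a function over BinTrees just pattern-matches on the two kinds with `cases`; a small sketch (`size` is my own example, not from the site):

    fun size(t :: BinTree) -> Number:
      cases (BinTree) t:
        | leaf => 0
        | node(v, l, r) => 1 + size(l) + size(r)
      end
    where:
      size(leaf) is 0
      size(node(2, leaf, node(1, leaf, leaf))) is 2
    end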


The docs at http://www.pyret.org/docs/latest/ link to the outdated book. Update the link?


Yes, thanks. We weren't expecting to end up on HN today. (-:


Fixed, thanks.


"We need better languages for introductory computing. A good introductory language makes good compromises between expressiveness and performance, and between simplicity and feature-richness. Pyret is our evolving experiment in this space."

Object Pascal is the best language I know for education.

Programming education must first be about understanding algorithms, I think.

However, learning something old or not useful in the real world can be harmful. The best thing, I think, is to learn modern tech. The development world evolves too, and other skills become more relevant.


Why not just use the "easy" parts of Haskell as a teaching language for functional programming?

No monads, applicatives, etc. No category theory. Just immutability, laziness, pattern matching, and type classes. I feel like that makes as much sense as developing an entirely new language that most likely will only be used by a complete novice. Conversely, one can grow with a language like Haskell for an entire lifetime. Just my two cents.


Because we also use Pyret to teach program complexity, and big-O is a whole 'nother can of worms in Haskell (the cost model is totally non-traditional). Because we use Pyret to introduce students to state, and even basic equality is a whole 'nother can of worms in Haskell. Because we teach basic algorithms in Pyret, and even something like DFS in Haskell is complicated (because of the preceding issues).


> even basic equality is a whole 'nother can of worms in Haskell

How so?


I'm afraid I don't have time here to explain Haskell. Think about how graph reduction and reference equality would interact.


I know a fair amount of Haskell. For instance I know Haskell doesn't use reference equality.


And the lack of it makes it hard to test for things like node equality when doing graph algorithms. There's an extra layer of encoding you need to be able to simulate reference equality. And even if you addressed the equality issue, I brought up two other issues that you didn't address.
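
(For contrast, Pyret makes these distinctions first-class: as its docs describe, `<=>` is reference equality ("identical"), `=~` compares current, possibly-mutated, state ("equal-now"), and `==` is "equal-always". A tiny sketch:

    l1 = [list: 1, 2]
    l2 = [list: 1, 2]

    check:
      (l1 == l2) is true    # structurally equal
      (l1 <=> l2) is false  # but not the same object
    end

)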


codygman and I both understand Haskell well, but neither of us (apparently) can work out what you mean. There's no "can of worms" around equality in Haskell as far as I can see.

